#algorithmicbias

Do you work in HR and want to prevent #discrimination through #AI hiring tools? Your contribution is key! 🗝️

The increasing use of AI-based recruitment systems (#AlgorithmicHiring) promises to save time, but it also carries risks. As part of the EU Horizon project #FINDHR, we’ve developed practical recommendations to help tackle #AlgorithmicBias in hiring.

For more recommendations, take a look at our FINDHR-Toolkit for HR professionals: findhr.eu/toolkit/hr-professio
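One basic check that bias audits of hiring funnels commonly use (a generic illustration, not code from the FINDHR toolkit) is the "four-fifths" adverse-impact ratio: compare each group's selection rate with the best-off group's rate and flag anything below 0.8 for closer review. A minimal sketch in Python, with invented column names and data:

```python
# Minimal sketch of the four-fifths adverse-impact check.
# Not from the FINDHR toolkit; column names, data and the 0.8 threshold
# are illustrative assumptions.

import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical screening outcomes: 1 = invited to interview, 0 = rejected
candidates = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "selected": [1, 0, 0, 0, 1, 1, 0, 1],
})

ratios = adverse_impact_ratio(candidates, "gender", "selected")
print(ratios)                # f: 0.33, m: 1.00
print((ratios < 0.8).any())  # True -> trigger human review, not proof of bias
```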

@ErikJonker @geopolitics This is such an important observation about how platform design shapes public discourse!
It’s both fascinating and concerning to see how the same factual information can spark constructive conversation on one platform and devolve into disinformation on another. The contrast between Bluesky and X really underscores how algorithms and moderation policies influence the quality of dialogue.

For me, the fediverse—especially Mastodon—has been a breath of fresh air in this regard. It feels like a space where facts and evidence-based discussions can thrive, rooted in a shared reality rather than outrage or misinformation. But I’m curious: Is this just my personal experience, or do others share the impression that Mastodon fosters a more fact-based discussion environment? Have you explored other platforms beyond Bluesky that prioritize constructive dialogue?

It’s disheartening to see how platforms that prioritize engagement over accuracy can drown out meaningful conversations. While I’m fortunate enough to avoid Twitter/X and Meta, I recognize that transitioning to open-source, decentralized social networks isn’t feasible for everyone. This makes me wonder: How can we encourage more platforms to adopt models that foster informed debate rather than outrage? Supporting not-for-profit or decentralized alternatives might be part of the solution, but it’s a challenge that requires broader awareness and action.

Thanks for sharing this—it’s a powerful reminder of how critical platform design is to the health of our digital public spaces!

#DigitalLiteracy #PlatformDesign #EvidenceBasedDiscourse #Fediverse #Mastodon #TechEthics #ConstructiveDialogue #AlgorithmicBias #FactOverFiction
#twitter #Bluesky

What does it actually mean when we say that generative AI raises ethical questions?
🔵 Dr. Thilo Hagendorff, our research group leader at IRIS3D, has taken this question seriously and systematically. With his interactive Ethics Tree, he has created one of the most comprehensive overviews of ethical problem areas in generative AI: lnkd.in/ebzZYaU7
More than 300 clearly defined issues – ranging from discrimination and disinformation to ecological impacts – demonstrate the depth and scope of the ethical landscape. This “tree” does not merely highlight risks, but structures a field that is increasingly under pressure politically, technologically, and socially.
Mapping these questions so systematically underlines the need for ethical reflection as a core competence in AI research – not after the fact, but as part of the epistemic and technical process.

#GenerativeAI
#AIethics
#ResponsibleAI
#EthicsInAI
#TechEthics
#AIresearch
#MachineLearning
#AIgovernance
#DigitalEthics
#AlgorithmicBias
#Disinformation
#SustainableAI
#InterdisciplinaryResearch
#ScienceAndSociety
#IRIS3D

THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE

NO RECOGNITION FOR THE AUTHOR

YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There's no path for authorship to emerge or be noticed. The “like” system favors early commenters — the infamous firsts — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it. This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.

USERS WHO’VE STOPPED THINKING

The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.

ZERO MODERATION FOR SMALL CREATORS

Small creators have no support when it comes to moderation. In low-traffic streams, there's no way to filter harassment or mockery. Trolls can show up just to enjoy someone else's failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.

EXTERNAL LINKS ARE SABOTAGED

Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.

HUMANS CAN’T OUTPERFORM THE MACHINE

At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.

#HSLdiary #HSLmichael

"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

europarl.europa.eu/thinktank/e

European Parliament Think Tank · Algorithmic discrimination under the AI Act and the GDPR
#EU #AI #AIAct

"In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

lighthousereports.com/investig

Lighthouse Reports · Sweden’s Suspicion Machine: Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented
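For a sense of what such a dataset analysis can involve (a hypothetical sketch, not Lighthouse Reports' or Svenska Dagbladet's actual methodology), one elementary step is testing whether flag rates differ between groups by more than chance would explain:

```python
# Hypothetical sketch: do flag rates differ between two groups more than
# chance would explain? The counts below are invented for illustration and
# are not from the Swedish dataset.

from scipy.stats import chi2_contingency

#              flagged  not flagged
contingency = [[320,    1680],   # group A (e.g. women)
               [150,    1850]]   # group B (e.g. men)

chi2, p_value, dof, expected = chi2_contingency(contingency)

print(f"flag rates: {320/2000:.1%} vs {150/2000:.1%}, p = {p_value:.3g}")
# A large, significant gap in who gets flagged is grounds for an audit of the
# model and its features; on its own it is not proof of unlawful discrimination.
```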

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The Conversation · AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond. The damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The ConversationAI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respondThe damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.

"This technical report presents findings from a two-phase analysis investigating potential algorithmic bias in engagement metrics on X (formerly Twitter) by examining Elon Musk’s account against a group of prominent users and subsequently comparing Republican-leaning versus Democrat-leaning accounts. The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.

In Phase One, focused on Elon Musk’s account, the analysis identified a marked differential uplift across all engagement metrics (view counts, retweet counts, and favourite counts) following the detected change point. Musk’s account not only started with a higher baseline compared to the other accounts in the analysis but also received a significant additional boost post-change, indicating a potential algorithmic adjustment that preferentially enhanced visibility and interaction for Musk’s posts.

In Phase Two, comparing Republican-leaning and Democrat-leaning accounts, we again observed an engagement shift around the same date, affecting all metrics. However, only view counts showed evidence of a group-specific boost, with Republican-leaning accounts exhibiting a significant post-change increase relative to Democrat-leaning accounts. This finding suggests a possible recommendation bias favouring Republican content in terms of visibility, potentially via recommendation mechanisms such as the "For You" feed. Conversely, retweet and favourite counts did not display the same group-specific boost, indicating a more balanced distribution of engagement across political alignments."

eprints.qut.edu.au/253211/

QUT ePrints · Graham, Timothy & Andrejevic, Mark (2024). A computational analysis of potential algorithmic bias on platform X during the 2024 US election. [Working Paper] (Unpublished)
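As a rough illustration of the structural-break idea described above (not the authors' actual method or data), a single change point in a daily engagement series can be located by minimising the squared error around segment means:

```python
# Rough illustration of single change point detection on an engagement series.
# The data is synthetic; it mimics a step change like the mid-July 2024 break
# the report describes, but is not drawn from platform X.

import numpy as np

def single_change_point(series: np.ndarray) -> int:
    """Index at which splitting the series into two segments best fits the data."""
    n = len(series)
    best_idx, best_cost = 1, np.inf
    for i in range(1, n - 1):
        left, right = series[:i], series[i:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

rng = np.random.default_rng(0)
views = np.concatenate([
    rng.normal(1_000_000, 50_000, 60),   # baseline period
    rng.normal(1_400_000, 50_000, 60),   # post-shift period (~40% uplift)
])

print(single_change_point(views))   # ~60, i.e. the simulated break point
```

The report's "differential uplift" finding then amounts to comparing how large that post-break shift is for Musk's account, or for Republican-leaning accounts, relative to the comparison group.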

""Some of the starkest examples looked at how Google treats certain health questions. Google often pulls information from the web and shows it at the top of results to provide a quick answer, which it calls a Featured Snippet. Presch searched for "link between coffee and hypertension". The Featured Snippet quoted an article from the Mayo Clinic, highlighting the words "Caffeine may cause a short, but dramatic increase in your blood pressure." But when she looked up "no link between coffee and hypertension", the Featured Snippet cited a contradictory line from the very same Mayo Clinic article: "Caffeine doesn't have a long-term effect on blood pressure and is not linked with a higher risk of high blood pressure".

The same thing happened when Presch searched for "is ADHD caused by sugar" and "ADHD not caused by sugar". Google pulled up Featured Snippets that support both sides of the question, again taken from the same article. (In reality, there's little evidence that sugar affects ADHD symptoms, and it certainly doesn't cause the disorder.)"

bbc.com/future/article/2024103

BBC · The 'bias machine': How Google tells you what you want to hear. By Thomas Germain