#algorithmicbias


The Conversation: Women’s sports are fighting an uphill battle against our social media algorithms. “Algorithms, trained to maximise engagement and profits, are deciding what appears in your feed, which video auto-plays next, and which highlights are pushed to the top of your screen. But here is the problem: algorithms prioritise content that is already popular. That usually means men’s […]

https://rbfirehose.com/2025/05/12/the-conversation-womens-sports-are-fighting-an-uphill-battle-against-our-social-media-algorithms/


THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE

NO RECOGNITION FOR THE AUTHOR

YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There's no path for authorship to emerge or be noticed. The “like” system favors early commenters — the infamous firsts — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it. This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.

USERS WHO’VE STOPPED THINKING

The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.

ZERO MODERATION FOR SMALL CREATORS

Small creators have no support when it comes to moderation. In low-traffic streams, there's no way to filter harassment or mockery. Trolls can show up just to enjoy someone else's failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.

EXTERNAL LINKS ARE SABOTAGED

Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.

HUMANS CAN’T OUTPERFORM THE MACHINE

At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.

#HSLdiary #HSLmichael

"After the entry into force of the Artificial Intelligence (AI) Act in August 2024, an open question is its interplay with the General Data Protection Regulation (GDPR). The AI Act aims to promote human-centric, trustworthy and sustainable AI, while respecting individuals' fundamental rights and freedoms, including their right to the protection of personal data. One of the AI Act's main objectives is to mitigate discrimination and bias in the development, deployment and use of 'high-risk AI systems'. To achieve this, the act allows 'special categories of personal data' to be processed, based on a set of conditions (e.g. privacy-preserving measures) designed to identify and to avoid discrimination that might occur when using such new technology. The GDPR, however, seems more restrictive in that respect. The legal uncertainty this creates might need to be addressed through legislative reform or further guidance."

europarl.europa.eu/thinktank/e

www.europarl.europa.eu · Algorithmic discrimination under the AI Act and the GDPR | Think Tank | European Parliament
#EU #AI #AIAct

The Conversation: Unrest in Bangladesh is revealing the bias at the heart of Google’s search engine. “…while Google’s search results are shaped by ostensibly neutral rules and processes, research has shown these algorithms often produce biased results. This problem of algorithmic bias is again being highlighted by recent escalating tensions between India and Bangladesh and cases of […]

https://rbfirehose.com/2025/02/17/the-conversation-unrest-in-bangladesh-is-revealing-the-bias-at-the-heart-of-googles-search-engine/


"In October 2021, we sent a freedom-of-information request to the Social Insurance Agency attempting to find out more. It immediately rejected our request. Over the next three years, we exchanged hundreds of emails and sent dozens of freedom-of-information requests, nearly all of which were rejected. We went to court, twice, and spoke to half a dozen public authorities.

Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education.

Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer."

lighthousereports.com/investig

Lighthouse Reports · Sweden’s Suspicion Machine: Behind a veil of secrecy, the social security agency deploys discriminatory algorithms searching for a fraud epidemic it has invented
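
The excerpt doesn't say how the reporters quantified the discrimination, but a minimal sketch of the kind of flag-rate disparity check such a dataset invites could look like the following. All data, column names and numbers here are hypothetical, not drawn from the Swedish dataset:

```python
# Hypothetical sketch of a flag-rate disparity check on a fraud-scoring
# system. The data is invented; the journalists' actual methodology is
# not described in the excerpt above.
import pandas as pd

applicants = pd.DataFrame({
    "sex":     ["woman", "man", "woman", "man", "woman", "man", "woman", "man"],
    "flagged": [1, 0, 1, 1, 1, 0, 1, 0],   # 1 = selected for fraud investigation
})

# Share of each group that was flagged.
flag_rates = applicants.groupby("sex")["flagged"].mean()
print(flag_rates)

# Disparate-impact style ratio: values well above 1 mean the group is
# flagged disproportionately often relative to the reference group.
print("woman/man flag-rate ratio:", flag_rates["woman"] / flag_rates["man"])
```

A real audit would control for legitimate risk factors and use the full applicant population as a baseline; this only illustrates the basic rate comparison.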

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The Conversation · AI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respond
The damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.

"Categorizing the types of algorithmic harms delineates the legal boundaries of AI regulation and presents possible legal reforms to bridge this accountability gap. Changes I believe would help include mandatory algorithmic impact assessments that require companies to document and address the immediate and cumulative harms of an AI application to privacy, autonomy, equality and safety – before and after it’s deployed. For instance, firms using facial recognition systems would need to evaluate these systems’ impacts throughout their life cycle.

Another helpful change would be stronger individual rights around the use of AI systems, allowing people to opt out of harmful practices and making certain AI applications opt in. For example, requiring an opt-in regime for data processing by firms’ use of facial recognition systems and allowing users to opt out at any time.

Lastly, I suggest requiring companies to disclose the use of AI technology and its anticipated harms. To illustrate, this may include notifying customers about the use of facial recognition systems and the anticipated harms across the domains outlined in the typology."

theconversation.com/ai-harm-is

The ConversationAI harm is often behind the scenes and builds over time – a legal scholar explains how the law can adapt to respondThe damage AI algorithms cause is not easily remedied. Breaking algorithmic harms into four categories results in pieces that better align with the law and points the way to better regulation.

🚨NEW study from Dr Graham & Dr Andrejevic from @qutdmrc with eye-opening 👀 findings!
The computational analysis of engagement found that X's algorithm was changed in July 2024 to boost Republican-leaning accounts & Elon Musk's own account during the US election.
Elon Musk's engagement 🚀: they found a significant boost in Musk's view, retweet, and like counts around July 13th, 2024, coinciding with his Trump endorsement! 🤔

#AlgorithmicBias #USElection2024 #Twitter #X
eprints.qut.edu.au/253211/

"This technical report presents findings from a two-phase analysis investigating potential algorithmic bias in engagement metrics on X (formerly Twitter) by examining Elon Musk’s account against a group of prominent users and subsequently comparing Republican-leaning versus Democrat-leaning accounts. The analysis reveals a structural engagement shift around mid-July 2024, suggesting platform-level changes that influenced engagement metrics for all accounts under examination. The date at which the structural break (spike) in engagement occurs coincides with Elon Musk’s formal endorsement of Donald Trump on 13th July 2024.

In Phase One, focused on Elon Musk’s account, the analysis identified a marked differential uplift across all engagement metrics (view counts, retweet counts, and favourite counts) following the detected change point. Musk’s account not only started with a higher baseline compared to the other accounts in the analysis but also received a significant additional boost post-change, indicating a potential algorithmic adjustment that preferentially enhanced visibility and interaction for Musk’s posts.

In Phase Two, comparing Republican-leaning and Democrat-leaning accounts, we again observed an engagement shift around the same date, affecting all metrics. However, only view counts showed evidence of a group-specific boost, with Republican-leaning accounts exhibiting a significant post-change increase relative to Democrat-leaning accounts. This finding suggests a possible recommendation bias favouring Republican content in terms of visibility, potentially via recommendation mechanisms such as the "For You" feed. Conversely, retweet and favourite counts did not display the same group-specific boost, indicating a more balanced distribution of engagement across political alignments."

eprints.qut.edu.au/253211/

eprints.qut.edu.au · A computational analysis of potential algorithmic bias on platform X during the 2024 US election | QUT ePrints
Graham, Timothy & Andrejevic, Mark (2024) A computational analysis of potential algorithmic bias on platform X during the 2024 US election. [Working Paper] (Unpublished)
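
As a rough illustration of the before/after comparison the report describes, the sketch below splits a synthetic daily engagement series at the 13 July 2024 endorsement date and compares the means on either side. Everything here is assumed (synthetic counts, a hand-picked break date); the QUT working paper uses formal change-point detection and regression modelling rather than this naive split:

```python
# Hypothetical sketch of a pre/post engagement comparison around a
# suspected structural break (13 July 2024). Synthetic data only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("2024-06-01", "2024-08-31", freq="D")
views = rng.poisson(100, len(dates)).astype(float)
views[dates >= "2024-07-13"] *= 2.4          # simulate a post-break uplift

series = pd.Series(views, index=dates)
pre = series[series.index < "2024-07-13"]
post = series[series.index >= "2024-07-13"]

print("pre-break mean daily views :", round(pre.mean(), 1))
print("post-break mean daily views:", round(post.mean(), 1))
print("uplift factor              :", round(post.mean() / pre.mean(), 2))
```

Repeating the same split separately for Republican-leaning and Democrat-leaning accounts and comparing their uplift factors would be the naive analogue of the report's Phase Two group comparison.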

""Some of the starkest examples looked at how Google treats certain health questions. Google often pulls information from the web and shows it at the top of results to provide a quick answer, which it calls a Featured Snippet. Presch searched for "link between coffee and hypertension". The Featured Snippet quoted an article from the Mayo Clinic, highlighting the words "Caffeine may cause a short, but dramatic increase in your blood pressure." But when she looked up "no link between coffee and hypertension", the Featured Snippet cited a contradictory line from the very same Mayo Clinic article: "Caffeine doesn't have a long-term effect on blood pressure and is not linked with a higher risk of high blood pressure".

The same thing happened when Presch searched for "is ADHD caused by sugar" and "ADHD not caused by sugar". Google pulled up Featured Snippets that argue support both sides of the question, again taken from the same article. (In reality, there's little evidence that sugar affects ADHD symptoms, and it certainly doesn't cause the disorder.)""

bbc.com/future/article/2024103

BBC · The 'bias machine': How Google tells you what you want to hear. By Thomas Germain