Last week I was invited to discuss content moderation and freedom of expression in the context of the EU's Digital Services Act (DSA).

Not sure the panel recording will be public, so here are my notes in a long 🧵 thread:

▶️ The DSA is rather good at respecting freedom of expression: it gives platforms some new tools to carefully consider and improve their moderation practices, and it gives users new means to defend their rights. [1/8]

▶️ That of course requires heavy, heavy investment from the largest platforms: they must acknowledge that their cost of operation per user will increase considerably, reflecting the real societal costs, externalities included, of running global, centralised social media platforms. [2/8]

▶️ Media pluralism and a strong free press are a different matter: publishers are pushing for news media to be exempted from social media content moderation because their content is supposedly of better quality and higher value. But what (online) journalism suffers from most is not the systematic or large-scale removal of legitimate news content (although that can happen and is annoying when it does). The media exemptions from content moderation… [3/8]

...demanded in the EU's European Media Freedom Act are not going to help. They set a dangerous precedent for a two-tier system that will benefit some very prolific producers of disinformation and hatred online: think of hyper-partisan broadcasters, tabloid outlets, or any state-affiliated media. There's a reason why even temporary ‘must carry’ obligations were rejected in the DSA, and they should absolutely not be added to the EMFA. [4/8]

▶️ The real problem is the staggering dependency of online journalism on corporate social media platforms to reach its audience, and the massive amounts of ad revenue sucked up by middlemen for supposedly well-targeted ads. It's the classic gatekeeper problem: the harm these gatekeepers inflict on our attention can be measured directly in the loss of independence of, and income for, the very journalism such laws are supposed to protect. [5/8]

▶️ On top of that, free expression in the EU would hugely benefit from more competition in social media, and that can only be effectively achieved through its decentralisation, the way it's done by ActivityPub-based services in the fediverse. That's of course not something the DSA can achieve alone (the DMA could have helped, with social media interoperability), but it's a worthwhile political goal that needs both legislative and financial support. [6/8]
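To make the interoperability point concrete, here is a minimal sketch of why ActivityPub-based services federate so easily: every server exchanges the same plain JSON activities, as defined in the public W3C spec. The URLs below are made up for illustration.

```python
import json

# A minimal ActivityPub "Follow" activity as defined by the W3C spec.
# Any server that speaks the protocol can produce or consume this,
# which is what makes fediverse services interoperable by default.
follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://eupolicy.social/users/alice",  # follower on one server
    "object": "https://other.example/users/bob",     # followee on another server
}

# Delivery is just an HTTP POST of this JSON to the remote actor's inbox
# (signed with HTTP Signatures in real deployments; omitted here).
print(json.dumps(follow_activity, indent=2))
```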

▶️ So, what does this mean for the elephant in the room, the risk assessments under the DSA? They should absolutely look at how VLOPs are improving their content moderation practices, including employing qualified moderators and providing top-notch working conditions for them. Good content moderation cannot simply be measured by how many posts have been removed or accounts blocked. That's the kind of stuff VLOPs love, because with 3 billion users those numbers will always appear big, even if they are meaningless. [7/8]
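A quick back-of-the-envelope illustration of why those big absolute numbers say so little (the removal figure is hypothetical; the ~3 billion users is from the post above):

```python
# Hypothetical: a headline like "30 million posts removed" sounds impressive...
users = 3_000_000_000        # ~3 billion users, as above
posts_removed = 30_000_000

removals_per_user = posts_removed / users
print(f"{removals_per_user:.3f} removals per user")  # 0.010, i.e. 1 per 100 users

# And the absolute count says nothing about quality either: it could be
# 30 million accurate decisions or 30 million wrongful takedowns.
```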

▶️ Risk assessments around content should of course also cover paid content, aka ads. Ads still enable wealthy people to buy ad space to flood the (digital) public space with their agenda, including targeted disinformation. But risk assessments also need to look at the role VLOPs and VLOSEs (very large online platforms and search engines) play in recommending so-called organic content to users, and at the personal data processing that underpins it. [8/8]

OK, this thread was so long it should have been a blog post, sorry :)

The end: Offering various recommender algorithms with different optimisation goals for different audiences, including from third-party providers, would be one very interesting way to mitigate the risks that stem from the algorithmic monopoly we're currently locked into on all VLOPs. [the real end]
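For the technically minded, a minimal sketch of what pluggable recommenders could look like; the interface, class names, and optimisation goals below are all hypothetical, purely to illustrate the idea:

```python
from typing import Protocol, Sequence

class Recommender(Protocol):
    """A pluggable feed-ranking component a platform could open to third parties."""
    def rank(self, candidate_post_ids: Sequence[str], user_id: str) -> list[str]: ...

class ChronologicalRecommender:
    """Optimisation goal: recency; no profiling at all."""
    def rank(self, candidate_post_ids: Sequence[str], user_id: str) -> list[str]:
        return list(candidate_post_ids)  # assume candidates arrive newest-first

class DiversityRecommender:
    """Optimisation goal: source diversity instead of engagement."""
    def rank(self, candidate_post_ids: Sequence[str], user_id: str) -> list[str]:
        by_source: dict[str, list[str]] = {}
        for post_id in candidate_post_ids:            # e.g. "outlet-a/123"
            by_source.setdefault(post_id.split("/")[0], []).append(post_id)
        ranked: list[str] = []
        queues = list(by_source.values())
        while any(queues):                            # round-robin across sources
            for q in queues:
                if q:
                    ranked.append(q.pop(0))
        return ranked

# The user, not the platform, picks which goal their feed optimises for:
feed: Recommender = DiversityRecommender()
print(feed.rank(["outlet-a/1", "outlet-a/2", "outlet-b/1"], user_id="alice"))
# -> ['outlet-a/1', 'outlet-b/1', 'outlet-a/2']
```

The point of the shared interface is that a third party would only need to implement rank() to offer users an alternative feed with a different optimisation goal.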