
#internetregulation

Weizenbaum-Institut
@claraigk #DigitalServicesAct #DSA #PlatformGovernance #SystemicRisk #TechPolicy #InternetRegulation #DigitalRights #PublicPolicy #PowerDynamics #EUlaw @lkseiling
Matt Hodgkinson
"On 8 May, 2025, the @wikimediafoundation, the nonprofit that hosts @wikipedia, announced that it is challenging the lawfulness of the UK’s Online Safety Act (OSA)’s Categorisation Regulations. We are arguing that they place Wikipedia and its users at unacceptable risk of being subjected to the OSA’s toughest “Category 1” duties, which were originally designed to target some of the UK’s riskiest websites."

https://medium.com/wikimedia-policy/wikipedias-nonprofit-host-brings-legal-challenge-to-new-online-safety-act-osa-regulations-0f9153102f29

#Wikipedia #WikimediaFoundation #OnlineSafetyAct #OnlineSafety #UKlaw #InternetRegulation
Miguel Afonso Caetano<p><a href="https://tldr.nettime.org/tags/USA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>USA</span></a> <a href="https://tldr.nettime.org/tags/EU" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>EU</span></a> <a href="https://tldr.nettime.org/tags/SocialMedia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SocialMedia</span></a> <a href="https://tldr.nettime.org/tags/Copyright" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Copyright</span></a> <a href="https://tldr.nettime.org/tags/InternetRegulation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InternetRegulation</span></a> <a href="https://tldr.nettime.org/tags/ContentModeration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ContentModeration</span></a>: "Around the world, lawmakers are enacting laws that require platforms to change their operations, and that use language like “design,” “risk mitigation,” or “systems.” All too often, these are transparently euphemisms for mandates that target legally protected speech and information. This misdirection keeps lawmakers and their constituents from having honest discussions about the laws. It also muddies the waters for laws that actually do regulate platform design without regulating users’ speech. Both speech-restrictive and non-speech-restrictive design laws exist. It can be hard to tell them apart. </p><p>I recently stumbled across a very interesting tool for assessing what these laws actually mean: I asked ChatGPT. Specifically, I asked a customized version of ChatGPT called the “Trust &amp; Safety Regulation expert” about laws like the EU’s Digital Services Act (DSA) and the U.S.’s draft Kids Online Safety Act (KOSA). I was surprised by the answers I got. While lawyers may debate the finer nuances of such laws, ChatGPT says the quiet part out loud. It clearly and bluntly tells platforms that the laws require them to suppress legal expression. </p><p>The annotated transcripts showing what ChatGPT told me are here for the US and here for Europe. They include discussion of topics I won’t discuss here--about things like EU “Right to Be Forgotten” law and copyright filters, and Texas’s social media law. The transcripts are are fascinating, and I won’t be offended if you go straight to the transcripts instead of reading the rest of this post. The post is about how we arrived at laws regulating things like "design features," what ChatGPT said, and why it matters."</p><p><a href="https://cyberlaw.stanford.edu/blog/2024/08/regulating-platform-risk-and-design-chatgpt-says-the-quiet-part-out-loud/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">cyberlaw.stanford.edu/blog/202</span><span class="invisible">4/08/regulating-platform-risk-and-design-chatgpt-says-the-quiet-part-out-loud/</span></a></p>
Miguel Afonso Caetano<p><a href="https://tldr.nettime.org/tags/UK" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>UK</span></a> <a href="https://tldr.nettime.org/tags/SocialMedia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SocialMedia</span></a> <a href="https://tldr.nettime.org/tags/InternetRegulation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InternetRegulation</span></a> <a href="https://tldr.nettime.org/tags/HateSpeech" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HateSpeech</span></a> <a href="https://tldr.nettime.org/tags/Amplification" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Amplification</span></a> <a href="https://tldr.nettime.org/tags/Riots" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Riots</span></a>: "The last question is whether we need more law anyway. There’s already a lot of law out there. When the dust settles, we’ll see that people have been prosecuted under public order legislation, under malicious communications legislation, for communications offences, and so on. Punishing those who actually riot is not going to be a problem. Punishing those who used social media to instigate these acts is not likely to prove a problem either.</p><p>Punishing those behind the acts is another matter. It seems notable to me that of the many proposals being mentioned by politicians so far, none seem to be even trying to hold those whose rhetoric, both online and offline, have made it all happen, to account. Until and unless they do, the rest is all irrelevant."</p><p><a href="https://paulbernal.wordpress.com/2024/08/15/riots-and-social-media-regulation-some-thoughts/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">paulbernal.wordpress.com/2024/</span><span class="invisible">08/15/riots-and-social-media-regulation-some-thoughts/</span></a></p>
Miguel Afonso Caetano<p><a href="https://tldr.nettime.org/tags/USA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>USA</span></a> <a href="https://tldr.nettime.org/tags/Disinformation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Disinformation</span></a> <a href="https://tldr.nettime.org/tags/Libertarianism" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Libertarianism</span></a> <a href="https://tldr.nettime.org/tags/InternetRegulation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InternetRegulation</span></a> <a href="https://tldr.nettime.org/tags/ContentModeration" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ContentModeration</span></a>: "Disinformation concerns have heightened the importance of regulating content and speech in digital communication environments. Perceived risks have led to widespread public support for stricter control measures, even at the expense of individual speech rights. To better understand these preferences in the US context, we investigate public attitudes regarding blame for and obligation to address digital disinformation by drawing on political ideology, libertarian values, trust in societal actors, and issue salience. A manual content analysis of open-ended survey responses in combination with an issue salience experiment shows that political orientation and trust in actors primarily drive blame attribution, while libertarianism predominantly informs whose obligation it is to stop the spread. Additionally, enhancing the salience of specific aspects of the issue can influence people's assessments of blame and obligation. Our findings reveal a range of attributions, underlining the need for careful balance in regulatory interventions. Additionally, we expose a gap in previous literature by demonstrating libertarianism's unique role vis-à-vis political orientation in the context of regulating content and speech in digital communication environments."</p><p><a href="https://onlinelibrary.wiley.com/doi/10.1002/poi3.407" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">onlinelibrary.wiley.com/doi/10</span><span class="invisible">.1002/poi3.407</span></a></p>
Netopia EU
#surveillancecapitalism #privacy #internetregulation

"Only the EU has thus far passed regulation to stop large platforms’ profiling of minors for advertising purposes." ⤵️
---
RT @AmnestyTech
We, @amnesty have previously called for a ban on targeted advertising, which relies on the invasive tracking of users. Only the EU has thus far passed regulation to stop large platforms’ profiling of minors for advertising purposes.

https://www…
https://twitter.com/AmnestyTech/status/1639294093707603968