Philipp Hacker

1/3
Absolutely thrilled that our paper on Regulating ChatGPT and other Large Generative AI Models was accepted at ACM FAccT, perhaps the leading conference and publication venue on AI Law and Ethics. This is joint work w/ Andreas Engel and Marco Mauer. Many thanks to everyone who provided great feedback on the paper!

Link to the paper: arxiv.org/abs/2302.02337

Link to the conference: facctconference.org/

arXiv.org: Regulating ChatGPT and other Large Generative AI Models

Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep pace with the dynamics of ChatGPT et al.
#chatgpt #gpt4 #law

2/3
In the paper, we make three main claims: First, we show that current EU regulation of generative AI is seriously misguided, inter alia because it aims to regulate models directly rather than their use cases.
Second, we seek to shift the focus from treating generative AI as a generic high-risk technology to the real and proven dangers of its use for fake news and hate speech. Here, the DSA, not the AI Act, is the right instrument.

3/3
Third, we suggest a range of policy options, from nuanced transparency provisions and tailored risk management to non-discrimination provisions and a meaningful extension of the DSA rules to generative AI.