Oxford reloaded 🍿: Here's the video of my talk on Regulating ChatGPT at Magdalen College, University of Oxford. I have included comments on the latest EP proposals concerning the AI Act and on the temporary Italian ChatGPT ban (which was, IMHO, quite justified).

Many thanks again to Jeremias Adams-Prassl for the incredible hospitality, to Amelie Sophie Berz for the great organization and discussions, and to the fantastic Oxford audience for an exciting Q&A!

Video: mycloud.europa-uni.de/s/tkygDM

Link to the ACM FAccT paper, which already includes the comments on the Italian ban (a new, updated version covering the EP AI Act developments is coming next week): arxiv.org/abs/2302.02337

Regulating ChatGPT and other Large Generative AI Models (arXiv abstract):

Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep track with the dynamics of ChatGPT et al.
#ai #gdpr #aiact