Philipp Hacker

1/n Here comes the deep dive: Thrilled to have just released a new paper on Regulating ChatGPT and other Large AI Models: arxiv.org/abs/2302.02337

IMHO, the current debate is quite absurd in some respects. What do you think qualifies as a high-risk application of AI according to the latest EP proposals?

arXiv.org: Regulating ChatGPT and other Large Generative AI Models
Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep track with the dynamics of ChatGPT et al.

2/n

The 3 options:
1) AI in autonomous vehicles
2) propaganda trolls using ChatGPT for the mass generation of hate speech, which they then post and propagate on social media
3) me using ChatGPT and Stable Diffusion to design a birthday invitation for our daughter, being too busy to properly review it, and sending it out to fellow parents.

3/n
As you can easily imagine, Option 3 (the birthday invite) is the right answer - no kidding! And if you think "wait a minute, this sounds preposterous", you're not alone. If you did not pick Option 3, I invite you to read our paper (see 1/n for the link).

4/n
We critique the EP co-rapporteur proposal that EURACTIV's Luca Bertuzzi just reported on, which would include Large Generative AI Models (LGAIMs) in Annex III of the AI Act. Importantly, this would have exactly the absurd consequences just outlined. Is a botched birthday invite really comparable to the killing of a human being, to failing to recruit someone, to denying her credit, or to profiling him for health insurance? To ask the question is to answer it - in the negative.