1/n Here comes the deep dive on the #law of #ChatGPT: Thrilled to have just released a new paper on Regulating ChatGPT and other Large Generative AI Models: https://arxiv.org/abs/2302.02337
IMHO, the current debate is quite absurd in some respects. What do you think qualifies as a high-risk application of AI according to the latest EP proposals?
2/n
The 3 Options:
1) AI in autonomous vehicles
2) propaganda trolls using ChatGPT for the mass generation of hate speech which they then post and propagate on social media
3) me using ChatGPT and Stable Diffusion to design a birthday invitation for our daughter, being too busy to review it properly, and sending it out to fellow parents.
3/n
As you can easily imagine, Option 3 (the birthday invite) is the right answer - no kidding! And if you think: wait a minute, this sounds preposterous, you're not alone. If you did not pick Option 3, I invite you to read our paper (see 1/n for the link).
4/n
We critique the EP co-rapporteur proposal that EURACTIV's Luca Bertuzzi just reported on, which would include Large Generative AI Models (LGAIMs) in Annex III of the AI Act. Importantly, this would have exactly the absurd consequences just outlined. Is a botched birthday invite really comparable to the killing of a human being, to failing to recruit someone, to denying a person credit, or to profiling them for health insurance? To ask the question is to answer it: in the negative.