Regulating ChatGPT and other Large Generative AI Models

Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are
rapidly transforming the way we communicate, illustrate, and create. However,
AI regulation, in the EU and beyond, has primarily focused on conventional AI
models, not LGAIMs. This paper situates these new generative models in the
current debate on trustworthy AI regulation and asks how the law can be
tailored to their capabilities. After laying technical foundations, the legal
part of the paper proceeds in four steps, covering (1) direct regulation, (2)
data protection, (3) content moderation, and (4) policy proposals. It suggests
a novel terminology to capture the AI value chain in LGAIM settings by
differentiating between LGAIM developers, deployers, professional and
non-professional users, as well as recipients of LGAIM output. We tailor
regulatory duties to these different actors along the value chain and suggest
four strategies to ensure that LGAIMs are trustworthy and deployed for the
benefit of society at large. Rules in the AI Act and other direct regulation
must match the specificities of pre-trained models. In particular, regulation
should focus on concrete high-risk applications, and not the pre-trained model
itself, and should include (i) obligations regarding transparency and (ii) risk
management. Non-discrimination provisions (iii) may, however, apply to LGAIM
developers. Lastly, (iv) the core of the DSA content moderation rules should be
expanded to cover LGAIMs. This includes notice and action mechanisms, and
trusted flaggers. In all areas, regulators and lawmakers need to act fast to
keep pace with the dynamics of ChatGPT et al.