eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This Mastodon server is a friendly and respectful discussion space for people working in areas related to EU policy. When you request to create an account, please tell us something about yourself.

Server stats:

239 active users

#ITLaw

1 post · 1 participant · 0 posts today

The final phase of #LokSabhaElections2024 is scheduled for 1 June. Many parties have released their #manifestos outlining their motivations, aims, and promises to the #public.

Team @sflcin has compared these manifestos, covering #technology-related #rights and #gigworker issues.

Read the complete comparison here: sflc.in/comparing-the-2024-lok

Continued thread

2/n
... Sandra Wachter, Brent Mittelstadt, Ramayya Krishnan, Sue Hendrickson, Rishi Bommasani, Jeremias Adams-Prassl, Rumman Chowdhury, Rasmus Rothe, Jonas Andrulis, Dora Kaufman, Kai Zenner, Orly Lobel, Alexandre Zavaglia Coelho, Andreas Engel, Sarah Hammer, Herbert Zech.
On the eve of the AI for Good Global Summit at ITU.

Spread the word!

#chatgpt #gpt4 #law

Grateful to have been interviewed for 90 (!) minutes by Thomas Schwenke and Marcus Richter for the splendid Rechtsbelehrung podcast. We talk about all things generative AI, the AI Act, the Italian DPA's ChatGPT ban, the failed Musk moratorium, content moderation, transparency, bias, ..., and other regulatory challenges.

Link: rechtsbelehrung.com/116-ai-act

Original in German, use your favorite AI translation tool :)

Rechtsbelehrung · EU AI Act: Groundbreaking AI regulation, or already outdated? (KI-Recht #3, Rechtsbelehrung 115): Why the EU's push for AI regulation must be fundamentally rethought and critically questioned even while the legislative process is still under way.
#chatgpt #gpt4 #law

Honored to feature in this timely article on the regulation of ChatGPT, alongside Jonas Andrulis. Many thanks to Christof Kerkmann for the excellent interview! Main take: let's regulate generative AI with respect to specific, high-risk use cases, not across the board as a foundational technology!

Comments welcome!

Link: app.handelsblatt.com/technik/i

@ens@sciences.social
 

Handelsblatt · Künstliche Intelligenz: Wie die EU ChatGPT künftig regulieren will (Artificial intelligence: how the EU plans to regulate ChatGPT), by Christof Kerkmann

1/3
Absolutely thrilled that our paper on Regulating ChatGPT and other Large Generative AI Models was accepted at ACM FAccT, perhaps the leading conference and publication venue on AI law and ethics. This is joint work w/ Andreas Engel and Marco Mauer. Many thanks to all the people and audiences who provided great feedback on the paper!

Link to the paper: arxiv.org/abs/2302.02337

Link to the conference: facctconference.org/

arXiv.org · Regulating ChatGPT and other Large Generative AI Models: Large generative AI models (LGAIMs), such as ChatGPT or Stable Diffusion, are rapidly transforming the way we communicate, illustrate, and create. However, AI regulation, in the EU and beyond, has primarily focused on conventional AI models, not LGAIMs. This paper will situate these new generative models in the current debate on trustworthy AI regulation, and ask how the law can be tailored to their capabilities. After laying technical foundations, the legal part of the paper proceeds in four steps, covering (1) direct regulation, (2) data protection, (3) content moderation, and (4) policy proposals. It suggests a novel terminology to capture the AI value chain in LGAIM settings by differentiating between LGAIM developers, deployers, professional and non-professional users, as well as recipients of LGAIM output. We tailor regulatory duties to these different actors along the value chain and suggest four strategies to ensure that LGAIMs are trustworthy and deployed for the benefit of society at large. Rules in the AI Act and other direct regulation must match the specificities of pre-trained models. In particular, regulation should focus on concrete high-risk applications, and not the pre-trained model itself, and should include (i) obligations regarding transparency and (ii) risk management. Non-discrimination provisions (iii) may, however, apply to LGAIM developers. Lastly, (iv) the core of the DSA content moderation rules should be expanded to cover LGAIMs. This includes notice and action mechanisms, and trusted flaggers. In all areas, regulators and lawmakers need to act fast to keep track with the dynamics of ChatGPT et al.
#chatgpt #gpt4 #law
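The abstract's value-chain terminology (LGAIM developers, deployers, professional and non-professional users, recipients of LGAIM output) can be made concrete with a small, purely illustrative Python sketch. The actor names follow the abstract; the duty assignments below are an assumed reading for illustration only (the abstract explicitly ties only non-discrimination to developers), not the paper's own mapping.

from enum import Enum, auto

class LGAIMActor(Enum):
    DEVELOPER = auto()              # trains and releases the pre-trained model
    DEPLOYER = auto()               # builds the model into a concrete application
    PROFESSIONAL_USER = auto()      # uses the application in a professional context
    NON_PROFESSIONAL_USER = auto()  # private, non-commercial use
    RECIPIENT = auto()              # receives LGAIM output without operating the system

# Hypothetical duty mapping, for illustration only. The strategy names come from
# the abstract ((i) transparency, (ii) risk management, (iii) non-discrimination,
# (iv) DSA-style content moderation with notice-and-action and trusted flaggers),
# but which actor bears which duty is an assumed reading, not the paper's text.
EXAMPLE_DUTIES = {
    LGAIMActor.DEVELOPER: ["non-discrimination"],
    LGAIMActor.DEPLOYER: ["transparency", "risk management", "DSA-style content moderation"],
    LGAIMActor.PROFESSIONAL_USER: ["transparency"],
    LGAIMActor.NON_PROFESSIONAL_USER: [],
    LGAIMActor.RECIPIENT: [],
}

if __name__ == "__main__":
    # Print the assumed actor-to-duty mapping.
    for actor, duties in EXAMPLE_DUTIES.items():
        print(f"{actor.name:>22}: {', '.join(duties) or 'no duties in this sketch'}")

Running the sketch simply prints the assumed actor-to-duty mapping; it mirrors the paper's point that duties attach to concrete roles in the value chain rather than to the pre-trained model as such.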
Continued thread

2/n
Rishi Bommasani, Jeremias Adams-Prassl, Rumman Chowdhury, Rasmus Rothe, Jonas Andrulis, Dora Kaufman, Kai Zenner, Orly Lobel, Alexandre Zavaglia Coelho, Andreas Engel, Sarah Hammer, Herbert Zech, Michael Veale, and a great audience. On the eve of the AI for Good Global Summit at ITU. Will be a blast!

Spread the word!

Link: europeannewschool.eu/genaiconf

Info on registration to follow soon!

All the best, and enjoy the holidays :-)

www.europeannewschool.eu · genaiconference23 - European New School of Digital Studies: Study and research on the digital transformation | PhD program | Fellowship program ▶ Ready to shape digital Europe? Join ENS!

Join us today at 4 PM CEST/10 AM EST sharp for a transatlantic and interdisciplinary discussion on AI liability. Hosted by the United Nations AI for Good series, our panel features

Join in person at AI Campus Berlin, or online via the UN's Neural Network platform or YouTube.

Links:
AI campus: aicampus.berlin/event/ai-liabi
UN ITU: aiforgood.itu.int/event/ai-lia

YouTube: youtube.com/live/Tasnt-Xp_20?f

www.aicampus.berlin · AI Campus Berlin
#chatgpt #gpt4 #law

Join us today at 2 PM CEST for an online discussion on the future of academia and teaching in the age of generative AI. Hosted by the United Nations' ITU AI for Good series and featuring students from all over the world, including from our own European New School of Digital Studies. And of course Sandra Wachter and myself. Will be exciting!

Link: loom.ly/DeMpm7c

#ai #ki #ml
Continued thread

2/2 It's incredible to see how quickly generative AI, and its intersection with law and regulation, is evolving at the moment. This is a time we will likely look back on as something quite special.

Stay tuned, and as always: comments most welcome!

#ai #ki #aiact
Continued thread
#ki #ai #ml

I just realized I never posted an intro toot... Let me fix that right away.
I'm Matteo, a lawyer working on IT and new-technologies law (a.k.a. #ITLaw), personal data processing (a.k.a. #GDPR and #Privacy), and #Cybersecurity. I'm a #DPO and a corporate consultant on the topics above, but I'm also a #software #developer. @zipgenius, @myownpassphrase, and Czip X (czip.it) are mine.

Nice to meet you all!

czip.it · Czip X – Reliable encryption for daily tasks