1/3 Honored to have contributed to the 6th European AI Forum with a talk on AI Liability, delivering a very short version of my longer paper (arxiv.org/abs/2211.13960). I use autonomous vehicles as my main example. Happy to share the video below!

Link: youtube.com/watch?v=QbCJmkeqeF. My talk starts at 58:00 min. It is followed by a (much more important) live intervention by the AI House (at 1:25 h), speaking about holding AI events in bunkers without ,

arXiv.org
The European AI Liability Directives -- Critique of a Half-Hearted Approach and Lessons for the Future
The optimal liability framework for AI systems remains an unsolved problem across the globe. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive and a revision of the Product Liability Directive. They constitute the final, and much-anticipated, cornerstone of AI regulation in the EU. Crucially, the liability proposals and the EU AI Act are inherently intertwined: the latter does not contain any individual rights of affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a Brussels effect in AI regulation, with significant consequences for the US and other countries. This paper makes three novel contributions. First, it examines in detail the Commission proposals and shows that, while making steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will primarily rest on disclosure of evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Hence, second, the article suggests amendments, which are collected in an Annex at the end of the paper. Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. This includes: a comprehensive framework for AI liability; provisions to support innovation; an extension to non-discrimination/algorithmic fairness, as well as explainable AI; and sustainability. I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).

2/3 the room illuminated by the glow of the screens only. Puts many things and debates we are having here into perspective (gas prices...)!

I cannot say "enjoy the video". But do watch it if you care about the potential of AI in the defense against the unprovoked RU aggression.

#ai #AIliability #ml