1/8 This week, EU lawmakers met to discuss the #AIAct.
Up for discussion: how will AI systems be classified as high-risk? And how will these risky systems be used?
So far, we remain concerned about how these risky AI systems will affect our #FundamentalRights...
4/8 @edri and 115 civil society groups urged EU lawmakers to close this loophole and put people first in the #AIAct.
Letting companies decide for themselves whether their AI systems are high-risk would compromise this regulation and endanger our human rights.
Read our call: https://edri.org/our-work/civil-society-statement-eu-close-loophole-article-6-ai-act-tech-lobby/
5/8 Also being discussed – should police and migration authorities reveal when they are using high-risk AI systems?
Governments say NO – they want high-risk AI in these areas kept secret, including some very risky use cases.
This lack of transparency is deeply concerning for several reasons…
6/8 AI is increasingly used for heightened, often racist #surveillance in migration and policing contexts.
Such use affects the most marginalised among us: racialised people, migrants & refugees.
It can cause grievous harm to our lives https://www.euronews.com/2023/04/24/as-ai-act-vote-nears-the-eu-needs-to-draw-a-red-line-on-racist-surveillance
7/8 Such loopholes in the #AIAct will undermine our #FundamentalRights and fully gut the law.
We need: A clear & objective process to decide which AI systems are high-risk – no #BigTech loopholes!
Transparency for ALL high-risk AI uses, including by police & migration authorities.
8/8 Negotiations between EU institutions on the #AIAct will continue.
Along with a broad civil society coalition, we'll keep advocating for an AI Act that prioritises the rights of people over profits and ensures that AI development and use are both accountable and transparent.