eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This Mastodon server is a friendly and respectful discussion space for people working in areas related to EU policy. When you request to create an account, please tell us something about yourself.

#aigovernance

Replied in thread

@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I am much more pragmatic in my stance on #aicoding, this was previously a long-running point of contention on the #StackExchange network that was never effectively resolved outside of a few clearly egregious cases.

The triple-net is that when it comes to certain parts of software—think of the SCO copyright trials over header files from a few decades back—in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have been a huge problem that has caused a number of people real #reputationalharm over the last couple of years.

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this a lot; it comes up more often than you might think. Again, I fully support your objectives, but just wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results that you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an #AIgovernance policy issue that matters to society and to #FOSS right now. I think that’s terrific!

AI systems are not just “tools” — they shape decisions, amplify asymmetries, and often operate outside clear legal accountability.

We don’t just need trustworthy AI.
We need governable AI.

This requires a bridge from abstract principles to operational standards, technical specs, and legal enforceability.

That bridge is missing — and we need to build it now.

"Backed by ten governments – Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as “core partners”), Current AI aims to “reshape” the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.

European governments and private companies also partnered to commit around €200bn to AI-related investments, currently the largest public-private AI investment in the world. In the run-up to the summit, Macron announced the country would attract €109bn worth of private investment in datacentres and AI projects “in the coming years”.

The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in Paris, which affirmed a number of shared priorities.

This includes promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.

However, the UK and US governments refused to sign the joint declaration."

computerweekly.com/news/366620

ComputerWeekly.com · AI Action Summit review: Differing views cast doubt on AI’s ability to benefit whole of society. By Sebastian Klovig Skelton

"We currently live in a reality where computers and algorithms decide which patients are on a list for priority surgery in a public hospital, flag fraud amongst people collecting public benefits or welfare and determine the severity of a domestic violence incident. These services are developed and implemented through our governments with little or no transparency about how, when, or why computers and algorithms are making these decisions.

Building off of the work of the AI procurement primer by NYU, Platoniq and Civio facilitated a workshop to try to start asking how we might create more transparent processes for AI procurement and design, using frameworks for participation and governance such as Decidim.

Approaching procurement and design often takes place through a closed technical or legal process. The workshop aimed to start a conversation and begin developing ideas on how to open up public AI governance and support safety, transparency, and public participation for the services and tools that affect regular people’s everyday lives."

journal.platoniq.net/en/wilder

Wilder Journal by Platoniq · Participatory Procurement and Design of AI. In the past decade, we have seen an increase in the initiatives related to the growth and development of AI, and so has government design and acquisition of AI systems.

"The guidelines appear designed to be both conservative and business-friendly simultaneously, leaving the risk that we have no clear rules on which systems are caught.

The examples at 5.2 of systems that could fall out of scope may be welcome – as noted, the reference to linear and logistic regression could be welcome for those involved in underwriting life and health insurance or assessing consumer credit risk. However, the guidelines will not be binding even when in final form and it is difficult to predict how market surveillance authorities and courts will apply them.

In terms of what triage and assessment in an AI governance programme is likely to look like as a result, there is some scope to triage out tools that will not be AI systems, but the focus will need to be on whether the AI Act obligations would apply to tools:"

dataprotectionreport.com/2025/

Data Protection Report · The Commission’s guidelines on AI systems – what can we infer? The EU’s AI Act imposes extensive obligations on the development and use of AI. Most of the obligations in the AI Act look to regulate the impact of
#EU #EC #AI

If you have a PhD somewhat close to the information systems field and want to be a postdoctoral researcher in AI governance research projects, here's your chance. We (Information Systems / Digital Economy and Society research group at University of Turku) are hiring 1–2 postdocs to join our ongoing projects on AI governance and responsible AI. Apply by 12 February.

More info here: ats.talentadore.com/apply/tutk

ats.talentadore.com · Postdoctoral Researcher in AI governance research projects. The Department of Management and Entrepreneurship at the Turku School of Economics invites applications for fixed-term Postdoctoral Researcher (1–2) position(s) until 31 December 2026. The employment relationship starts as soon as possible or by agreement.

Job description: The position will especially focus on research on AI governance undertaken by the Digital Economy and Society ([des.utu.fi](https://des.utu.fi/)) research group. The Postdoctoral Researcher will work on a portfolio of research projects focusing on different facets of AI governance (Responsible AI Through Governance, Forward-Looking AI Governance, AI for Work, and AI Governance for Resilience). The position is located at the [Information Systems Science](https://www.utu.fi/en/university/turku-school-of-economics/information-systems-science) subject. Duties of a Postdoctoral Researcher include: conducting research on AI governance, preparing applications for research funding, and supervising master’s theses.