#AITraining

A for-profit corporation that makes money off its users' content is accusing another company of trying to do exactly the same. How amusing is that? :-D

"Reddit said the AI company unlawfully used Reddit’s data for commercial purposes without paying for it and without abiding by the company’s user data policy, according to the complaint, which was filed Wednesday in California.

“Anthropic is in fact intentionally trained on the personal data of Reddit users without ever requesting their consent,” the complaint says, alleging that Anthropic’s conduct runs counter to how it “bills itself as the white knight of the AI industry.”

Reddit, the online discussion forum where users can post anonymously and ask each other questions, has reached formal agreements with both OpenAI and Google to license Reddit’s valuable human user data.

Anthropic didn’t immediately comment."

wsj.com/tech/ai/reddit-lawsuit

"Ai2 tested DataDecide across a wide range of datasets and model sizes, using 10 benchmarks to evaluate how well small models predict large-scale performance. The findings aren’t earth-shattering, but they present useful takeaways for AI developers and researchers.

For one, Ai2 found that small models (around 150 million parameters) can predict large-scale outcomes with surprising accuracy. Some benchmarks reached over 80% decision accuracy using just 0.01% of the compute compared to billion-parameter models.

Since small-model experiments use less compute than other methods, developers don’t need to run full-scale tests just to predict outcomes. “The promise of this work is lower compute costs during training,” said Pijanowski.

Ai2 found that scaling laws didn’t outperform the simpler method of ranking datasets by small-model results. Scaling laws, a more sophisticated and more costly testing method, aim to predict how accuracy improves with model size. For now, “just stick with ablating things at one scale,” advised Magnusson.

The findings should give LLM devs pause for thought, Hunt said: “There are scaling laws that have been derived from empirical studies between data volume, compute resources and performance. Ai2’s research points out that we may want to revisit some of those assumptions.”"

thenewstack.io/new-tools-help-

The New Stack · New Tools Help LLM Developers Choose Better Pre-Training Data: Ai2 finds that large language model developers can reach 80% accuracy in dataset selection without costly compute.
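For intuition, here is a minimal sketch of the "decision accuracy" idea described above, read as pairwise ranking agreement: the fraction of candidate-dataset pairs where the small model and the large model agree on which dataset scores higher. The dataset names and benchmark scores below are invented for illustration, and the metric is a simplified reading of the article, not Ai2's actual DataDecide code.

```python
# Hedged sketch: "decision accuracy" as pairwise ranking agreement between
# small-model and large-model benchmark scores on candidate datasets.
# All names and numbers are hypothetical, not Ai2's data.
from itertools import combinations

small_scores = {"dataset_a": 0.41, "dataset_b": 0.38, "dataset_c": 0.45}  # ~150M-param runs
large_scores = {"dataset_a": 0.62, "dataset_b": 0.55, "dataset_c": 0.66}  # ~1B-param runs

def decision_accuracy(small: dict, large: dict) -> float:
    """Fraction of dataset pairs where the small-model ranking
    picks the same winner as the large-model ranking."""
    pairs = list(combinations(small, 2))
    agree = sum(
        (small[a] - small[b]) * (large[a] - large[b]) > 0
        for a, b in pairs
    )
    return agree / len(pairs)

print(f"decision accuracy: {decision_accuracy(small_scores, large_scores):.2f}")
```

Under this reading, a cheap sweep of small-model runs over candidate datasets is trusted only if its pairwise agreement with known large-scale outcomes is high, which is the 80% figure the article cites.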

Step Into the Future with AI!
Join the TuxAcademy Artificial Intelligence Course, Certificate Included
Visit Our Website: www.tuxacademy.org
More Information: +91 7982029314
Our Social Media Platforms 👇
🔗 Instagram: lnkd.in/dra7TTnP
🔗 Facebook: lnkd.in/d7nSHqNg
🔗 LinkedIn: lnkd.in/dPfzczic
🔗 YouTube: lnkd.in/dAT_fJUv
#TuxAcademy #ArtificialIntelligence #AI #LearnAI #FutureSkills #TechCareers #Upskill #OnlineLearning #AITraining #AICourse #JoinNow

"The number of Google contractors working on various projects isn’t public knowledge, but the company may rely on as many as 12,000 AI workers across seven to 10 different contractors, Wait, who started working with GlobalLogic raters after they reached out to her last February, estimates. Other artificial intelligence engines depend on thousands more. At GlobalLogic, which declined to comment for this story, workers are assigned to support Google engineers working on projects that include Gemini, Google’s AI “assistant.”

In some ways, this type of work is not new—engines like Google have long relied on underpaid raters to train their search algorithms—but the AI boom has led to exponential unregulated growth in this workforce, as tech juggernauts pour billions of dollars into the race to capture market share. In 2023, a Time investigation found that OpenAI had paid Kenyan contractors less than $2 an hour to watch and identify violent and abusive content. The workers, who also rated content for Meta, lost their jobs when they tried to organize. A lawsuit against Meta is ongoing. On April 30, moderators launched a global trade union alliance across nine countries; so far, the United States is not one of them.

GlobalLogic workers, who are based in the United States, do make more money than their Kenyan peers, with wages for generalist raters starting at $16 an hour. So-called “super raters” are paid $20 or more an hour to do the same kind of work, because they usually have master’s or doctoral degrees—although that’s well under the average for American workers with comparable qualifications. In other ways, though, the work can be much the same across national borders. Rachael Sawyer works on a project training Google’s AI to filter out hateful and violent content, including child sexual abuse material."

thenation.com/article/society/

The Nation · The Human Workforce Behind AI Wants a Union: Contractors who work on Google’s AI products are trying to organize, but new obstacles keep appearing in their path.

Time to leave #Facebook and #Instagram:

#Meta may use EU users' posts on Facebook and Instagram for #AITraining
The American technology giant Meta may now use Facebook and Instagram posts from European users to train its AI software, unless users have explicitly objected. The deadline for that so-called opt-out expired on Tuesday.

The company wants to use all public content, such as posts and photos, shared by adult users in the EU to improve its AI models. On Friday, a German court also dismissed a complaint against the data processing brought by a consumer protection agency in North Rhine-Westphalia.

Facebook and Instagram users received a notification in which the company explains which data is involved. Through a form in the apps' privacy center, European users can file an objection at any time. To be effective, the objection had to be submitted before 27 May. (belga)

source: standaard.be/economie/helft-mi

De Standaard · Half as many Teslas sold in Europe in April: In the economics liveblog we bring even more financial and economic news, even faster. Personal finance, company news, the international economy or the stock markets: follow the day's most important updates here.

🚀 Boost Your Career with the Most In-Demand #DataScience Skills
📍 Learn from expert trainers with 15+ years of real-world experience at #VisualPath – Top Data Science Institute in Hyderabad.

✅ Hands-on learning with live projects
✅ Course access across 🌎 USA, UK, Canada, India & Australia
📞 Call Us: +91-7032290546
📲 WhatsApp: wa.me/c/917032290546
🌐 Visit us: visualpath.in/online-data-scie

I find it astonishing how afraid authors are that AI is going to steal their readers. If AI only generates derivative works, why are you, as a creator and the only entity capable of producing truly creative works, so afraid?

Unless you believe in the fiction that you can own artificial property. In reality, once you let a work out into the world, you can't possibly expect to "own" it and control its distribution - unless you want to enforce a totalitarian dictatorship. So, I repeat: what are you so afraid of?

Unless authors start to understand that creative works can't really be protected against unauthorized copying and distribution, and that copyright is a monopoly granted by states, they will continue to repeat the same mistakes, depriving the public of access to knowledge and culture.

"In late 2024, we surveyed over 400 members of the Australian Society of Authors, the national peak body for writers and illustrators. We asked about their use of AI, their understanding of how generative models are trained, and whether they would agree to their work being used for training – with or without compensation.

79% said they would not allow their existing work to be used to train AI models, even if they were paid. Almost as many – 77% – said the same about future work.

Among those open to payment, half expected at least $A1,000 per work. A small number nominated figures in the tens or hundreds of thousands.

But the dominant response, from both established and emerging authors, was a firm “no”.

This presents a serious roadblock for those hoping publishers might broker blanket licensing agreements with AI firms. If most authors are unwilling to grant permission under any terms, then standard contract clauses or opt-in models are unlikely to deliver a practical or ethical solution."

theconversation.com/new-resear

The Conversation · New research reveals Australian authors say no to AI using their work – even if money is on the table: Writers’ concerns about AI are not only about payment; they are about consent, trust and the future of their profession.

COMPLETE TRAVESTY: Creativity is not an industry. Anyone who dares to say that knows nothing about art, culture, creativity, and manufacturing. You cannot manufacture creativity. Either a work is considered creative by the public and the critics, or it is not.

Am I stealing your words in favor of a rent-seeking, money-grabbing little scheme by copying and pasting them here? Do I have to pay you a license for your fake, artificial property? And you consider yourself a representative of artists and authors?

"My colleagues and I from all sides in the House of Lords have acted where the government has refused, adding emergency transparency measures to the legislation – the data (use and access) bill – that is passing through parliament. Our amendment would allow existing copyright law to be enforced: copyright owners would understand when, where and by whom their work was being stolen to train AI. The logic being that if an AI firm has to disclose evidence of theft, it will not steal in the first place. These measures, voted for in ever-increasing numbers by lords from all parties – and notable grandees from the government’s own backbenches – were voted down by a government wielding its significant, if reluctant, majority."

theguardian.com/commentisfree/

The Guardian · We have a chance to prevent AI decimating Britain’s creative industries – but it’s slipping away. By Beeban Kidron
#UK #RentSeeking #AI

There may indeed be situations where the overall public welfare gains from introducing a new technology far surpass the case for respecting artificial property. That's probably the situation here, since the gains in terms of access to knowledge and culture can be enormous.

"Referring to the question of whether “artists should be able to withhold their content from the AI models that are being trained,” he said: “On the one hand, yeah, I think it seems to me as a matter of natural justice, to say to people that they should be able to opt out of having their creativity, their products, what they’ve worked on indefinitely modelled. That seems to me to be not unreasonable to opt out.”

However, he added, “I think the creative community wants to go a step further. Quite a lot of voices say ‘you can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.

“I just don’t know how you go around, asking everyone first. I just don’t see how that would work. And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight."

thetimes.com/uk/technology-uk/

The Times · Nick Clegg: Artists’ demands over copyright are unworkable. By Lucy Bannerman
#UK #GenerativeAI #AI

Dell Launches AI-Powered Servers with Nvidia Blackwell Chips.

Dell introduces high-performance servers with Blackwell Ultra GPUs, enabling AI training up to 4x faster. Despite booming demand, profit margins may tighten due to high costs. Dell is also expanding into AI laptops and eyeing Nvidia’s next-gen CPUs.

#Dell #AI #NvidiaBlackwell #AIservers #TechNews #AITraining #ProMaxPlus #VeraRubin #NextGenAI #TECHi

Read the Full Article Here: techi.com/dell-ai-servers-blac

"You’d be hard-pressed to find a more obvious example of the need for regulation and oversight in the artificial intelligence space than recent reports that Elon Musk’s AI chatbot, known as Grok, has been discussing white nationalist themes with X users. NBC News reported Thursday that some users of Musk’s social media platform noticed the chatbot was responding to unrelated user prompts with responses discussing “white genocide.”

For background, this is a false claim promoted by Afrikaners and others, including Musk, that alleges white South African land owners have been systematically attacked for the purpose of ridding them and their influence from that country. It’s a claim that hews closely to propaganda spread by white nationalists about the purported oppression of white people elsewhere in Africa.

It’s hard to imagine a more dystopian scenario than this."

msnbc.com/top-stories/latest/g

MSNBC · Elon Musk’s chatbot just showed why AI regulation is an urgent necessity. By Ja'han Jones

🚀 #NewBatch Alert: Data Science with Generative AI!
👉 Attend the online #NewBatch on #DataSciencewithGenerativeAI by Mr. Vivek.
📅 Batch on: 22nd May 2025 @ 8:30 PM (IST)
☎️ Contact us: +919989971070
📲 WhatsApp: wa.me/c/917032290546
🌐 Visit: visualpath.in/online-data-scie
👩‍🎓 Who Should Learn This Course?

✅ Freshers looking to launch a career in AI & Data Science
✅ Python/ML Enthusiasts wanting to explore Gen AI

Although I'm an avid supporter of transparency, it's a bit annoying that copyright holders want the disclosure of all copyrighted works used to train LLMs only so they can extract rents from AI companies, regardless of the size, not-for-profit status, or goals of the company or project. Media moguls and their stooges are always yelling: "PAY! PAY! PAY!" I'm sick and tired of all this blackmailing and complaining.

If you can prove that a chatbot can generate exact copies of your works every time from the same prompt, by all means go ahead and demand a license - but only from large companies. If not, please shut up or admit that you're just another seller of commodities and don't believe in the non-material value of your art. Works of art that appeal to the spiritual and the aesthetic can generate bountiful positive externalities over the medium and long term.

------

"Ministers have used an arcane parliamentary procedure to block an amendment to the data bill that would require artificial intelligence companies to disclose their use of copyright-protected content.

The government stripped the transparency amendment, which was backed by peers in the bill’s reading in the House of Lords last week, out of the draft text by invoking financial privilege, meaning there is no budget available for new regulations, during a Commons debate on Wednesday afternoon.

The amendment, which would have required tech companies to reveal which copyrighted material is used in their models, was tabled by the crossbench peer Beeban Kidron and was passed by 272 votes to 125 in a Lords debate last week.

There were 297 MPs who voted in favour of removing the amendment, while 168 opposed."

theguardian.com/technology/202

The Guardian · Ministers block Lords bid to make AI firms declare use of copyrighted content. By Rachel Hall

#Meta is making users who opted out of #AI #training opt out again, #watchdog says

According to Noyb, Meta is also requiring users who already opted out of #AItraining in 2024 to opt out again or forever lose their opportunity to keep their data out of Meta's models, as training data likely cannot be easily deleted. That's a seeming violation of the General Data Protection Regulation (#GDPR), Noyb alleged.
#privacy #eu

arstechnica.com/tech-policy/20

Ars Technica · Meta is making users who opted out of AI training opt out again, watchdog says. By Ashley Belanger