#sovereignai


🇨🇭 Switzerland is taking a public-interest-first approach to LLMs. EPFL, ETH Zurich, and CSCS are building a fully open, multilingual language model trained on public infrastructure, and they’re releasing it under Apache 2.0 this summer. This one isn’t just open weights. It’s 100% transparent: source, data, training methods. And trained on 15T tokens across 1,500+ languages using a carbon-neutral supercomputer (Alps), it’s a real shot at sovereign AI that serves scientific, civic, and commercial needs without the lock-in.

TL;DR
🌍 Fluency in 1,000+ languages
🧠 Open 8B and 70B param models
⚡ Alps supercomputer, 100% green
🔓 Fully open: data, code, weights

ethz.ch/en/news-and-events/eth
#opensourceAI #multilingualAI #sovereignAI #SwissTech #Freedom #AI

An illustration of a Swiss cross. The cross consists of cables; one side is red and the other blue.
ETH Zurich: A language model built for the public good. ETH Zurich and EPFL will release a large language model (LLM) developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS), the new LLM marks a milestone in open-source AI and multilingual excellence.

🌐 How is open source powering sovereign AI? Join LF AI & Data’s webinar in Mandarin on April 24 to find out!

💡 Insights from Microsoft, Ant Group, Peking Univ & more.
🕗 8–9:30 AM CST
Join us!
LinkedIn: linkedin.com/events/theglobals
Zoom: zoom.us/webinar/register/WN_R9

Interesting Economic Index paper from Anthropic, based on 1m+ Claude.ai conversations. Analysed through O*NET occupational classifications, it shows #AI use in over 36% of occupations.
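For readers curious how a headline figure like "AI use in over 36% of occupations" falls out of an O*NET-style mapping, here is a minimal, self-contained sketch. The occupation names and usage shares below are toy numbers I made up for illustration, not values from the actual EconomicIndex dataset:

```python
# Hypothetical share of each O*NET occupation's tasks observed in
# AI conversations (toy data, not from the real dataset).
task_share_by_occupation = {
    "Software Developers": 0.62,
    "Technical Writers": 0.48,
    "Surgeons": 0.02,
    "Roofers": 0.00,
    "Data Scientists": 0.55,
}

# Count an occupation as "using AI" if any of its tasks appear
# in the conversation sample.
using = [occ for occ, share in task_share_by_occupation.items() if share > 0]
pct = len(using) / len(task_share_by_occupation) * 100
print(f"{pct:.0f}% of occupations show some AI usage")  # prints "80% of occupations show some AI usage"
```

The real analysis maps conversation topics onto O*NET task statements before aggregating to occupations, so the interesting methodological choices (and ambiguities) sit in that mapping step, not in the final percentage.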

Thoughts:

1. This analysis would ordinarily be undertaken by government labour departments. Analysis of the use of #AI tools is now predicated on companies releasing this data. Anthropic has released *some* of the data used for analysis - but not all - e.g. the actual prompts.

huggingface.co/datasets/Anthro

2. This data is linked to US occupational classifications (O*NET), and AFAICT, there is no way to identify in the dataset (I looked) what the geography of the user is. That means this analysis can't be used to analyse **Australian** patterns of AI use - which links to the #sovereignAI discourse.

3. Given Anthropic's outsized role in the industry, and the push by companies like Microsoft for adoption of tools like Copilot, I wonder if this economic analysis will become a *target* - following Goodhart's Law. That would increase AI usage, which would benefit Anthropic.

4. I found the distinction between automation and augmentation in this analysis useful. Drawing from #cybernetics, automation can be viewed as first-order - the user directs the intent. Augmentation is more reflexive, with the intent negotiated. What are the implications of #LLM involvement here?

5. The pattern of increasing use among higher-skilled professions - up to the cliff of those requiring advanced degrees (e.g. surgeons) where usage dropped off - indicates to me that advanced degrees still provide a "moat" - but for how long?

6. I really loved the feedback form Anthropic provided for researchers to suggest new research directions and to give feedback on the format of the dataset that was released. This connects research with practice - praxis.

docs.google.com/forms/d/e/1FAI

➡️ anthropic.com/news/the-anthrop

huggingface.co: Anthropic/EconomicIndex · Datasets at Hugging Face