#languagemodels


The key insight: hallucinations are not bugs, but artifacts of compression. Like Xerox photocopiers that silently replaced digits in floorplans to save memory, LLMs can introduce subtle distortions. Because the output still looks right, we may not notice what has been lost or changed.
The more they’re used to generate content, the more the web becomes a blurrier copy of itself.
#LanguageModels #CompressionArtifacts #AIliteracy

From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
When models no longer obey but execute, what happens to legitimacy?

Core contributions:
• Execution vs. obedience in LLMs
• Structural legitimacy without subject
• Reasoning as authority loop

🔗 Full article: zenodo.org/records/15635364
🌐 Website: agustinvstartari.com
🪪 ORCID: orcid.org/0009-0002-1483-7154

Zenodo · From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
This article formulates a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), redefining authority in artificial systems. While LLMs operated under syntactic authority without execution, producing fluent but functionally passive outputs, LRMs establish functional authority without agency. These models do not intend, interpret, or know. They instantiate procedural trajectories that resolve internally, without reference, meaning, or epistemic grounding. This marks the onset of a post-representational regime, where outputs are structurally valid not because they correspond to reality, but because they complete operations encoded in the architecture. Neutrality, previously a statistical illusion tied to training data, becomes a structural simulation of rationality, governed by constraint, not intention. The model does not speak. It acts. It does not signify. It computes. Authority no longer obeys form; it executes function.
A mirrored version of this article is also available on Figshare for redundancy and citation indexing: DOI: 10.6084/m9.figshare.29286362
#AI #LLM #Execution

🧠 🤖 Researchers from the Natural Language Processing Laboratory and NeuroAI Laboratory have discovered key ‘units’ in large AI models that seem to be important for language, mirroring the brain’s language system. When these specific units were turned off, the models got much worse at language tasks.
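As a minimal sketch of the kind of ablation the researchers describe (a toy two-layer network of my own invention, not the study's models), "turning off" units means zeroing their activations and observing how the output changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network as a stand-in; the study concerns far larger language models.
W1 = rng.normal(size=(8, 16))   # input -> 16 hidden "units"
W2 = rng.normal(size=(16, 4))   # hidden -> output

def forward(x, ablated_units=()):
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden activations
    for u in ablated_units:
        h[:, u] = 0.0             # "turn off" the selected units
    return h @ W2

x = rng.normal(size=(5, 8))
baseline = forward(x)
ablated = forward(x, ablated_units=range(8))  # silence half the hidden layer

# Ablating units changes the outputs; in the study, silencing the
# language-selective units degraded performance on language tasks.
print(np.linalg.norm(baseline - ablated))
```

In the actual research, the measurement is task performance (not raw output distance), and the ablated units are those identified as language-selective.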

#LanguageModels #ArtificialIntelligence #AIResearch

Read more: go.epfl.ch/LJx-en

go.epfl.ch · A step towards understanding machine intelligence the human way

From the 1990s onward, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advances in nearly all NLP techniques of the era, laying the groundwork for today's AI.

F. Jelinek (1997). Statistical Methods for Speech Recognition. MIT Press, Cambridge, MA.
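As a hedged sketch (toy corpus and names my own), a maximum-likelihood bigram model of the kind covered in Jelinek's book can be estimated directly from raw counts:

```python
from collections import Counter

# Toy corpus; the n-gram models of the era were estimated from vast text collections.
corpus = "the cat sat on the mat the cat ate".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))   # count(w_prev, w)
context_counts = Counter(corpus[:-1])              # count(w_prev)

def bigram_prob(w_prev, w):
    """Maximum-likelihood estimate P(w | w_prev) = count(w_prev, w) / count(w_prev)."""
    return bigram_counts[(w_prev, w)] / context_counts[w_prev]

print(bigram_prob("the", "cat"))  # 2/3: two of the three "the" tokens are followed by "cat"
```

Production systems of the era added smoothing (e.g., backoff or interpolation) so that unseen n-grams would not receive zero probability.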

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique