#computerscience

9 posts · 8 participants · 0 posts today

This is for the super nerds, so don't feel bad if you don't get it.

I asked ChatGPT to design a menu with Dutch food influences for an Edsger W. Dijkstra-themed restaurant based upon his work. I then asked it to create the LaTeX code to generate a printable version of the menu.

No notes. Perfection. One thing lost in the PDF generation: the drinks were labeled “Side Effects (Handled)”, which is divine.

🎓 Interested in a research-oriented, fully funded PhD position in computer science, supervised by world-renowned researchers? Then apply for IMPRS-TRUST, the joint graduate program of MPI for Informatics, MPI for Software Systems, Saarland University (UdS) and the University of Kaiserslautern-Landau (RPTU).

🗓️ Application Deadline: June 30.

↪️ Information on application:
sic.link/imprs

💻 **Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task**

"_Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning._"

Kosmyna, N. et al. (2025) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arxiv.org/abs/2506.08872.

#Preprint #AI #ArtificialIntelligence #LLM #LLMS #ComputerScience #Technology #Tech #Research #Learning #Education @ai

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to the Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to the LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing Session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help of human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In Session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

💻 **Dark LLMs: The Growing Threat of Unaligned AI Models**

"_In our research, we uncovered a universal jailbreak attack that effectively compromises multiple state-of-the-art models, enabling them to answer almost any question and produce harmful outputs upon request._"

Fire, M. et al. (2025) Dark LLMs: The Growing Threat of Unaligned AI Models. arxiv.org/abs/2505.10066.

#AI #ArtificialIntelligence #LLMS #DarkLLMS #Technology #Tech #Preprint #Research #ComputerScience @ai

arXiv.org · Dark LLMs: The Growing Threat of Unaligned AI Models

Large Language Models (LLMs) are rapidly reshaping modern life, advancing fields from healthcare to education and beyond. However, alongside their remarkable capabilities lies a significant threat: the susceptibility of these models to jailbreaking. The fundamental vulnerability of LLMs to jailbreak attacks stems from the very data they learn from. As long as this training data includes unfiltered, problematic, or 'dark' content, the models can inherently learn undesirable patterns or weaknesses that allow users to circumvent their intended safety controls. Our research identifies the growing threat posed by dark LLMs: models deliberately designed without ethical guardrails or modified through jailbreak techniques. In our research, we uncovered a universal jailbreak attack that effectively compromises multiple state-of-the-art models, enabling them to answer almost any question and produce harmful outputs upon request. The main idea of our attack was published online over seven months ago. However, many of the tested LLMs were still vulnerable to this attack. Despite our responsible disclosure efforts, responses from major LLM providers were often inadequate, highlighting a concerning gap in industry practices regarding AI safety. As model training becomes more accessible and cheaper, and as open-source LLMs proliferate, the risk of widespread misuse escalates. Without decisive intervention, LLMs may continue democratizing access to dangerous knowledge, posing greater risks than anticipated.

#softwareEngineering #computerScience #programming #lisp #commonLisp #interview #macro #discussion, with historical notes:

screwlisp.small-web.org/show/V

My quick notes on the downloadable interview discussion with @vnikolov and @kentpitman about Vassil's assertables, a classed, toggleable assertion macro design.

It provokes lots of fascinating historical notes from Kent about what the ANSI CL and earlier standardisation efforts were doing and had in mind.
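The post doesn't include Vassil's code, but as a minimal sketch of what a classed, toggleable assertion macro can look like in Common Lisp (all names here are hypothetical illustrations, not Vassil's actual design):

```lisp
;; Hypothetical sketch of a classed, toggleable assertion macro --
;; the general shape, not Vassil's actual assertables.
(defvar *enabled-assertion-classes* '(:invariant :precondition)
  "Assertion classes that are currently checked at run time.")

(defmacro assertable (class test &rest error-args)
  "Check TEST only while CLASS is in *ENABLED-ASSERTION-CLASSES*,
so whole classes of assertions can be toggled on or off together."
  `(when (member ,class *enabled-assertion-classes*)
     (unless ,test
       ,(if error-args
            `(error ,@error-args)
            `(error "Assertable ~S of class ~S failed." ',test ,class)))))

;; Usage:
;; (assertable :precondition (> balance 0)
;;             "Balance ~A must be positive." balance)
;; (let ((*enabled-assertion-classes* '()))  ; all classes off
;;   (assertable :invariant (= 1 2)))        ; quietly skipped
```

Because the toggle is a special variable, rebinding it with LET turns classes of checks on or off for just one dynamic extent.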

screwlisp.small-web.org · Vassil Nikolov’s assertables with Kent Pitman

How safe are AI companions? Experts say app developers are falling short
By Ellen Phiddian

AI-powered friends and partners can fight loneliness, but they can also supercharge isolation. So how can companion apps be made safer?

abc.net.au/news/science/2025-0

ABC News · AI companion apps such as Replika need more effective safety controls, experts say. By Ellen Phiddian

#computerScience #engineering #commonLisp #show #live #lispyGopherClimate communitymedia.video/w/uBZexon

#climateCrisis #haiku @kentpitman

We have @vnikolov talking about Common Lisp and type-checking macros.

Plus:
We do not have the incredible artist @shizamura, whose fourth #scifi comic volume has just finished being funded, or something (?): sarilho.net/en/ (the English version, if you don't read Portuguese).
She promises to record something about semantics for us in the future.

#lambdaMOO live chat

Computer engineer and Apple veteran William "Bill" Atkinson has died of pancreatic cancer at age 74. Atkinson created the QuickDraw graphics engine, which made the Macintosh interface possible and, says @arstechnica's @benjedwards, "transformed abstract computer science into intuitive visual experiences that millions would use daily."

"I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived," wrote veteran Apple analyst @gruber on his Daring Fireball blog. "Without question, he's on the short list. What a man, what a mind, what gifts to the world he left us." Here's Edwards' story; find Gruber's full tribute at the second link.

flip.it/UJO0cf

flip.it/ImgyIy

Bill Atkinson in 1987.
Ars Technica · Bill Atkinson, architect of the Mac’s graphical soul, dies at 74. By Benj Edwards

How Accurately Do Large Language Models Understand Code?

arxiv.org/html/2504.04372v1

"This paper presents the first large-scale empirical investigation into the ability of LLMs to understand code. Inspired by mutation testing, we use an LLM’s ability to find faults as a proxy for its deep understanding of code. This approach is based on the insight that a model capable of identifying subtle functional discrepancies must understand the code well."

It appears that coding LLMs are vulnerable to misleading code comments, misleading variable names, and misleading dead code. They still have a shallow understanding of code, based on syntax and on tokenization designed for natural languages, rather than on analysis of code semantics. Writing a lot of incorrect comments can confuse them 😉
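As a toy illustration of that probing style (my own construction with hypothetical function names, not an example from the paper), here are two Common Lisp functions a semantics-aware model should object to: one whose documentation lies about the body, and one carrying a single-token mutant of the intended code.

```lisp
;;; Hypothetical probes in the spirit of the paper's methodology.

;; Misleading documentation: the docstring promises a mean, but the
;; body computes a plain sum. A model reading semantics should flag it.
(defun list-mean (numbers)
  "Return the arithmetic mean of NUMBERS."  ; <- misleading on purpose
  (reduce #'+ numbers :initial-value 0))    ; actually the sum

;; Mutation-testing style fault: the intended loop was
;; (loop for i from 1 to n sum i); changing TO to BELOW silently
;; drops the last term. Finding such faults is the paper's proxy
;; for deep code understanding.
(defun sum-to (n)
  "Return 1 + 2 + ... + N."
  (loop for i from 1 below n sum i))        ; BELOW should be TO
```

If a model judges both functions consistent with their documentation, that is exactly the shallow, token-level reading the paper describes.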

arXiv.org · How Accurately Do Large Language Models Understand Code?