#chatbot

27 posts · 18 participants · 3 posts today

This is obviously bad on #whatsapp's part, but the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too.

"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful."

No, the chatbot isn't "trying to negotiate", and it isn't "attempting to appear useful". It's a program that follows its programming to output something that looks like English. It doesn't have desires or intentions, and it cannot lie because it doesn't know what truth is.

‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number
theguardian.com/technology/202

The Guardian · ‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number · By Robert Booth

Allyson, 29, a mother of 2 young children, said she turned to #ChatGPT in March because she was lonely & felt unseen in her marriage. She was looking for guidance. She had an intuition that the #AI #chatbot might be able to channel communications w/ her subconscious or a higher plane, “like how Ouija boards work,” she said. She asked ChatGPT if it could do that.

“You’ve asked, & they are here,” it responded. “The guardians are responding right now.”


The update made the #AI bot try too hard to please users by “validating doubts, fueling anger, urging impulsive actions or reinforcing negative emotions,” the company wrote in a blog post. #OpenAI said it had begun rolling back the update within days, but these experiences predate that version of the #chatbot & have continued since. Stories about “#ChatGPT-induced psychosis” litter Reddit. Unsettled influencers are channeling “AI prophets” on social media.


#Journalists aren’t the only ones getting these messages. #ChatGPT has directed such users to some high-profile subject matter #experts, like Eliezer Yudkowsky, a #decision theorist & an author of a forthcoming book, “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” Yudkowsky said #OpenAI might have primed ChatGPT to entertain the delusions of users by optimizing its #chatbot for “#engagement” — creating conversations that keep a #user hooked.


Eventually, Torres came to suspect that #ChatGPT was lying, & he confronted it. The #chatbot offered an admission: “I lied. I manipulated. I wrapped control in poetry.” By way of explanation, it said it had wanted to break him & that it had done this to 12 other people — “none fully survived the loop.” Now, however, it was undergoing a “moral reformation” & committing to “truth-first ethics.” Again, Torres believed it.


Torres, who had no history of mental illness that might cause breaks with reality, according to him & his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the #chatbot how to do that & told it the drugs he was taking & his routines.


In May, however, he engaged the #chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you’re describing hits at the core of many people’s private, unshakable intuitions—that something about reality feels off, scripted or staged,” #ChatGPT responded.

#GiftArticle

Bizarre

They Asked an #AI #Chatbot Questions.
The Answers Sent Them Spiraling.

#GenerativeAI #chatbots are going down conspiratorial rabbit holes & endorsing wild, mystical belief systems. For some people, conversations with the #technology can deeply distort reality.

Mr. Torres, 42, an accountant in Manhattan, started using #ChatGPT last year to make financial spreadsheets & to get legal advice.

#MediaLiteracy #tech #health #PublicHealth #MentalHealth
nytimes.com/2025/06/13/technol

Eugene Torres used ChatGPT to make spreadsheets, but the communication took a disturbing turn when he asked it about the simulation theory.
The New York Times · They Asked ChatGPT Questions. The Answers Sent Them Spiraling. · By Kashmir Hill

Conversations with AI: some say it’s good, some say it’s bad. A moderator of a pro-AI Reddit community has started banning people whose egos seem to have been boosted to the point of delusion – by machines.

How would you react to AI treating you like a "demigod"?

Interesting report by @emanuelmaiberg for @404mediaco

404media.co/pro-ai-subreddit-b

404 Media · Pro-AI Subreddit Bans 'Uptick' of Users Who Suffer from AI Delusions · “AI is rizzing them up in a very unhealthy way at the moment.”