eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This Mastodon server is a friendly and respectful discussion space for people working in areas related to EU policy. When you request to create an account, please tell us something about yourself.

Server stats: 224 active users
#chatgpt

132 posts · 118 participants · 4 posts today

ChatGPT: AI Kingpin, or a Monster About to Swallow the World? #OpenAI #ChatGPT #TríTuệNhânTạo #CôngNghệ #AI #TươngLai

The dizzying rise of ChatGPT, OpenAI's favorite child, has raised thorny questions about the future of technology and humanity's place in the AI era. Will OpenAI's "bottomless" ambition lead to a technological revolution…

bietduoc.io.vn/2025/06/04/chat

Queen Mobile · ChatGPT: AI Kingpin, or a Monster About to Swallow the World? · By admin

Diabolus Ex Machina

"... the following is a ‘conversation’ I had with Chat GPT upon asking whether it could help me choose several of my own essays to link in a query letter I intended to send to an agent. What ultimately transpired is the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime."

amandaguinzburg.substack.com/p

Everything Is A Wave · Diabolus Ex Machina · By Amanda Guinzburg

Robert W. Gehl: "We need to rethink higher ed, grading, the whole thing. I think part of the problem is that we've been inconsistent in rules about genAI use. Some profs ban it altogether, while others attempt to carve out acceptable uses. The problem is the line between acceptable and unacceptable use. For example, some profs say students can use genAI for "idea generation" but then prohibit using it for writing text. Where's the line between those? In addition, universities are contracting with companies like Microsoft, Adobe, and Google for digital services, and those companies are constantly pushing their AI tools. So a student might hear "don't use generative AI" from a prof but then log on to the university's Microsoft suite, which then suggests using Copilot to sum up readings or help draft writing. It's inconsistent and confusing.

I've been working on ways to increase the amount of in-class discussion we do in classes. But that's tricky because it's hard to grade in-class discussions—it's much easier to manage digital files. Another option would be to do hand-written in-class essays, but I have a hard time asking that of students. I hardly write by hand anymore, so why would I demand they do so?

I am sick to my stomach as I write this because I've spent 20 years developing a pedagogy that's about wrestling with big ideas through writing and discussion, and that whole project has been evaporated by for-profit corporations who built their systems on stolen work. It's demoralizing.

It has made my job much, much harder."

404media.co/teachers-are-not-o

404 Media · Teachers Are Not OK · AI, ChatGPT, and LLMs "have absolutely blown up what I try to accomplish with my teaching."

I have asked #GPT-4.0 to translate a news report from Dutch to German for me because I wanted to inform my family about current debates in the Netherlands. I was surprised that #ChatGPT started a #deepsearch although I had not actively selected this feature. I had vaguely heard about it but never used it before, and I am still wondering what it actually does. I will have to #readthedocs for sure! But has anyone got insights they'd like to share? Deep search sounds a lot like #energywaste to me.

Good read: Beware of AI model collapse!

In an AI model collapse, AI systems that are trained on their own outputs gradually lose accuracy, diversity, and reliability. This happens because errors compound across successive model generations, distorting the data distribution and producing "irreversible defects" in performance. The final result? A 2024 Nature paper stated, "The model becomes poisoned with its own projection of reality." nature.com/articles/s41586-024

"In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality." as stated by Aquant.

Full article here: theregister.com/2025/05/27/opi #AI #AIErrors #AI_Model_Collapse #LLMs #ChatGPT #Llama #Claude #TheRegister