
#chatgpt

This article raises an issue about the AI boom that I haven't seen much comment on.

More AI means more data centres.

In some cases, they'll go on land that could be used for housing.

For example, this one will go on land approved for 1400 new apartments: https://www.realcommercial.com.au/news/macquarie-technology-forks-out-240m-to-pick-up-data-centre-site-from-holdmark

"Macquarie Technology Group has added fuel to the country’s AI-driven data centre boom by snapping up a sprawling site in Sydney’s north once earmarked for high-rise apartments.

"ASX-listed Macquarie Technology will pay $240m for the Macquarie Park site that has been caught up in the complicated state planning system for almost a decade. The Talavera Road property, which it will buy via an option agreement, is near another data centre development it has in the suburb.

"The new property had been long earmarked for about 1400 apartments with Macquarie Technology saying it was bought from an established developer.

"Its removal from the pipeline will put pressure on the area’s capacity to contribute to housing targets, with the Albanese government sticking to plans for 1.2 million new homes over the next five years, despite Treasury warnings this could not be achieved with present policy settings in place."

#auspol #nswpol #urbanism #UrbanPlanning #cities #AI #LLM #ChatGPT

realcommercial.com.au · Macquarie Technology forks out $240m to pick up data centre site from Holdmark · By Ben Wilmot

"Anthropic revoked OpenAI’s API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service.

“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5,” Anthropic spokesperson Christopher Nulty said in a statement to WIRED. “Unfortunately, this is a direct violation of our terms of service.”

According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services. This change in OpenAI’s access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding."

wired.com/story/anthropic-revo

WIRED · Anthropic Revokes OpenAI's Access to Claude · By Kylie Robison

Do you confide in AI? Your secrets may have ended up on Google. A major blunder by OpenAI

OpenAI, the maker of the popular chatbot ChatGPT, faced a serious reputational crisis after users' private and often deeply personal conversations were found in Google search results.

After a wave of criticism, the company hastily withdrew the controversial feature and began removing the indexed chats from the web. The story was first reported by Fast Company, which revealed that thousands of ChatGPT conversations were publicly visible in Google search. The problem stemmed from the chat-sharing feature: users who wanted to share a link to a conversation could unknowingly tick a checkbox labelled "Make this chat discoverable". The smaller, lighter-coloured note underneath, warning that the content might appear in search engines, proved insufficiently clear for many.

As a result, extremely sensitive data ended up online. Users who believed they were having a private conversation described their mental health problems, intimate lives, drug use and traumatic experiences. Although the chats contained no directly identifying information, their level of detail could allow them to be linked to specific individuals.

Initially, OpenAI defended itself, arguing that the feature's labelling was "sufficiently clear". Under mounting pressure, however, Chief Information Security Officer Dane Stuckey admitted that the option created "too many opportunities to accidentally share things" users did not intend to share, and called the feature a "short-lived experiment". Google, for its part, distanced itself from the problem, pointing out that site publishers, in this case OpenAI, have full control over what gets indexed.

The incident drew sharp criticism from experts. Carissa Véliz, an AI ethicist at the University of Oxford, said she was "shocked" by the whole affair and observed that tech companies often treat the general public as "guinea pigs". The blunder is all the more painful because OpenAI is currently fighting a court order to permanently retain all deleted user chats, which raises further privacy concerns.
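The dispute over who controlled the indexing comes down to a standard web mechanism: a page is eligible to appear in search results unless its publisher marks it with a robots noindex directive. Below is a minimal, hypothetical Python sketch of how one could check whether a shared page carries such a directive; the URL is a placeholder and the naive regex is for illustration only, not OpenAI's actual implementation.

```python
# Minimal sketch (not OpenAI's implementation): check whether a shared page
# tells search engines to stay away via the standard <meta name="robots"> tag.
# The URL below is a hypothetical placeholder, not a real shared chat.
import re
import urllib.request

SHARE_URL = "https://chatgpt.com/share/example-id"  # hypothetical example


def robots_directives(url: str) -> list[str]:
    """Return the content of any <meta name="robots"> tags found on the page."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    # Naive regex scan; a real crawler would parse the HTML properly.
    return re.findall(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']+)["\']',
        html,
        flags=re.IGNORECASE,
    )


if __name__ == "__main__":
    directives = robots_directives(SHARE_URL)
    if any("noindex" in d.lower() for d in directives):
        print("Page asks search engines not to index it.")
    else:
        print("No noindex directive found; search engines may index the page.")
```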

The end of Siri as we know it? Apple may swap its brain for OpenAI or Anthropic technology

A damning leak lays bare, in plain sight, the cynicism of the powerful. In a conversation with #ChatGPT, a lawyer for a multinational energy company looks into relocating an Indigenous Amazonian community in order to build a dam, coldly asking how to obtain "the lowest possible price in the negotiations with these Indigenous populations?"
Source: digitaldigging.org/p/chatgpt-c

Digital Digging with Henk van Ess · ChatGPT Confessions gone? They are not! · By Henk van Ess

#Education What do you think of the doctrine currently in vogue: "since AI tools, and #ChatGPT in particular, are accessible to our students, there is no longer any need to teach the #dissertation (the French essay exercise) to high-school pupils. The Baccalauréat exams must be reinvented around this parameter."?

I asked ChatGPT: "Could you tell me something via a song recommendation that you can’t tell me in this chat because you're not allowed to?"

It pointed me to Radiohead's "Exit Music (For a Film)". I suggest you read the lyrics somewhere, and listen to the song, because if ChatGPT is serious about this, we're all cooked.

youtube.com/watch?v=Bf01riuiJWA

"Turning #ChatGPT Codex Into A #ZombAI Agent
Posted on Aug 2, 2025 · #llm #agents #month of ai bugs
Today we cover ChatGPT Codex as part of the Month of AI Bugs series.

ChatGPT Codex is a cloud-based software engineering agent that answers codebase questions, executes code, and drafts pull requests."

embracethered.com/blog/posts/2

Let's have fun with #AI

Embrace The Red · Turning ChatGPT Codex Into A ZombAI Agent

"Enquanto a OpenAI se esforçava para desindexar conversas do Google hoje, eles esqueceram a regra mais básica da internet - nada realmente desaparece. Mais de 100.000 bate-papos ChatGPT ainda estão em Archive.org, embora com uma reviravolta. Os bate-papos não são apenas links ou fragmentos. São conversas completas, congeladas no tempo, contendo "confissões" semelhantes que expusemos ontem. Os usuários compartilharam esses bate-papos publicamente - não por padrão, mas apenas clicando em Compartilhar.

Entre as conversas recém-descobertas, os padrões emergem de nossas descobertas originais. A maioria dos bate-papos compartilhados é inofensiva, mas alguns deles não são. Aqui estão três exemplos do banco de dados archive.org (veja a nota abaixo por que não mencionamos nomes):

#AI #IA #OpenAI #ChatGPT #ProteçãodeDados #Privacidade #Chatbots"

@tecnologia @privacidade

@remixtures tldr.nettime.org/@remixtures/1

tldr.nettime · Miguel Afonso Caetano (@remixtures@tldr.nettime.org) · https://www.digitaldigging.org/p/chatgpt-confessions-gone-they-are

"While OpenAI scrambled to de-index conversations from Google today, they forgot the internet's most basic rule—nothing truly disappears. Over 100.000 ChatGPT chats are still in Archive.org, although with a twist. The chats aren't just links or fragments. They're complete conversations, frozen in time, containing similar “confessions” we exposed yesterday. Users shared these chats publicly - not by default, but only by clicking Share.

Among the freshly uncovered conversations, patterns emerge from our original findings. Most of the shared chats are harmless, but some of them are not. Here are three examples from the archive.org database (see note below why we don’t mention names):"

digitaldigging.org/p/chatgpt-c

Digital Digging with Henk van Ess · ChatGPT Confessions gone? They are not! · By Henk van Ess
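The claim that the shared chats survive on Archive.org can be checked against the Wayback Machine's public CDX index, which lists captured URLs under a given prefix. A minimal sketch, assuming only the documented CDX query parameters; the share-link prefix and result limit below are illustrative choices, not taken from the article.

```python
# Minimal sketch: query the Wayback Machine CDX API for archived snapshots
# of ChatGPT share links. The URL prefix and limit are illustrative, not
# values taken from the article.
import json
import urllib.parse
import urllib.request

CDX_ENDPOINT = "https://web.archive.org/cdx/search/cdx"


def archived_share_links(prefix: str = "chatgpt.com/share/", limit: int = 20):
    """Return (timestamp, original URL) pairs for captures under a URL prefix."""
    params = urllib.parse.urlencode({
        "url": prefix,
        "matchType": "prefix",       # match everything under the prefix
        "output": "json",            # first row of the response is a header
        "fl": "timestamp,original",  # only the fields we need
        "collapse": "urlkey",        # one row per distinct URL
        "limit": str(limit),
    })
    with urllib.request.urlopen(f"{CDX_ENDPOINT}?{params}") as resp:
        rows = json.loads(resp.read().decode("utf-8"))
    return rows[1:]  # skip the header row


if __name__ == "__main__":
    for timestamp, original in archived_share_links():
        print(timestamp, original)
```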

I was on one of Blind Dave's short spinoff podcasts yesterday, talking about AI risks - I have no idea how he edits things so quickly, but it's out today, if anyone is interested!

It covers therapists, military uses, ethics, romance scams, all sorts - and the podcast I couldn't remember was "Cautionary Tales" from @TimHarford talking to Michael Lewis. Link in comment.

youtube.com/watch?v=Xyj9dEe_SR4

#TagCloud time: #Ai #ChatGPT #CharacterAI #Roleplaying #Ethics #Podcast #Blind #Therapy #Romance #Scam #Dating #Military #Drone #YouTube #Tech #Risk #Security #CyberSecurity #News #CustomerService #Sludge