

New Atlantic article regarding AI, titled “AI Is a Mass-Delusion Event”. It’s primarily about the author’s feelings of confusion and anxiety about the general clusterfuck that is the bubble.
They were also more media savvy, in that they didn’t just pollute the info space with their ideas through blog posts - they had airtime rented from a major radio station within Russia, broadcasting both within the freshly former Soviet Union and into Japan from Vladivostok (which was a much bigger deal in the ’90s than it is today).
It’s pretty telling about Our Good Friends’ media savviness that it took an all-consuming AI bubble and plenty of help from friends in high places to break into the mainstream.
Things will be better in the medium term as the surviving companies realise that workers do things and AI doesn’t. But the short term will be a bit of an arse.
Short-term’s definitely gonna be a nightmare and a half, long-term’s probably gonna be better overall, but the medium term could go either way.
On the one hand, the AI bubble’s burst could be enough to force CEOs and investors to see reality - they aren’t gonna suffer any sort of material harm from this bubble, but seeing incontrovertible evidence that AI will make them zero money, rather than all of the money, should be enough to get them to finally fucking stop.
On the other hand, AI has not only horrendous amounts of money behind it, but horrendous amounts of political capital - as far as the CEOs and investors of the world see it, AI is their opportunity to destroy labour once and for all, and they will burn the world to the ground if it means their dystopian dreams can be realised.
Xe Iaso’s chimed in on the GPT-5 fallout, giving her thoughts on chatbots’ use as assistants/therapists.
New piece from the Financial Times: Tech utterly dominates markets. Should we worry?
Pulling out a specific point, the article’s noted how market concentration is higher now than it was in the dot-com bubble back in 2000:
If you want my overall take, I’m with Zitron - this is quite a narrative shift.
Good catch, I’ll quickly update my post now.
I decided to look deeper into that subreddit, and I found the most utterly cursed sentence I’ve read all week (coming from the aptly-titled “Why 99% of YouTubers Fail (And How to Be the 1% That Doesn’t)”):
If that doesn’t sum up everything wrong with AI sloppers in a single sentence, I don’t know what does.
EDIT: Incorrectly claimed it came from “My Unethical Strategy to Hit 4000 Hours Watch Time in 40 Days” - fixed that now.
Ed Zitron has chimed in on OpenAI’s woes, directly comparing their situation to a dying MMO:
Zitron is in a pretty good position to make this comparison - he worked as a games journalist in the '00s before pivoting to working in public relations.
In other news, Politico’s management has gone on record stating their AI tools aren’t being held to newsroom editorial standards, in an arbitration hearing trying to resolve a major union dispute.
This is some primo Pivot to AI material, if I do say so myself.
New piece from Brian Merchant, about the growing power the AI bubble’s granted Microsoft, Google, and Amazon: The AI boom is fueling a land grab for Big Cloud
Another day, another case of “personal responsibility” used to shift blame for systemic issues, and scapegoat the masses for problems bad actors actively imposed on them.
It’s not like we’ve heard that exact same song and dance a million times before, I’m sure the public hasn’t gotten sick and tired of it by this point.
Probable hot take: this shit’s probably hampering people’s efforts to overcome self-serving bias, as well - taking responsibility for your own faults is hard enough in a vacuum, and it’s likely even harder when bad actors act with impunity by shifting the blame to you.
I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?
In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can’t run Doom).
In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it released in 2020.
Ed Zitron’s given his thoughts on GPT-5’s dumpster fire launch:
Personally, I can see his point - the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment, and the “deaths” of their AI waifus/therapists have killed whatever dopamine delivery mechanisms they’d set up for themselves.
Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.
Me, two months ago
Looks like I was on the money - Reddit’s begun limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit’s pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon - Reddit’s given them a pretty solid excuse to use.
You’re dead right on that.
Part of me suspects STEM in general (primarily tech - the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascist at worst.
Iris van Rooij found AI slop in the wild (identifying it as such by how it mangled a word’s definition) and went on to find multiple other cases. She’s written a blog post about this, titled “AI slop and the destruction of knowledge”.
New Blood in the Machine about GPT-5’s dumpster fire launch: GPT-5 is a joke. Will it matter?
I wrote yesterday about red-team cybersecurity and how the attack testing teams don’t see a lot of use for AI in their jobs. But maybe the security guys should be getting into AI. Because all these agents are a hilariously vulnerable attack surface that will reap rich rewards for a long while to come.
Hey, look on the bright side, David - the user is no longer the weakest part of a cybersecurity system, so they won’t face as many social engineering attempts.
Seriously, though, I fully expect someone’s gonna pull off a major breach through a chatbot sooner or later. We’re probably overdue for an ILOVEYOU-level disaster.
Ed Zitron’s planning to hold AI boosters to account: