

Yes, but the article’s not actually about that. It’s about Microsoft returning to the same datacenter-building schedule it had a decade ago. Datacenters have a lead time of about 3-5 years depending on what’s inside them and where they’re located, so what we’re actually seeing is Microsoft projecting a relative reduction in overall usage several years out. Note that among all the canceled leases and lapsed letters of intent, Microsoft isn’t walking back their two-decade nuclear-power deal with Constellation; they’re not destroying or reducing any existing capacity, just planning to build less. At risk of quoting Bloomberg:
After a frantic expansion to support OpenAI and other artificial intelligence projects, [Microsoft] expects spending to shift from new construction to fitting out data centers with servers and other equipment.
To the extent that the bubble is popping, Microsoft and other datacenter owners have to guess half a decade in advance when the pop will come, and if you take them at their word (that is, if we assume that they canceled these contracts with perfect foresight) then the bubble must have already popped in 2023-2024, and the market is experiencing coyote time because…? More likely, this is fallout from their ongoing breakup with OpenAI, who almost certainly begged Microsoft for so much compute (and definitely begged for too many Nvidia GPUs!) that Microsoft had to adjust their datacenter plans. The bubble’s not done until OpenAI has exhausted all possible funding, say in late 2025 or early 2026 when SoftBank and the Saudis realize that they’ve made a hilarious mistake.
We’ve discussed this previously on awful.systems, both the value of nuclear-energy contracts and Microsoft’s withdrawal of its letters of intent.
In practice, the behaviors that chatbots learn in post-training are FUD and weasel-wording; they don’t appear to unlearn facts so much as learn enough additional nuance to bury them. The bots perform measurably worse on standardized tests about the natural world after post-training; there are quantitative downsides to forcing them to adopt any particular etiquette, including speaking like a chud.
The problem is mostly that the uninformed public will think that the chatbot is knowledgeable and well-spoken because it rattles off the same weakly worded hedges as right-wing pundits, and the problem is addressed by the same improvements in education required to counter those pundits.
Answering your question directly: no, slop machines can’t be countered with more slop machines without drowning us all in slop. A more direct approach will be required.