

I’ve created a new godlike AI model. It’s the Eliziest yet.
The thing that kills me about this is that, speaking as a tragically monolingual person, the MTPE work doesn’t sound like it’s actually less skilled than directly translating from scratch. Like, the skill was never in being able to type fast enough or read faster or whatever, it was in the difficult process of considering the meaning of what was being said and adapting it to another language and culture. If you’re editing chatbot output you’re still doing all of that skilled work, but being asked to accept half as much money for it because a robot made a first attempt.
In terms of that old joke about auto mechanics, AI is automating the part where you smack the engine in the right place, but you still need to know where to hit it in order to evaluate whether it did a good job.
I get the idea they’re going for: that coding ability is a leading indicator for progress towards AGI. But even if you ignore how nonsensical the overall graph is, the argument itself still begs the question of how much actual progress and capability there is to write code, rather than to spit out code-shaped blocks of text that happen to compile successfully.
NANDA claims that agentic AI — or the thing of that name that they’re selling — will definitely learn real good without training completely afresh.
Given their web3 roots, I feel like we should point out that blockchain storage systems are famously cheap and efficient to update and modify, so this claim actually seems perfectly reasonable to me /s.
Anyone who said this about their product would almost certainly be lying, but these guys are extra lying.
And once it does they’ll quietly stop talking about it for a while to “focus on the human stories of those affected” or whatever until the nostalgic retrospectives can start along with the next thing.
Oxford Economist in the NYT says that AI is going to kill cities if they don’t prepare for change. (Original, paywalled)
I feel like this is at most half the picture. The analogy to new manufacturing technologies in the 70s is apt in some ways, and the threat of this specific kind of economic disruption hollowing out entire communities is very real. But, as orthodox economists so frequently do, his analysis only hints at some of the political factors in the relevant decisions, factors that are, if anything, more important than technological change alone.
In particular, he only makes passing reference to the Detroit and Pittsburgh industrial centers being “sprawling, unionized compounds” (emphasis added). In doing so he briefly highlights how the changes that technology enabled served to disempower labor. Smaller and more distributed factories can’t unionize as effectively, and that fragmentation empowers firms to reduce the wages and benefits of the positions they offer even as they hire people in the new areas. For a unionized auto worker in Detroit, even if the old factories had been replaced with new and more efficient ones, the kind of job they had previously worked, one that had allowed them to support themselves and their families at a certain quality of life, was still gone.
This fits into our AI skepticism rather neatly, because if the political dimension of disempowering labor is what matters then it becomes largely irrelevant whether LLM-based “AI” products and services can actually perform as advertised. Rather than being the central cause of this disruption it becomes the excuse, and so it just has to be good enough to create the narrative. It doesn’t need to actually be able to write code like a junior developer in order to change the senior developer’s job to focus on editing and correcting code-shaped blocks of tokens checked in by the hallucination machine. This also means that it’s not going to “snap back” when the AI bubble pops because the impacts on labor will have already happened, any more than it was possible to bring back the same kinds of manufacturing jobs that built families in the postwar era once they had been displaced in the 70s and 80s.
Even if they aren’t actively relying on each other here, I would assume that we’re reaching a stage where all of the competing LLMs are using basically the entire Internet as their training data, and while there is going to be some difference based on the reinforcement learning process, there’s still going to be a lot of convergence there.
I found this article in Fortune that similarly says 95% of GenAI pilots at companies fail to have a positive impact on the bottom line. They spend a lot of ink trying to sidestep the obvious explanation in favor of talking about the ways people are probably just prompting it wrong, and I couldn’t be bothered to fill out the form asking MIT’s group for access to the underlying report.
Okay so I know GPT-5 had a bad launch and has been getting raked over the coals, but AGI is totally still on, guys!
Why? Because trust me it’s definitely getting better behind the scenes in ways that we can’t see. Also China is still scary and we need to make sure we make the AI God that will kill us all before China does because reasons.
Also, despite talking about how much of the lack of progress is due to the consumer model being a cost-saving measure, there’s no reference to the work of folks like Ed Zitron on how unprofitable these models are, much less the recent discussions of whether GPT-5 as a whole is actually cheaper to operate than earlier models, given the changes it necessitates in caching.
In related news I’ve been getting podcast ads for Anthropic touting Claude’s emotional intelligence and value in working through life’s challenges and listening to your relationship issues.
They’re not explicitly saying that their chatbot is a therapist, but they’re getting about as close as the law would allow, I’m sure.
When are they going to learn that it’s all about the alien tech med beds that I definitely have in my basement and can sell you 14 seconds on for just $6660?
Looking to exploit citogenesis for political gain.
[…] it actually has surprisingly little to do with any of the intellectual lineages that its proponents claim to subscribe to (Marxism, poststructuralism, feminism, conflict studies, etc.) but is a shockingly pervasive influence across modern culture to a greater degree than even most people who complain about it realize.
I mean, when describing TESCREAL Torres never had to argue that its adherents were lying or incorrect about their own ideas. It seems like whenever someone tries this kind of backlash they always have to add in a whole mess of additional layers that are somehow tied to what their interlocutors really believe.
I’m reminded, ironically, of Scott’s (imo very strong) argument against the NRx category of “demotist” states. It’s fundamentally dishonest to create a category that ties together both the innocuous or positive things your opponents actually believe and some obnoxious and terrible stuff, and then claim that the same criticisms apply to all of them.
I’m a little surprised there hasn’t been more direct interaction between my “watching the far-right like heavily armed chimpanzees in a zoo” podcast circles and our techtakes sneerspace. Zitron’s work on Better Offline is great, obviously, but I’ve been listening through QAA, for example, and their discussions of AI and its implications could probably benefit from a better technical grounding.
You love to see it, though.
Yeah. I think there’s definitely something interesting here, but it’s mostly in how badly compromised the final product ends up being in order to support the AI tools.
That’s how I remember it too. Also the context about conserving N95 masks always feels like it gets lost. Like, predictably so, and I think there’s definitely room to criticize the CDC’s messaging and handling there, but the actual facts here aren’t as absurd as the current fight would imply. The argument was:
I think later research cast some doubt on point 1, but 2-4 are still pretty solid given the circumstances that we (collectively) found ourselves in.
The on-camera duo are exempt for obvious reasons, but they’ve definitely hit at least one of their mods. Before The Wheel was implemented I seem to remember they even specifically targeted them sometimes for the joke.
I’m reminded of the comedy/gaming stream that I watch that opens every episode with banning a random member of chat based on a spin of the wheel. It certainly lends the community a certain flavor, even if it is more “jingly keys” rather than “strong community.”
Promptfondlers are tragically close to the point. Like I was saying yesterday about translators, the future of programming in AI hell is going to be senior developers using their knowledge and experience to fix the bullshit that the LLM outputs. What’s going to happen when they retire and there’s nobody with that knowledge and experience to take their place? I’ll have sold off my shares by then, I’m sure.