

By refusing to focus on a single field at a time, AI companies really did make it impossible to take advantage of Gell-Mann amnesia.
There’s inarguably an organizational culture that is fundamentally uninterested in the things the organization is supposed to actually do. Even if they aren’t explicitly planning to end Social Security as a concept by wrecking the technical infrastructure it relies on, they’re almost comedically apathetic about whether or not the project succeeds. At the top this makes sense, because politicians can spin a bad project into everyone else’s fault, but the fact that they’re able to find programmers to work under those conditions makes me weep for the future of the industry. Even simple mercenaries should be able to smell that this project is going to fail and will look awful on their resumes, but I guess these yahoos are expecting to pivot into politics or whatever administration position they can bargain for with whoever succeeds Trump.
That’s fascinating, actually. Like, it seems like it shouldn’t be possible to create this level of grammatically correct text without understanding the words you’re using, and yet even immediately after defining “unsupervised” correctly the system still (supposedly) sets about applying a baffling number of alternative constraints that it seems to pull out of nowhere.
OR alternatively, despite letting it “cook” for longer and pregenerating a significant volume of its own additional context before the final answer, the system is still, at the end of the day, an assembly of stochastic parrots who don’t actually understand anything.
I don’t think that the actual performance here is as important as the fact that it’s clearly not meaningfully “reasoning” at all. This isn’t a failure mode that happens if it’s actually thinking through the problem in front of it and understanding the request. It’s a failure mode that comes from pattern matching without actual reasoning.
write it out in ASCII
My dude, what do you think ASCII is? Assuming we’re using standard internet interfaces here and the request is coming in as UTF-8 encoded English text, it is already being written out in ASCII.
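If you want to check that for yourself, here’s a minimal Python sketch; the only assumption is that the text is plain English with nothing outside the 7-bit range:

```python
# UTF-8 is a strict superset of ASCII: code points 0-127 encode to the
# exact same single bytes, so plain English text is byte-for-byte
# identical under either encoding.
prompt = "write it out in ASCII"

utf8_bytes = prompt.encode("utf-8")
ascii_bytes = prompt.encode("ascii")  # would raise UnicodeEncodeError on non-ASCII text

assert utf8_bytes == ascii_bytes
print(utf8_bytes)  # b'write it out in ASCII' -- same bytes either way
```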
Sneers aside, given that the supposed capability here is examining a text prompt and reasoning through the relevant information to provide a solution in the form of a text response, this kind of test is, if anything, rigged in favor of the AI compared to similar versions that add more steps to the task, like OCR or other forms of image parsing.
It also speaks to a difference in how AI pattern recognition works compared to the human version. For a sufficiently well-known pattern like the form of this river-crossing puzzle, it’s the changes and exceptions that jump out to a human. This feels almost like giving someone a picture of the Mona Lisa with aviators on; the model recognizes that it’s 99% of the Mona Lisa and goes from there, rather than recognizing that the changes from that base case are significant and intentional variation rather than either a totally new thing or a ‘corrupted’ version of the original.
It’s also the sound it makes when I drop-kick their goddamned GPU clusters into the fuckin ocean. Thankfully I haven’t run into one of these yet, but given how much of the domestic job market appears to be devoted towards not hiring people while still listing an opening it feels like I’m going to.
On a related note, if anyone in the Seattle area is aware of an opening for a network engineer or sysadmin please PM me.
I feel like cult orthodoxy probably accounts for most of it. The fact that they put serious thought into how to handle a sentient AI wanting to post on their forums does also suggest that they’re taking the AGI “possibility” far more seriously than any of the companies that are using it to fill out marketing copy and bad news cycles. I for one find this deeply sad.
Edit to expand: if it wasn’t actively lighting the world on fire, I would think there’s something perversely admirable about trying to make sure the angels dancing on the head of a pin have civil rights. As it is, they’re close enough to actual power and influence that they’re enabling the stripping of rights and dignity from actual human people instead of staying in their little bubble of sci-fi and philosophy nerds.
Only as a subset of the broader problem. What if, instead of creating societies in which everyone can live and prosper, we created people who can live and prosper in the late capitalist hell we’ve already created! And what if we embraced the obvious feedback loop that results and called the trillions of disposable wireheaded drones we’ve created a utopia because of how high they’ll be able to push various meaningless numbers!
I read through a couple of his fiction pieces and I think we can safely disregard him. Whatever insights he may have into technology and authoritarianism appear to be pretty badly corrupted by a predictable strain of antiwokism. It’s not offensive in anything I read - he’s not out here whining about not being allowed to use slurs - but he seems sufficiently invested in how authoritarians might use the concerns of marginalized people as a cudgel that he completely misses how in reality marginalized people are more useful to authoritarian structures as a target than a weapon.
The whole CoreWeave affair (and the AI business in general) increasingly reminds me of this potion shop, only with literally everyone playing the role of the idiot gnomes.
I gave him enough of a chance to prove his views had changed that I went and read Hanania’s actual feed. Pinned tweet is bitching about liberals canceling people. Just a couple days ago he was on a podcast bitching about trans people and talking about how it’s great to be a young broke (asian) woman because you can be exploited by rich old (white) men.
So yeah he’s totally not a piece of shit anymore. Don’t even worry about it.
Can’t wait until someone tries to Samizdat their AI slop to get around this kind of test.
Just think of how much more profit you could make to address environmental issues by forgoing basic safety and ecological protections. Who needs blowout preventers anyway?
Nah, to keep with the times it should be a matte black Tesla Model 3 with the Sith Empire insignia on top and a horn that plays the Imperial March.
Script kiddies at least have the potential to learn what they’re doing and become proper hackers. Vibe coders are like middle management; no actual interest in learning to solve the problem, just trying to find the cheapest thing to point at and say “fetch.”
There’s a headline in there somewhere. Vibe Coders: stop trying to make fetch happen
Get David Graeber’s name out ya damn mouth. The point of Bullshit Jobs wasn’t that these roles weren’t necessary to the functioning of the company; it was that they were socially superfluous. As in the entire telemarketing industry, which is both reasonably profitable and as well-run as any other, but whose nonexistence would make the world objectively better.
The idea was not that “these people should be fired to streamline efficiency of the capitalist orphan-threshing machine”.
This is how you know that most of the people working in AI don’t think AGI is actually going to happen. If there was any chance of these models somehow gaining a meaningful internal experience then making this their whole life and identity would be some kind of war crime.
New watermark technology interacts with increasingly widespread training-data poisoning efforts so that if you try to have a commercial model remove it, the picture is replaced entirely with dickbutt. Actually, can we just infect all AI models so that any output contains a hidden dickbutt?
I’m reminded of my previous comment back on an unrelated subreddit talking about the Eye of Argon. Obviously that wasn’t as structurally insane as My Immortal, but I think the same principle holds to a degree:
“With a decent editor and several further drafts it could have been a solid, fun, entirely forgettable Conan pastiche. Instead, it’s the Eye of Argon.”
There’s a particular failure mode at play here that speaks to incompetent accounting on top of everything else. Like, without autocontouring, how many additional radiologists would need to magically be spawned into existence and get salaries, benefits, pensions, etc. in order to reduce overall wait times by that amount? Because in reality that’s the money being left on the table; the fact that it’s being made up in shitty service rather than actual money shouldn’t meaningfully affect the calculus there.
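To put a number on it, here’s a back-of-envelope sketch; every figure in it is invented purely for illustration:

```python
# Back-of-envelope: value the radiologist-hours that autocontouring
# replaces at a fully loaded staffing cost. All numbers are hypothetical.
hours_saved_per_scan = 0.5            # contouring time saved per scan (assumed)
scans_per_year = 20_000               # annual scan volume (assumed)
clinical_hours_per_fte = 1_600        # working hours per radiologist per year (assumed)
fully_loaded_cost_per_fte = 450_000   # salary + benefits + pension (assumed)

fte_equivalent = hours_saved_per_scan * scans_per_year / clinical_hours_per_fte
implied_value = fte_equivalent * fully_loaded_cost_per_fte

print(f"~{fte_equivalent:.1f} FTE radiologists' worth of labor")
print(f"~${implied_value:,.0f}/year of value left on the table")
```

Whether the hospital pays that in salaries or eats it as longer wait times, it’s the same line on the ledger.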