

My name is Rocco Pa’perclip Basilico Yudkowski Way…
It is kinda fun to think about the counterfactual world where this shit had worked as expected on the first try but for obvious reasons didn’t hold up in copyright court, making the entire catalogue used for training functionally public domain.
Don’t mock me dammit, I’m allowed to dream.
Technocapital identifying a real problem only to frame it in a way that puts the blame on users/workers rather than themselves? Shocking development. Usually they can’t come up with half that good a name for it.
This is a fascinating case study in the democracy vs technocrat debate, where the educated, well-paid consultant class is too far up their own ideological asshole to recognize that their GenAI chatbot doesn’t work as advertised, and that lack of sanity is poisoning the well and killing any discussion of how to put machine learning to good use. But rather than back off the chatbot, even just to “let it cook” (read: let Saltman and friends continue to play with it in arenas that don’t have geopolitically relevant stakes), they’re doubling down to make this a pure messaging issue, which I expect is only going to make the problem worse. Like, people aren’t unaware of GenAI at this point, and education on prompting it right is just a distraction from the fundamental problem.
Yeah, sharing an acronym with Traumatic Brain Injury was probably a prophetic coincidence rather than unfortunate one.
RAIDEN!
Ed: I’m sure there’s an MGS2 quote I can’t think of that would make this actually funny, but here we are.
Now I want to mod an item that reverts a boss fight to phase 1.
“Have a snickers, Malenia, you turn into a real Goddess of Rot when you’re hungry”
“Two friends short of a podcast” is the single most powerful burn I’ve seen in quite some time.
It’s only fair that his audience shouldn’t be the only ones suffering when that happens.
Especially considering that the whole “your AI will negotiate with theirs” speaks to the kind of algorithmic price discrimination that you see in Uber and the like, where the system is designed specifically to maximize how much you’re willing and able to pay for a ride and minimize how much the driver is willing to accept for it. Hardcore techno libertarians want nothing more than to make it impossible for anyone to make meaningful informed choices about their lives that might prevent them from being taken advantage of by hardcore techno libertarians.
Word problems referring to aliens from cartoons. “Bobby on planet Glorxon has four strawberries, which are similar to but distinct from earth strawberries, and Kleelax has seven…”
I also wonder if you could create context breaks, or if they’ve hit a point where that isn’t as much of a factor. “A train leaves Athens, KY traveling at 45 mph. Another train leaves Paris, FL traveling at 50 mph. If the track is 500 miles long, how long is a train trip from Athens to Paris?”
I mean, I think the relevant difference is that rather than trying to argue against a weak opponent they’re trying to validate their feelings of victimization, superiority, and/or outrage by imagining an appropriate foil.
It’s a straw man that exists to be effectively venerated rather than torn down.
I don’t know if it quite applies here since all the money is openly flowing to nVidia in exchange for very real silicon, but I’m partial to “the bezzle” - referring to the duration of time between a con artist taking your money and you realizing the money is gone. Some cons will stretch the bezzle out as long as possible by lying and faking returns to try and get the victims to give them even more money, but despite how excited the victims may be about this period the money is in fact already gone.
I mean if it gets too hot he could try the traditional fiber arts dodge for internet hate and fake his own death.
“As a scientist…” please stop giving the world more reasons to stuff nerds in lockers.
I would actually contend that crypto and the metaverse both qualify as early precursors to the modern AI post-economic bubble. In both cases you had a (heavily politicized) story about technology attract investment money well in excess of anyone actually wanting the product. But crypto ran into a problem where the available products were fundamentally well-understood forms of financial fraud, significantly increasing the risk because of the inherent instability of that (even absent regulatory pressure the bezzle eventually runs out and everyone realizes that all the money in those ‘returns’ never existed). And the VR technology was embarrassingly unable to match the story that the pushers were trying to tell, to the point where the next question, whether anyone actually wanted this, never came up.
GenAI is somewhat unique in that the LLMs can do something impressive in mimicking the form of actual language or photography or whatever they were trained on. And on top of that, you can get impressively close to doing a lot of useful things with that, but not close enough to trust it. That’s the part that limits genAI to being a neat party trick, generating bulk spam text that nobody was going to read anyways, and little more. The economics don’t work out when the skilled person you’d need to hire to do the work spends just as much time double-checking the untrustworthy robot output, and once new investment capital stops subsidizing their operating costs I expect this to become obvious, though with a lot of human suffering in the debris. The challenge of “is this useful enough to justify paying its costs” is the actual stumbling block here. Older bubbles were either blatantly absurd (tulips, crypto) or overinvestment as people tried to get their slice of a pie that anyone with eyes could see was going to be huge (railroads, dotcom). The combination of purely synthetic demand with an actual product is something I can’t think of other examples of, at this scale.
Thank you for sharing this bit of internet deep lore. Now I just need to find the four hour youtube video of some ex-GI gun nut explaining in exhausting detail exactly how bullshit every detail of those stories is because whatever the fuck is going on there is fascinating.
Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.
Rationalists found out that human behavior didn’t match their ideological model, then rather than abandon their model or change their ideology decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without them destroying all life in the universe.
I don’t even know the degree to which that’s the fault of the old hackers, though. I think we need to acknowledge the degree to which a CS degree became a good default like an MBA before it, only instead of “business” it was pitched as a ticket to a well-paying job in “computer”. I would argue that a large number of those graduates were never going to be particularly interested in the craft of programming beyond what was absolutely necessary to pull a paycheck.
I need you to understand how nearly I added myself to even more watch lists because of that analogy. Avalanche!