Despite all of this progress, one critical piece is still missing: business models that make economic sense and allow AI developers to sustain themselves long enough to deliver lasting impact.
“It’s definitely working, except for the part where it does stuff for money.”
I am one of Lemmy’s rare defenders of the underlying technology, so allow me to say: yikes.
The business model for LLMs will be trying to beat the convenience, cost, and security of running the damn thing yourself. So far you idiots have burned money giving them away, and those free models aren’t going anywhere when you start to crash and burn. People will forever after have unfettered access to a program that half-asses anything you ask it for. The big-boy versions your datacenters run have fractionally more ass. They’re not fundamentally better.
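For the sake of argument, here’s what “running the damn thing yourself” looks like: a minimal sketch using llama-cpp-python, where the model path and quantization are placeholders for whatever small instruct-tuned GGUF you happen to have lying around.

```python
# A minimal sketch of local inference with llama-cpp-python.
# The model file is a placeholder; any quantized GGUF you can download works.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List the US states without an E in their names."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```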
Frankly, teaching people to use the small-scale versions properly will work out better than adding another billion weights. These things struggle to remember what they’re doing when simply listing US states without an E in them. A semantic .h file describing other files must be more reliable than ingesting a big-ass project and hoping the context never gets lost.
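To make the “semantic .h file” idea concrete, here’s a rough sketch, assuming a Python-only project under a placeholder `src` directory: pull out just the class and function signatures plus the first line of each docstring, and paste that compact summary into the prompt instead of the whole source tree.

```python
# A sketch of a "semantic header" for a codebase: signatures and docstrings only,
# so the model gets a map of the project without eating the whole context window.
import ast
from pathlib import Path

def summarize(path: Path) -> str:
    """Return a header-style summary of one Python file: defs, classes, docstring gists."""
    tree = ast.parse(path.read_text())
    lines = [f"# {path}"]
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            doc = ast.get_docstring(node) or ""
            first = doc.splitlines()[0] if doc else ""
            lines.append(f"{type(node).__name__} {node.name}: {first}")
    return "\n".join(lines)

# Concatenate the summaries; this stays small even for a big-ass project.
project_header = "\n\n".join(
    summarize(p) for p in sorted(Path("src").rglob("*.py"))  # "src" is a placeholder
)
print(project_header)  # hand the model this instead of the raw files
```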
We’re gonna end up personifying that functionality just so people forgive it for being stupid. The sycophantic butler routine is too Asimov, so people are shocked when a computer doesn’t do exactly what it’s told. The button in your system tray needs to pop up a little guy who you know is only trying his best. You can absolutely get useful work out of him - or out of fifteen of him at the same time, doing different stuff. But if a cat in a necktie says ‘here you go,’ you’re gonna skim for problems. And if the loading animation on his forehead says it’s not his turn on the brain cell, you’ll avoid giving him your credit card.