Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
A nonprofit that serves teenagers complains about getting mugged by Salesforce. dang dons his shining armor and dashes forth to save the uwu smol bean megacorp from the peasant mob.
probably offtopic? bad tech killing people, no need for imaginary robots, just greed https://xcancel.com/hntrbrkmedia/status/1968661471056769252#m https://hntrbrk.com/dexcom/
This one’s been making the rounds, so people have probably already seen it. But just in case…
Meta did a live “demo” of their new AI (recording).
Recently thought about how this one xkcd has probably done more recruiting for the rat community per unit effort spent making it than that 700k word salad.
Where are we on xkcd? I haven’t looked at it regularly for over a decade now. Nothing personally against the author or comic itself, I just completely deconverted from consuming nerd celebrity content at that point in the past.
I still read xkcd regularly and think it’s pretty good. I don’t think “we” as a community need to have a particular opinion of Randall Munroe or his work but personally I think he seems alright and I enjoy the things he makes.
Seems like a stretch to assume that comic does anything to recruit rationalists. If you’re not already in the rat pipeline it’s just a pretty good joke about probability and if you are, it’s still a better example of Bayesian reasoning than whatever the rats pretend to do.
I follow it on RSS; it’s sometimes funny but not required reading.
I don’t think you can blame the comic’s author for people on HN and elsewhere passing around references to specific comics to make their points.
As to the specific one mentioned here, I don’t remember reading it before.
edit to add: sometimes it’s obvious the entire joke is in the alt-text, like so: https://xkcd.com/3143/
He’s kind of past his prime, I think, the humor becoming alternately a bit too esoteric or a bit too obvious, and kind of stale in general. Nothing particularly objectionable about the author comes to mind otherwise.
OT: Baldur Bjarnason has lamented how his webdev feed has turned to complete shit:
Between the direct and indirect support of fascism and the uncritical embrace of LLMs, the overwhelming majority of the dev sites in my feed reader have turned to an undifferentiated puddle of nonsense…
…Two years ago these feeds (I never subscribed to any of the React grifters) were all largely posts on concrete problem-solving and, y’know, useful stuff. Useful dev discourse has collapsed into a tiny handful of blogs.
Many of our favorite people abuse meth and meth-adjacent substances. In the long term, this behavior visibly degrades dental health.
Therefore, it won’t be long until we witness actual real-life cases of smartmouth.
Have they ever discussed how much they take on average?
Creative applications for stimulants come up pretty frequently in techie discussions of nootropics and “stacks” thereof.
This is about as techie as I go as far as internet communities haha.
Accidentally posted in an old thread:
Math competitions need to start assigning problems that require counting the letters in fruit names.
Word problems referring to aliens from cartoons. “Bobbby on planet Glorxon has four Strawberies, which are similar to but distinct from earth strawberries, and Kleelax has seven…”
I also wonder if you could create context breaks, or if they’ve hit a point where that isn’t as much of a factor. “A train leaves Athens, KY traveling at 45 mph. Another train leaves Paris, FL traveling at 50 mph. If the track is 500 miles long, how long is a train trip from Athens to Paris?”
LLM’s ability to fake solving word problems hinges on being able to crib the answer, so using aliens from cartoons (or automatically-generating random names for objects/characters) will prove highly effective until AI corps can get the answers into their training data.
As for context breaks, those will remain highly effective against LLMs pretty much forever - successfully working around a context break requires reasoning, which LLMs are categorically incapable of doing.
Constantly and subtly twiddling with questions (ideally through automatic means) should prove effective as well - Apple got “reasoning” text extruders to flounder and fail at simple logic puzzles through such a method.
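A minimal sketch of what that automatic twiddling could look like, randomizing made-up names and numbers so the exact wording (and answer) can’t just be recalled from training data. Everything here, templates and names included, is invented for illustration and not taken from any actual benchmark:

```python
import random

# Toy generator: random made-up names and numbers, so the exact wording of a
# problem (and its answer) can't simply be recalled from memorized text.
# All names, planets, and fruit below are invented for illustration.
NAMES = ["Bobbby", "Kleelax", "Zorp", "Quindle"]
FRUITS = ["glorxberries", "snarfruit", "plomquats"]

def make_problem(rng: random.Random) -> tuple[str, int]:
    a, b = rng.sample(NAMES, 2)
    fruit = rng.choice(FRUITS)
    x, y = rng.randint(2, 30), rng.randint(2, 30)
    question = (
        f"{a} on planet Glorxon has {x} {fruit}, which are similar to but "
        f"distinct from earth fruit, and {b} has {y}. How many do they have together?"
    )
    return question, x + y  # keep the answer key for grading

rng = random.Random()  # fresh randomness for every exam
question, answer = make_problem(rng)
print(question)
print("expected answer:", answer)
```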
Nice result, not too shocking after the IMO performance. A friend of mine told me that this particular competition is highly time-constrained for human competitors, i.e., questions aren’t impossibly difficult per se, but some are time sinks that you simply avoid to get points elsewhere. (5 hours on 12 Qs is tight…)
So when you are competing against a data center using a nuclear reactor vs 3 humans running on broccoli, the claims of superhuman performance definitely require an * attached to them.
Also accidentally posted in an old thread:
Hot take: If a text extruder’s winning gold medals at your contest, that’s not a sign the text extruder’s good at something, that’s a sign your contest is worthless for determining skill.
Pretty sure I’ve heard similar things about AI “art” vs artists as well.
LLVM is having a discussion on how to handle vibe coders attacking the project, and it’s causing Discourse™ on the red site.
Some of our younger readers might not be fully inoculated against high-control language. Fortunately, cult analyst Amanda Montell is on Crash Course this week with a 45-minute lecture introducing the dynamics of cult linguistics. For example, describing Synanon attack therapy, YouTube comments, doomscrolling, and maybe a familiar watering hole or two:
You know when people can’t stop posting negative or conspiratorial comments, thinking they’re calling someone out for some moral infraction, when really they’re just aiming for clout and maybe catharsis?
Cultish and Magical Overthinking are top shelf.
Angela Collier: Dyson spheres are a joke.
Spoiler: Turns out Dyson agreed.
david heinemeier hansson of ruby on rails fame decided to post a white supremacist screed with a side of transphobia, because now he doesn’t need to pretend anything anymore. it’s not surprising, he was heading this way for a while, but seeing the naked apologia for fascism is still shocking to me.
any reasonable open source project he participates in should immediately cut ties with the fucker. (i’m not holding my breath waiting, though.)
Urgh, I couldn’t even get through the whole article, it’s too disgusting. What a surprise that yet another “no politics at work”-guy turns out to support fascism!
@mawhrin just casually pitching “great replacement theory” there. What a little Nazi
just yesterday I saw this toot and now I know why
(I mean, they probably should’ve bounced the guy a decade ago, but definitely even more time for it now)
Sabine Hossenfelder claims she finally got cancelled, kind of: the Munich Center for Mathematical Philosophy cut ties with her.
Supposedly the MCMP thought publicly shitting on a paper for clicks on your very popular youtube channel was antideontological. Link goes to a reddit post in case you don’t want to give her views.
(sees YouTube video)
I ain’t [watchin] all that
I’m happy for u tho
Or sorry that happened
The commentator who thinks that USD 120k / year is a poor income for someone with a PhD makes me sad. That is what you earn if you become a professor of physics at a research university or get a good postdoc, but she aged out of all of those jobs and was stuck on poorly paid short-term contracts. There are lots of well-paid things that someone with a PhD in physics can do if she is willing to network and work for it, but she chose “rogue intellectual.”
A German term to look up is WissZeitVG but many academic jobs in many countries are only offered to people no more than x years after receiving their PhD (yep, this discriminates against women and the disabled and those with sick spouses or parents).
Was reading some science fiction from the 90s where the AI/AGI said ‘im an analog computer, just like you, im actually really bad at math.’ And I wonder how much damage one of these ideas did (the other being that there are computer types that can do more/different things. Not sure if analog turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science).
The idea being that a smart computer will be worse at math (which makes sense from a storytelling perspective as a writer, because a smart AI who can also do math super well is going to be hard to write), which now leads people who have read enough science fiction to see the machine that can’t count or run Doom and go ‘this is what they predicted!’.
Not a sneer just a random thought.
It’s because of research in the mid-80s leading to Moravec’s paradox — sensorimotor stuff takes more neurons than basic maths — and Sharp’s 1983 international release of the PC-1401, the first modern pocket computer, along with everybody suddenly learning about Piaget’s research with children. By the end of the 80s, AI research had accepted that the difficulty with basic arithmetic tasks must be in learning simple circuitry which expresses those tasks; actually performing the arithmetic is easy, but discovering a working circuit can’t be done without some sort of process that reduces intermediate circuits, so the effort must also be recursive in the sense that there are meta-circuits which also express those tasks. This seemed to line up with how children learn arithmetic: a child first learns to add by counting piles, then by abstracting to symbols, then by internalizing addition tables, and finally by specializing some brain structures to intuitively make leaps of addition. But sometimes these steps result in wrong intuition, and so a human-like brain-like computer will also sometimes be wrong about arithmetic too.
As usual, this is unproblematic when applied to understanding humans or computation, but not a reasonable basis for designing a product. Who would pay for wrong arithmetic when they could pay for a Sharp or Casio instead?
Bonus: Everybody in the industry knew how many transistors were in Casio and Sharp’s products. Moravec’s paradox can be numerically estimated. Moore’s law gives an estimate for how many transistors can be fit onto a chip. This is why so much sci-fi of the 80s and 90s suggests that we will have a robotics breakthrough around 2020. We didn’t actually get the breakthrough IMO; Moravec’s paradox is mostly about kinematics and moving a robot around in the world, and we are still using the same kinematic paradigms from the 80s. But this is why bros think that scaling is so important.
Could be, not sure the science fiction authors thought this much about it. (Or if the thing I was musing about is even real and not just a coincidence that I read a few works in which it is a thing). Certainly seems likely that this sort of science is where the idea came from.
Moravec’s Paradox
Had totally forgotten the name of that (being better at remembering random meme stuff but not the names of concepts like this, or a lot of names in general, is a curse, and also a source of imposter syndrome). But I recall having read the wikipedia page on that before. (Moravec also was the guy who thought of bush robots; wonder if that idea survived the more recent developments in nanotechnology.)
Not sure if analog turing machines provide any new capabilities that digital TMs don’t, but I leave that question for the smarter people in the subject of theoretical computer science
The general idea among computer scientists is that analog TMs are not more powerful than digital TMs. The supposed advantage of an analog machine is that it can store real numbers that vary continuously while digital machines can only store discrete values, and a real number would require an infinite number of discrete values to simulate. However, each real number “stored” by an analog machine can only be measured up to a certain precision, due to noise, quantum effects, or just the fact that nothing is infinitely precise in real life. So, in any reasonable model of analog machines, a digital machine can simulate an analog value just fine by using enough precision.
There aren’t many formal proofs that digital and analog are equivalent, since any such proof would depend on exactly how you model an analog machine. Here is one example.
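To make the precision point concrete, here’s a toy numerical sketch (not a proof, and the noise level and value are made up purely for illustration): if an analog quantity can only ever be read to within some noise level ε, then roughly log2(1/ε) bits of digital storage already reproduce it to within that same noise.

```python
import math

# Toy illustration (not a proof): if an "analog" value can only be measured to
# within some noise level eps, a digital machine storing ~log2(1/eps) bits
# already matches anything you could actually read off the analog machine.
# The noise floor and the value below are made up purely for illustration.

def bits_needed(eps: float) -> int:
    """Bits so that rounding to a 2**bits grid on [0, 1) has error <= eps."""
    return math.ceil(math.log2(1.0 / (2.0 * eps)))

def quantize(x: float, bits: int) -> float:
    """Round x in [0, 1) to the nearest of 2**bits evenly spaced levels."""
    levels = 2 ** bits
    return round(x * levels) / levels

eps = 1e-6                      # pretend measurement noise floor
x = 0.7390851332151607          # some "analog" value
k = bits_needed(eps)            # -> 19 bits
print(k, abs(x - quantize(x, k)) <= eps)  # True: the digital copy is within the noise
```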
Quantum computers are in fact (believed to be) more powerful than classical digital TMs in terms of efficiency, but the reasons for why they are more powerful are not easy to explain without a fair bit of math. This causes techbros to get some interesting ideas on what they think quantum computers are capable of. I’ve seen enough nonsense about quantum machine learning for a lifetime. Also, there is the issue of when practical quantum computers will be built.
Thanks. I know some complexity theory, but not enough. (Enough to know it wasn’t gonna be my thing).
this is one of those things that’s, in a narrative sense, a great way to tell a story, while being completely untethered from fact/reality. and that’s fine! stories have no obligation to be based in fact!
to put a very mild armchair analysis about it forward: it’s playing on the definition of the conceptual “smart” computer, as it relates to human experience. there’s been a couple of other things in recent history that I can think of that hit similar or related notes (M3GAN, the whole “omg the AI tricked us (and then the different species with a different neurotype and capability noticed it!)” arc in ST:DIS, the last few Mission Impossible films, etc). it’s one of those ways in which art and stories tend to express “grappling with $x to make sense of it”
The idea that a smart computer will be worse at math (which makes sense from a storytelling perspective as a writer, because smart AI who also can do math super well is gonna be hard to write)
personally speaking, one of the ways about it that I find most jarring is when the fantastical vastly outweighs anything else purely for narrative reasons - so much so that it’s a 4th-wall break for me in terms of what the story means to convey. I reflect on this somewhat regularly, as it’s a rather cursed rabbithole that recurs: “is it my knowledge of this domain that’s spoiling my enjoyment of this thing, or is the story simply badly written?” is the question that comes up, and it’s surprisingly varied and complicated in its answering
on the whole I think it’s often good/best to keep in mind that scifi is often an exploration and a pressure valve, but that it’s also worth keeping an eye on how much it’s a pressure valve. too much of the latter, and something™ is up
Ow yeah, the way it was used in this story also made sense, but not in a computer science way. Just felt a bit like how Gibson famously had never used a modem before he wrote his cyberpunk series.
@Soyweiser @techtakes You misremembered: Gibson wrote his early stories and Neuromancer on a typewriter, he didn’t own a computer until he bought one with the royalties (an Apple IIc, which then freaked him out by making graunching noises at first—he had no idea it needed a floppy disk inserting).
Thanks! I should have looked up the whole quote, but I just made a quick reply. I knew I had worded it badly and had it wrong, but just didn’t do anything about it. My bad.
AGI: I’m not a superintelligence, I’m you.
My im not a witch shirt …
@Soyweiser @BlueMonday1984 seems plausible tbh
This isn’t an idea that I’ve heard of until you mentioned it, so it likely hasn’t got much purchase in the public consciousness. (Intuitively speaking, a computer which sucks at maths isn’t a good computer, let alone AGI material.)
Yeah, I was also just wondering, as obv what I read is not really typical of the average public. Can’t think of any place where this idea spread in non-written science fiction for example, with an exception being the predictions of C-3PO, who always seems to be wrong. But he is intended as a comedic sidekick. (him being wrong can also be seen as just the lack of value in calculating odds like that, esp in a universe with The Force).
But yes, not likely to be a big thing indeed.
Getting pretty far afield here, but goddamn Matt Yglesias’s new magazine sucks:
The case for affirmative action for conservatives
“If we cave in and give the right exactly what they want on this issue, they’ll finally be nice to us! Sure, you might think based on the last 50,000 times we’ve tried this strategy that they’ll just move the goalposts and demand further concessions, but then they’ll totally look like hypocrites and we’ll win the moral victory, which is what actually matters!”
@PMMeYourJerkyRecipes @BlueMonday1984
The guy from the Federalist *doesn’t* want more ideological diversity in academia, he wants *less*. But he’ll settle for more as an interim goal until he can purge the wrong-thinkers.
We need a word for when they make up a guy who doesn’t exist and then get mad at him.
Pretty sure that’s a strawman.
Since this is the solo version, strawmasturbating
Straw-onanism
Oooh that’s good
I mean, I think the relevant difference is that rather than trying to argue against a weak opponent they’re trying to validate their feelings of victimization, superiority, and/or outrage by imagining an appropriate foil.
It’s a straw man that exists to be effectively venerated rather than torn down.
I think I might be missing some context here. Granted without context I’m pretty sure that strawman is still the right word.
I guess keeping in theme, “vibe replying”