Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • YourNetworkIsHaunted@awful.systems · 3 points · 2 hours ago

      Man, it’s frustrating to see him end up going down this route because the opening part of this is actually one of the better descriptions of AI psychosis I’ve seen, and I appreciate his emphasis on the way the delusion is built up in the sufferer’s mind rather than trying to game out what’s happening “inside” the chatbot. Even his point about how LLMs aren’t bad in exceptional ways for a new technology is pretty cogent. But his insistence on defending his own use of these things (and others who do so in “centaur-configured” ways) rather than thinking about how it interacts with all the relatively normal ways that this technology is wildly destructive is a very conspicuous blind spot.

      Like, you can absolutely drive a nail with a phone book, and given the wider surface area it even has the advantage over a traditional hammer of being harder to smash your fingers. An individual craftsman may well decide that this is a useful tool and in some cases worth using over other options. But if the only source of these hammer-books was an industry that relied on massive uncompensated use of creative work passed through exploited third-world labor, ground rainforests to dust to create special “old-growth paper”, placed massive and unsustainable burdens on existing road infrastructure to collect these parts and deliver them, and somehow had been blown into a speculative bubble that represented something like a quarter of the entire US economy by promising that if they created a big enough book then one guy could hammer all the nails at once and they could lay off all the carpenters, I think it’s justifiable to look at the people using it as a normal tool and ask them “what the actual fuck are you doing?” The usage statistics they represent and the user stories they tell are used to justify not addressing any of the harms necessary to enable this tool to exist in its current form, and are largely driving the absurd valuations that keep pumping the bubble. Your individual role in those harms as a small-time user who finds it occasionally useful may be incalculably small, but it is still real.

      Like, it feels like I agree with Doctorow on basically all the premises here. He seems to have a decent grasp on how the things actually work (even if he’s wrong about Ollama specifically being an LLM in its own right) and their associated limitations. He draws a decent line separating criticism from criti-hype. He is basically correct about how much of a bastard everyone involved in the industry at a high level is. But maybe because so many of these things aren’t really exceptional (save possibly in their sheer scale) he can’t seem to conceive of a world where things happen any differently, or of the role his actions and words play in reinforcing the status quo even as he writes pretty explicitly about how fucked up that status quo is.

      Honestly it makes me think of the finale of his second Martin Hench novel, The Bezzle. After drilling into the business of the private prison operator that is making his friend’s life hell and separating the merely fucked up parts from the things that might actually have consequences if word got to what passes for cops in that tax bracket, he doesn’t go to the papers or start reaching out to the SEC. Instead he goes to the bastard at the head of it all and blackmails him into making his friend’s remaining incarceration less hellish and leaving him alone. And his friend, who started all this by begging for help unraveling this shit, rightly calls Marty a coward for it. There’s something ironic in seeing Doctorow here seemingly make the same judgement: abuse and apathy are sufficiently normal that we shouldn’t even bother to try and make the world better, just find ways to shelter ourselves and the people we care about from the consequences. And hell, I guess even there I’m not immune to it. There are reasons why I’m posting here and not waiting out front of a hotel with some engraved brass. Still, on the continuum of such things I’m disappointed that the guy who wrote that scene is stuck in the normalization blues.

      • o7___o7@awful.systems · 1 point · 59 minutes ago

        It sucks. :(

        Honestly, the article reminds me of Scott Alexander, but succinct. “Here are several true things and an absolutely batshit wrong thing, presented together with equal earnestness.”

        The wrong thing being “Believing that LLMs are trash is a mental disorder (not really but wink wink).”

        Why do this now, when it’s all coming apart? It’s baffling.

    • Architeuthis@awful.systems · 5 points · 13 hours ago

      The one-shotting phenomenon (or how a positive initial experience with the technology seems to lead to a heavily biased view of its merits) should probably be considered a distinct cognitive bias at this point.

      Turns out a lot of bright people can’t deal with a technology whose efficiency is utterly subjective, or with the fact that this is precisely the part that reduces it to being so narrowly useful as to force the existential question, given the insane resource burn and the socioeconomic disruption that come as part and parcel, even if, like Doctorow, you think their rape and pillage of artists’ rights and intellectual property in general isn’t an especially big deal.

      Also, local LLMs are hardly extricable from the whole mess; they are basically a byproduct, and updated versions will only keep coming as long as their imperial-sized online counterparts remain a going concern.

    • mirrorwitch@awful.systems · 14 points · 21 hours ago

      It’s true that these analogies can be stigmatizing, but they needn’t be. As someone with an autoimmune disorder, I am not bothered by people who describe ICE as an autoimmune disorder in which antibodies attack the host, threatening its very life.

      This bothers me more than I can explain.

      ICE as autoimmune disorder presupposes that it’s normally a good thing to have ICE around and it’s just malfunctioning as an exceptional state of things. If ICE is an immune system (malfunctional or not), what are we immigrants?

      • scruiser@awful.systems · 6 points · 15 hours ago

        Yeah. When it comes down to it, the libs think the problem with Trump isn’t the fundamentals of what he is doing, it is that he is doing it without decorum or checking all the legal boxes or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but in fact kids in cages was horrible when Obama was doing it the “right” way.

      • Architeuthis@awful.systems · 5 points · 13 hours ago

        It is nuts to deny the experiences these people are having. They’re not vibe-coding mission-critical AWS modules. They’re not generating tech debt at scale:

        https://pluralistic.net/2026/01/06/1000x-liability/#graceful-failure-modes

        They’re just adding another automation tool to a highly automated practice, and using it when it makes sense. Perhaps they won’t always choose wisely, but that’s normal too. There’s plenty of ways that pre-AI automation tools for software development led programmers astray. A skilled, centaur-configured programmer learns from experience which automation tools they should trust, and under which circumstances, and guides themselves accordingly.

        Wow, the whole thing is indefensibly capital-W wrong, just an utterly weird rose-tinted view of the current corporate experience.

    • jaschop@awful.systems · 8 points · 24 hours ago

      Kind of wild that the guy who popularized “enshittification” as a term will die on the hill that the technology which drives the industrial enshittification of all human media is fine actually, because some people find the plugins useful.

    • Anisette [any/all]@quokk.au · 7 points · 23 hours ago

      He knows how LLMs work, right? This really is just cope because he got called out for being weird about using them. Really fucking disappointing.

      • Architeuthis@awful.systems · 5 points · 10 hours ago

        In the original post he kept referring to Ollama like it was an LLM instead of a server app that hosts LLMs so I’d say the jury’s out on that.

        edit: Also, throughout this piece he keeps conflating local LLMs with their behemoth online counterparts and the heavily proprietary tooling that occasionally wraps them into a somewhat useful product.

        I think he assumes that because he can load up a modest speech-to-text model locally and casually transcribe several hours of video resources in somewhat short order (this was apparently his major formative experience with modern AI) it works the same with e.g. coding.

        Like, hey gpt-oss please make sense of these ten thousand lines of context without access to a hundred bespoke MCP intermediaries and one or three functioning RAG systems as I watch the token generation rate slow to a trickle while the context window gradually fills up.

    • mirrorwitch@awful.systems · 7 points · 1 day ago

      Take “Morgellons Disease,” a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:

      Nitpick, but this is unusually sloppy for Doctorow. 1) People with Morgellons don’t believe they have wires growing out of sores, but fibres (which upon examination turn out to be cotton from clothing). 2) The original Morgellons is a putative children’s disease «wherein they critically break out with harsh Hairs on their Backs, which takes off the Unquiet Symptomes of the Disease, and delivers them from Coughs and Convulsions.» Which is quite different from the modern condition, whose sufferers have skin sores anywhere on the body with fibrous material looking like lint, dandelion fluff etc., and no particular association with convulsions. And 3) The association between the two was made by Mary Leitao, a mother who believes her son suffers from the disease and has gone to countless doctors and media trying to prove it’s real. So it’s an attempt to legitimise the postulated disease by cherry-picking something “historical” that vaguely resembles it.

  • lurker@awful.systems · 13 points · 1 day ago

    the Pentagon’s CTO has AI psychosis now. sighhhhhhhhh

    The whole argument can just be countered with “if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, another LLM similar to Claude? Wouldn’t that also be a danger of becoming sentient? and why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn’t be in the military?? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and a danger??”

    It just reeks of bullshit. “uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we’re doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump.”

    • scruiser@awful.systems · 6 points · 15 hours ago

      I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.

      I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy-handed shut-it-all-down regulation to stop skynet, but in the form of authoritarian wackos wanting to make sure they are the ones “in charge” of skynet.

      • Architeuthis@awful.systems · 4 points · 10 hours ago

        It’s possible the attempt to shove AI into every nook and cranny of the Pentagon didn’t especially pan out, and since his face was all over that project, he’s desperate for a scapegoat.

        Like, for sure he’d have had the logistics of the entire US army running smoothly despite layoffs by now, if it weren’t for the wokies at Anthropic acting up.

  • samvines@awful.systems · 7 points · 1 day ago

    It turns out we didn’t need that list of AI-corrupted open source projects after all…

    At this rate it’s actually going to be easier to make a list of projects that don’t have AI…

    Systemd and libuv now on the slop hype train

    • mirrorwitch@awful.systems · 7 points · 1 day ago

      Systemd

      Jesus.

      I’ve been advocating for a hall of fame of projects that explicitly reject LLMs; ctrl+f “Gentoo” on this very comment thread for the few examples I heard about.

    • jaschop@awful.systems · 4 points · 1 day ago

      Eh, straight pip with venv and pip-tools for support worked fine anyway. Wrong uv!

      As for systemd… time to look at the BSDs? Was Debian among the anti-slop projects? Would be nice if they took an interest in preventing the slopification of one of their core systems.

      • samvines@awful.systems · 7 points · 1 day ago

        Different UV! Libuv is the event loop/scheduler that powers Node.js. Could be a funky new way to compromise a whole bunch of Node applications.

        Edit: typo - although “nose applications” being compromised sounds bad too.

    • jaschop@awful.systems · 6 points · 23 hours ago

      I was low-key hoping for a technical philosophical article, which argues that to find any of this shit useful you need a distinctly american understanding of reality.

    • zogwarg@awful.systems · 7 points · 1 day ago

      Actually the race-realism use last week, combined with this one, makes me realize that for them it’s just a fancy way of saying “world-view” [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].

      It’s just a way of calling biases/prejudice legitimate.

      And you know what, inasmuch the models have a “world-view” it IS annoyingly american in many ways. (at least the wrong kind of american.)

    • YourNetworkIsHaunted@awful.systems · 4 points · 1 day ago

      I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.

    • Soyweiser@awful.systems · 10 points · 2 days ago

      “Our lethal capacities. Our ability to fight war.”

      These are two different things. But I fear he doesn’t get that.

  • Soyweiser@awful.systems · 5 points · 1 day ago

    Anybody else having problems with archive.is and its variants? I keep getting into an infinite captcha loop. I already tried making it a DNS-over-HTTPS exception in Firefox, which worked once.

    E: tried a different browser, same problem. Same on phone; it does work when switching from wifi to mobile data, however.

    E2: I seem to have fixed it, oddly, by rebooting my router. Which makes no sense to me.

  • blakestacey@awful.systems · 9 points · 2 days ago

    Chris Stokel-Walker at Fast Company reports:

    High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.

    The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. […] “Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.

    Just one of many reasons that the mere existence of “ChatGPT Edu” means that many people need to be tased in the nads

  • blakestacey@awful.systems · 14 points · 2 days ago

    Julia Angwin:

    I’m suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.

    State law requires consent before someone’s name can be used for commercial purposes.