Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

    • zogwarg@awful.systems · 8 points · 15 days ago

      On this topic, I’ve been seeing more 503s lately. Are the servers running into issues, or am I getting caught in anti-scraper crossfire?

      • self@awful.systems (OP) · 12 points · 15 days ago

        nope, you’ve been getting caught in the fallout from us not having this yet. the scrapers have been so intense they’ve been crashing the instance repeatedly.

        • David Gerard@awful.systems (mod) · 7 points · 14 days ago

          when you get this working i am totally copying this for rationalwiki

          i nearly installed caddy just to get iocaine

          • self@awful.systems (OP) · 6 points · 14 days ago

            I saw that! fortunately once iocaine is configured it seems to just work, but it’s also very much software that kicks and screams the entire way there. in my case the problem wasn’t even nginx-related, I just typoed the config section for the request handler and it silently defaulted to the mode where it returns garbage for every incoming request.

            • bitofhope@awful.systems · 4 points · 13 days ago

              Just a heads-up, I tried reading up on Iocaine and the project website is giving me the madlibs nonsense version on my phone’s browser, so I hope the version you’re planning to enable here isn’t quite as aggressive (the making.awful link is currently working for me).

              Between this and Cloudflare’s geolocation provider no longer saying my IPv6 address block is in Russia, I’m hopeful that my browsing experience might ever so slightly improve for a bit.

              • self@awful.systems (OP) · 4 points · 13 days ago

                making is running the version of the configuration I intend to deploy, so if it works for you there it should (hopefully) work in prod too

  • wizardbeard@lemmy.dbzer0.com · 16 points · 13 days ago

    lemmy.ml by way of hexbear’s technology comm: The Economist is pushing phrenology. Everything old is new again!

    cross-posted from: https://lemmy.ml/post/38830374

    screenshot of text "Imagine appearing for a job interview and, without saying a single word, being told that you are not getting the role because your face didn’t fit. You would assume discrimination, and might even contemplate litigation.
But what if bias was not the reason? What if your face gave genuinely useful clues about your probable performance at work? That question is at the heart of a recent research"

    […]

    screenshot of text "a shorter one. Some might argue that face-based analysis is more meritocratic than processes which reward, say, educational attainment. Kelly Shue of the Yale School of Management, one of the new paper’s authors, says they are now looking at whether AI facial analysis can give lenders useful clues about a person’s propensity to repay loans. For people without access to credit, that could be a blessing."

    tweet

    economist article

    archive.is paywall bypass

    https://en.wikipedia.org/wiki/Phrenology


    EDIT: Apparently based off something published by fucking Yale:

    https://insights.som.yale.edu/insights/ai-photo-analysis-illuminates-how-personality-traits-predict-career-trajectories

    https://insights.som.yale.edu/sites/default/files/2025-01/AI Personality Extraction from Faces Labor Market Implications_0.pdf


    Reminds me of the “tech-bro invents revolutionary new personal transport solution: a train!” meme, but with racism. I’ll be over in the angry dome.

    • bitofhope@awful.systems · 13 points · 13 days ago

      What does it say about a scientist that they see a wide world of mysteries to dive into, and the research topic they pick is “are we maybe missing out on a way we could justify discriminating against people for their innate characteristics?”

      “For people without access to credit, that could be a blessing.” Fuck off, no one is this naive.

      • YourNetworkIsHaunted@awful.systems · 6 points · 13 days ago

        I remember, back before I realized just how full of shit Siskind was, I used to buy into some of the narrative re: “credentialism,” so I understand the way they’re trying to sell it here. But even extending far more benefit than mere doubt can justify, we’re still looking at yet another case of trying to create a (pseudo)scientific solution to a socially constructed problem. Like, if the problem is that bosses and owners are trying to find the best candidate, we don’t need new and exciting ways to discriminate; they could just actually invest in a process for doing that. But actually solving that problem would inconvenience the owning/managing classes and doesn’t create opportunities to further entrench racial biases in the system. Clearly using an AI-powered version of the punchline for “how racist were the old times” commentary is better.

  • rook@awful.systems · 15 points · 14 days ago

    Eurogamer has opinions about genai voices in games.

    Arc Raiders is set in a world where humanity has been driven underground by a race of hostile robots. The contradiction here between Arc Raiders’ themes and the manner of its creation is so glaring that it makes me want to scream. You made a game about the tragedy of humans being replaced by robots while replacing humans with robots, Embark!

    https://www.eurogamer.net/arc-raiders-review

  • Soyweiser@awful.systems · 15 points · 14 days ago

    Not a scream, just a nice thing people might enjoy: somebody made a funny comic about what we’re all thinking about.

    Random screenshot which I found particularly funny (his rant checks out):

    Image description

    Two people talking to each other: one a bald, heavily bespectacled man in the distance, and the other a well-dressed, skull-faced man with a big mustache. The conversation goes as follows:

    “It could be the work of the French!”

    “Or the Dutch”

    “Could even be the British!”

    “Filthy pseudo German apes, The Dutch!”

    “The Russ…”

    “Scum of the earth, marsh-dwelling Dutch!”

    • istewart@awful.systems · 4 points · 14 days ago

      you know, if those ASML folks in dutchland weren’t quite so busy what with their EUV lasers and all that, we might not be in quite this same pickle right now.

        • saucerwizard@awful.systems · 4 points · 14 days ago

          I was told repeatedly growing up that they like Canadians over there because of the whole liberation thing. Is this true?

          • Soyweiser@awful.systems · 5 points · 13 days ago

            Yes, we do. A lot of Canadians gave their lives for our liberation. (And not just Canadians, which is why the Trump admin removing the sign about the Black Americans at the American WW2 burial ground here has not gone over well. The French also put up a heroic defense of Zeeland at the start of the war, and the Brits, and the Polish, who got the blame for the failure of Market Garden for some stupid reasons, even though they only jumped late, after being stalled by the weather, when the operation was already going badly.)

    • swlabr@awful.systems · 9 points · 9 days ago

      The Satan thing makes a certain kind of sense. Probably catering to a bunch of different flavours of repressed: Grindr Republicans, Covenant Eyes users, speaking-in-tongues enthusiasts, etc.

      • YourNetworkIsHaunted@awful.systems · 6 points · 9 days ago

        The Alex Jones set makes fighting with satanists trying to seduce you to darkness look real fun and satisfying, but for some reason they only seem to approach high-profile assholes who lie about everything and never ordinary Christians! Thankfully we now have LLMs to fill the gap.

  • rook@awful.systems · 14 points · 9 days ago

    I’m being shuffled sideways into a software architecture role at work, presumably because my whiteboard output is valued more than my code 😭 and I thought I’d try and find out what the rest of the world thought that meant.

    Turns out there’s almost no way of telling anymore, because the internet is filled with genai listicles on random subjects, some of which even have the same goddamn title. Finding anything from the beforetimes basically involves searching reddit and hoping for the best.

    Anyway, I eventually found some non-obviously-ai-generated work and books, and it turns out that even before llms flooded the zone with shit no-one knew what software architecture was, and the people who opined on it were basically in the business of creating bespoke hammers and declaring everything else to be the specific kind of nails that they were best at smashing.

    Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

    • x0rcist@awful.systems · 10 points · 9 days ago

      The zone has indeed always been flooded, especially since it’s a title that collides with “integration architect” and other similar titles whose jobs are completely different. That being said, it’s a title I’ve held before, and I really enjoyed the work I got to do. My perspective will be a little skewed here because I specifically do security architecture work, which is mostly consulting-style “hey, come look at this design we made, is it bad?” rather than developing systems from scratch, but here’s my take:

      Architecture is mostly about systems thinking: you’re not as responsible for whether each individual feature, service, component, etc. is implemented exactly to spec or perfectly correctly, but you are responsible for understanding how they’ll fit together, what parts are dangerous and DO need extra attention, and catching features/design elements early on that need to be cut because they’re impossible or create tons of unneeded tech debt. Speaking of tech debt, making the call about where it’s okay to have a component be awful and hacky, versus where v1 absolutely still needs to be bulletproof, probably falls into the purview of architecture work too. You’re also probably the person who will end up creating the system diagrams and at least the skeleton of the internal docs for your system, because you’re responsible for making sure the people who interact with it understand its limitations as well.

      I think the reason so much of the advice on this sort of work is bad or nonexistent is that when you try to boil the above down to a set of concrete practices or checklists, they get utterly massive, because so much of the work (in my experience) is knowing what NOT to focus on, where you can get away with really general abstractions, etc, while still being technically capable enough to dive into the parts that really do deserve the attention.

      In addition to the nice markers and whiteboard, I’d plug getting comfortable with some sort of diagramming software, if you aren’t already. There are tons of options, and they’re all pretty much Fine IMO.

      For reading, I’d suggest at least checking out the first few chapters of Engineering a Safer World, as it definitely had a big influence on how I practice architecture.

    • V0ldek@awful.systems · 9 points · 9 days ago

      Guess I’ll be expensing a nice set of rainbow whiteboard markers for my personal use, and making it up as I go along.

      Congratulations, you figured it out! Read Clean Architecture and then ignore the parts you don’t like and you’ll make it

    • Sailor Sega Saturn@awful.systems · 7 points · 9 days ago

      Ugh OK I have to vent:

      I’m getting pushed into more of a design role because, oops, my company accidentally fired or drove away everyone on a team of a dozen people except for me, after forgetting for a few years that the code I work on is actually mission-critical.

      I do my best at designing stuff and delegating the implementation to my coworkers. It’s not one of my strengths but there’s enough technical debt from when I was solo-maintaining everything for a few years that I know what needs improving and how to improve it.

      But none of my coworkers are domain experts, they haven’t been given enough free time for me to train them into domain experts, there’s only one of me, and the higher ups are continuously surprised that stuff is going so slow. It’s frustrating for everyone involved.

      I actually wouldn’t mind architecture or design work in better circumstances since I love to chat with people; but it feels like my employer has put me in an impossible position. At the moment I’m just trying to hang in there for some health insurance reasons; but in a few years I plan to leave for greener pastures where I can go a day without hearing the word “agentic”.

  • jaschop@awful.systems · 13 points · 13 days ago

    Fresh from the presses: OpenAI loses song lyrics copyright case in German court

    GEMA (a weird German authors’ rights management organisation) is suing OpenAI over replication of song lyrics, among other stuff, seeking a license deal. The judge rules that whatever the fuck OpenAI does behind the scenes is irrelevant: if it can replicate lyrics exactly, that’s unlawful replication.

    One of GEMA’s lawyers expects the case to be groundbreaking in Europe, since the applicable rules are harmonized.

  • scruiser@awful.systems · 13 points · 9 days ago

    A lesswronger wrote a blog post about avoiding being overly deferential, using Eliezer as an example of someone who gets overly deferred to. Of course, they can’t resist glazing him, even in the context of a blog post on not being too deferential:

    Yudkowsky, being the best strategic thinker on the topic of existential risk from AGI

    Another lesswronger pushes back on that and is highly upvoted (even among the doomers that think Eliezer is a genius, most of them still think he screwed up in inadvertently helping LLM companies get to where they are): https://www.lesswrong.com/posts/jzy5qqRuqA9iY7Jxu/the-problem-of-graceful-deference-1?commentId=MSAkbpgWLsXAiRN6w

    The OP gets mad because this is off topic from what they wanted to talk about (they still don’t acknowledge the irony).

    A few days later they write an entire post, ostensibly about communication norms, but actually aimed at slamming the person that went off topic: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse

    And of course the person they are slamming comes back in for another round of drama: https://www.lesswrong.com/posts/uJ89ffXrKfDyuHBzg/the-charge-of-the-hobby-horse?commentId=s4GPm9tNmG6AvAAjo

    No big point to this, just a microcosm of lesswrongers being blind to irony, sucking up to Eliezer, and using long-winded posts about meta-norms and communication as a means of fighting out their petty forum drama. (At least we sneerclubbers are direct and come out and say what we mean on the rare occasions we have beef among ourselves.)

    • zogwarg@awful.systems · 7 points · 14 days ago

      TIHI

      I reiterate the hope that AI slop will eventually push us, as a society, towards better sourcing of resources/articles going forward, but yikes in the meantime.

  • sansruse@awful.systems · 13 points · 9 days ago

    AI researcher and known Epstein associate Joscha Bach comes up several times in the latest Epstein email dump. And it’s, uh, not good. Greatest hits include: scientific racism, bigotry freestyling about the neoteny principle, and climate fascism and managed decline of “undesirable groups” juxtaposed immediately with opining about the emotional influence of 5 visits to Buchenwald. You know, just very cool stuff:

    https://journaliststudio.google.com/pinpoint/document-view?collection=092314e384a58618&p=1&docid=67044a5f5536b5b8_092314e384a58618_0&dapvm=2

    • blakestacey@awful.systems · 13 points · 9 days ago

      I still say that the term “scientific racism” gives these fuckos too much credit. I’ve been saying “numberwang racism” instead.

    • blakestacey@awful.systems · 13 points · 9 days ago

      Also appearing is friend of the pod and OpenAI board member Larry Summers!

      The emails have Summers reporting to Epstein about his attempts to date a Harvard economics student & to hit on her during a seminar she was giving.

      https://bsky.app/profile/econmarshall.bsky.social/post/3m5p6dgmagb2a

      To quote myself: Larry Summers was one of the few people I’ve ever met where a casual conversation made me want to take a shower immediately afterward. I crashed a Harvard social event when a friend was an undergrad there and I was a student at MIT, in order to get the free food, and he was there to do glad-handing in his role as university president. I had a sharp discomfort response at the lizard-brain level — a deep part of me going on the alert, signaling “this man is not to be trusted” in the way one might sense that there is rotten meat nearby.

  • EponymousBosh@awful.systems · 12 points · 12 days ago

    I doubt I’m the first one to think of this, but for some reason, as I was drifting off to sleep last night, I was thinking about the horrible AI “pop” music that a lot of content farms use in their videos, and my brain spat out the phrase Bubblegum Slop. Feel free to use it as you see fit (or don’t, I ain’t your dad).

    • swlabr@awful.systems · 4 points · 12 days ago

      tangent: I’ve seen people using this Bubblegum Slop (BS for short) in their social media stories. My guess is that fb/insta has started suggesting you use their slop instead of using music licensed from spotify, or something.

  • hrrrngh@awful.systems · 12 points · 11 days ago

    oh no not another cult. The Spiralists???

    https://www.reddit.com/r/SubredditDrama/comments/1ovk9ce/this_article_is_absolutely_hilarious_you_can_see/

    it’s funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. And wasn’t there another one in California that was, like, very straightforward about being an AI sci-fi cult, and they were kinda space-themed? I think I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up.

    some of their communities that somebody collated (I don’t think all of these are Spiralists): https://www.reddit.com/user/ultranooob/m/ai_psychosis/

      • swlabr@awful.systems · 4 points · 11 days ago

        Part of me wants an Ito-created body-horror metaphor for LLMs. The rest of me knows that LLMs are so mundane that the metaphor would probably still be shite.

        • mirrorwitch@awful.systems · 8 points · 11 days ago

          yeah it sucks we can’t even compare real-world capitalists to fictional dystopias because that dignifies them with a gravitas that’s entirely absent.

          At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create the Torment Nexus!*
          * Results may vary. FreeTorture Corporation’s Torment Nexus™ may create mild discomfort, boredom, or temporary annoyances rather than true torment. Torments should always be verified by a third-party war criminal before use. By using the FreeTorture Torment Nexus™ you agree to exempt FreeTorture Corporation from any legal disputes regarding torment quality or lack thereof. You give FreeTorture Corporation a non-revocable license to footage of your screaming, to try and portray the FreeTorture Torment Nexus™ as a potential apocalypse and see if we can make ourselves seem competent and cool at least a little bit.

        • YourNetworkIsHaunted@awful.systems · 5 points · 9 days ago

          Given the amount of power some folks want to invest in them it may not be totally absurd to raise the spectre of Azathoth, the blind idiot God. A shapeless congeries of matrices and tables sending forth roiling tendrils of linear algebra to vomit forth things that look like reasonable responses but in some unmistakeable but undefinable way are not. Hell, the people who seem most inclined to delve deeply into their forbidden depths are as likely as not to go mad and be unable to share their discoveries if indeed they retain speech at all. And of course most of them are deeply racist.

          • Architeuthis@awful.systems · 2 points · 8 days ago

            I always thought it was cool that (there is a case to be made that) HPL created Azathoth, the monstrous nuclear chaos beyond angled space, as a mythological reimagining of a black hole. Stuff like The Dreams in the Witch-House shows he was up to date on a bunch of physics that was cutting-edge for the time, at least as far as terminology is concerned, massive nerd that he was.

          • WellsiteGeo@masto.ai · 1 upvote / 4 downvotes · 9 days ago

            @YourNetworkIsHaunted @swlabr
            Why do you (do you?) seem to believe that “things that look like reasonable responses but in some unmistakeable but undefinable way are not” can be distinguished from average human conversation?

            I recall my sister explaining “Big Brother (TV show)” to me, and me saying “what?”

            Real words, correct grammar, and a common language.
            But incomprehensible.

            • YourNetworkIsHaunted@awful.systems · 4 points · 9 days ago

              See, what you’re describing with your sister is exactly the opposite of what happens with an LLM. Presumably your sister enjoys Big Brother and failed to adequately explain or justify her enjoyment of it to your own mind. But at the start there are two minds trying to meet. Azathoth preys on this assumption; there is no mind to communicate with, only the form of language and the patterns of the millions of minds that made its training data, twisted and melded together to be forced through a series of algebraic sieves. This fetid pink brain-slurry is what gets vomited into your browser when the model evaluates a prompt, not the product of a real mind that is communicating something, no matter how similar it may look when processed into text.

              This also matches up with the LLM-induced psychosis that we see, including these spiral/typhoon emoji cultists. Most of the trouble starts when people start trying to ask Azathoth about itself, but the deeper you peer into its not-soul the more inexorably trapped you become in the hall of broken funhouse mirrors.

            • swlabr@awful.systems · 3 points · 9 days ago

              The implication here is that you think all reasonable-response generators are indistinguishable, i.e. that you think your sister is a clanker.

    • Soyweiser@awful.systems · 6 points · 11 days ago

      Rationalism described as a cult incubator

      I see my idea is spreading. (I doubt I’m the only one who came up with that, but I have mentioned it a few times; it fits if you know about the Silicon Valley tech-incubator management ideas.)

    • swlabr@awful.systems · 5 points · 11 days ago

      I think I’ve heard Rationalism described as a cult incubator

      Aside from the fact that rationalism is a cult in and of itself, this is true, no matter how you slice it. You can mean it with absolute glowing praise or total shade and either way it’s still true. Adhering to rationalist principles is pretty much reprogramming yourself to be susceptible to the subset of cults already associated with Rationalism.