• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • For what it’s worth, I played the NES release of DQ1, and then recently a translation of the Japan-only SNES release of DQ2 (I actually beat DQ2 last week), and I found DQ2 to be a much better game than DQ1 overall. DQ1 was… interesting, but it was very much a game that did not respect the player’s time in the least, to the point of expecting the player to fight literally hundreds of battles to grind up enough money and experience to afford the gear. The most charitable thing I can say about it is that the battle system was so rudimentary and so grindy that the gameplay felt more like it was focused on resource management. There was a tension in deciding whether you could afford to take another fight or needed to return to town and spend money sleeping at an inn to heal (setting your grind back at least 1–2 fights, given how piddly the gold and XP drops were), in weighing the MP spent on healing against the risk of dying to the next monster, and so on.

    DQ2, meanwhile, was a much more robust and much less grindy game–the simple addition of multiple party members and multiple enemies in a single battle meant that your gold and XP gains were multiplied compared to the first game. While it still demanded grinding, it was much more reasonable about it, and it felt much closer to the “modern” JRPGs you’re used to seeing.



  • God, same. One of my little annoyances in life is that my internal voice is a goddamn motor mouth and I literally CANNOT stop it.

    I can stare at a white wall and watch paint dry, and my monologue will start philosophizing about watching paint dry, where the phrase came from, why I’m doing it (to try and silence my internal voice), then go on a wiki walk about how trying not to think about something makes you think about it more, and how the classic example of telling someone “don’t think about a brown bear” makes them think about bears; then I’ll start thinking about bears and my monologue is suddenly halfway across the world.

    Put me in a sensory deprivation tank, and my internal voice starts ruminating about how Daredevil uses these to sleep, then goes off about fight sequences, and then superhero comics, and whoops I’m halfway across the world.

    Even when I’m paying attention and listening, my inner voice is still motoring away, it’s just that it’s mirroring what is being said to me instead of going on its own wiki walk halfway across the world (though sometimes someone will say something that makes my internal voice go “wait a second, that makes me think of…” and then I stop listening while I go on a wiki walk).

    I have ADHD, in case it isn’t obvious yet.


  • I am increasingly convinced that the people who claim AIs are useful for any given subject of any import (coding, art, math, teaching, etc.) should immediately be regarded as having absolutely zero knowledge in that subject, even (and especially) if they claim otherwise.

    From what I can see in my interactions with LLMs, the only thing they are actually decent at is summarizing blocks of text, and even then, if it’s important, you should check the summary carefully to make sure they didn’t miss important details.


  • Eccitaze@yiffit.net to TechTakes@awful.systems · “Bubble Trouble” · 13 points · 3 months ago

    This article is excellent, and raises a point that’s been lingering in the back of my head–what happens if the promises don’t materialize? What happens when the market gets tired of stories about AI chatbots telling landlords to break the law, or suburban moms complaining about their face being plastered onto a topless model, or any of the other myriad stories of AI making glaring mistakes that would get any human immediately fired?

    We’ve poured hundreds of billions of dollars into this, and what has it gotten us? What is the upside that makes up for all the lawsuits, lost jobs, disinformation, carbon footprint, and deluge of valueless slop flooding our search results? As far as I can tell, its primary use is in creating things that someone is too lazy to do properly themselves, like cover letters or memes, and inserting Godzilla into increasingly ridiculous situations. Perhaps there’s something there, but is it worth using enough energy to power a small country?


  • The problem is that there’s no incentive for employees to stay beyond a few years. Why spend months or years training someone if they leave after the second year?

    But then you have to ask why employees aren’t loyal any longer, and that’s because pensions and benefits have eroded, and pay doesn’t keep up the longer you stay at a company. Why stay at a company for 20, 30, or 40 years when you can come out way ahead financially by hopping jobs every 2–4 years?


  • Holy crap, what a garbage ragebait article

    Saving you a click: there’s no new info here; it’s just the same hullabaloo over the guy who made the accusations rescaling the models so they’re the same size, and the author treating that as proof they faked it all.

    For the record, I don’t personally have a strong opinion on whether it’s faked (especially since it’s been pointed out that models made in different programs and for different platforms can import at drastically different sizes), but it feels kind of disingenuous to call it faked just because of that, y’know? It’s like an artist tracing over a 1440p image and posting the traced image at 720p. I wouldn’t consider blowing the traced 720p back up to 1440p “faking” or altering the traced image.



  • It makes sense to judge how closely LLMs mimic human learning when people use that comparison as a defense of AI companies scraping copyrighted content, claiming that banning AI scraping is as nonsensical as banning human learning.

    But when it’s pointed out that LLMs don’t learn very similarly to humans, and require scraping far more material than a human does, suddenly AIs shouldn’t be judged by human standards? I don’t know if it’s intentional on your part, but that’s a pretty classic example of a motte-and-bailey fallacy. You can’t have it both ways.


  • Who even knows? For whatever reason the board decided to keep quiet, didn’t elaborate on its reasoning, let Altman and his allies control the narrative, and rolled over when the employees inevitably revolted. All we have is speculation and unnamed “sources close to the matter,” which you may or may not find credible.

    Even if the actual reasoning was absolutely justified–and knowing how much of a techbro Altman is (especially with his insanely creepy project to combine cryptocurrency with retina scans), I absolutely believe the speculation that the board felt Altman wasn’t trustworthy–they didn’t bother to actually tell anyone that reasoning, and clearly felt they could just weather the firestorm up until they realized it was too late and they’d already shot themselves in the foot.





  • First, it’s important to find an instance that caters to your interests, especially if you have more niche hobbies. Once you’re set up, search for and follow hashtags related to your personal interests, and use those to find accounts you like. Use hashtags in your own posts so that people can discover you more easily, and browse the users who follow you to see if they’d be worth following back to expand your network. Keep an eye on the local and federated timelines for interesting posts: the local timeline shows public posts from people on your instance, and the federated timeline shows posts from every instance yours federates with. Eventually, as you build up a follow list (and especially as you follow highly active accounts), the accounts you follow will start introducing you to new ones by boosting posts.

    It’s more work since you’re building the network yourself instead of having it spoon-fed to you by an algorithm, but it’s overall much more rewarding, and lets you tailor your experience to your own personal preferences.
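    For anyone curious how the hashtag-discovery step above works under the hood, here’s a minimal sketch against Mastodon’s public API. The `/api/v1/timelines/tag/:hashtag` route is Mastodon’s actual public endpoint for hashtag timelines; the instance name, hashtag, and sample data below are made-up placeholders for illustration.

    ```python
    # Sketch: discover follow candidates from a hashtag timeline.
    # "example.social" and the sample statuses are hypothetical; only the
    # /api/v1/timelines/tag/:hashtag route itself is the real Mastodon endpoint.

    def tag_timeline_url(instance: str, hashtag: str, limit: int = 20) -> str:
        """Build the public URL for an instance's hashtag timeline."""
        return f"https://{instance}/api/v1/timelines/tag/{hashtag}?limit={limit}"

    def accounts_to_consider(statuses: list) -> list:
        """Collect unique author handles from status dicts (the shape the API
        returns), preserving first-seen order, as follow candidates."""
        seen = []
        for status in statuses:
            acct = status.get("account", {}).get("acct", "")
            if acct and acct not in seen:
                seen.append(acct)
        return seen

    # Made-up data shaped like the API's JSON response:
    sample = [
        {"account": {"acct": "alice@example.social"}},
        {"account": {"acct": "bob@example.social"}},
        {"account": {"acct": "alice@example.social"}},
    ]

    print(tag_timeline_url("example.social", "retrogaming"))
    print(accounts_to_consider(sample))
    ```

    In practice you’d fetch that URL (no authentication is needed for public timelines on most instances) and skim the resulting accounts for ones worth following.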