• 1 Post
  • 70 Comments
Joined 2 years ago
Cake day: June 5th, 2023

  • Honestly, I’ve begun to think the upvote/downvote model is a bad fit for the fediverse in general:

    * Different instances have different voting rules, and in some cases (for example, an instance disabling downvotes) this can give content on that instance a modest sorting advantage

    * Instances have to trust votes from other instances, and while obvious manipulation could be dealt with by defederating, it has to be noticed first

    * Votes are more publicly visible than on a site like Reddit, so something like a downvote can become a catalyst for incivility toward the downvoter from whoever posted the content

    Honestly, what I would do with Lemmy voting is make vote counts mostly not federate. Have each instance send a single up, down, or neutral vote depending on whether the net score among its own users passes some up or down threshold, just so people on small private instances have something to sort by, and otherwise have the score of a post or comment reflect only the votes of users on the viewing instance. Then an individual instance could apply whatever voting rules or restrictions it wanted, without worrying that its votes get drowned out by the wider network or read as vote manipulation.
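    The threshold rule described above could be sketched roughly like this (the function names and the threshold value are made up for illustration and aren't part of any real Lemmy API):

    ```python
    # Hypothetical sketch of the proposed federation rule: an instance tallies
    # its own users' votes locally and federates only a single aggregate
    # signal (+1, -1, or 0) once the net score passes a threshold.

    def federated_signal(local_ups: int, local_downs: int, threshold: int = 5) -> int:
        """Return the single vote an instance would send to its peers."""
        net = local_ups - local_downs
        if net >= threshold:
            return 1   # federate one upvote
        if net <= -threshold:
            return -1  # federate one downvote
        return 0       # federate nothing yet

    def local_score(local_ups: int, local_downs: int, peer_signals: list[int]) -> int:
        """Score shown on this instance: its own users' votes plus at most one vote per peer instance."""
        return (local_ups - local_downs) + sum(peer_signals)
    ```

    Under this scheme an instance that disables downvotes only changes how its own single federated signal is computed, so its local policy can't skew the raw counts on other instances.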




  • Are you breeding them, throwing eggs in through the top, or using some automatic egg-throwing mechanism? (I don’t see the latter, but it looks modded, so I’m not sure what could be there.) If it’s egg-throwing related, I think I’ve occasionally had baby chickens spawned that way end up inside the side of a block, where they immediately suffocate. I’m not sure glass has that suffocating effect, since it’s a transparent block, so maybe some of the chicks spawn glitched partly inside the glass and some of them manage to get out the other side of it somehow?




  • While I don’t think this scenario is likely, something I can’t help thinking whenever this sort of statement comes up is: how do we know what it’s doing isn’t thinking? I get that it’s ultimately just using a bunch of statistics to predict the next word or token, but my understanding is that we have fairly limited knowledge of how our own consciousness and thinking work, so I keep getting the nagging feeling of “what if what our brains are doing is similar somehow, using a physical system with statistical effects to predict things about the world, and that’s what thinking ultimately is?”

    While I expect it probably isn’t, and that creating proper AGI will require something fundamentally more complicated than what we’ve been doing with these language models, the fact that I can’t prove that to my own satisfaction makes me very uneasy about them, considering what the ethical ramifications of being wrong might be.