• 0 Posts
  • 36 Comments
Joined 3 years ago
Cake day: June 25th, 2023




  • Oh absolutely, you can only do this for so long, and you can still see it if you look closely, but it seems that many investors forgot how to look closely?

    Microsoft specifically had $13 billion in AI capex in January 2025, the last time they mentioned it separately. And something like $75–100 billion in cloud revenue iirc, which was growing a lot because of AI financing shenanigans. So that could still hide the growing expenses at that point.


  • Not disagreeing with your overall point, but they can and do fudge the numbers to some extent with reporting tricks.

    Like, you don’t want the public to know how terrible the AI business is? Merge it with the profitable cloud business internally and just report the numbers as a single line item. Overall the numbers are still correct, but you can hide unprofitable stuff inside larger, profitable stuff.
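The arithmetic of that trick is simple; a tiny sketch with made-up numbers (nothing here is from any real filing):

```python
# Illustrative numbers only: merging a losing segment into a profitable
# one makes the loss invisible in a single reported line item.
cloud_profit = 20_000_000_000   # hypothetical profitable cloud segment
ai_profit = -5_000_000_000      # hypothetical unprofitable AI segment

# Reported separately, the AI loss is visible to the public.
separate = {"cloud": cloud_profit, "ai": ai_profit}

# Reported merged, only the combined (still positive) figure is public.
merged = {"intelligent cloud": cloud_profit + ai_profit}

print(separate)  # loss visible
print(merged)    # {'intelligent cloud': 15000000000} — loss hidden
```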

    Or you suddenly increase the depreciation schedule of GPUs from 3 years to 6 years, even though they won’t actually last that long, to hide/spread out the huge investments you made.
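To see why stretching the schedule helps, here’s a minimal straight-line depreciation sketch with a made-up capex figure:

```python
# Straight-line depreciation: stretching the assumed useful life of
# GPUs from 3 to 6 years halves the expense hitting each year's
# income statement, even though the total cost is unchanged.
capex = 12_000_000_000  # hypothetical GPU purchase

def annual_depreciation(cost, useful_life_years):
    """Straight-line: equal expense each year, no salvage value assumed."""
    return cost / useful_life_years

over_3y = annual_depreciation(capex, 3)  # 4 billion per year
over_6y = annual_depreciation(capex, 6)  # 2 billion per year

# Same total spend, but 2 billion less expense shows up per year,
# flattering near-term margins.
print(over_3y - over_6y)
```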


  • Heck, if anything, the Fediverse, as it is right now, is more susceptible to this. You can spin up spam instances on new domains, with spam users, and have them federate with existing instances faster than volunteer-run instances can ban/defederate them.

    So you end up not federating by default, but having some trusted web of instances that federate, and maybe an approval process for new instances to join? But that’ll still lead to centralization, with “trusted” instances gaining power and new instances having a hard time joining the club (there’s also the scaling problem of the Fediverse, but that’s beside the point). So you end up with a few very big instances and the owners of those instances having all the power. Or maybe small isolated islands of mutually trusting instances? Still better than a tech oligopoly, but also a far cry from the original dream.





  • Good time to shamelessly plug Valetudo, if your vacuum robot is supported.

    With it, the robot does not access the public internet and still functions the same as before rooting. You just can’t manage it when you’re not home, unless you have some VPN set up or a Home Assistant integration. But I don’t know when I’d ever want to manage/watch my vacuum robot while I’m not home. Some sort of offline mode should be legally required for these kinds of devices that don’t really need connectivity. “Does not need an app to work” has become a major selling point for me, alongside “has physical buttons”.
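If you want to enforce the “no public internet” part at the network level too (with or without rooting), a router firewall rule does it. A minimal nftables sketch; the robot’s IP (192.168.1.50) and the WAN interface name (wan0) are made-up placeholders, adjust them for your network:

```
# hypothetical /etc/nftables.d/iot-block.nft
table inet iot_block {
  chain forward {
    type filter hook forward priority 0; policy accept;
    # drop anything the robot tries to send toward the internet;
    # LAN traffic (local app / Valetudo web UI) is unaffected
    ip saddr 192.168.1.50 oifname "wan0" drop
  }
}
```

This only blocks the device from reaching out; everything on your LAN can still talk to it as usual.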

    Also, drop me a message if you’re in Switzerland and need an unsoldered Valetudo breakout board; I still have around 5 left.


  • I think there’s still a lot of room to explore without abandoning the utopia setting. Like, we usually only see the spaceship stuff, but what about a more political drama taking place on member worlds? That kind of thing, I think, could be amazing.

    Also, as you say, it’s been done for 60 years. Might as well do the same thing over again for a new generation that hasn’t seen TOS/TNG/DS9. They don’t know it yet, so it’s not overused, and the TOS audience wouldn’t be the target audience anyway. And it could still explore new topics. The audience isn’t the same, our world isn’t the same; making the same show again still wouldn’t be boring, as it would be a completely different thing.

    Both approaches can work imo and have a place, without the need to go more dystopian.





  • I don’t actually know how Nostr deals with messages if you’re offline, if at all; I’m not that familiar with the protocol. But your idea sounds workable.

    I tend to come at it from the other side: I like the federated model, but think the “supernodes” could behave more like dedicated relays. Like, a Lemmy server right now does a lot of things — serve a frontend, run expensive database queries to show a sorted feed, etc. — and a lot of that does not scale very well. So having different kinds of nodes with more specialization, while still following a federated model, makes sense to me.

    Right now, if one of my users subscribes to some community, that community’s instance will start spamming my instance with updates nonstop, even though that user might not be active or might not even read that community anymore. It would be nicer if there were some kind of beefy instance I could request this data from when necessary, without receiving each and every update even though 90% of it might never be viewed. But keep individual instances that can have their own communities and themes, or just be hosted for you and your friends, reducing the burden on non-techies of having to self-host something.

    Or, put another way: instead of making the relays more instance-y, embrace the super-instances and make them more relay-y, tailor-made for that job and still hostable by anyone who wants to spend on the hardware. But I’m still not clear on where you’d draw the line / how exactly you’d split the responsibility. For Lemmy, instead of sending hundreds of requests in parallel for each thing that happens, a super-instance could consolidate all the events and send them as single big requests/batches to sub-instances — maybe that’s a good place to draw the line?
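The consolidation idea above could be sketched roughly like this — a hypothetical super-instance buffers activities per subscriber and flushes them as one batch instead of one HTTP request per event (all names here are made up, this is not Lemmy’s actual API):

```python
from collections import defaultdict

class ActivityBatcher:
    """Buffer federation activities per subscriber instance and
    hand them out as batches, instead of one delivery per event."""

    def __init__(self, flush_threshold=100):
        self.flush_threshold = flush_threshold
        self.queues = defaultdict(list)  # subscriber domain -> activities

    def enqueue(self, subscriber, activity):
        """Buffer one activity; return a full batch once the threshold is hit."""
        queue = self.queues[subscriber]
        queue.append(activity)
        if len(queue) >= self.flush_threshold:
            return self.flush(subscriber)
        return None

    def flush(self, subscriber):
        """Drain and return everything queued for one subscriber."""
        batch, self.queues[subscriber] = self.queues[subscriber], []
        return batch

batcher = ActivityBatcher(flush_threshold=3)
batcher.enqueue("lemmy.example", {"type": "Like", "id": 1})
batcher.enqueue("lemmy.example", {"type": "Create", "id": 2})
batch = batcher.enqueue("lemmy.example", {"type": "Like", "id": 3})
print(len(batch))  # 3 activities delivered in one request instead of three
```

A real version would also flush on a timer so quiet subscribers don’t wait forever, but the batching itself is the point.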




  • What you said is like saying “I’m going to delete Linux and install Ubuntu”, but then there’s not really a name for the Android that comes with your phone. “Stock Android” is probably the closest term you get to distinguish between the OS family and the thing actually installed, but all the companies customize their Android, so it’s not like there’s just one “stock Android”.

    I mean, I’m sure Samsung has some term for their Android, but I doubt anyone uses it outside of Samsung.



  • You mean the referer part? Of course you don’t want it for all URLs, and there are some legitimate cases. I have it on specific URLs where a direct hit is highly unlikely, not every URL. E.g. a direct link to a single comment in Lemmy, with logged-in users whitelisted. Plus a limit, like >3 times an hour before a ban. It’s already pretty unusual to bookmark a link to a single comment.

    It’s a pretty consistent bot pattern: they go to some sub-subpage with no referer and no prior traffic from that IP, and then no other traffic from that IP for a while (since they cycle through IPs on each request), but you get a ton of these requests across all the IPs they use. It was one of the most common patterns I saw when I followed the logs for a while.
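That heuristic is easy to sketch on simplified (ip, path, referer) log tuples — the field layout and the `/comment/` path here are made-up examples, not a real log parser:

```python
from collections import Counter

def suspicious_ips(entries):
    """Flag IPs whose only traffic is a single deep-page hit with no
    referer ("-") — the pattern described above."""
    hits_per_ip = Counter(ip for ip, _, _ in entries)
    flagged = set()
    for ip, path, referer in entries:
        deep_page = path.startswith("/comment/")  # e.g. a single-comment link
        if deep_page and referer == "-" and hits_per_ip[ip] == 1:
            flagged.add(ip)
    return flagged

log = [
    ("10.0.0.1", "/comment/12345", "-"),   # lone deep hit, no referer
    ("10.0.0.2", "/comment/12346", "-"),   # same pattern, next IP in the pool
    ("192.0.2.7", "/", "-"),               # front page with no referer: normal
    ("192.0.2.7", "/comment/99", "https://example.com/post/9"),  # has referer
]
print(sorted(suspicious_ips(log)))  # ['10.0.0.1', '10.0.0.2']
```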

    Of course, having some honeypot URL in a hidden link or something gives more reliable results, if you can add such a link; but if you’re hosting software you can’t easily add that to, suspicious patterns like the one above can work really well in my experience. Just don’t enforce it right away: run it with the ‘dummy’ action in f2b for a while and double-check.
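For the Lemmy single-comment case, a fail2ban rule along these lines would do it. The filter assumes nginx’s “combined” log format, and every name and path here is illustrative, not tested config:

```
# hypothetical /etc/fail2ban/filter.d/no-referer-deep-link.conf
[Definition]
failregex = ^<HOST> \S+ \S+ \[[^\]]+\] "GET /comment/\d+ HTTP/[^"]*" \d+ \d+ "-"
```

```
# jail sketch: log matches only (action = dummy) until you trust the rule,
# then switch to a real ban action
[no-referer-deep-link]
enabled  = true
filter   = no-referer-deep-link
logpath  = /var/log/nginx/access.log
maxretry = 3
findtime = 3600
action   = dummy
```

The whitelist-logged-in-users part would need an extra condition (e.g. only matching lines without a session cookie logged), which depends on what your log format actually records.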

    And I mostly intended that as an example of spotting suspicious traffic in the logs and tailoring a rule to it. It doesn’t take very long and can be very effective.