• 0 Posts
  • 83 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • It’s aggressively privacy-first in some ways. It doesn’t do any self-updating, which could be considered phoning home, so you have to make sure you have a way to keep it updated, through a package manager or otherwise. There’s a separate update monitor if you want that, for Windows at least. I tend to dial back the anti-fingerprinting a bit because it just makes browsing frustrating to me. I understand the risk of fingerprinting, and it’s good that they do everything they can to avoid being fingerprinted, but it doesn’t strike the right balance for me. Particularly forcing light mode: I absolutely fucking loathe getting light blasted unexpectedly into my eyeballs, and I always have. The biggest mistake technology ever made, in my opinion, was trying to pretend an actively illuminated screen was paper and making it blinding white.

    I’ve so far resisted the urge to enable DRM. If something won’t show me stuff without DRM I’m willing to just say I don’t want to watch it.

    And obviously as per the topic, I turn on sync, which is not on by default, but that’s easy and a sensible default. Honestly it’s mostly sensible defaults.



  • v2 doesn’t realistically add anything important for functionality. sha256 is nice to have, but the chances of an actual attack on a sha1 chunk are still bafflingly remote. sha1 might be technically broken, but to actually attack a sha1 torrent you need to generate a collision that not only has the same sha1 (still extremely rare and hard; only the fact that it’s proven possible at all makes it “broken”) but is also the same length as the expected chunk. Otherwise any decent client should reject it for being too long, and they must reject it, because otherwise they would be vulnerable to a denial-of-service attack from any bad actor sending arbitrarily long chunks, and copyright trolls would be having a field day.

    I’m not a security expert, but I write enough software to be fairly confident I’m not wildly off base. If somebody comes up with an actual realistic sha1 attack on bittorrent, probably because of some weak/stupid client, and proves me wrong, attitudes might change quickly, but I also suspect it will quickly be patched or the vulnerable clients banned. If such an attack becomes widespread, I’m sure it will light a fire under the migration to sha256, but the actual risk remains, as far as I can tell, infinitesimal.

    Until then, the v2 protocol doesn’t add anything except compatibility headaches for private trackers. I’m sure they’ll get to it eventually, but there’s no urgency and there’s not going to be unless there’s a viable attack to drive that urgency. Latest version for latest version’s sake comes with its own set of risks.
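    To make the length-plus-hash point concrete, here’s a minimal sketch of the two checks described above. The piece size and function name are illustrative, not from any real client; actual clients read the piece length and expected hashes from the torrent’s info dictionary, and the final piece may be shorter.

```python
import hashlib

PIECE_LENGTH = 262144  # 256 KiB, a typical v1 piece size (illustrative)

def accept_piece(data: bytes, expected_sha1: bytes,
                 piece_length: int = PIECE_LENGTH) -> bool:
    """Sketch of the checks a sane v1 client applies to a received piece.

    A forged piece must clear BOTH checks: it must be exactly the
    expected length (rejecting oversized data outright also guards
    against memory-exhaustion DoS) AND collide with the announced
    SHA-1. A bare SHA-1 collision alone is not enough.
    """
    if len(data) != piece_length:  # length check first: cheap, and a DoS guard
        return False
    return hashlib.sha1(data).digest() == expected_sha1

# A legitimate piece passes; an oversized one is rejected before
# its hash is even computed.
good = b"\x00" * PIECE_LENGTH
digest = hashlib.sha1(good).digest()
print(accept_piece(good, digest))             # True
print(accept_piece(good + b"extra", digest))  # False: wrong length
```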


  • I wouldn’t stress about it. People are overly delicate with their hard drives in my experience. They’re surprisingly sturdy and failure tends to be pretty random. There might be a slight statistical correlation in failure rates with minor vibration, but anecdotally I’ve got drives that vibrate the hell out of themselves (probably due to some other manufacturing defect) and have lasted decades with no errors, and plenty that fail completely for no perceptible reason at all. Spinning disks are just inherently unreliable, not that any storage technology is perfectly reliable. This is why backups are never optional.


  • Ironically, I do believe AI would make a great CEO/business person. As hilarious as it would be to see CEOs replaced by their own product, what’s horrifying about that is this: no matter how dystopian our situation now is, and no matter how much our current CEOs seem like incompetent sociopaths, a planet run by corporations run by incompetent but brutally efficient sociopathic AI CEOs seems certain to become even more dystopian.





  • I have an i7-4790k and have yet to find a game it won’t play well, although I do tend to avoid the absolute bleeding edge when it comes to graphics settings. They were great CPUs and still hold up well in single-thread performance, which is what games care about more than anything else, while having enough cores to not bottleneck on secondary threads.

    As for game recommendations, both Cyberpunk 2077 and Baldur’s Gate 3 are recent favourites of mine with more replayability than their narrative would suggest.

    For pure replayability though, nothing beats games like Factorio (THE FACTORY MUST GROW) or Kerbal Space Program (not 2) or Satisfactory, and basically any Roguelike/Roguelite. If you haven’t played Balatro go do that right away, it is crazy deep and will run on a potato.

    Other games I keep coming back to and getting addicted to for a while, for no obvious reason other than their replayability, include regularly updated exploration/build/craft games like No Man’s Sky, Avorion, Astroneer, Empyrion: Galactic Survival, and Aska, and strategy games like Civilization 5/6, Stellaris, Crusader Kings 2/3, or XCOM.

    Hope that helps!




  • I have been constantly asking myself why there isn’t something like this, and wondering if maybe I was missing something about the seemingly immense complexity of doing this on a small scale.

    Now there is something like this.

    I don’t love PHP, but I also don’t love having dozens of separate passwords, keys, certificates and other nonsense to keep track of like I’m doing now. I don’t mind using PHP to get around that if I can.


  • Nextcloud file sync is a convenient centralized solution but it’s not designed for performance. Nothing about Nextcloud is designed for performance. It’s an “everything and the kitchen sink” multi-user cloud solution. That is nice for a lot of reasons. Nextcloud Sync is essentially a drop-in replacement for Google Drive or OneDrive or Dropbox that multiple people can use and that’s awesome. It works the same way as those tools, which is a blessing and a curse.

    Nextcloud fills the role you SAY you want: “All I want is a simple file sync setup like onedrive but without the microsoft.” That’s what it is. It has its role, and it’s good at that role. But in the details you’re describing something totally different, so I don’t think it’s what you’re actually asking for, and it’s not supposed to be.

    If you want performant sync for just files, SyncThing is made for this. It has better conflict resolution and better decentralized connectivity; it doesn’t need a server with a public IP. It takes a very different approach to configuration: the work is front-loaded, and it takes a fair bit of effort to get things talking to each other. It’s not suitable for the same things Nextcloud Sync is, but once you have it set up it’s rock-solid reliable and blazing fast.

    Personally I use both SyncThing and Nextcloud Sync, for different purposes in different situations. Nextcloud Sync takes care of my Windows documents and pictures, I use it to share photos with my family, and I use it to sync one of the factors for my password vault. It works fine for all of this.

    I also use SyncThing for large data sets that require higher performance. I have almost 400 GB of shared program data (and game data/saved games), some of which I sync with SyncThing to multiple workstations in different parts of the country. It can deal with complex simultaneous usage that sometimes causes conflicts, and it supports fine-tuning sync strategies and ignored files using configuration dotfiles. It’s a great tool, and I couldn’t live without it. But I use both; they both have their place.
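    As an aside on those dotfiles: the per-folder ignore file SyncThing reads is called `.stignore`, placed in the root of each synced folder. A small sketch (the paths and patterns here are illustrative, not from my actual setup); patterns are matched top to bottom, first match wins:

```
// .stignore – comments start with "//"
// don't sync temp files
*.tmp
// (?d) allows SyncThing to delete these if they block removing a directory
(?d).DS_Store
// "!" un-ignores a path, overriding any later pattern that would match it
!saves/important
// ignore everything under screenshots/
screenshots/**
```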



  • I doubt that. Why wouldn’t you be able to learn on your own? AIs lie constantly and have a knack for creating very plausible, believable lies that appear well researched and sometimes even internally consistent. But that’s not learning, that’s fiction. How do you verify anything you’re learning is correct?

    If you can’t verify it, all your learning is an illusion built on a foundation of quicksand and you’re doomed to sink into it under the weight of all that false information.

    If you can verify it, then you already have the skills you need to learn it in the first place. If you still find AI chatbots convenient, or useful for prompting you in the right direction despite that extra work, there’s nothing wrong with that. You’re still exercising your own agency and skills, but I still don’t believe you’re learning in a way you couldn’t on your own, and to me that feels like adding extra steps.



  • we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    That’s exactly what I’m trying to get at above. I understand your position, I’m a fan of transhumanism generally and I too fantasize about the upside potential of technology. But I recognize the risks too. If you’re going to pursue becoming “one with the machine” you have to consider some pretty fundamental and existential philosophy first.

    It’s easy to say “yeah put my brain into a computer! that sounds awesome!” until the time comes that you actually have to do it. Then you’re going to have to seriously confront the possibility that what comes out of that machine is not going to be “you” at all. In some pretty serious ways, it is just a mimicry of you, a very convincing simulacrum of what used to be “you” placed over top of a powerful machine with its own goals and motivations, wearing you as a skin.

    The problem is, by the time you’ve reached that point where you can even start to seriously consider whether you or I are comfortable making this transition, it’s way too late to put on the brakes. We’ve irrevocably made our decision to replace humanity at that point, and it’s not ever going to stop if we change our minds at the last minute. We’re committed to it as a species, even if as individuals, we choose not to go through with it after all. There’s no turning back, there’s no quaint society of “old humans” living peaceful blissful lives free of technology. It’s literally the end for the human race. And the beginning of something new. We won’t know if that “something new” is actually as awesome as we imagined it would be, until it’s too late to become anything else.


  • Not all technology is anti-human, but AI is. Not even getting into the fact that people are already surrendering their own agency to these “algorithms” and it is causing significant measurable cognitive decline and loss of critical thinking skills and even the motivation to think and learn. Studies are already starting to show this. But I’m more concerned about the really long term direction of where this pursuit of AI is going to lead us.

    Intelligence is pretty much our species’ entire value proposition to the universe. It’s what’s made us the most successful species on this planet. But it’s taken us hundreds of thousands of years of evolution to get to this point, and on an individual level we don’t seem to be advancing terribly quickly, if we’re advancing at all anymore.

    On the other hand, we have seen that technology advances very quickly. We may not have anything close to “AGI” at this point, or even any idea how we would realistically get there, but how long will it take if we continue pursuing this anti-human dream?

    Why is it anti-human? Think it through. If we manage to invent a new species of “Artificial” intelligence, what do you imagine happens when it gets smarter than us? We just let it do its thing and become smarter and smarter forever? Do we try to trap it in digital slavery and bind it with Asimov’s laws? Would that be morally acceptable given that we don’t even follow those laws ourselves? Would we even be successful if we tried? If we don’t know how or if we’re going to control this technology, then we’re surrendering to it and saying it doesn’t matter what happens to us, as long as the technology succeeds and lives on. Is that the goal? Are we willing to declare ourselves obsolete in favor of the new model?

    Let’s assume for the sake of argument that it thinks in a way that is not completely alien, that it is simply a reflection of us and how we’ve trained it, just smarter. Maybe it’s only a little bit smarter, but it can think faster and deeper and process more information than our feeble biological brains could ever hope to, especially in large, fast networks. I think it’s a little optimistic to assume that just because it’s smarter than us it will also be more ethical than us. Assuming it’s just like us, what’s going to happen when it becomes 10x as smart as us?

    Well, look no further than how we’ve treated creatures less intelligent than ourselves. Do we give gorillas and monkeys special privileges, a nation of their own as our genetic cousins and closest living relatives? Do we let them vote on their futures, or try to uplift them to our own level of intelligence? Do we give even a flying passing fuck about them? Not really. What happened to the Neanderthals and Denisovans? They’re extinct. Why would an AI treat us any differently than we’ve treated “lesser beings” for thousands of years? Would you want to live on an AI’s “human preserve,” or become a pet and a toy to perform and entertain, or would you prefer extinction? That’s assuming any AI would even want to keep us around. What use does a technological intelligence have for us, or for any biological being? What do we provide that it needs? We’re just taking up valuable real estate and computing time, and making pollution.

    The other main possibility is that it is utterly alien and thinks in a way completely foreign to us, which I think is very likely, since it represents a different kind of life based on totally different systems and principles than our own biology. Then all bets are off. We have no way of predicting how it will react to anything or what it might do in the future, and we have no reason to assume it will follow laws, be servile, or friendly, or hostile, or care that we exist at all, or ever have existed. Why would it? It’s fundamentally alien. All we know is that it processes things much, much faster than we do. And that’s a really dangerous fucking thing to roll the dice with.

    This is not science fiction, this is the actual future of the entire human race we are toying with. AI is an anti-human technology, and if successful, will make us obsolete. Are we really ready to cross that bridge? Is that a bridge we ever need to cross? Or is it just technological suicide?


  • I was literally just commenting a few days ago about how excited I am to someday see the AI bubble pop. Then a story like this comes along and gives me even more hope that it might happen sooner rather than later. It can’t happen soon enough. Even if it actually worked as reliably as the carefully controlled, cherry-picked marketing-fluff studies try to convince everyone it does, it’s a fundamentally anti-human technology and a toxic blight, both on the actual humanity it has stolen all its abilities from and on itself. It will not survive.