Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • This is an “it’s turtles all the way down!” problem. An application has to be able to store its encryption keys somewhere. You can encrypt your encryption keys, but then where do you store that key? Ultimately any application needs access to the plaintext key in order to function.

    On servers the best practice is to store the encryption keys somewhere that isn’t on the server itself, such as a networked Hardware Security Module (HSM), but literally any location that isn’t physically on/in the server is good enough. Some Raspberry Pi attached to the network in the corner of the data center would be nearly as good, because the attack you’re protecting against with this kind of encryption is someone walking out of the data center with your server (and then decrypting the data).

    With a device like a phone you can’t use a networked HSM since your phone will be carried around with you everywhere. You could store your encryption keys out on the Internet somewhere but that actually increases the attack surface. As such, the encryption keys get stored on the phone itself.

    Phone OSes include special encrypted storage locations for things like encryption keys, but realistically they’re no more secure than storing the keys as plaintext in the application’s app-specific store (which is encrypted on Android by default; not sure about iOS). Only that app and the OS itself have access to that storage location, so it’s basically exactly the same as the special “secure” storage features… except easier to use and less likely to be targeted, exploited, and ultimately compromised because, again, it’s a smaller attack surface.

    If an attacker gets physical access to your device you must assume they’ll have access to everything on it unless the data is encrypted and the key for that isn’t stored on the phone itself (e.g. it’s derived from your thumbprint or your PIN). In that case your effective encryption key is your thumb(s) and/or PIN, because the Signal app’s encryption keys are already encrypted on the filesystem; see the sketch after this comment for roughly how that layering works.

    Going full circle: You can always further encrypt something or add an extra step to accessing encrypted data but that just adds inconvenience and doesn’t really buy you any more security (realistically). It’s turtles all the way down.
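
    A rough sketch of that PIN-as-the-real-key layering (Python, purely illustrative; the KDF parameters are arbitrary and the XOR “wrapping” is just a stand-in to show the layering, a real implementation would wrap the key with an AEAD like AES-GCM):

    ```python
    import hashlib, os

    pin = "483921"            # whatever the user types to unlock the app (made up)
    salt = os.urandom(16)     # stored next to the wrapped key; doesn't need to be secret

    # Derive a wrapping key from the PIN with a KDF (PBKDF2 here; iteration count is arbitrary)
    wrapping_key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 600_000)

    data_key = os.urandom(32)  # the key the app actually encrypts its data with

    # XOR is only a stand-in for real key wrapping
    wrapped_key = bytes(a ^ b for a, b in zip(data_key, wrapping_key))

    # Only `salt` and `wrapped_key` ever touch the filesystem; the turtle underneath is the PIN.
    ```

    Unwrapping is just the reverse with a key re-derived from the same PIN, which is why the PIN (or thumb) ends up being the effective key.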


  • This might not necessarily be the case for much longer with storage costs finally reaching certain thresholds.

    2TB SSDs only cost ~$100 and you can cram a lot of SSDs into a tiny space with only a minimal amount of cooling (still need a fan but just a fan).

    The next bottleneck to overcome is upload bandwidth. Too many providers offer asymmetric service with weirdly low/slow upload limits. However, that too might change over the next few years as DOCSIS 4.0 supports 10Gbit down/6Gbit up (DOCSIS 3.1 only supported ~1Gbit up). An important note about DOCSIS 4.0 is that in order to take advantage of its improved features (on the ISP end) you need to provide more upload bandwidth to the client (well, you can still cap it at the router, but at that point the ISP is just being an asshole instead of actually “managing bandwidth”).


  • Riskable@programming.dev to Selfhosted@lemmy.world: What’s the deal with Docker?

    Docker containers aren’t running in a virtual machine. They’re running in what amounts to a fancy chroot jail… It’s just an isolated environment that takes advantage of several kernel features (namespaces, cgroups, and the like) to make software running inside it think everything is normal despite being locked down.

    This is a very important distinction because it means that Docker containers are very lightweight compared to a VM. They use but a fraction of the resources a VM would and can be brought up and down in milliseconds since there’s no hardware to emulate.
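
    To make the “fancy chroot jail” point concrete, here’s a rough sketch of the kind of kernel primitives containers are built on (this is not literally what Docker does; it assumes Linux, root privileges, Python 3.12+ for os.unshare(), and a hypothetical pre-built root filesystem):

    ```python
    import os

    NEW_ROOT = "/srv/minimal-rootfs"   # hypothetical directory containing a tiny root filesystem

    pid = os.fork()
    if pid == 0:
        # Child: move into fresh mount and hostname (UTS) namespaces -- no hardware emulated
        os.unshare(os.CLONE_NEWNS | os.CLONE_NEWUTS)
        os.chroot(NEW_ROOT)            # the chroot-jail part
        os.chdir("/")
        # The shell inside sees an isolated filesystem and starts in milliseconds
        os.execv("/bin/sh", ["/bin/sh"])
    else:
        os.waitpid(pid, 0)
    ```

    Real container runtimes layer PID, network, and user namespaces, cgroups for resource limits, and overlay filesystems on top of the same idea, which is why containers stay so much cheaper than VMs.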


  • You’re confused, I get it. You only need one factory factory as long as you sprinkle Inversion of Craziness (IoC) all over everything. Also, for this to work you must spread your code across as many files/directories as possible and make sure you use really, really strict and verbose XML that doesn’t just define how your code runs but actually generates code itself.

    I strongly suspect the reason Java didn’t seem to have as much code is that the authors were using proper enterprise Java, which is mostly XML that can only be understood if your IDE takes at least 5 minutes to open and another 5 to open your project.


  • FACEIT is yet another completely useless, doesn’t-actually-work, trust-the-client anti-cheat tool. Basically, it makes it so that cheaters (and the game publisher) can claim cheating isn’t happening because “there’s an anti-cheat tool,” but in reality it doesn’t stop actual cheaters.

    The entire purpose of anti-cheat tools appears to be to stop casual Linux gamers from being able to play the game. Microsoft has a big part in it as well because the very same intentional vulnerabilities in Windows that hackers use to install undetectable rootkits are what get used by anti-cheat software.

    If Microsoft wanted, they could close those vulnerabilities by making all privilege levels above administrator (of which Windows has two, which is insane) inaccessible to anyone but Microsoft. Instead they just collect money from 3rd-party vendors to sign their kernel drivers, inherently trusting those vendors not to ship software with vulnerabilities. It’s a recipe for insecurity and Microsoft likes it that way. It acts as a form of vendor lock-in.

    Anti-cheat tools pretty much all work with the same basic assumption: Trust the client. What’s the first rule of network programming? Never trust the client!
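
    The only check that actually holds up is the one the server does itself. A rough sketch of what server-authoritative validation looks like (Python; the names and numbers are made up for illustration):

    ```python
    from dataclasses import dataclass

    MAX_SPEED = 7.0  # the fastest the game's own physics allows, in units per second

    @dataclass
    class Move:
        player_id: int
        dx: float    # displacement the client claims since its last accepted move
        dy: float
        dt: float    # seconds since that last accepted move

    def validate_move(move: Move) -> bool:
        """Reject any movement the server's own rules say is impossible."""
        if move.dt <= 0:
            return False
        speed = (move.dx ** 2 + move.dy ** 2) ** 0.5 / move.dt
        return speed <= MAX_SPEED

    # The authoritative server only applies moves that pass validate_move();
    # a speed-hacked client just gets its packets dropped.
    ```

    That’s the kind of check that works regardless of what’s running on the player’s machine, which is exactly why “trust the client” anti-cheat is theater.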