  • For anything that is related to my backup scheme, it’s printed out as hard copy, put in an envelope in a fire safe in my house. I can tell you from experience there is nothing more stressful than “oh fuck I need my backups but the key to unlock the backups is in the backups fuck fuck fuck”.

    And for future reference, anyone thinking about breaking into my house to get access to my backups just DM me, I’m sure we can come to an arrangement that’s less hassle for both of us


  • I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.


  • They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically?”

    I suspect that the biggest difference would be running the Postgres DB on an SSD, where the fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, in which case it makes less of a difference).

    Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection, so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch, it’ll probably be fast enough.

    These are all Reckons without data to back them up, so maybe do some testing.
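
    If anyone wants to turn the Reckons into numbers, a rough sketch along these lines would do it - assuming psycopg2 is installed and pointed at your Postgres instance; the DSN and query are placeholders, not Immich’s actual schema:

```python
import time
import psycopg2

# Placeholders - swap in your own connection details and a query you care about
DSN = "dbname=immich user=postgres host=192.168.1.10"
QUERY = "SELECT count(*) FROM assets;"

conn = psycopg2.connect(DSN)
timings = []
with conn, conn.cursor() as cur:
    for _ in range(20):
        start = time.perf_counter()
        cur.execute(QUERY)
        cur.fetchall()
        timings.append(time.perf_counter() - start)

print(f"median query time: {sorted(timings)[len(timings) // 2] * 1000:.1f} ms")
```

    Run it once with the DB on the hard drive and once on the SSD; restart Postgres between runs so its cache doesn’t hide the difference.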


  • As in, hardware RAID is a terrible idea and should never be used. Ever.

    With hardware RAID, you are moving your single point of failure from your drives to your RAID controller - and when the controller fails (and they fail more often than you would expect), you are fucked, your data is gone, nice try, play again some time. In theory you could swap the controller out, but in practice it’s a coin flip whether that will actually work unless you can find exactly the same model controller with exactly the same firmware, manufactured on the same production line while the moon was in the same phase - and even then your odds are still only 2 in 3.

    Do yourself a favour: look at an external disk shelf/DAS/drive enclosure that connects over SAS and do RAID in software. Hardware RAID made sense when CPUs were hewn from granite and had clock rates measured in tens of megahertz, so offloading things to dedicated silicon made things faster, but that hasn’t been the case this century.


  • True randomness is really, really hard to do in software; bigger CPUs often have hardware random number generators that exploit some sort of quantum or otherwise non-deterministic phenomenon, but in software the best you can do is pseudo-random. These are algorithms that generate a sequence of randomly distributed numbers, but in a deterministic way - from a given starting state, they will always generate the same sequence of numbers. Good algorithms are designed to make it hard to infer the starting state just by observing the sequence (if you can do that, you can run the algorithm in parallel and predict the next number), but that’s an active area of research.

    At a guess, the calculator was programmed to initialise the random number generator from something that is hard for the user to control (milliseconds since power-on would be a good one) the first time you used it, but maybe TI got lazy and just initialised it to a constant value.
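
    To make the “same starting state, same sequence” bit concrete, here’s a quick sketch in Python (its random module is a Mersenne Twister pseudo-random generator - not what a TI calculator uses, but the behaviour is the same idea):

```python
import random
import time

# Seeding the PRNG with the same starting state gives the exact same "random" sequence
random.seed(1234)
first_run = [random.random() for _ in range(3)]

random.seed(1234)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)  # True - identical output from an identical seed

# Seeding from something the user can't easily control breaks the repetition,
# e.g. the current time in milliseconds
random.seed(time.time_ns() // 1_000_000)
print([random.random() for _ in range(3)])
```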


  • I’d considered doing something similar at some point but couldn’t quite figure out what the likely behaviour was if the workers lost connection back to the control plane. I guess containers keep running, but does kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach coredns?


  • I’ve started a similar process to yours and am moving domains as they come up for renewal, with a slightly different technical approach:

    • I’m using AWS Route 53 as my registrar. They aren’t the cheapest, but still work out at about half the price of Gandi, and one of my key requirements was to be able to use Terraform to configure DS records for DNSSEC and NS records in the parent zone (there’s a rough sketch of the registrar side of that below)
    • I run an authoritative nameserver on an OCI free tier VM using PowerDNS, and replicate the zones to https://ns-global.zone/ for redundancy. I’m investigating setting up another authoritative server on a different cloud provider in case OCI yank the free tier or something
    • I use https://migadu.com/ for email

    I have one .nz domain which I’ll need to find a different registrar for, cos for some reason Route 53 doesn’t support .nz domains, but otherwise the move is going pretty smoothly. Kinda sad to see where Gandi has gone - I opened a support ticket to ask how they can justify being twice the price of their competitors and got a non-answer.
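
    For anyone curious what the registrar-side automation actually looks like, here’s a rough sketch of the same idea using boto3 instead of Terraform - the domain and nameserver names are placeholders, and the DNSSEC/DS part is left out here since that goes through a separate registrar call:

```python
import boto3

# The registrar side of Route 53 is the "route53domains" API, which only lives
# in us-east-1. This updates the NS records the parent (TLD) zone publishes for
# your domain, pointing it at your own authoritative nameservers.
# The domain and hostnames below are placeholders.
client = boto3.client("route53domains", region_name="us-east-1")

client.update_domain_nameservers(
    DomainName="example.com",
    Nameservers=[
        {"Name": "ns1.example.net"},
        {"Name": "ns2.example.net"},
    ],
)
```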