  • I have watchtower configured to update most, but not all, containers.

    It runs after the nightly backup of everything, so if something explodes, I’ve got a backup that’s recent and revertible. I also don’t update certain types of containers (databases, critical infrastructure, etc.) automatically, so that the blast radius of a bad update when I’m not there doing it is limited.

    In the last ~3 years I’ve had exactly zero instances of ‘oops shit’s fucked!’, but I also don’t run anything that’s in a massive state of flux and constantly having breaking changes (see: immich).
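
    Roughly, the setup looks like the compose sketch below. WATCHTOWER_SCHEDULE, WATCHTOWER_LABEL_ENABLE, and the com.centurylinklabs.watchtower.enable label are real watchtower options; the schedule and service names are just placeholders for illustration.

    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          - WATCHTOWER_SCHEDULE=0 0 5 * * *   # 6-field cron: 05:00 daily, after the nightly backup
          - WATCHTOWER_LABEL_ENABLE=true      # only update containers that opt in
      someapp:
        image: someapp:latest
        labels:
          - com.centurylinklabs.watchtower.enable=true   # opted in to auto-updates
      postgres:
        image: postgres:16
        # no label: watchtower leaves the database alone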


  • As someone with recent platforms from both Intel and AMD, man, I do not like my 7700X’s platform.

    It’s just sporadically unreliable: sometimes it POSTs, sometimes it doesn’t; sometimes the memory decides it needs to reset back to JEDEC standards instead of the EXPO settings, sometimes it doesn’t. Even a successful POST can take upwards of a minute, and the system may or may not reset in the middle of it, resulting in two extended delays.

    Perfectly stable once the OS gets booted (memtest is fine, prime95 is fine, and it boosts like crazy, up to about 5.5GHz all-core), but getting there is such a pain on occasion.

    I realize more than a little of this is probably attributable to the motherboard manufacturer/UEFI settings, but the last few AMD platforms I’ve had have just been wonky and less than 100% reliable compared to the last several Intel ones, which have typically just worked, correctly, every time.


  • Yeah, exactly: if you know how it works, then you know how to fix it. I don’t think you need comprehensive knowledge of how everything you run works, but you should at least have good enough notes somewhere to explain HOW you deployed it the first time, any changes you had to make, and anything you ran into that required you to go figure out what the blocking issue was.

    And then you should make sure that documentation is visible in a form that doesn’t require ANYTHING to actually be working, which is why I just put pages of notes in the compose file: docker doesn’t care, and darn near any computer on earth made in the last 40 years can read a plain text file.
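
    As a made-up example of what those in-file notes look like (the service name and paths are invented):

    # deployed pinned to :2.1 because :latest broke the config format
    # data lives in ./data; restore = copy this file + ./data back, then `docker compose up -d`
    services:
      someapp:
        image: someapp:2.1
        volumes:
          - ./data:/app/data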

    I don’t really think there’s any better/worse reverse proxy for simple configurations, but I’m most familiar with nginx, which means I’ve spent too long fixing busted shit on it. It’s the choice primarily because, well, when I break it, I probably already know how to fix what’s wrong.


  • I’m a grumpy linux greybeard type, so I went with… plain text files.

    Everything is deployed via docker, so I’ve got a docker-compose.yml for each stack, and any notes or configuration details specific to that app are comments in the compose file. Those are all backed up in a couple of places, since all I need to do is drop them on a filesystem and, bam, complete restoration.

    Reverse proxy is nginx, because it’s reliable, tested, proven, and works, and while it might not have all those fancy auto-config options other things have, it also doesn’t automatically configure itself in ways I’d prefer it didn’t, either.
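
    For a simple site, the entire per-app nginx config is about this much (hostname, cert paths, and the upstream port are placeholders):

    server {
        listen 443 ssl;
        server_name app.example.com;
        ssl_certificate     /etc/ssl/app.example.com/fullchain.pem;
        ssl_certificate_key /etc/ssl/app.example.com/privkey.pem;
        location / {
            proxy_pass http://127.0.0.1:8080;   # the app container
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }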

    I don’t use any tools like portainer or dockge or nginx proxy manager at this point, because dealing with what’s just a couple of config files on the filesystem is faster (for me) and less complicated (again, for me) than adding another layer of software on top (and it keeps the attack surface small).

    My one concession to GUI shit for docker is an install of dozzle, because it makes dealing with docker logs simple, and it simplifies managing the ~40 stacks and ~85 containers I’ve got set up at the moment.
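
    dozzle itself is a one-container stack; a minimal sketch (the port mapping is your choice):

    services:
      dozzle:
        image: amir20/dozzle:latest
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only access to the docker API
        ports:
          - 8080:8080   # web UI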


  • Nope, that curl command says ‘connect to the public IP of the server, ask for this specific site by name, and ignore SSL errors’.

    So it’ll make a request to the public IP for any site configured with that server name even if the DNS resolution for that name isn’t a public IP, and ignore the SSL error that happens when you try to do that.

    If a private site with that name is configured on nginx without any ACLs, nginx will happily return the content of whatever is at the requested server name.

    Like I said, it’s certainly an edge case that requires you to have knowledge of your target, but at the same time, how many people will just name their vaultwarden install, as an example, vaultwarden.private.domain.com?

    You could write a script that’ll recon through various permutations of high-value targets and have it make a couple hundred curl attempts to come up with a nice clean list of reconned and possibly vulnerable targets.
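
    A sketch of that, pointed at your own server for auditing (the IP is a documentation address and the name list is invented):

    #!/bin/sh
    # probe guessed hostnames against one public IP; any 200 means nginx
    # served content for a name whose DNS is supposedly internal-only
    TARGET_IP=203.0.113.10
    for name in vaultwarden passwords grafana pihole; do
      code=$(curl -sk -o /dev/null -w '%{http_code}' \
        --header "Host: ${name}.private.domain.com" "https://${TARGET_IP}")
      echo "${name}.private.domain.com -> ${code}"
    done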


  • That’s the gotcha that can bite you: if you’re sharing internal and external sites via a split-horizon nginx config, and it’s accessible over the public internet, then the IP defined in DNS doesn’t actually matter.

    If the attacker can determine that secret.local.mydomain.com is a valid server name, they can request it from nginx even if it’s got internal-only DNS by including a Host header for that domain in their request; for example, with curl:

    curl --header 'Host: secret.local.mydomain.com' https://your.public.ip.here -k

    Admittedly this requires some recon, which means 99.999% of attackers are never even going to get remotely close to doing this, but it’s an edge case that’s easy to defend against with ACLs, and you probably should when doing split-horizon configurations.
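
    A sketch of that defense in nginx (the internal range and server name are placeholders): ACL the private site down to LAN clients, and refuse any name you don’t explicitly serve.

    # internal-only site: LAN clients only
    server {
        listen 443 ssl;
        server_name secret.local.mydomain.com;
        allow 192.168.0.0/16;   # your internal range
        deny  all;              # everyone else gets a 403
        # ssl_certificate / proxy_pass etc. as usual
    }

    # catch-all: refuse requests for any name not configured above
    server {
        listen 443 ssl default_server;
        ssl_reject_handshake on;   # abort TLS for unknown names (nginx 1.19.4+)
        return 444;                # belt and suspenders: close without responding
    }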