Considering that a lot of people here self-host both private services, like a NAS, and public ones, like websites and whatnot, how do you approach segmentation in the context of virtual machines versus dedicated machines?

This is generally how I see the community approach this:

Scenario 1: Air-Gapped, Fully Isolated Machine for Public Stuff

Two servers: one for the internal stuff (NAS) and another for the public stuff (websites, email, etc.), totally isolated from your LAN. Preferably the public machine gets its own public IP, distinct from your LAN's, and its traffic doesn't pass through your main router, e.g. by placing a switch between the ISP ONT and your router and running a cable from that switch to the isolated machine. This way the machine is completely isolated from your network and not dependent on it.

Scenario 2: Single server with VM exposed

A single server hosting two VMs, one to host a NAS along with a few internal services running in containers, and another to host publicly exposed websites. Each website could have its own container inside the VM for added isolation, with a reverse proxy container managing traffic.

For networking, I typically see two main options:

  • Option A: Completely isolate the “public-facing” VM from the internal network by using a dedicated NIC in passthrough mode for the VM;
  • Option B: Use a switch to deliver two VLANs to the host—one for the internal network and one for public internet access. In this scenario, the host would have two VLAN-tagged interfaces (e.g., eth0.X) and bridge one of them with the “public” VM’s network interface. Here’s a diagram for reference: https://ibb.co/PTkQVBF
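A rough sketch of the host side of Option B with iproute2, assuming the VLAN trunk arrives on eth0 and VLAN 20 is the public one (both the interface name and VLAN ID are made-up examples):

```shell
# Create the VLAN sub-interface for the public network (VLAN ID 20 is illustrative)
ip link add link eth0 name eth0.20 type vlan id 20

# Bridge it; the public VM's virtual NIC (tap device) gets attached to this bridge too
ip link add br-public type bridge
ip link set eth0.20 master br-public
ip link set eth0.20 up
ip link set br-public up

# Deliberately no IP address on br-public or eth0.20:
# the host forwards frames but does not participate in the public network.
```

With libvirt, the VM's network interface would then point at `br-public` as its source bridge.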

In the second option, a firewall inside the “public” VM would drop all inbound traffic except HTTP. The host would simply act as a bridge and would not participate in that network in any way.
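A minimal firewall for that “public” VM could look like the following nftables fragment (a sketch assuming the standard web ports; adjust to whatever is actually exposed):

```shell
# Hypothetical /etc/nftables.conf for the public VM: default-drop inbound,
# allowing only loopback, established return traffic, and the web ports.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
        tcp dport { 80, 443 } accept
    }
}
```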

Scenario 3: Exposed VM on a Windows/Linux Desktop Host

Windows/Linux desktop machine that runs KVM/VirtualBox/VMware to host a VM that is directly exposed to the internet with its own public IP assigned by the ISP. In this setup, a dedicated NIC would be passed through to the VM for isolation.

The host OS would be used as a personal desktop and contain sensitive information.

Scenario 4: Dual-Boot Between Desktop and Server

A dual-boot setup where the user switches between an OS for daily usage and another for hosting stuff when needed (with a public IP assigned by the ISP). The machine would have a single Ethernet interface, and the user would manually switch the network cable between: a) the router (NAT/internal network) when running the “personal” OS, and b) a direct connection to the switch (and ISP) when running the “public/hosting” OS.

For increased security, each OS would be installed on a separate NVMe drive, and the “personal” one would use TPM-backed full disk encryption to protect sensitive data in case the “public/hosting” system were ever compromised.

The theory here is that, if properly done, the TPM doesn’t release the keys to decrypt the “personal” OS disk while the user is booted into the “public/hosting” OS.
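If the “personal” OS uses LUKS2, that binding can be done with systemd-cryptenroll, sealing the key against PCRs whose values change when a different bootloader/OS is measured (the PCR choice and device path below are illustrative):

```shell
# Enroll a TPM2-sealed key for the "personal" disk, bound to PCR 7
# (Secure Boot state). Booting the "public/hosting" OS produces different
# measurements, so the TPM refuses to unseal this key.
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
```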

People also seem to combine these scenarios with Cloudflare Tunnels or reverse proxies on a cheap VPS.


What’s your approach / paranoia level? :D

Do you think using separate physical machines is really the only sensible way to go? How likely do you think VM escape attacks and VLAN hopping or other networking-based attacks are?

Let’s discuss how secure these setups are, what pitfalls one should watch out for on each one, and what considerations need to be addressed.

  • Breve@pawb.social · 14 days ago

    I just run Docker and my router maps ports to it. Container isolation and a basic firewall are more than enough for me.
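    That whole setup can be as small as one command; the container name and ports here are placeholders:

```shell
# One container, one published port; the router then forwards its WAN port to it.
docker run -d --name blog --restart unless-stopped -p 8080:80 nginx:alpine
```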

    Like are we talking what’s good enough security for hosting an anime waifu tier list blog or good enough security for a billion dollar corporation?

    • TCB13@lemmy.world (OP) · 14 days ago

      are we talking what’s good enough security for hosting an anime waifu tier list blog or good enough security for a billion dollar corporation?

      You tell me. :)

      What would you do/trust in both situations?

      • Breve@pawb.social · 14 days ago

        Well, from personal experience with a small website, the biggest things you have to deal with are web crawlers trying to vacuum up every last ounce of data they can find, and bots probing for obvious backdoors, like trying default WordPress logins (even if you’re not running WordPress). Make sure your software is properly configured and up to date and you’re safe. Some isolation is still a good idea, but don’t lose sleep over which kind, because they’re all overkill in this case.

        On the other hand, if you’re running a service that would be actively targeted by a large government enforcement agency or some other very wealthy and highly motivated entity, then complete physical isolation would be the only acceptable answer, with even more protocols on top to prevent contamination or identification, since attacks have been demonstrated that can infiltrate even air-gapped environments. And that’s assuming you could hide it well enough that they don’t just come physically compromise it (without you even knowing).

        Keep in mind if you want to use any of these technologies because you want to learn them or just think they’re neat, then please do! I suspect a lot of people with these types of home setups are doing it mostly for that reason and not because it is absolutely necessary for security purposes.

        • TCB13@lemmy.world (OP) · 13 days ago

          because you want to learn them or just think they’re neat, then please do! I suspect a lot of people with these types of home setups are doing it mostly for that reason

          That’s an interesting take.

      • Possibly linux@lemmy.zip · 13 days ago

        Billion-dollar corporations aren’t running dedicated hardware. That would be very expensive and nearly impossible to manage.

        • TCB13@lemmy.world (OP) · 13 days ago

          Are you sure? A big bank usually does… It’s very common to see groups of physical machines + public cloud services that are more strictly controlled than others and serve different purposes. One group might be public apps, another internal apps and another HVDs (virtual desktops) for the employees.

  • gaylord_fartmaster@lemmy.world · 14 days ago

    I don’t have anything publicly accessible on my network (other than WireGuard), but if I did, I’d just put whatever it was on its own VLAN, run a WireGuard server on it, and use a VPS as a reverse proxy that connects to it.

    I only use unprivileged LXCs and everything I host on my network runs in its own LXC, so I’m not really worried about someone getting access to the host from there.
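    As a rough illustration of what “unprivileged” buys you (commands assume LXD/Incus; the container name is made up):

```shell
# Launch an unprivileged container (the LXD/Incus default)
lxc launch images:debian/12 web

# Inside, root is still UID 0...
lxc exec web -- id -u

# ...but on the host that UID is offset into an unprivileged range via
# the user namespace, typically configured by /etc/subuid entries like:
#   root:1000000:1000000000
# so an escape lands in an account with no real host permissions.
```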

    • TCB13@lemmy.world (OP) · 14 days ago

      So you do trust LXC isolation to the point of thinking that it would be close to impossible to compromise your host?

      • gaylord_fartmaster@lemmy.world · 14 days ago

        I’m not really worried about it. Each LXC runs as its own user on the host, and they only have access to what they need to run each service.

        If there’s an exploit found that makes that setup inherently vulnerable then a lot of people would be way more screwed than I would.

        • TCB13@lemmy.world (OP) · 14 days ago

          If there’s an exploit found that makes that setup inherently vulnerable then a lot of people would be way more screwed than I would.

          Fair enough ahah

      • macroplastic@sh.itjust.works · 12 days ago

        Nothing is impossible to compromise. It’s about making it not worth it (why go after some home lab when they can use the same methods to extort millions of dollars?).

  • hendrik@palaver.p3x.de · 14 days ago

    A VM is practically as secure as a dedicated machine. In theory it isn’t, but in practice that’s how everybody does it, including the big tech companies, and dangerous vulnerabilities are rare.

    It all depends on how you set it up: whether the machines lack proper firewalling and can reach internal services, whether you set permissions right, or whether there’s a vulnerability in the software or in the way it’s installed. Those would be my main concerns. It doesn’t matter too much which exact virtualization or containerization setup you choose, as long as it provides isolation, including isolation from the networks it isn’t supposed to access.

  • schizo@forum.uncomfortable.business · 14 days ago

    What’s your concern here?

    Like who are you envisioning trying to hack you, and why?

    Because frankly, properly configured and permissioned (that is, stop using root for everything you run) container isolation is probably good enough for anything that’s not a nation state (barring some sort of issue with your container platform and it having an escape), and if it is a nation state you’re fucked anyways.

    But more to your direct question: I actually use DNS scopes and nginx ACLs to separate public from private. I have a *.public and a *.private CNAME which point to either my external or internal IP, plus ACLs in the nginx site configuration to scope where access is allowed.

    You can’t access a *.private host from outside the network, but you can access either from inside it. So (again, barring nginx having an oopsie somewhere) it’s reasonably secure and not accessible, and it leaves a very clear set of logs. I’m pulling those logs in, parsing them for anything suspicious, and doing automated alerting if I find anything I wouldn’t otherwise expect, so I’m happy enough with this level of security when paired with the services’ built-in authentication options.
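    A sketch of that ACL pattern as an nginx server block (the hostname, subnets, and upstream port are made-up examples):

```shell
# Hypothetical site config for a *.private name:
# only internal clients may connect, everyone else gets 403.
server {
    listen 443 ssl;
    server_name app.private.example.com;

    allow 192.168.0.0/16;   # adjust to your internal subnets
    allow 10.0.0.0/8;
    deny  all;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the actual service
    }
}
```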

    • TCB13@lemmy.world (OP) · 14 days ago

      What’s your concern here?

      No specific concern, I do like in scenario 2, option B. I was just listing the most common options and getting feedback on what others think about those.

      I personally believe the setup 2B is more than enough if a nation state isn’t after you, but who knows? :)

      • schizo@forum.uncomfortable.business · 14 days ago

        Then the correct answer is ‘the one you won’t screw up’, honestly.

        I’m a KISS proponent with security for most things, and uh, the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later.

        Pick the one that makes sense, is easy for you to deploy and maintain, and won’t end up being so much of a hindrance that you start making edge-case exceptions, because those are the things that will 100% bite you in the ass later.

        Seen so many people turn off a firewall, enable port forwarding, set a weak password, or change permissions to something too permissive and just end up getting owned: people who have otherwise sane, if maybe over-complicated, security designs and do actually know what they’re doing, but get burned by wandering off from their standards because what they implemented originally ends up being a pain to deal with in day-to-day use.

        So yeah, figure out your concerns, figure out what you’re willing to tolerate in terms of inconvenience and maintenance, and then make sure you don’t ever deviate from there without stopping and taking a good look at what you’re doing, what could happen if you do it, and coming up with a worst-case scenario first.

        • TCB13@lemmy.world (OP) · 14 days ago

          the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later. (…) Pick the one that makes sense, is easy for you to deploy and maintain

          This is an interesting piece of advice.

          Anyway maybe I wasn’t clear enough, I’m not looking to pick a setup, I’ve been doing 2.B. for a very long time and I do work on tech and know my way around. Just gauging what others are doing and maybe find a few blind spots :).

          Thanks.

  • jet@hackertalks.com · 14 days ago

    Scenario 1.5

    Pin a sensitive VM to a specific CPU that has no other VMs on it. This provides more isolation against known cross-VM side-channel attacks.
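    With libvirt, that looks something like this (the domain name and core numbers are examples):

```shell
# Pin the sensitive guest's two vCPUs to physical cores 2 and 3...
virsh vcpupin sensitive-vm 0 2
virsh vcpupin sensitive-vm 1 3
# ...and keep host tasks and other guests off those cores, e.g. by booting
# the host with isolcpus=2,3 or by pinning everything else elsewhere.
```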

    • TCB13@lemmy.world (OP) · 14 days ago

      I’m curious: are there documented attacks that could’ve been prevented by this?

      From my understanding, CPU pinning shouldn’t be needed that much; the host scheduler is aware that your VM threads are linked and will schedule child threads together. If you pin cores to VMs, you block the host scheduler from making smart scheduling choices. This is mostly only an issue when your CPU is under constraint, i.e. it’s being asked to perform more work than it can handle at once. Pinning is also not dedication: the host scheduler will still schedule non-VM work onto your pinned cores.

      I’m under the impression that CPU pinning is an old approach from a time before CPU schedulers were sophisticated enough to handle VM threads in a smart manner. That’s not the case anymore, and there might even be a negative performance impact from pinning.

    • TCB13@lemmy.world (OP) · 13 days ago

      ~~Is that still… self-hosting? In that case you would be hosting in a cloud company so… ~~

      misread comment.

        • TCB13@lemmy.world (OP) · 13 days ago

          If you’re using a VPS from Amazon, Digital Ocean or wtv you’re by definition not self-hosting. Still dependent on some cloud company, so not self-hosting in a pure sense… misread comment.

          • sepi@piefed.social · 13 days ago

            What part of “self hosting” that I mentioned above goes through a provider? Or do you only know like NordVPN?

            • TCB13@lemmy.world (OP) · 13 days ago

              Sorry, I misread your first comment. I was thinking you said “VPS”. :)

  • zod000@lemmy.ml · 13 days ago

    I go with scenario 1 because it radically reduces the ways I can screw things up for myself.

  • perry@aussie.zone · 12 days ago

    I appreciate the sentiment here, though I would agree that it is certainly paranoid 😅. I think if you’re careful with what you self-host, where you install it from, how you install it, and what you expose, you can keep things sensible and reasonably secure without the need for strong isolation.

    I keep all of my services in my k3s cluster. It spans 4 PCs and sits in its own VLAN. There aren’t any particular security precautions I take here. I’m a developer and can do a reasonable job verifying each application I install, but of course I accept the risk of running someone else’s software in my homelab.

    I don’t expose anything except Plex publicly. Everything else goes over Tailscale. I practise 3-2-1 backups with local disks and media as well as offsite to Backblaze. I occasionally offsite physical media backups as well.

    I’d be interested to see what others think about this… most hosting solutions leave it all open by default. I think there are a lot of small and easy ways one can practice good lab hygiene without air-gapping.

    • TCB13@lemmy.world (OP) · 12 days ago

      You’re mostly on a scenario 2.B, same as me. That’s the most flexible yet secure design.

  • Possibly linux@lemmy.zip · 13 days ago

    Use defense in depth when possible. What you are describing wouldn’t work for any bigger setup, as Proxmox clusters trust the underlying hosts. Also, the chances of a hypervisor escape are very, very small, almost impossible. Chances are your weakest point will not be the hypervisor. I would focus on the network level, with containers to separate workloads.

      • Possibly linux@lemmy.zip · 12 days ago

        That’s what pretty much everyone uses.

        Also, you probably don’t even need VLANs. Sure, it’s nice, but you are being way paranoid. You need to define a threat model before you do anything, or else you are just going to spend lots of time protecting the wrong things.

        I’m normally all for security, but if you are this paranoid you probably shouldn’t be putting things on the internet. Realistically there is a low threat even if someone manages to exploit a service. Try to come up with where an exploit is likely to happen and how you would prevent lateral movement. From an attack perspective, it is extremely unlikely there is a person behind the attacks; if anything, a container will be exploited and then used as either a proxy or a cryptominer.

    • TCB13@lemmy.world (OP) · 12 days ago

      Wow, hold your horses Edward Snowden!… but at the end of the day Qubes is just a Xen hypervisor with a cool UI.

  • foggy@lemmy.world · 14 days ago

    I have a server that I run services through traefik/docker on.

    It ALSO has a drive that is a MIRROR of my NAS.

    That NAS has a lil slavey twin, an external 14 TB USB HDD. It’s on my laptop.

    Every time my laptop is idle, it does a little rsync with the server’s NAS to stay current.
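    Such an idle-time sync boils down to a one-liner (host and paths here are placeholders):

```shell
# Mirror the server's NAS share onto the external USB drive; --delete keeps
# the copy exact, -aH preserves permissions, timestamps, and hard links.
rsync -aH --delete backupuser@server:/srv/nas/ /mnt/usb14tb/nas-mirror/
```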

    I keep a 3rd copy (mirroring server NAS) in the cloud.