• 0 Posts
  • 21 Comments
Joined 2 months ago
Cake day: March 17th, 2025

  • suicidaleggroll@lemm.ee to Selfhosted@lemmy.world · Version Dashboard · 12 days ago

    Just FYI - you’re going to spend far, FAR more time and effort reading release notes and manually upgrading containers than you will letting them run :latest and auto-update and fixing the occasional thing when it breaks. Like, it’s not even remotely close.

    Pinning major versions makes sense for certain containers that need specific versions, for containers that regularly have breaking changes requiring manual steps to upgrade, or for absolutely mission-critical services that can’t handle a little downtime from a failed update a couple of times a decade, but for everything else it’s a waste of time.
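
    To make that concrete, the split looks something like this in a compose file (the image names here are just placeholder examples):

    services:
      db:
        # needs a specific version: pin the major tag and upgrade deliberately
        image: postgres:16
      some-webapp:
        # low-stakes service: ride :latest and let auto-update handle it
        image: ghcr.io/example/some-webapp:latest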



  • I had something almost identical to this happen to me on Friday. Last year our company moved to a super locked-down version of Teams, to the point where I couldn’t even open images that people put in the chat because of security issues; instead, the image they posted would be replaced with an error image saying that I wasn’t allowed to open images, blah blah blah. That problem was resolved a long time ago though.

    On Friday I was trying to send an image of some data processing to a colleague, and every time I put it in Teams, it would show up as that stupid error message. I spent a solid hour trying to figure out why that problem was back - was my computer not authenticating with MS properly, etc. Turns out my file browser was sorting by time order instead of reverse time order, and the screenshot at the top of the list, from May 2, 2024, was a screenshot of the error message that I used to send to IT when they were investigating the problem.


  • They likely streamed from some other Plex server in the past, and that’s why they’re getting the email. The email specifically states that if the server owner has a Plex Pass, you don’t need one.

    I got the email earlier today and it couldn’t be clearer:

    As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits.


  • I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers they run). That VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) is clipped. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that the VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM limits up or address the service itself and modify its settings to drop back down.

    Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about some containers leaking memory and creeping up over time, but I have an automated backup script that stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers stays running for longer than 24 hours continuously anyway.
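
    The nightly job is basically just a stop, rsync, restart pass. A minimal sketch of that shape (the paths and backup destination are placeholders, not my actual script):

    #!/bin/bash
    # Sketch of a nightly stop -> rsync -> restart backup pass
    set -euo pipefail

    cd /srv/docker                # compose files, .env, and mapped volumes live here

    docker compose down           # stop containers so the volumes are quiesced

    # --link-dest hard-links unchanged files against the previous run,
    # which is what makes the dated copies cheap incrementals
    rsync -a --delete --link-dest=/backups/docker/latest \
        /srv/docker/ "/backups/docker/$(date +%F)/"
    ln -sfn "/backups/docker/$(date +%F)" /backups/docker/latest

    docker compose up -d          # bring everything back up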


  • People always say to let the system manage memory and don’t interfere with it as it’ll always make the best decisions, but personally, on my systems, whenever it starts to move significant data into swap the system starts getting laggy, jittery, and slow to respond. Every time I try to use a system that’s been sitting idle for a bit and it feels sluggish, I go check the stats and find that, sure enough, it’s decided to move some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.

    So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
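
    For reference, the actual knobs involved are just standard sysctl/swap commands (the sysctl.d filename below is arbitrary):

    # Apply immediately
    sudo sysctl vm.swappiness=0

    # Make it persistent across reboots
    echo 'vm.swappiness=0' | sudo tee /etc/sysctl.d/99-swappiness.conf

    # Manually empty swap back into RAM (needs enough free RAM to absorb it)
    sudo swapoff -a && sudo swapon -a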


  • I self-host Bitwarden, hidden behind my firewall and only accessible through a VPN. It’s perfect for me. If you’re going to expose your password manager to the internet, you might as well just use the official cloud version IMO since they’ll likely be better at monitoring logs than you will. But if you hide it behind a VPN, self-hosting can add an additional layer of security that you don’t get with the official cloud-hosted version.

    Downtime isn’t an issue as clients will just cache the database. Unless your server goes down for days at a time you’ll never even notice, and even then it’ll only be an issue if you try to create or modify an entry while the server is down. Just make sure you make and maintain good backups.

    Every night I stop and rsync all containers (including Bitwarden) to a daily incremental backup server, as well as making nightly snapshots of the VM it lives in. I also periodically make encrypted exports of my Bitwarden vault which are synced to all devices - those are useful because they can be natively imported into KeePassXC, allowing you to access your password vault from any machine even if your entire infrastructure goes down. Note that even if you go with the cloud-hosted version, you should still be making these encrypted exports to protect against vault corruption, deletion, etc.
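
    If you want to script those exports instead of clicking through the web vault, the official bw CLI can produce password-protected encrypted exports; roughly (the output path is a placeholder):

    # Assumes you've already pointed the CLI at your instance (bw config server) and logged in
    export BW_SESSION="$(bw unlock --raw)"

    # Password-protected encrypted export (portable, unlike the default
    # account-key-encrypted variant)
    bw export --format encrypted_json --password 'separate-export-password' \
        --output "/backups/bitwarden/vault-$(date +%F).json"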



  • I don’t like the fact that I could delete every copy using only the mouse and keyboard from my main PC. I want something that can’t be ransomwared and that I can’t screw up once created.

    Lots of ways to get around that without having to go the route of burning a hundred blu-rays with complicated (and risky) archive splitting and merging. Just a handful of external HDDs that you “zfs send” to and cycle on some regular schedule would handle that. So buy 3 drives, back up your data to all 3 of them, then unplug 2 and put them somewhere safe (desk at work, friend or family member’s house, etc.). Continue backing up to the one you keep local for the next ~month and then rotate the drives. So at any given time you have an on-site copy that’s up-to-date, and two off-site copies that are no more than 1 and 2 months old respectively. Immune to ransomware, accidental deletion, fire, flood, etc., and super easy to maintain and restore from.
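
    A rough sketch of what one rotation looks like with zfs send/recv (the pool and dataset names are made up):

    # One-time setup per external drive: create a pool on it
    zpool create backup1 /dev/sdX

    # First backup to that drive: full send
    zfs snapshot tank/data@2025-05-01
    zfs send tank/data@2025-05-01 | zfs recv backup1/data

    # Next rotation of the same drive: incremental send from the last common snapshot
    zfs snapshot tank/data@2025-06-01
    zfs send -i tank/data@2025-05-01 tank/data@2025-06-01 | zfs recv -F backup1/data

    # Then zpool export backup1, unplug it, and rotate in the next drive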


  • Main reason is that if you don’t already have the right key, the VPN doesn’t even respond - it’s just a black hole where all packets get dropped. SSH, on the other hand, will respond whether or not you have a password or a key, which lets the attacker know that there’s something there listening.

    That’s not to say SSH is insecure; I think it’s fine to expose once you take some basic steps to lock it down. Just answering the question.
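
    The “basic steps” I mean are the usual ones in /etc/ssh/sshd_config (standard OpenSSH options - the username is a placeholder, and restart sshd after editing):

    # Keys only, no password logins
    PasswordAuthentication no

    # No direct root logins
    PermitRootLogin no

    # Optionally, only allow specific accounts
    AllowUsers myuser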







  • The nice thing about Docker is that all you need to do is back up your compose file, .env file, and mapped volumes, and you can easily restore on any other system. I don’t know much about CasaOS, but presumably you have the ability to stop your containers and access the filesystem to copy their config and mapped volumes elsewhere? If so, this should be pretty easy. You might have some networking stuff to work out, but I suspect the rest should go smoothly, and IMO it would be a good move.

    When self-hosting, the more you know about how things actually work, the easier it is to fix when something is acting up, and the easier it is to make known good backups and restore them.
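
    As a sketch of what that move looks like in practice (all paths are placeholders - CasaOS may keep its compose files and app data somewhere else):

    # On the old machine: stop the stack, then copy the compose file, .env,
    # and mapped volumes (assumed here to all live under /path/to/stack)
    docker compose down
    rsync -a /path/to/stack/ newhost:/path/to/stack/

    # On the new machine: bring it back up from the copied files
    cd /path/to/stack
    docker compose up -d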



  • Sure, it’s a bit hack-and-slash, but not too bad. Honestly the dockcheck portion is already pretty complete, I’m not sure what all you could add to improve it. The custom plugin I’m using does nothing more than dump the array of container names with available updates to a comma-separated list in a file. In addition to that I also have a wrapper for dockcheck which does two things:

    1. dockcheck plugins only run when there’s at least one container with available updates, so the wrapper is used to handle cases when there are no available updates.
    2. Some containers aren’t handled by dockcheck because they use their own management system, two examples are bitwarden and mailcow. The wrapper script can be modified as needed to support handling those as well, but that has to be one-off since there’s no general-purpose way to handle checking for updates on containers that insist on doing things in their own custom way.

    Basically there are 5 steps to the setup:

    1. Enable Prometheus metrics from Docker (this is just needed to get running/stopped counts; if those aren’t needed it can be skipped). To do that, add the following to /etc/docker/daemon.json (create it if necessary) and restart Docker:
    {
      "metrics-addr": "127.0.0.1:9323"
    }
    

    Once running, you should be able to run curl http://localhost:9323/metrics and see a dump of Prometheus metrics

    2. Clone dockcheck, and create a custom plugin for it at dockcheck/notify.sh:
    send_notification() {
        # dockcheck passes the names of containers with available updates as arguments
        Updates=("$@")

        # Join them into a comma-separated string and strip the leading ", "
        UpdToString=$(printf ", %s" "${Updates[@]}")
        UpdToString=${UpdToString:2}

        File=updatelist_local.txt

        echo -n "$UpdToString" > "$File"
    }
    
    3. Create a wrapper for dockcheck:
    #!/bin/bash

    # Run from the directory this script lives in
    cd "$(dirname "$0")"

    # Check for available updates and trigger the notify plugin without applying
    # anything (see dockcheck's README for what the flags do)
    ./dockcheck/dockcheck.sh -mni

    # The notify.sh plugin only writes this file when at least one update was found
    if [[ -f updatelist_local.txt ]]; then
      mv updatelist_local.txt updatelist.txt
    else
      echo -n "None" > updatelist.txt
    fi
    

    At this point you should be able to run your script, and at the end you’ll have the file “updatelist.txt”, which will either contain a comma-separated list of all containers with available updates, or “None” if there are none. Add this script to cron to run on whatever cadence you want; I use 4 hours.
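
    For example (the path to the wrapper is a placeholder - use wherever you put it):

    # m h dom mon dow  command
    0 */4 * * *  /opt/dockcheck/dockcheck_wrapper.sh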

    4. The main Python script:
    #!/usr/bin/python3
    
    from flask import Flask, jsonify
    
    import os
    import time
    import requests
    import json
    
    app = Flask(__name__)
    
    # Listen addresses for docker metrics
    dockerurls = ['http://127.0.0.1:9323/metrics']
    
    # Other dockerstats servers
    staturls = []
    
    # File containing list of pending updates
    updatefile = '/path/to/updatelist.txt'
    
    @app.route('/metrics', methods=['GET'])
    def get_tasks():
      running = 0
      stopped = 0
      updates = ""
    
      for url in dockerurls:
          response = requests.get(url)
    
          if (response.status_code == 200):
            for line in response.text.split("\n"):
              if 'engine_daemon_container_states_containers{state="running"}' in line:
                running += int(line.split()[1])
              if 'engine_daemon_container_states_containers{state="paused"}' in line:
                stopped += int(line.split()[1])
              if 'engine_daemon_container_states_containers{state="stopped"}' in line:
                stopped += int(line.split()[1])
    
      for url in staturls:
          response = requests.get(url)
    
          if (response.status_code == 200):
            apidata = response.json()
            running += int(apidata['results']['running'])
            stopped += int(apidata['results']['stopped'])
            if (apidata['results']['updates'] != "None"):
              updates += ", " + apidata['results']['updates']
    
      if (os.path.isfile(updatefile)):
        st = os.stat(updatefile)
        age = (time.time() - st.st_mtime)
        if (age < 86400):
          f = open(updatefile, "r")
          temp = f.readline()
          if (temp != "None"):
            updates += ", " + temp
        else:
          updates += ", Error"
      else:
        updates += ", Error"
    
      if not updates:
        updates = "None"
      else:
        updates = updates[2:]
    
      status = {
        'running': running,
        'stopped': stopped,
        'updates': updates
      }
      return jsonify({'results': status})
    
    if __name__ == '__main__':
      app.run(host='0.0.0.0')
    

    The neat thing about this program is it’s nestable, meaning if you run steps 1-4 independently on all of your Docker servers (assuming you have more than one), then you can pick one of the machines to be the “master” and update the “staturls” variable to point to the other ones, allowing it to collect all of the data from other copies of itself into its own output. If the output of this program will only need to be accessed from localhost, you can change the host variable in app.run to 127.0.0.1 to lock it down.

    Once this is running, you should be able to run curl http://localhost:5000/metrics and see the running and stopped container counts and available updates for the current machine and any other machines you’ve added into “staturls”. You can then turn this program into a service or launch it @reboot in cron or in /etc/rc.local, whatever fits with your management style, to start it up on boot.

    Note that it does verify the age of the updatelist.txt file before using it; if it’s more than a day old it likely means something is wrong with the dockcheck wrapper script or similar, and rather than using the output, the REST API will print “Error” to let you know something is wrong.
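
    If you go the service route, a minimal systemd unit would be something along these lines (the unit name, paths, and user are placeholders):

    # /etc/systemd/system/dockerstats.service
    [Unit]
    Description=Docker container stats REST API
    After=network.target docker.service

    [Service]
    ExecStart=/usr/bin/python3 /opt/dockerstats/dockerstats.py
    Restart=on-failure
    User=someuser

    [Install]
    WantedBy=multi-user.target

    Then systemctl daemon-reload && systemctl enable --now dockerstats and it’ll come up on every boot.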

    5. Finally, the Homepage custom API to pull the data into the dashboard:
            widget:
              type: customapi
              url: http://localhost:5000/metrics
              refreshInterval: 2000
              display: list
              mappings:
                - field:
                    results: running
                  label: Running
                  format: number
                - field:
                    results: stopped
                  label: Stopped
                  format: number
                - field:
                    results: updates
                  label: Updates
    


  • Anything on a separate disk can be simply remounted after reinstalling the OS. It doesn’t have to be a NAS, DAS, RAID enclosure, or anything else that’s external to the machine unless you want it to be. Actually it looks like that Beelink only supports a single NVMe disk and doesn’t have SATA, so I guess it does have to be external to the machine, but for different reasons than you’re alluding to.
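
    For what it’s worth, “remounting after a reinstall” just means adding the data disk back to /etc/fstab on the new OS (the UUID and mount point below are placeholders - get the real UUID from blkid):

    # /etc/fstab - assumes an ext4 data disk
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /srv/data  ext4  defaults  0  2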