• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: June 11th, 2023





  • Just another option: if you already know (or are willing to learn) how to write documents in Markdown (the same format Lemmy supports) and pick up a bit of infrastructure setup, it can be free to very cheap to run a blog on something like netlify.app, GitHub Pages, or others. There are plenty of static site generators out there that are both relatively easy to use and very powerful.

    I currently have a private blog set up on a cloud provider that just takes Markdown documents and builds them, along with some templates and webpage code, into a site like this. Although I have mine hosted on a VPS with my own domain, it’s completely possible to use something like GitHub Pages, netlify.app, etc. for that. Both are free to host on, afaik, but if you want to pay for a dedicated service it’s usually between 2 and 5 USD per month.

    Edit: The option above isn’t ActivityPub software, sorry for not realizing that immediately, but it is federated in a way, I suppose.
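    To give a rough idea of how little glue is involved: below is a sketch of a GitHub Actions workflow that builds a Markdown-based Hugo site and publishes it to GitHub Pages. Hugo, the branch name, and these particular community actions are just illustrative assumptions, not what I actually run; most static site generators and hosts work along the same lines.

        # .github/workflows/deploy.yml (hypothetical example)
        name: build-and-deploy
        on:
          push:
            branches: [main]          # assumes the site source lives on main
        jobs:
          deploy:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v4
              - uses: peaceiris/actions-hugo@v2     # installs Hugo on the runner
                with:
                  hugo-version: 'latest'
              - run: hugo --minify                  # markdown + templates -> static HTML in ./public
              - uses: peaceiris/actions-gh-pages@v3 # publishes ./public to GitHub Pages
                with:
                  github_token: ${{ secrets.GITHUB_TOKEN }}
                  publish_dir: ./public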




  • The way I have my monitoring set up is to poll the containers from behind the proxy layer. For example, if I’m trying to poll Portainer:

        ---
        services:
          portainer:
            ...

    With the service name portainer, polling it from uptime-kuma within the same Docker network would look like this:
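    (Illustrative example; 9000 here is just Portainer’s default web UI port, so adjust for your setup.)

        portainer:9000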

    Can confirm this is working correctly to monitor that the service is reachable. It doesn’t, however, ensure that you can reach it from your computer, since that depends on your reverse proxy being configured correctly and staying up, but it’s what I wanted in my case.

    Edit: If you want to poll the HTTP endpoint, you would prepend the scheme, like http://whatever_service:whatever_port


  • “I believe the Pictrs is a hard dependency and Lemmy just won’t work without it, and there is no way to disable the caching”

    I’ll have to double-check this, but I’m almost certain pictrs isn’t a hard dependency. Saw either the author or one of the contributors mention a few days ago that pictrs could be discarded by editing config.hjson to remove the pictrs block. Was playing around with deploying a test instance a few days ago and found it to be true, at least prior to finalizing the server setup. I didn’t spin up the pictrs container at all, so I know that the server will at least start and let me configure it.

    The one thing I’m not sure of, however, is whether any caching data is written to the container layer in lieu of being sent to pictrs, as I didn’t get that far (yet). I haven’t seen any mention that the backend even does local storage, so I’m assuming that no caching is taking place when pictrs is not being used.
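    Roughly, a pictrs-less deployment would look something like the sketch below, with the pictrs service simply left out. The image tags and environment variables are illustrative assumptions, not a verbatim copy of my test setup.

        ---
        # sketch of a Lemmy compose file with no pictrs service defined;
        # versions and env vars are assumptions for illustration
        services:
          postgres:
            image: postgres:15-alpine
            environment:
              - POSTGRES_USER=lemmy
              - POSTGRES_PASSWORD=changeme
              - POSTGRES_DB=lemmy
            volumes:
              - ./volumes/postgres:/var/lib/postgresql/data
          lemmy:
            image: dessalines/lemmy:0.18.2
            volumes:
              # config.hjson also has its pictrs block removed
              - ./lemmy.hjson:/config/config.hjson
            depends_on:
              - postgres
          lemmy-ui:
            image: dessalines/lemmy-ui:0.18.2
            environment:
              - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
            depends_on:
              - lemmy
          # no pictrs container at all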

    Edit: Clarifications


  • Thanks for sharing! I’ll definitely be looking into adding this to my infra alerting stack. Should pair well with webhooks using ntfy for notifications. Currently I just have bash scripts push to uptime-kuma for disk usage monitoring as a dead man’s switch, but this should be better as a first-line method. Not to mention all the other functionality it has baked in.

    Edit: It would also be great if there were a precompiled binary in each release so I could run it bare-metal, but the container on ghcr.io is most likely what I’ll be using anyway. Thanks for not uploading only to Docker Hub.



  • Agree completely. In the grand scheme of things the damage that appears to have happened here is small potatoes, but it brought attention to the vulnerability, so it was patched quickly. Going forward, the authors and contributors to the project might be a bit more focused on hardening the software against these types of vulnerabilities. Pen testing is invaluable on internet-accessible platforms with a wide user base like this one, because it makes for better, more secure software. Unfortunately this breach wasn’t under the “ethical pen testing” umbrella, but it sure as hell brought the vulnerability to the attention of everyone with a stake in it, so I view it as a net win.






  • My long and mostly complete list:

    • Audiobookshelf (GH)
      • Using for audiobooks. Ebooks, comics, and podcast support in early stages.
    • Authelia (GH)
      • Using for two-factor authentication in front of all of my services. Critical infrastructure.
    • Bazarr (GH)
      • Using for automated subtitle management. Have not needed to rely on it much.
    • Code-Server (GH)
      • Using for a plethora of things. I could write an entire post on this alone.
    • Courier
      • Using (occasionally) for package-tracking from various carriers.
    • EmulatorJS
      • Using for retro-emulation.
    • Gitea (GH) x2
      • Using as a git repo server, package repository, and for CI/CD automation. Is critical infrastructure in my lab. Could also write an entire post on this one.
    • Headscale with Headscale-UI. Tailscale clients on various VMs, LXCs, etc.
      • Using to securely network with my remote servers.
    • Homepage
      • Using as a “single-pane-of-glass” to get an overview of service health with links to the various services.
    • Invidious
      • Using in place of YouTube.
    • IT-Tools (GH)
      • Using for the myriad of useful tools it offers.
    • Jellyfin (GH)
      • My media player of choice. Using for movies and television, but it also supports music, ebooks, and photos.
    • Kopia Server (GH)
      • Using for data backups to my Minio instance on local NAS and Wasabi. Simple, fast, and reliable.
    • Librespeed (GH)
      • Using for the occasional speedtest to my remote servers.
    • Matrix stack using Conduit back end and Element-Web front end
      • Federated Discord essentially. Using as a private instance for friends and family.
    • Minio
      • Using primarily as a gateway to storing backups, also serves git-lfs for Gitea.
    • N8N (GH)
      • Using for home-automation, backing up my Reddit saved posts to a database, deal-alerts, and part of a CI/CD pipeline.
    • NTFY (GH)
      • Using for infrastructure notifications mostly. Very simple and versatile alerting solution.
    • NZBGet
      • Using for getting “usenet articles”.
    • Paperless-NGX
      • Using for document archival. Important receipts, documentation, letters, etc. live here.
    • Portainer (GH) with multiple agents on VMs, LXCs, and VPSs
      • High-level management of my various Docker containers.
    • Prowlarr
      • Using to provide a Torznab API to sites that don’t natively have one. Integrates with Radarr and Sonarr.
    • Radarr (GH)
      • Using for movie management.
    • Radicale
      • Using as a contacts and calendar server.
    • Raneto (GH)
      • Using as a knowledge base. Lab documentation, lists, recipes, lots of things live here. Using with code-server and Gitea.
    • Readarr (GH)
      • Using for book management.
    • Recyclarr (GH)
      • Using for Radarr and Sonarr to sync search terms for their automations. Very useful, hard to summarize.
    • Requestrr
      • Using (very rarely) as a requests bot for Radarr and Sonarr.
    • SFTP-Go
      • Using mostly in place of Nextcloud, primarily to back up phones.
    • Shaarli (GH)
      • Using as a read-it-later service. Went through lots of these, and Shaarli has been good enough.
    • Singlefile-Archive
      • A hacky way of presenting pages saved with the SingleFile browser extension. Not exactly happy with the solution, but for my occasional use it works.
    • Sonarr (GH)
      • Using as my TV series manager.
    • Speedtest-Tracker (GH)
      • Using to get periodic speedtests. Plan to automate results to blast my ISP if my service speed gets too low.
    • Traefik (GH) on each separate host
      • Using as a web proxy in front of my various services. Critical infrastructure; there’s a rough sketch of how it pairs with Authelia just below this list.
    • Transmission (GH)
      • Using to get “Linux ISOs”
    • Uptime Kuma (GH)
      • Using to monitor site and services status along with a few others. Integrated with NTFY for alerts.
    • Vaultwarden
      • Using as my password manager. Have been using for years, cannot recommend enough.
    • A handful of static websites served with NGINX
      • The old standby; it’s been reliable as a webserver.

    These services are the result of years of developing and administering my lab, and while there is still some cruft, it’s mostly services that I think have real utility.
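    Since Traefik and Authelia are the two pieces everything else sits behind, here is a rough sketch of how that layer looks on one host. It is illustrative only, not my actual config: the whoami container, domains, and entrypoint names are placeholders, and the labels follow Traefik v2’s Docker provider with Authelia’s forward-auth endpoint.

        ---
        # hypothetical example of putting authelia in front of a service via traefik labels
        services:
          whoami:
            image: traefik/whoami          # throwaway demo service
            labels:
              - traefik.enable=true
              - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
              - traefik.http.routers.whoami.entrypoints=websecure
              - traefik.http.routers.whoami.tls=true
              # send every request through authelia before it reaches the service
              - traefik.http.routers.whoami.middlewares=authelia@docker
              - traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com
              - traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true
              - traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Name,Remote-Email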

    As far as hardware:

    • Running pfSense on a Toughbook laptop as a router/firewall.

    • A SuperMicro 24-bay disk shelf with Proxmox and ZFS for NAS duties and a couple of services.

    • Lenovo Tiny boxes with a Proxmox cluster for the majority of my local services.

    • Dell managed switch

    • A few Raspberry Pis with Raspbian for various things.

    • Linksys AP for Wi-Fi

    Edit: Spelling is hard.


  • I had a long post typed out on Jerboa about why I wouldn’t recommend it, but maybe I hit the character limit? IDK. Anyway, the point I wanted to get across is that I’ve been down that road, and up until February it was going OK, but one should absolutely not trust the Oracle free tier for any service that needs to be reliable long term. They can and will take the VM down and claw back that generous free-tier allotment, IIRC sometimes without any notice. In my case it was literally because my VM was under-utilized. There was no option to downscale my instance, just a notification a few weeks in advance that they were taking my allotment back and deleting the VM.