Yes. Because they’re either making a profit from your data (or metadata), or it’s a promotion that will end the way myriad “free” services did before it.
I prefer having a convenient pull mechanism that I can trigger from a workstation in the lab network. I maintain the setup with Ansible
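As an illustration of that kind of trigger, a hedged sketch (the inventory and playbook names `lab.ini` and `site.yml` are purely hypothetical):

```shell
# Hypothetical: push the current configuration to all lab hosts
# from the workstation in one run.
ansible-playbook -i lab.ini site.yml
```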
Knowing your gender is highly required? 😂
`PathPrefix` no longer being a regex matcher stood out.
You can read this blog post, authored as a series of tweets instead: https://mastodon.social/@pid_eins/112353324518585654
Sharing the network namespace with another container is the way to go IMHO. I use podman and run the main application in one container and a VPN-enabling container in the same pod, which is essentially what you’re achieving with the `network_mode: container:foo` directive.
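A minimal compose sketch of the same idea (image and service names are illustrative): the VPN container owns the network namespace, and the application joins it instead of getting its own.

```yaml
services:
  vpn:
    image: example/wireguard-client:latest  # hypothetical VPN-enabling image
    cap_add:
      - NET_ADMIN                           # needed to manage tunnel routes
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: service:vpn               # share the VPN container's network namespace
```

With this layout, qBittorrent has no network path of its own; if the VPN container goes down, so does its connectivity.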
Ideally, exposing ports on the host node is not part of your design, so don’t use any `--port` directives at all. Your host should allow routing to the hosted containers and, thus, their exposed ports. If you run your workloads in a dedicated network, like `10.0.1.0/24`, then the addresses assigned to your containers need to be addressable, and you can reach all of their exposed ports directly. Ultimately, you then want to control port exposure through services like firewalld, but that can usually be deferred. Just remember that port forwarding is not a security mechanism; it’s a convenience mechanism.
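As a sketch of the routing side: assuming the container network is `10.0.1.0/24` (from above) and the container host sits at `192.168.1.10` on the LAN (an illustrative address), another LAN machine only needs a route through that host.

```shell
# On a LAN client: send traffic for the container subnet via the container host.
# The host must have IP forwarding enabled for this to work.
ip route add 10.0.1.0/24 via 192.168.1.10
```

After that, exposed container ports are reachable directly, e.g. `http://10.0.1.5:8080`, with no port forwarding involved.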
If you want DLNA, forget about running that workload in a “proper” container. For DLNA, you need the ability to open random UDP ports for communication with consuming devices on the LAN. This will always require host networking.
Your DLNA-enabled workloads, like Plex or Jellyfin, need a host-networking container. Your services that require internet privacy, like qBittorrent, need their own dedicated pod on a dedicated network, with another container that controls their networking plane and redirects communication to the VPN. Ideally, all your manual configuration then boils down to a directive in the WireGuard config like:
```
PostUp = ip route add 192.168.1.0/24 via 192.168.19.1 dev eth0
```
By default, WireGuard will likely route all traffic through the `wg0` device. You then just tell it that the LAN CIDR is reachable directly through `eth0`. This keeps the communication path to the VPN-secured container open after the VPN is up.
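Put together, a hedged sketch of the relevant `wg0.conf` (keys, addresses, and the endpoint are placeholders; only the `PostUp` line comes from the setup described above):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.2.0.2/32
# Keep the LAN reachable directly instead of through the tunnel.
PostUp = ip route add 192.168.1.0/24 via 192.168.19.1 dev eth0

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0          # route everything else through wg0
Endpoint = vpn.example.com:51820
```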
Europeans left the enslaved people working on site and only brought the products over. They probably wanted everything to happen out of sight, without risking slave revolts at home.
https://en.m.wikipedia.org/wiki/Atrocities_in_the_Congo_Free_State
I do not. As far as I’m aware, this is usually countered through a proper way to follow through on reports. If you host user-generated content, have an abuse contact who will instantly act on reports, delete reported content, and report whatever metadata came along with the upload to the authorities if necessary.
The bookkeeping code for keeping track of unused uploads has a cost attributed to it. I claim that most providers are not willing to pay that cost proactively, and prefer to act on reports.
I can only extrapolate from my own experience though. No idea how the industry at large really handles or reasons about this.
This is not unique to Lemmy. You can do the same on Slack, Discord, Teams, GitHub, … Finding unused resources isn’t trivial, and you’re usually better off ignoring the noise.
If you upload illegal content somewhere, and then tell the FBI about it, being the only person knowing the URL, let me know how that turns out.
Checking every single image ID against all stored text blobs is not trivial. Most platforms don’t do this. It’s cheaper to just ignore the unused images.
Yeah, I agree. Just wanted to say I get the idea, but Google likely can’t be trusted with implementing a solution.
TLS and this proposal are different, though. We don’t usually use client certificates with HTTPS, but they are proposing something similar: a way to attest the client. There really is a ton of bot traffic on the web, these bots are not browsers, and that is the reason we all solve CAPTCHAs. I get the idea, but I’ll support Mozilla’s stance on the subject.
SO is a shithole, just like Reddit. All the work is done by volunteers, and when it was time to cash out on the platform, they did several things to fuck with their community. I’ve contributed quite a bit to the trilogy sites and served as a moderator. I regret every second of it. But at least a few people got rich in the process.
I roll out step-ca to my workstation with an Ansible role. All other clients on the lab network trust this CA and are allowed to request certificates for themselves through ACME, just like with Let’s Encrypt.
All my services on all clients on the network are exposed through traefik, which also handles the ACME process.
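A sketch of the relevant Traefik static configuration for that setup (the email, storage path, and CA hostname are illustrative; the `/acme/acme/directory` path is step-ca’s default ACME provisioner URL):

```yaml
certificatesResolvers:
  labca:
    acme:
      email: admin@lab.example                            # illustrative contact
      storage: /etc/traefik/acme.json
      caServer: https://ca.lab.example/acme/acme/directory # internal step-ca, not Let's Encrypt
      httpChallenge:
        entryPoint: web
```

Routers then reference the resolver via `tls.certResolver: labca`, and Traefik requests certificates from the internal CA instead of a public one.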
When it comes to Jellyfin, this is entirely counter-productive. Your media server needs to be accessible to be useful. Jellyfin should be run with host networking to enable DLNA, which will never pass through TLS. Additionally, not all clients support custom CAs. Chromecast or the OS on a TV are prime candidates to break once you move your Jellyfin entirely behind a proxy with custom CA certificates. You can waste a lot of time on this and achieve very little. If you only use the web UI for Jellyfin, then you might not care, but I prefer to keep this service out of the fancy HTTPS setup.
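For completeness, a hedged sketch of running Jellyfin with host networking under podman (volume paths are illustrative), which sidesteps the proxy/CA issues above and keeps DLNA working:

```shell
# Host networking lets Jellyfin open the UDP ports DLNA discovery needs.
podman run -d --name jellyfin \
  --network host \
  -v /srv/media:/media:ro \
  -v jellyfin-config:/config \
  docker.io/jellyfin/jellyfin:latest
```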