Just your normal everyday casual software dev. Nothing to see here.

People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.

been trying to lower my social presence on services as of late, may go inactive randomly as a result.

  • 0 Posts
  • 211 Comments
Joined 3 years ago
Cake day: August 15, 2023

  • Sadly, it's a little more complex than just enabling it. The supported self-host deployment uses Docker, and the Docker images that are available don't contain the interfaces for voice or video calling because they aren't up to date.

    If I understand it right, enabling it means you either pull the source yourself and run it outside of Docker, or build a custom Docker image using a version of the Stoat web client that includes voice calling.

    Reading the draft of the linked issue, it looks like the author isn't adding voice calls because they don't know the proper way to integrate it into the Docker image.

    So to answer it: yes, it looks like you can use voice servers with the current self-hosted model, but not with the pre-existing Docker images, and it will require you to manually add the new web UI and patch where needed.
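
    If someone does want to try the custom-image route, it would look very roughly like the sketch below. The repo URL, branch, and compose service name are placeholders rather than the project's real names, so treat it as the shape of the workaround and not copy-paste instructions.

    ```
    # Rough shape of the "custom web image" route. Every name here is a
    # placeholder -- swap in whatever the actual self-host docs and repo use.
    git clone https://example.com/stoat/web-client.git
    cd web-client
    git checkout branch-with-voice-ui     # a build that actually includes the call UI
    docker build -t stoat-web:voice .     # assumes the repo ships a working Dockerfile

    # edit the compose file's image: line for the web service to stoat-web:voice,
    # then recreate it ("web" is a placeholder service name)
    docker compose up -d --force-recreate web
    ```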



  • Personally, it seems trustworthy again. The previous owner of the repo did eventually admit that they authorized the transfer, but the entire transfer process was extremely sketchy and had no chain of custody or trust. The repository just got deleted, then reappeared a few days later in a blank state under a user with no profile and no contribution history, and the explanation amounted to "trust me bro, I knew the original maintainer, look, I have the keys to prove it."

    The maintainer of the Google Play build seems to trust them though, and they're established in the community; plus they archived their own Syncthing builds again in favor of using just the one repo, so it's likely fine.

    For future people wondering about this as well: it doesn't help that the new maintainer of the app has deleted every issue that had to do with the migration, so you can no longer research it for yourself. The only information available is the discussion chain on the community forums, but any issues it links to have been deleted.

    Personally though, I plan on keeping my version pinned to a release from before the transfer until I'm either forced to update due to bugs or I feel comfortable with the current maintainer again. I'm not sure how long that will be.

    For an app that holds very sensitive information, I was not impressed with how the transfer process was handled.



  • I don't think that's an unfair ask. One local representative in each country seems perfectly fair to me.

    That being said, the user-information part should be strictly limited to that country's own content: if the user account is registered in that country, they get access. Providers could 100% do that with most operational databases out there; it's already a requirement for stores to handle payment information, and Steam and Epic already do this as it is.

    Whether they should be able to access that information in the first place is a different discussion, one that needs to happen in the corresponding country. But if a country has already decided it needs access for the service to continue, there's no reason it should get access to all user data. The only thing it really has a claim to is its own country's data.


  • My issue with what would happen if this ruling solidifies is the precedent it sets.

    I could not care less about reaction videos; they're really low-effort videos and I don't understand why they're so popular.

    My issue is entirely that if the plaintiff wins this case, it effectively says that any downloaded YouTube video counts as circumventing DRM, which would give studios an avenue beyond fair-use violations to go after content creators.

    Look at let's plays, for example. Those operate almost entirely on fair use. I fear that if we start ruling that recording or downloading video your computer is already able to decode counts as circumvention (that's all the YouTube downloader is doing; instead of the stream going to the client, it's sent to a file), then by the same principle, recording a video game that contains DRM would also be considered circumventing DRM, which would outlaw let's plays.

    This is a very bad precedent, regardless of whether it's just low-quality trash reaction videos or not.


  • They are very nice. They share kernel space, so I can understand wanting stronger isolation, but the ability to just throw a base Debian container on, assign it a resource pool and an allocation, and install a service directly onto it, isolated from everything else, without having to use Docker's ephemeral-by-design system (which has its perks, but I hate troubleshooting containers on it) or a full VM, is great (rough sketch at the end of this comment).

    And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By "straight on the container" I mean directly on the CT itself: my CTs don't run Docker, period, aside from the one that hosts the primary Docker stack, so I don't have that layer to worry about on most CTs.

    As for the memory thing, I was just pointing out that Docker does the same thing CTs do if you don't have enough RAM for what's been provisioned. The way I read the original post was that specifying 2 GB of RAM to the point where the system exhausts its RAM would cause corruption and crashes, which is true, but Docker runs into the same issue when the system exhausts its RAM. That's all I meant by it. Also, cgroups sound cool; I admit I haven't messed with them a whole lot. I wish Proxmox had a better resource-share system where you could designate a specific group as having X amount of maximum resources and then have the CTs or VMs draw from those pools.
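
    For reference, the "base Debian CT with its own allocation" flow I mentioned above looks roughly like the sketch below. The CT ID, template filename, and storage names are placeholders for whatever your node actually has, so take it as the general shape rather than exact commands.

    ```
    # A base Debian CT with its own resource allocation, from the shell.
    # 110, the template filename, and "local-lvm" are placeholders for your node.
    pct create 110 local:vztmpl/debian-12-standard_amd64.tar.zst \
        --hostname myservice --cores 2 --memory 2048 --swap 512 \
        --rootfs local-lvm:8 --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 110
    # then install the service directly inside it, no Docker layer involved
    ```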



  • I think we might have different definitions of virtualization and containers. I use IBM's and CompTIA's definitions.

    IBM's definition is:

    "Virtualization is a technology that enables the creation of virtual environments from a single physical machine, allowing for more efficient use of resources by distributing them across computing environments."

    IBM themselves acknowledge that containers are virtualization on their Containers vs. Virtual Machines page. I treat virtualization as an abstraction layer between the hardware and the system being run.

    CompTIA's definition of containers would be valid as well: containers are a virtualization layer that operates at the OS level and isolates the OS from the file system, whereas virtual machines are an abstraction layer between the hardware and the OS.

    I picked up this terminology from my CompTIA Network+ book from 12 years ago, though, which classifies virtualization as "a process that adds a layer of abstraction between hardware and the system." That's a dated definition, since OS-level virtualization such as containers wasn't really a thing back then.



  • Your statements surprise me, because when I initially set this system up I tested exactly that, since I had figured something similar.

    My original layout was a full docker environment under a single VM which was only running Debian 12 with docker.

    I remember seeing a good 10 GB difference in RAM usage between offloading the services from the Docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the Docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and were easier to manage.




  • I'm not a mod, but I see self-hosting as maintaining your own setup. If it's hosted in the cloud, you're still maintaining the setup; you're just offloading hardware responsibilities to someone else.

    It's not like you're signing up for Google Photos and then saying "yo guys, I have my own photos self-hosted." You're still putting in the pain and suffering to make it work; you just aren't worrying about the hardware or network requirements (outside of security).

    That being said, some people firmly see "self-hosting" as: you buy the parts, you install and configure everything, and it's served out of your house.

    It's a sticky situation; imo that ideology also throws using any DNS/DDoS-protection host out the window as well, but again, YMMV depending on who you ask.

    I definitely think that if you are installing -> configuring -> maintaining -> using, you meet the definition of self-hosting.

    edit: That being said, looking at the log, your deleted post was the one about your current external host provider dropping you due to heavy load (they were eco-friendly), right? I can kind of see why they felt it didn't fit the community, but I see both sides of the argument.


  • Are you saying that running Docker inside a container setup (which at this point would be two layers deep) uses fewer resources than 10 single-layer containers?

    I can agree that a single VM running Docker with 10 containers uses less than 10 CTs, each with Docker installed and running its own container (but that's not what I do, or what I'm asking for).

    I currently do use one CT that has Docker installed with all my Docker images (which I wouldn't do if I had the choice, but some apps require Docker), and this removes most of the benefits you get from using Proxmox in the first place.

    One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers without ever needing to enter the machine (for example, if I'm screwing with a server, I can just snapshot the current setup and roll back if it doesn't work out; rough commands at the end of this comment). Throwing everything into a VM with Docker bypasses that while adding overhead to the system: I would need to back up the compose file (or however you're composing it) and the container, and then make my changes. My current system is one click before I make my changes and, if they go bad, one click to revert.

    As for the resource explanation: installing Docker into a VM on Proxmox and then running every container in it does waste resources. You have the resources Docker requires to function (currently 4 GB of RAM per their website, though in testing I've seen as little as 1 GB work fine, plus CPU and roughly half a gig of storage) inside a VM (which also uses more processing power and RAM than CTs do, since it no longer shares resources). Compared to 10 CTs fine-tuned to their specific apps, you will get better performance from the CTs than from one VM running everything, while keeping your ability to snapshot and dropping the extra layer and the ephemeral design Docker has (which can be a good and a bad thing, but when troubleshooting I lean towards good).

    edit: clarification and general formatting so it wasn't all bunched together.
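
    For anyone curious what that "one click snapshot, one click revert" flow looks like outside the web UI, it's roughly the commands below; the CT ID and snapshot name are placeholders.

    ```
    # snapshot before touching anything (102 is a placeholder CT ID)
    pct snapshot 102 pre-change
    # ...make changes, possibly break things...
    pct rollback 102 pre-change      # revert the whole CT if the change went badly
    pct delsnapshot 102 pre-change   # or drop the snapshot once you're happy
    ```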


  • I don’t like how everything is docker containerized.

    I already run Proxmox, which containerizes things by design with its CTs and VMs.

    Running a Docker image on top of that just wastes system resources (while also complicating the troubleshooting process). It doesn't make sense to run a CT or VM for a container just to put Docker on it and run another container inside that. It also completely bypasses everything Proxmox gives you for snapshotting and backup, because Proxmox's system works on the entire container, and if all services are running in the same container, all of them get snapshotted together.

    My current system gives me per-service snapshots (and backups), all within the Proxmox web UI, all containerized, and all restricted to their own resources. Docker just isn't needed at this point.

    A Docker setup just adds extra overhead that isn't needed. So yes, just give me a standard installer.


  • X to doubt, considering they already had censorship issues that got labeled as "bugs."

    The bug wasn’t that the message wasn’t sending, the bug was that you were able to detect that the message wasn’t sending.

    That's how X/Twitter works. They don't "censor" anything; they de-prioritize things that don't match current ideologies.

    They can call it whatever they want; the fact that the bugs only happen on specific topics, and not on others, tells you that something regarding those topics was changed, and that change caused the issue in the first place. What else would they be changing that targeted those specific topics, if not de-prioritization or censorship?

    That being said, the article makes some good points, but it fails to realize that Twitter/X was in the same boat. Many people had to choose between making a political statement and keeping their current friends or influences; I lost access to almost every content creator I followed when I left Twitter. Plus, you can tell when people leave a platform; the effect is very noticeable. On the individual level they are probably right, but in aggregate it will be noticeable.



  • This is what I currently do with non-specialized services that require Docker. I have one container that runs Docker Engine, and I throw everything on there; if I have a specialized service that needs Docker, I will still give it its own CT. Then I use Docker Agent, so I can use one administration panel.

    It's just annoying, because I would rather remove Docker from the situation entirely: when you're running Proxmox, you're essentially running a virtualized system inside a virtualized system. Proxmox on the bare metal runs a virtualized environment for the container, which then runs a virtualized environment for the Docker container.


  • For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. So unlike a VM, if you specify that a container has two gigs of RAM, that just means it has up to two gigs of RAM it can use, whereas the VM is going to claim that amount (and will crash if it can't get it).

    These CTs do the equivalent of what Docker does, sharing the system space with other services while staying isolated, and they give you a system that is easy to administer and back up while remaining separated by service.

    For example, with a Proxmox CT I can snapshot the container itself before I do any kind of work, whereas if I were using Docker on a primary machine, I would need to back up the Docker container completely. Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Dockerfile, which by design is meant to be ephemeral. If I had to choose between troubleshooting bare bones and troubleshooting a Docker container, I'm going to choose bare bones every step of the way. (You can even run an Alpine CT if you would rather keep the typical Docker-style base.)

    Also, on the over-committing thing, be aware that the issue you've described will happen with a Docker setup as well. Docker doesn't care how much RAM the system actually has, and when you over-allocate the system RAM-wise, it will start killing containers, potentially leaving them in the same state (rough comparison at the end of this comment).

    Anyway, long story short: Docker containers do basically the same thing a Proxmox CT does; they're just ephemeral instead of persistent, and designed to be plug-and-go, which I've found isn't super handy when running a Proxmox-style setup, because a lot of the time I want to share resources, such as a dedicated database or caching system, which is generally a pain in the butt to implement with Docker setups.
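
    On the RAM point above, here is roughly what the two limits look like side by side; the CT ID, container name, and image name are placeholders. As far as I understand, both are cgroup-backed ceilings, and neither saves you if the host itself runs out of memory.

    ```
    # Proxmox CT: cap an existing CT at 2 GiB (a ceiling, not a reservation)
    pct set 102 --memory 2048 --swap 512
    # Docker: the equivalent per-container cap (placeholder container/image names)
    docker run -d --name myservice --memory 2g --memory-swap 2g myimage:latest
    ```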