Disagree on the .gitignore file. If you’re the only developer and you only work off of one machine then it doesn’t need to be committed. In a team setting it’s absolutely imperative to commit it.
I convert my files to avoid transcoding but my Raspberry Pi 4B handles Jellyfin just fine.
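For anyone curious, the up-front conversion can be a one-liner. A sketch, assuming the video track is already client-compatible (e.g. H.264) and only the container/audio need changing — file names are examples:

```shell
# Remux once so Jellyfin can direct-play instead of transcoding on the fly.
# -c:v copy keeps the video stream untouched; only the audio is re-encoded to AAC.
ffmpeg -i input.mkv -c:v copy -c:a aac -movflags +faststart output.mp4
```

If the video codec itself isn't client-compatible you'd re-encode it too, but doing that once up front is exactly what spares the Pi from doing it live.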
Even if it isn’t an OpenWRT router, if you have a hardwired server it can probably do a soft reset of the router or even the modem (most modems I’ve used have had a web interface). If your router is in such a bad state that it only responds to a hard reset, it’s probably reaching EoL.
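For illustration, a soft reset from a hardwired box can be as simple as hitting the router’s web interface. The endpoint path, form field, and credentials below are entirely hypothetical — they vary per model, so check what your router’s admin UI actually sends:

```shell
# Hypothetical: trigger the router's reboot action from a wired server.
# Path, parameter name, and credentials are made up for this sketch.
curl -u admin:PASSWORD -d 'action=reboot' http://192.168.1.1/apply.cgi
```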
I’m definitely considering a dumb phone with tethering capabilities to use a less locked-down device.
AI has been good at auto-completing things for me, but it almost always suggests things I already knew without even web searching. If I try to get advice about things I know nothing about (code wise) it’s a really bad teacher, skips steps, and makes suggestions that don’t work at all.
I’m guessing there’s been no software explosion because AI is really only good for the “last 20%” of effort and can’t really breach 51% where it’s doing the majority of the driving.
Apropos to use the term “driving” I feel. Autonomous vehicles have largely been successful because the goal is clear (i.e. “take me to the grocery store”) and there’s a finite number of paths to reach the goal (no off-roading allowed). In programming, even if the goal is crystal clear, there really are an infinite number of solutions. If the driver (i.e. developer) doesn’t have a clear path and vision for the solution then AI will only add noise and throw you off track.
Same. I started going to Flathub in my browser.
Embrace, extend, extinguish.
I would be incredibly wary if someone like Meta, Google, or Microsoft started their own distro. Make a solid distro with lots of bells and whistles few distros have, pre-install it on the hottest gear, poach the best devs away from open-source projects, exert more and more influence over kernel development, wait for a majority to get locked in, and then start making parts of the OS proprietary so open-source can’t keep up — and the dominoes fall from there.
Me: Junior! Did you hack a Gibson???
5yo: Maaayyybe…
I swear there was an XKCD for this. Not the frameworks one. One about being a “monster” for making another JS framework.
Seriously? Number 1-4 are just outright stupid claims, or misguided explanations at best. I especially laughed at the “system being in read-only mode” stupidity. WTF do you think EVERY SINGLE Unix-like system has configured, world writable everything?
The important parts of the filesystem are mounted read-only so you’d have to explicitly reboot and mount as read-write. That’s a lot different than marking individual files or folders as read-only which is what you’re referring to.
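For context, “mounted read-only” here means the mount option, not per-file permissions — something like this hypothetical fstab entry (UUID is a placeholder):

```
# Hypothetical /etc/fstab line: the whole /usr tree is mounted read-only,
# so even root can't write there without an explicit remount.
UUID=0000-0000  /usr  ext4  ro,defaults  0  2

# Flipping it writable takes a deliberate step, e.g.:
#   mount -o remount,rw /usr
```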
The atomic updates bit is also pretty stupid, since that’s literally just a process difference,
Yes and no. To get the same behavior without an immutable OS you’d need to take a snapshot before every update and update every package on the system every time and install no additional packages.
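Roughly, approximating that on a mutable system would mean something like this before and during every update — a sketch assuming a Btrfs root and apt; the snapshot path is made up:

```shell
# Take a read-only snapshot of the root subvolume first (your rollback point).
sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%F)

# Then update *every* package in one go -- partial upgrades are exactly
# what breaks the "atomic" property you're trying to imitate.
sudo apt update && sudo apt full-upgrade -y
```

And you’d have to do that every single time, with no one-off package installs in between, which is the part almost nobody actually sticks to.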
and unless you’re running a stock base image (which almost nobody generally is), then you’re not getting full atomic updates globally on your system, and certainly claiming they have no problems is dumb as hell. They then try and point out that NOT being able to update a single application is some sort of benefit, which, hey…maybe that’s subjective, but it’s outright just a dumb claim.
I’m guessing you mean “stock” as in never installing anything additional. If using the base packaging system is your jam then absolutely you shouldn’t use an immutable OS. There are plenty of alternatives to doing an apt install, and that’s what you’re encouraged to do, because those options don’t usually involve writing to important system dirs.
Lastly, there’s a claim in there that seems to sound something like it’s normally a battlefield amongst running applications on a non-immutable system, and that somehow there is problematic interaction between programs which is, again, false and ignorant.
I can’t say I’ve ever run into two packages that, in effect, conflicted with each other, but I’ve absolutely seen packages conflict during installation, where I was forced to look for an alternative package without a conflict or compile from source.
Writing “because you’re forced to use containers” doesn’t ring like a feature, so of course they’re going to phrase it the other way.
Saying, “Look at what you can’t do!” is usually not a good idea but depending on your priorities and skill level it’s really about taking riskier options off the table. Yes, some things are more challenging using containers but the likelihood of a container making your machine unbootable is practically zero. I’ve run and administered Linux machines (personally) for over 20 years and not worrying about base packages has frankly been a load off my mind. And because I’m doing more things in containers I’m coming up with solutions I can easily port to any machine.
I would never say immutables are better than standard distros but I don’t think it’s fair to say they don’t provide any advantages or that you can get the same benefits simply by changing your habits.
You’d do most of that stuff inside a container (Distrobox probably). You’d basically have a “clean OS” to start with (doesn’t have to be the same OS as the host even) and install your libraries like normal. Distrobox does a good job of integrating with the host so you mostly won’t know you’re in a container. It’s not perfect though, and if you have little experience with containers you’ll definitely have a hard time doing what you need to.
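The basic Distrobox flow is short. A sketch — the container name and image are arbitrary examples:

```shell
# Create a mutable Fedora container that shares $HOME with the host.
distrobox create --name dev --image fedora:40

# Drop into it and install libraries "like normal" -- none of this
# touches the immutable host image.
distrobox enter dev
sudo dnf install -y gcc make python3-devel
```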
Bazzite has been my main driver for months and other than a rocky start involving video encoding it’s been a dream. Bluefin has been fine, but some of the “batteries included” stuff would’ve been better left out entirely.
Trying to do more things inside of a container has been a challenge but it’s a challenge I’m willing to accept. I personally think the headaches of integrating container processes with host processes are preferable to the headaches of tweaking files under /etc or leaving artifacts of configuration and old programs all across my filesystem.
In what way? Any claims of it being unbreakable or rock solid are obviously hype because nobody can guarantee that about any computer. Otherwise I don’t think it’s misinformed.
Wonder if that issue applies to systems using bootc. rpm-ostree is still involved AFAIK but not for booting.
You can still tinker a fair amount, but it’s all very containerized. It’s a trade-off: you give up flexibility for stability. I think most Windows users would prefer the latter, however.
If the printer isn’t too old it should work fairly easily. I did get a 20 year old printer to work with Bazzite but it was a very fragile setup (so I just bought a new printer).
But I am using Fish. It’s like you don’t even know me!
I have a “center curtain” for this very reason. Depends on how much it hurts your soul to basically turn one window into two smaller windows.
That sounds fair. I hate “tests” that involve things you’d never do on the job.
How do you ensure your teammates don’t start committing their own IDE settings, “secrets.json” files, helper scripts, or log files?
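The standard answer is a committed .gitignore covering the usual suspects — a sketch, with example file names:

```gitignore
# Editor/IDE settings stay local
.idea/
.vscode/
*.swp

# Secrets never belong in history
secrets.json
.env

# Build artifacts and logs
*.log
build/
```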