Wow thanks a lot for that!
The account isn’t the issue in itself; it’s the data transfer that comes with accepting the agreement tied to that account.
“Free” is flat-out wrong.
If that’s your intent, then it might be better to pick individual Arch Wiki pages or improve the entry-level documentation. Many people from all distros refer to it because of its sheer volume.
A “how to read tech documentation” guide could add value for this target group.
User perspective:
If you want something big, I’d pitch NixOS, as in the core distribution. It’s a documentation nightmare: as a user I had to go through the options search and then try to figure out what the options mean more often than I found comprehensive documentation.
That would be half writing and half coordinating writers though, I suspect.
Another great project with mixed-quality documentation is openHAB. It fits the bill of being more on the backend-heavy side, and the devs are very open in my experience. I actually see it as superior in its core concepts to the way more popular Home Assistant in every aspect except documentation!
That said: thanks for putting the effort in! ♥
Thanks for sharing, but I still don’t know what this actually does. As it’s Windows-only at the moment I can’t give it a test run, and the readme is so high-level that I’m not sure.
Like, does it interface with my Ollama models? Does it integrate with remote models? What is it actually locating?
Ah that would make sense, thanks!
I haven’t found (while skimming) any details on why the “highly improved” changes didn’t make it into upstream OpenWrt.
The screenshot has the criteria included though. Relevant part: it has to either be for children or for everyone.
From another project I’ve contributed to as a translator, not as a dev: they had set up a completely different flow (translation tool, bug reporting, deployment, etc.). The dev had basically nothing to do with it, except forwarding bug reports.
The tool landscape grew quite a bit in the last decade though, I think; pretty sure you could do the pull request / review flow between volunteers by now without issues.
The first link goes into amazing detail on that. In short: all your location information, as well as your current IP and some other metadata, gets sent to a basically unknown company with no transparency on how that data is handled.
I highly recommend reading the first linked post though!
Yeah I had a brainfart, meant namespace…
And thanks a lot for this writeup. I think with your help I figured out where I went wrong in my train of thought, and I’ll give it another try next week when I have a bit of downtime.
The time you took to write this is highly appreciated! ♥
Do you have a link at hand, by chance, on how to start a process within a specific veth/network namespace? Creating my own namespaces is easy enough and there are a lot of tutorials, but I don’t want my programs to ever be outside the VPN namespace: not at startup, not as a failover, etc.
That’s the reason why I stuck with the container setup, used only for gluetun plus the VPNed services.
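Not an answer to my own question, but a minimal sketch of the direction I mean, assuming a named namespace already exists (e.g. created with `ip netns add vpn`; the name “vpn” and the Python >= 3.12 `os.setns` route are just my assumptions for illustration):

```python
import os

# Minimal sketch, Linux only, Python >= 3.12, needs CAP_SYS_ADMIN.
# Assumes `ip netns add vpn` was run and gluetun/WireGuard moved its
# interface into that namespace; "vpn" is a hypothetical name.
NETNS_PATH = "/var/run/netns/vpn"

fd = os.open(NETNS_PATH, os.O_RDONLY)
try:
    os.setns(fd, os.CLONE_NEWNET)  # join the named network namespace
finally:
    os.close(fd)

# Everything exec'd from here on only sees the vpn namespace; if that
# namespace has no default route, the process simply has no network at
# all instead of falling back to the host uplink.
os.execvp("curl", ["curl", "https://ifconfig.me"])
```

From the shell, `ip netns exec vpn <command>` from iproute2 does the same join-then-exec, which is the property I’m after: the program can never leak outside the VPN namespace because it was never in the host namespace to begin with.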
CUPS
Linux printing server: if you want to share a printer over the network or just use one locally on a Linux machine.
(Not OP, but same boat.) It doesn’t really matter to me, because Google knowing my server’s external IP is a non-issue: I don’t expect Google to try to attack me individually, but to crawl data about me, and there is no automatic link between my server and my personal browsing habits.
In terms of attack vector vs. ease of use, self-hosting SearXNG is a no-brainer for me. I do have an external server available for things like that anyway, so no additional overhead was needed.
Thanks for the clarification! I wish you an awesome start into the week :)
Preventing teenage pregnancy by obfuscating sex follows the same idea.
I agree with the boundaries part. The second part though: they will figure it out either way… At least my brother did when he was young, and our parents got a nice lawyer’s invoice for that (fucked-up laws, I know, I know).
Personally, I want them to learn about ransomware! If that costs me a PC… my fault.
Especially because the thread was dead, your answer is highly appreciated!
A Dockerfile itself is the instruction set. There is a certain baseline expected from a server admin that differs from what’s expected of an end user.
The ease of Docker obfuscates that quite a bit, but if you want to go full bare metal (or full AWS or GCS, etc.) then you need to manage the full admin part as well, including custom deployments.
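To make “instruction set” concrete, a minimal sketch of a Dockerfile (nginx is just a stand-in service for illustration, not anything from this thread):

```dockerfile
FROM debian:stable-slim

# Every RUN step is something a bare-metal admin would otherwise do by hand
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Going bare metal means doing each of those steps yourself, plus users, init scripts, and updates, which is exactly the admin part Docker hides.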
No worries, I phrased that quite weirdly I think.
A NAS is only more power-efficient if the additional power of a full server is not needed. If for some reason the server is still needed, then the NAS adds power consumption instead of saving any (illustrative numbers: a 60 W server plus a 15 W NAS draws 75 W together, more than the server alone).
(For example, I run some quite RAM- and compute-heavy things on my server which I think no stock NAS could handle.)
Lemmy.world is blocked by beehaw as well…