Web developer, gamer, reader, and a true ligma male

  • 8 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • So my first thought is: download the entire file BEFORE watching it. That way you won’t have to buffer mid-watch and playback will be 100% smooth.

    Downloading files generally isn’t very difficult: go to some (torrenting) website, copy the magnet link or download the .torrent file, and import it into your torrent client.
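    If you’d rather script that step, here’s a minimal sketch using the transmission-rpc Python package against a local Transmission daemon. The magnet link is a placeholder, and the host/port assume Transmission’s default RPC settings; adapt it to whatever client you actually run.

    ```python
    # Minimal sketch: hand a magnet link to a running Transmission daemon.
    # Assumes `pip install transmission-rpc` and Transmission's RPC interface
    # on its default port (9091). The magnet link below is a placeholder.
    from transmission_rpc import Client

    client = Client(host="localhost", port=9091)
    torrent = client.add_torrent("magnet:?xt=urn:btih:...")  # placeholder
    print("Added:", torrent.name)
    ```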

    When you have your .mp4, .mkv, .whatever file, you can simply click on it and play it in your preferred media player (such as VLC). However, you may want to watch it on some other device… An easy solution (for TVs) is to connect your laptop to the TV with an HDMI cable, duplicate your screen, and start watching.

    But if you actually want to stream, you’ll have to tread into the self-hosting zone. That means running a media server that hosts all your content, so your devices (whether it’s a TV, an Android phone, an iPhone, whatever) can access and play the content from your server.

    This is a very, very big topic that I won’t cover in a single comment, but I’ll point you in the right direction and mention Jellyfin: a free, open-source media server that you can set up to manage and stream your media files.
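    To give you a head start, here’s a minimal sketch of spinning up Jellyfin as a Docker container with the Docker SDK for Python (pip install docker). The media path, volume name, and port mapping are assumptions; adjust them to your own setup.

    ```python
    # Minimal sketch: launch the official Jellyfin image in Docker.
    # Assumes a running Docker daemon and `pip install docker`.
    import docker

    client = docker.from_env()

    client.containers.run(
        "jellyfin/jellyfin",        # official Jellyfin image
        name="jellyfin",
        detach=True,
        ports={"8096/tcp": 8096},   # web UI at http://<server>:8096
        volumes={
            "/srv/media": {"bind": "/media", "mode": "ro"},        # assumed media path
            "jellyfin_config": {"bind": "/config", "mode": "rw"},  # named volume for config
        },
        restart_policy={"Name": "always"},
    )
    ```

    Once it’s running, the Jellyfin apps on your TV or phone just point at your server’s address and stream from there.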


  • I use it to manage my documents, back up the photos from my phone to my server, and access all my files from any other device. Basically, Nextcloud is my replacement for OneDrive.

    Additionally, I’ve used it in the past to collaborate on group projects built around shared documents. For example, I had to make a presentation with some other people: I could create the PowerPoint in Nextcloud, send the others a share link, and we could all edit it in real time via Nextcloud + Collabora, which is pretty cool. It’s the only FOSS alternative (at least as far as I’m aware) that can compete with Microsoft 365 / Google Workspace.


  • Honestly, I’m not really excited about the past couple of major Nextcloud releases.

    Mainly because there’s still one big issue for small-scale Nextcloud servers: performance.

    The web UI in particular is still too slow for me to use properly, which is why I don’t use it at all (unless I have to update an app).

    It’s a bit disappointing that they’re mainly focused on large enterprise customers instead of small hobbyists like me, but it’s understandable; after all, their income comes mainly from enterprise customers, not from self-hosters.

    I also don’t really like how they’ve jumped on the AI hype train instead of improving performance. But once again, I guess that generates more income for them than performance work would.


  • Docker is a container manager, but that doesn’t mean much if you don’t know what containers are.

    Containers are basically isolated apps. Take something like Nextcloud: it can run in a Docker container, which means it runs in an isolated environment completely separated from the host system. If Nextcloud breaks, the rest of the server won’t be affected at all, because everything is running in isolation.

    Why is this useful? Because the container ships with its own dependencies, which get updated along with the app. Nextcloud, for example, depends on PHP, and if you install Nextcloud directly on your server, you’ll need to make sure PHP 8 and the required PHP extensions are installed and set up properly, or Nextcloud won’t work. And if some future Nextcloud update requires a newer version of PHP (PHP 9 or 10, say), you’ll have to update PHP manually as well.

    All that dependency management disappears with containers. The container image comes with a proper environment for the app already set up, so in the case of Nextcloud the PHP binaries, extensions, and everything else are included without you having to do anything at all. Just run one command and your entire Nextcloud instance, dependencies included, is updated.
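    For instance, here’s roughly what that update flow looks like when scripted with the Docker SDK for Python (pip install docker). The container name, port, and volume are assumptions about how the instance was originally launched; in practice most people just run docker compose pull followed by docker compose up -d.

    ```python
    # Minimal sketch: update a containerized Nextcloud to the latest image.
    # Assumes `pip install docker`; names, ports, and volumes are assumptions.
    import docker

    client = docker.from_env()

    # Pull the newest image; PHP and the required extensions come baked in.
    client.images.pull("nextcloud", tag="latest")

    # Swap the running container for one built from the new image. User data
    # lives in the named volume, so it survives the recreation.
    old = client.containers.get("nextcloud")
    old.stop()
    old.remove()

    client.containers.run(
        "nextcloud:latest",
        name="nextcloud",
        detach=True,
        ports={"80/tcp": 8080},
        volumes={"nextcloud_data": {"bind": "/var/www/html", "mode": "rw"}},
        restart_policy={"Name": "always"},
    )
    ```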


  • My ELI5 version:

    Basically, the ‘Web Environment Integrity’ proposal is a new mechanism that lets a website verify whether a visitor is actually a human or a bot.

    Currently there are captchas, where you need to select all the crosswalks, cars, bicycles, etc. to prove you’re not a bot, but those can sometimes be bypassed by the bots themselves.

    This new ‘Web Environment Integrity’ thing goes as follows:

    1. You visit a website.
    2. The website wants to know whether you’re a human or a bot.
    3. Your browser (the ‘client’) requests an ‘environment attestation’ from an ‘attester’. In other words, your browser (such as Firefox or Chrome) asks some third party (like Google) for approval, and that third party (the ‘attester’) sends your browser a message that basically says ‘this user is a bot’ or ‘this user is a human being’.
    4. Your browser receives this message and sends it on to the website, together with the ‘attester public key’. The website uses that public key to verify that the message really came from the attester (and to decide whether it considers that attester trustworthy), and then checks whether the attester says you’re a human or not. (A toy sketch of this check follows below.)
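    To make step 4 concrete, here’s a toy sketch of that signature check in Python with the cryptography package. I’m not reproducing the real WEI token format; the payload fields and the choice of Ed25519 are purely illustrative assumptions.

    ```python
    # Toy sketch of step 4: the website checks that the verdict really was
    # signed by the attester. The payload format and Ed25519 are assumptions,
    # not the actual WEI wire format.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Attester side (e.g. Google); in reality a remote service.
    attester_key = Ed25519PrivateKey.generate()
    attestation = b'{"verdict": "human", "site": "example.com"}'
    signature = attester_key.sign(attestation)

    # Website side: verify the message with the attester public key.
    attester_public_key = attester_key.public_key()
    try:
        attester_public_key.verify(signature, attestation)  # raises if forged
        print("attestation genuine:", attestation.decode())
    except InvalidSignature:
        print("signature doesn't match this attester; reject")
    ```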

    I hope this clears things up, and if I’ve misinterpreted the GitHub explainer, please correct me.

    The reason people (rightfully) worry about this is that it gives attesters A LOT of power. If Google decides they don’t like you, they won’t tell the website that you’re a human. Or, if Google doesn’t like the website you’re trying to visit, they may refuse to cooperate with attesting at all. Lots of things can go wrong here.