• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • Absolutely. I’m a fan of a team which is not in my “local market”. As best I can tell, there isn’t actually a way for me to stream all of the games for that team. Even looking at the Sunday Ticket service, it seemed like it was a mess of “you can stream some games, except for cases A, B, C and when the Moon is in the House of Scorpio on the third Sunday after Venus transits Leo”. And there seemed to be weird device restrictions with similarly arcane timing.

    I’d be happy to pay for “Stream all games of Team X for $Y on any device”. Even if the only choice was “Pay $Z to stream all the games on any device”. But, being dicked around to actually follow one team has meant that I only watch games when they randomly line up with streaming services I do have. Otherwise, I catch the highlights the next day on YouTube (the NFL’s official channel posts them).

    I can absolutely understand folks using pirate streams. The official service is pretty terrible versus the pirate services, which are pretty functional.




  • This is going to suck for a lot of people. I’m all for encryption. If any of the laptops in the business I work for lack encryption, I’m going to throw a fit. But, for home use the situation is not the same. I’d argue that the risk of device theft leading to critical data compromise is pretty low, and the risk of the user needing someone to perform offline data recovery is much higher. And the number of users who will actually have the key saved in a location they can get to, and provide to the data recovery tech, can probably be counted without taking off my shoes.

    This is dumb. It’s yet another case of Microsoft picking a default for users which helps Microsoft but isn’t good for users.


  • Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.

    Limiting yourself to bespoke systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.



  • No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing wp-content in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. Once you open a system up to the internet and constant scanning and attack by commodity malware, falling out of date quickly opens your system to exploit.


  • Short answer: yes, you can self-host on any computer connected to your network.

    Longer answer:
    You can, but this is probably not the best way to go about things. The first thing to consider is what you are actually hosting. If you are talking about a website, this means that you are running some sort of web server software 24x7 on your main PC. This will be eating up resources (CPU cycles, RAM) which you may want to dedicate to other processes (e.g. gaming). Also, anything you do on that PC may have a negative impact on the server software you are hosting. Reboot and your server software is now offline. Install something new and you might have a conflict bringing your server software down. Lastly, if your website ever gets hacked, then your main PC also just got hacked, and your life may really suck.

    This is why you often see things like Raspberry Pis being used for self-hosting. It moves the server software onto separate hardware which can be updated/maintained outside a PC which is used for other purposes. And it gives any attacker on that box one more step to cross before owning your main PC. Granted, it’s a small step, but the goal there is to slow them down as much as possible.

    That said, the process is generally straightforward. Though, there will be some variations depending on what you are hosting (e.g. webserver, NextCloud, Plex, etc.). And, your ISP can throw a massive monkey wrench in the whole thing, if they use CG-NAT. I would also warn you that, once you have a presence on the internet, you will need to consider the security implications to whatever it is you are hosting. With the most important security recommendation being “install your updates”. And not just OS updates, but keeping all software up to date. And, if you host WordPress, you need to stay on top of plugin and theme updates as well. In short, if it’s running on your system, it needs to stay up to date.

    The process generally looks something like:

    • Install your updates.
    • Install the server software.
    • Apply updates to the software (the installer may be an outdated version).
    • Apply security hardening based on guides from the software vendor.
    • Configure your firewall to forward the required ports (and only the required ports) from the WAN side to the server.
    • Figure out your external IP address.
    • Try accessing the service from the outside.
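    That last step can be sanity-checked with a short script, e.g. in Python (the IP address shown is a placeholder, not anything specific to this setup):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (replace with your actual external IP and forwarded port):
#   port_open("203.0.113.10", 443)
```

    If this returns False from outside your network but the service works locally, the usual suspects are the port-forward rule, the server’s own firewall, or CG-NAT at the ISP.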

    Optionally, you may want to consider using a Dynamic DNS (DDNS) service (e.g. noip.com) to make reaching your server easier. But, it’s optional if you’re willing to just use an IP address and manually update things on the fly.

    Good luck, and in case I didn’t mention it, install your updates.




  • The answer to that will be everyone’s favorite “it depends”. Specifically, it depends on everything you are trying to do. I have a fairly minimal setup: I host a WordPress site for my personal blog and a NextCloud instance for syncing my photos/documents/etc. I also have to admit that my backup situation is not good (I don’t have a remote backup). So, my costs are pretty minimal:

    • $12/year - Domain
    • $10/month - Linode/Akamai containers

    The domain fee is obvious: I pay for my own domain. For the containers, I have 2 containers hosted by the bought up husk of Linode. The first is just a Kali container I use for remote scanning and testing (of my own stuff and for work). So, not a necessary cost, but one I like to have. The other is a Wireguard container connecting back to my home network. This is necessary as my ISP makes use of CG-NAT. The short version of that is, I don’t actually have a public IP address on my home network and so have to work around that limitation. I do this by hosting Nginx on the Wireguard container and routing all traffic over a Wireguard VPN back to my home router. The VPN terminates on the outside interface and then traffic on 443/tcp is NAT’d through the firewall to my “server”. I have an Nginx container listening on 443, and based on host headers, traffic goes to either the WordPress or NextCloud container, which do their magic respectively. I also have a number of services, running in containers, on that server. But, none of those are exposed to the internet. Things like PiHole and Octoprint.
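    For what it’s worth, the host-header routing bit looks something like this in Nginx config. This is a sketch with made-up domain names, certificate paths, and container addresses, not my actual config:

```nginx
# Route by Host header to the right backend container.
server {
    listen 443 ssl;
    server_name blog.example.com;            # hypothetical domain
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
    location / {
        proxy_pass http://10.0.0.11:8080;    # WordPress container (assumed address)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
server {
    listen 443 ssl;
    server_name cloud.example.com;           # hypothetical domain
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;
    location / {
        proxy_pass http://10.0.0.12:8081;    # NextCloud container (assumed address)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

    Each `server` block matches on `server_name` against the Host header, so one listener on 443 can front any number of backends.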

    I don’t track costs for electricity, but that should be minimal for my server. The rest of the network equipment is a wash, as I would be using that anyway for home internet. So overall, I pay $11/month in fixed costs, and then any upgrades/changes to my server have a one-time capital cost. For example, I just upgraded the CPU in it as it was struggling under the Enshrouded server I was running for my wife and me.


  • Attempt at serious answer (warning: may be slightly offensive)

    Wow, you are a fucking moron. But, there is an interesting question buried in there, you just managed to ask it in a monumentally stupid way. So, let’s pick this apart a bit. Assuming Trump gets re-elected and speed-runs the US into global irrelevancy, what happens to the various standards and standards bodies? tl;dr: Not much.

    • FIPS - This will be the most affected. If companies no longer need to care about working with the US Government (USG), no one is going to bother with FIPS. FIPS is really only a list of cryptographic standards which are considered “secure enough” for USG use. The standards won’t actually change and the USG may still continue to update FIPS, people would just stop noticing.
    • UNICODE - Right, so Unicode is a character set maintained by the Unicode Consortium. Maybe with the US being less dominant, we see the inclusion of more stuff; but, it’s just a way to define printable characters. It works incredibly well and there’s no reason it would be abandoned. Also, there are already plenty of other character encodings, Unicode is just popular because it covers so much. Maybe the headquarters for the consortium ends up elsewhere.
    • ANSI - Isn’t a standard, it’s a standards body (and a private US nonprofit, at that, not a government agency). So, assuming it stops being good at its job, other countries/organizations would likely stop listening to its ideas. The ANSI standards which exist will continue to exist, and if ANSI continues to exist, it’ll probably keep publishing standards, but only the US would care about them.
    • ISO - Again, this isn’t a standard, it’s a Non-Governmental Organization, headquartered in Switzerland. Also, ISO is not an acronym, it’s borrowed from Greek. And ya, this one would almost certainly keep chugging along. Probably a bit more Euro-centric than they are now, but mostly unchanged.

    For this reason, and a lot of other reasons, I am in favor of liberterianism because then, it would not be a government ran by octogenarians deciding standards for communication,

    It’s ok, I was young and stupid once too. The fact is that, while many telecommunications standards started off in the US, and some even in the USG, most of them have long since been handed off to industry groups. The Internet Engineering Task Force is responsible for most of the standards we follow today. They were spun off from the USG in 1993 and are mostly a consensus-driven organization with input from all over the world. In a less US-centric world, the makeup of the body might change some. But, I suspect things would keep humming along much as they have for the last few decades.

    Will we live in a post-standard world?

    This depends on the level of fracturing of networks. Over time, there has been a move towards standardization because it makes sense. Sure, companies resist and all of them try to own the standard, but there has been a lot of pushback against that and often from outside the US. For example, the EU’s law to require common charging ports. In many ways, the EU is now doing more for standardization than the US.

    Worse, cryptography. Well, for ‘serious shit’, people roll their own crypto because…

    Tell me you know fuck all about security without saying you know fuck all about security. There is a well accepted maxim, called “Schneier’s law” based on this classic essay. It’s often shortened to “Don’t roll your own crypto”. And this goes back to that FIPS standard mentioned earlier. FIPS is useful mostly because it keeps various bits of the USG from picking bad crypto. The algorithms listed in FIPS are all bog-standard stuff, from things like the Advanced Encryption Standard (AES) process. The primitives and standards are the primitives and standards because they fucking work and have been heavily tested and shown to be secure over a lot of years of really smart people trying to break them. Ironically, it was that same sort of open testing that resulted in the NSA being caught trying to create a crypto backdoor.
    So no, for ‘serious shit’ no one rolls their own crypto, because that would be fucking dumb.

    But what about primitives? For every suite, for every protocol, people use the same primitives, which are standardized.

    And ya, they would continue to be. As said above, they have been demonstrated over and over again to work. If they are found not to work, people stop using them (see SHA-1, MD5, DES). It’s funny that, for someone who is “in favor of liberterianism”, you seem to be very poorly informed of examples where private groups and industry are actually doing a very good job of things without government oversight.
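    To make “people stop using them” concrete, here’s what dropping a broken primitive looks like in practice; a sketch using Python’s standard hashlib (the message is made up):

```python
import hashlib

message = b"attack at dawn"

# MD5 still exists for legacy interop, but its collision resistance is
# broken, so it shouldn't protect anything new.
legacy = hashlib.md5(message).hexdigest()

# The fix is simply to switch to a current, vetted primitive.
current = hashlib.sha256(message).hexdigest()

print(len(legacy), len(current))  # hex digest lengths differ: 32 vs 64
```

    Note the application code barely changes; the whole point of standardized primitives is that swapping a broken one out is a one-line change, not a redesign.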

    Overall, you seem to have a very poor understanding of how these standards get created in the modern world. Yes, the US was behind a lot of them. But, as they have been handed over to private (and often international) organizations, they have moved further and further away from US Government control. Now, that isn’t to say that US-based companies don’t have a lot of clout in those organizations. Let’s face it, we are all at the mercy of Microsoft and Google way too often. But, even if those companies fall to irrelevance, the organizations they are part of will likely continue to do what they already do. It’s possible that we’d see a faster balkanization of the internet, something we already see a bit of. Countries like China, Iran or Russia may do more to wall their people off from US/EU influence, if they don’t have an economic interest in some communications. Though, it’s just as likely that trade will continue to keep those barriers to the flow of information as open as possible.

    The major change could really be in language. Without the US propping it up, English may lose its standing as the lingua franca of the world. As it stands right now, it’s not uncommon for two people, neither of whom speaks English as their native language, to end up conversing in English as that is the language the two of them share. If a new superpower rises, perhaps the lingua franca shifts and the majority of sites on the internet shift with it. Though, that’s likely to be a multi-generational change. And it could be a good thing. English is a terrible language; it’s less a language and more three languages dressed up in a trench coat pretending to be one.

    So yes, there would likely be changes over time. But, it’s likely more around the edges than some wholesale abandoning of standards. And who knows, maybe we’ll end up with people learning to write well researched and thought out questions on the internet, and not whatever drivel you just shat out. Na, that’s too much to hope for.




  • Yup, OP doesn’t seem to understand that the Fediverse is still in an “early adopter” stage. For tech stuff, that’s a demographic which is usually dominated by male tech nerds. I suspect the age range is more diverse than the OP claims. Though, I imagine it centers somewhere near the 25-35 range, as those will be the people with the drive and time to engage in online mental masturbation argue with people on the internet.

    Also, the math will make the average age skew older. Each older person has a larger effect on moving the average up. For example, consider 3 people with ages 45, 20, and 18. They have an average age of just a bit over 27. There is a lot of room for us old farts to drag the average up while there is only a small range for the kids to drag it down. There might be the rare 9-10 year old who has the wherewithal to make an account and comment. But, I don’t suspect the user base will get all that much younger. Whereas, there are probably lots of late-30s and 40 year old users and some even older. Take those same three ages listed above and add in a precocious 10 year old and a 60 year old grey beard. The average is now just over 30. That older person had a larger effect on the average than the kid.
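    The arithmetic above, spelled out (just the numbers from the example):

```python
ages = [45, 20, 18]
average = sum(ages) / len(ages)
print(round(average, 2))   # 27.67, i.e. "a bit over 27"

# Add the precocious 10 year old and the 60 year old grey beard.
ages += [10, 60]
average = sum(ages) / len(ages)
print(average)             # 30.6: the 60 year old pulled it up more
```

    The 60 year old sits 33 years above the old mean, the 10 year old only 18 below it, so adding the pair nets the average upward.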

    Also, age doesn’t matter all that much. The important thing is that the early adopters are here and making content. Ideally, this will build out the site and others may follow (or not, not every trend works out). But hopefully, this will end up like the Great Digg Migration before it, which fed the Reddit beast. And we’ll have something new, different and maybe actually better this time.