

It’s one of the more permissive licenses - who the hell is going to have a problem with lgpl? You can ship it with proprietary applications.
I wouldn’t justify using any language based on this metric alone.
It’s a good question, but I think the amount of time spent compiling a language is going to be pretty tiny compared to the amount of time the application is running.
Still - “energy efficiency” may be the worst metric to use when choosing a language.
Love the “I reject your empirical data and substitute my emotions” energy.
Swamp-ass is real - raise awareness!
It’s in all the food you eat! Big Food lobbies to protect their profits by using it in food production!
You got the basic idea from other posters, but there’s also a lot of weird crap in there as well.
Basically you only need multiple IPs when dealing with services that only really operate on “well known ports” - DNS and SMTP being the usual culprits. For most home users this is no big deal - even if you wanted to host those services, it’s unlikely you’d need more than one IP to do so. HTTP solved this in '97 with HTTP/1.1, which introduced the Host header and lets a single server host multiple sites.
This isn’t something new that nginx solved. 😂
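To make the Host header point concrete, here’s a minimal sketch (hostnames and site names are hypothetical) of two HTTP/1.1 requests aimed at the same IP, with a toy router that picks a site purely from the Host header - which is all name-based virtual hosting is:

```python
# Two HTTP/1.1 requests to the SAME server IP; only the Host header differs.
# example.com / example.org are placeholder hostnames.
def build_request(host: str, path: str = "/") -> bytes:
    # HTTP/1.1 ('97) made the Host header mandatory, so one IP can serve many sites.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def route_by_host(request: bytes, sites: dict) -> str:
    # A server parses the Host header and picks the matching site -
    # no extra IP addresses needed.
    for line in request.decode("ascii").split("\r\n"):
        if line.lower().startswith("host:"):
            return sites.get(line.split(":", 1)[1].strip(), "default site")
    return "default site"

sites = {"example.com": "site A", "example.org": "site B"}
print(route_by_host(build_request("example.com"), sites))  # site A
print(route_by_host(build_request("example.org"), sites))  # site B
```

Every name-based vhost setup - Apache in '97 or nginx today - is doing this same dispatch.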
By “modern” do you mean “the late 90s”? HTTP 1.1 was adopted in '97 and allowed for the host header. NAT and port forwarding have been around since '94 - 2000ish.
Many services worked on any port at the time as well. SMTP and DNS are probably the only ones that were (and remain) difficult to run on non-standard ports.
I guess that’s “a lot simpler” than 6 lines of config?
Kids seem to think host name based routing is “new”… It worked fine in the 2000s with Apache.
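For reference, name-based virtual hosting in Apache really is only a few lines per site - a sketch with hypothetical hostnames and paths (Apache 2.4 syntax; older 2.x versions also wanted a `NameVirtualHost *:80` directive):

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example.com
</VirtualHost>

<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/example.org
</VirtualHost>
```

Apache matches the request’s Host header against `ServerName` and serves the corresponding `DocumentRoot`.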
That’s nuts.
OMG lemmy… The commieboos are ridiculous.
Were they not paid?
The author has completely misunderstood the advice to “not reinvent the wheel”. Or they’re just being needlessly literal in their interpretation.
If your job isn’t making wheels, then you use somebody else’s wheels when you need wheels, so long as those wheels do what you need them to do for a cost that is acceptable.
There is a high cost to reinventing things, so you don’t tend to do it unless there’s a compelling reason.
If you’re just exploring and learning nobody will tell you not to.
I’m a big fan of both AI and automation but this is just 😬
Generally speaking I would avoid combining critical networking infrastructure with other services. Just from a reliability standpoint.
Let your router be just a router. Simple = reliable.
My dude, I verify code other humans write. Do you think I’m not verifying code written by AI?
I highly recommend using AI. It’s much better than a Google search for most things.
The problem is that you really only see two sorts of articles:

- “AI is going to replace developers in 5 years!”
- “AI sucks because it makes mistakes!”
I actually see a lot more of the latter response on social media to the point where I’m developing a visceral response to the phrase “AI slop”.
Both stances are patently ridiculous though. AI cannot replace developers and it doesn’t need to be perfect to be useful. It turns out that it is a remarkably useful tool if you understand its limitations and use it in a reasonable way.
It’s exactly the sort of “tedious yet not difficult” task that I love it for. Sometimes you need to clean things up a bit but it does the majority of the work very nicely.
Okay quacks, time to go data-mining for anomalies!