SmegmaScript is just too close
Reddit is free. Other people paying for your free service is a very weak argument to bring up. If Lemmy dies today, nobody but hobbyists and amateurs will care. Just like with LE.
I’ve been there. Not every CA is equal. Those kinds of CAs were shit. LE is convenient. There are more options though.
I actually agree. For the majority of sites and/or use cases, it probably is sufficient.
Explaining properly why LE is generally problematic takes considerable depth of information that I’m just not able to relay easily right now. But consider this:
LE is mostly a convenience. They save an operator $1 per month per certificate. For everyone with hosting costs beyond $1000, this is laughable savings. People who take TLS seriously often have more demands than “padlock in the browser UI”. If a free service decides they no longer want to use OCSP, that’s an annoying disruption that wasn’t remotely worth the $1 https://www.abetterinternet.org/post/replacing-ocsp-with-crls/
LE has no SLA. You have no guarantee that you’ll ever be able to renew your certificate again. A risk not everyone should be taking.
Who is paying for LE? If you’re not paying, how can you rely on the service to exist tomorrow?
It wasn’t too long ago that people said “only some sites need HTTPS, HTTP is fine for most”. It never was, and people should not build anything relevant on “free” security today either.
People who have actually relevant use cases with the need for a reliable partner would never use LE. It’s a gimmick for hobbyists and people who suck at their job.
If you have never revoked a certificate, you don’t really know what you’re doing. If you have never run into rate-limiting issues with LE that block a rollout, you don’t know what you’re doing.
LE works until it doesn’t, and then it’s like every other free service on the internet: no guarantees. If your setup relies on the goodwill of a single entity handing out shit for free, it’s not a robust setup. If you rely on that entity to keep an OCSP responder alive for free so all your consumers can verify the validity of your certificate, that’s not great. And people do this to save their company $1 a month for the real thing? Even running the shitty certbot in compute has a larger cost. People are so blindly in love with this “free” garbage. The fanboys will never die off.
the claims in some media that Telegram is some sort of anarchic paradise are absolutely untrue. We take down millions of harmful posts and channels every day,
Gotcha. Millions of harmful posts every day. That really does sound like a great place.
Following along with the style of your own post: YAML doesn’t suck, because I feel so.
Thanks for asking.
I’d be more worried about media than the ability to pirate it.
Music has adapted to generate plays. Platforms are already being polluted with genAI music.
TV was replaced by streaming services. Series come and go and are very specifically tailored to get people to subscribe. Exclusives are the standard. Single-season productions are not uncommon. People are already investigating ways to pollute this pool with genAI as well.
Movies are a stream of Marvel and Disney garbage that was already more CGI than acting. Now genAI and upscaled classics are on the menu.
Piracy will not go away. People used to record movies with camcorders in the cinema, now they pull raw files from CDN nodes. There is always the scene. The platforms that try to profit from the scene come and go.
The only time I came across this subject was when there were malicious commits in a code base. When else would this matter? The commit contains your name and email address. Who cares about the time zone? Like anything else in a commit, this metadata can be freely manipulated and serves purely as information for other developers. Who are you scared of seeing your time zone in a commit on a seemingly public code repository? This is such a pointless non-discovery.
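For anyone who doubts how freely that metadata can be set, here is a minimal sketch (TypeScript on Node, assuming git is installed; the repository path and all values are placeholders):

```ts
// Sketch: creating a commit with a completely fabricated author, date and
// time zone. All values below are placeholders; git accepts them as-is.
import { execSync } from "node:child_process";

const env = {
  ...process.env,
  GIT_AUTHOR_NAME: "Jane Doe",
  GIT_AUTHOR_EMAIL: "jane@example.com",
  GIT_AUTHOR_DATE: "2001-01-01T12:00:00+09:00",    // any date, any time zone
  GIT_COMMITTER_DATE: "2001-01-01T12:00:00+09:00",
};

execSync('git commit --allow-empty -m "metadata is just metadata"', {
  cwd: "/path/to/some/repo", // placeholder
  env,
  stdio: "inherit",
});
```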
Depends on the product. It’s just something to think about when signaling errors. There is information for the API client developer, there is information for the client code, and there’s information for the user of the client. Remembering these distinct concerns, and providing distinct solutions, helps. I don’t think there is a single approach that is always correct.
I don’t necessarily disagree, but I have spent considerable time on this subject and can see merit in decoupling your own error signaling from the HTTP layer.
No matter how you design your API, if you’re passing through additional layers, like load balancers and CDNs, you no longer have full control over all responses your clients receive. At this point it may be viable to always signal a successful backend connection with a 200, even if the process resulted in a failure.
Going further, your API may include partial-success scenarios (think batch processing), where the result could be a mix of success and failure that doesn’t translate to an HTTP status.
You could even argue that there is really no reason to couple your API so tightly with a concept of the transport layer it uses.
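To make the partial-success point concrete, here is a minimal TypeScript sketch. The type names, the field layout and the “EX-” error-code prefix are made up for illustration, not taken from any particular API:

```ts
// Minimal sketch: application-level outcome decoupled from the HTTP status.
type ItemOutcome =
  | { id: string; status: "ok" }
  | { id: string; status: "failed"; errorCode: string; message?: string };

interface BatchResult {
  // Outcome of the whole batch, independent of the transport.
  outcome: "success" | "partial" | "failure";
  items: ItemOutcome[];
}

// The backend answers 200 ("I received and processed your request") and the
// body carries the actual result. A 502/504 from a load balancer or CDN then
// unambiguously means "something between client and backend broke".
const body: BatchResult = {
  outcome: "partial",
  items: [
    { id: "a", status: "ok" },
    { id: "b", status: "ok" },
    { id: "c", status: "failed", errorCode: "EX-1042", message: "validation failed" },
  ],
};

console.log(JSON.stringify(body));
```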
So you fucked everyone because of a beef you had with AWS. Go fuck yourselves. Moving people off Elastic products is the right move either way. Don’t look back.
So a documented core aspect of the tool is a leak. Impressive research
Respect the Accept header from the client. If they need JSON, send JSON, otherwise don’t.
Repeating an HTTP status code in the body is redundant and error prone. Never do it.
Error codes are great. Make sure to prefix yours and keep them unique.
Error messages can be helpful, but often lead developers to just display them in the frontend, breaking i18n. Some people supply error messages in multiple languages, depending on the Accept-Language header.
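A rough TypeScript sketch of what that can look like; the “EX-” prefix, the message catalog and the naive Accept-Language handling are all assumptions for illustration, not a finished i18n solution:

```ts
// Minimal sketch of an error body along those lines.
interface ApiError {
  code: string;     // prefixed, unique, stable: meant for client code
  message: string;  // meant for humans; localized if we can
  // note: no HTTP status field here, the status line already carries it
}

const messages: Record<string, Record<string, string>> = {
  "EX-2001": {
    en: "The uploaded file exceeds the size limit.",
    de: "Die hochgeladene Datei überschreitet die Größenbeschränkung.",
  },
};

// Picks the first language tag from Accept-Language and falls back to English.
function buildError(code: string, acceptLanguage = "en"): ApiError {
  const lang = acceptLanguage.split(",")[0]?.trim().split("-")[0] ?? "en";
  const catalog = messages[code] ?? {};
  return { code, message: catalog[lang] ?? catalog["en"] ?? code };
}

console.log(buildError("EX-2001", "de-DE,de;q=0.9,en;q=0.8"));
// -> { code: "EX-2001", message: "Die hochgeladene Datei überschreitet ..." }
```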
To add to that for clarity: With the original Mono, you could run a regular Windows .NET application on non-Windows platforms without any additional work (with limitations, as native Windows API calls were unsupported). With modern .NET, you can compile new applications from source that will run anywhere.
That is some next-level Minecraft you are playing over there
Messing with the computer is pretty important though
Everyone who has ever heard of Deno has read this irrelevant blog post. It was even stupid at the time he wrote it. People had long been containerizing their Node payloads to solve most of his concerns, and building ts-node into your JS engine as a preprocessor was also beyond redundant. Everything is such a gimmick, and people actually followed the marketing and went through years of unstable development for nothing. And now the Bun people are recycling the same hype approach to gain relevance.
That’s not a standard library for JS. Those are builtin modules. A standard library should be available for inclusion in various consumers.
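To illustrate the distinction (the module names here are just examples): a builtin module ships inside one runtime, whereas a standard library would be an ordinary package that any consumer can pull in.

```ts
// Builtin module: baked into the runtime and addressed via its scheme,
// so it only exists where that runtime (or a compatibility layer) exists.
import { join } from "node:path";

// A standard library, by contrast, would be a normal package from a registry
// that Node, Deno, Bun or a browser bundle can all import, e.g. something
// like `import { join } from "@std/path"` (illustrative name only).

console.log(join("docs", "readme.md"));
```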
Especially because TypeScript compiles down to JavaScript.