• 1 Post
  • 91 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that,

    Even then, knowing when not to use k8s or similar tools is often more valuable than having deep knowledge of them - a lot of the stuff where I see k8s used doesn’t have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between isn’t the best idea.

  • It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.

    The problem with ssmtp and related tools when I was testing them was their behaviour in error conditions - due to the lack of any kind of spool they don’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also wouldn’t be able to write a spool) this can very easily lead to lost mail.

    I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to send that on. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers, behind Emacs, to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, and if you don’t, just go with ssmtp or similar. Or if you do want to dig into it, message me and I’ll help make things more user friendly.
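
    Roughly the idea, sketched in Python below (not my actual code - that one is wired into Emacs and, as said, badly documented; the Maildir path and smarthost are placeholders): one entry point acts as the sendmail replacement and only drops whatever arrives on stdin into a Maildir, the other is meant to run from cron and relays the queue to a smarthost, removing a message only after the relay has accepted it.

        #!/usr/bin/env python3
        # Rough sketch of the setup described above - not the real thing, just the idea.
        # QUEUE and SMARTHOST are placeholders.
        import mailbox
        import smtplib
        import sys

        QUEUE = "/var/spool/localqueue"   # Maildir used as the local spool
        SMARTHOST = "mail.example.com"    # relay that does the actual delivery

        def enqueue():
            """sendmail replacement: drop whatever arrives on stdin into the Maildir."""
            mailbox.Maildir(QUEUE, create=True).add(sys.stdin.buffer.read())

        def flush():
            """cron side: relay queued mail, remove only what the smarthost accepted."""
            md = mailbox.Maildir(QUEUE, create=True)
            with smtplib.SMTP(SMARTHOST, 587) as smtp:
                smtp.starttls()
                # smtp.login(...) goes here if the relay wants authentication
                for key in list(md.keys()):
                    msg = md.get_message(key)
                    try:
                        smtp.send_message(msg)   # recipients taken from To/Cc/Bcc headers
                    except smtplib.SMTPException:
                        continue                 # leave it spooled, retry on the next run
                    md.remove(key)

        if __name__ == "__main__":
            flush() if "--flush" in sys.argv else enqueue()

    The important bit is that nothing gets removed from the Maildir before the relay has accepted it, so a dead network just means mail sits in the spool until the next cron run.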

  • It surely is a bubble - though probably a bit different from many other bubbles.

    I think OpenAI made the right call (for them) to commercialize when they did - that was pretty much their only chance to do so. Things have moved fast over the last 1.5 years - what used to take a decade in tech has happened within months: OpenAI is the dinosaur company grandfathered in, while for about a year already it has been more sensible for anybody wanting to do something with LLMs to selfhost one of the more open language models (or buy hosting capacity, but put up your own data), and possibly adjust or re-train it.

    As a company owner I’ve been getting a ridiculous amount of spam for a year now from all kinds of companies building products on top of the OpenAI stack, or trying to sell training or conferences. All those companies will be left with nothing once the slower users realize the technology has moved on. It’s like somebody building all their product offerings on the VMware stack nowadays.

    If you as a company want to offer something around AI right now, the safest option is probably offering hosting, or, if you want to be more hands-on, adjusting open models. Both of those are very risky, and many will go bust in the years to come - but they’re not as suicidal as building on top of a closed dinosaur.

  • For many years they used to link to the dig wrapper on my homepage to have their clients debug DNS problems - even with translations of my UI on their help sites for the various languages. I always found it amusing that a hoster of their size did that instead of spending a lunch break throwing something together that integrates with their own help pages.

    There also was a not insignificant number of users who didn’t understand that my homepage had nothing to do with OVH, and ended up mailing me about their DNS problems.

  • Actually I can’t think of anything that raspberry pi does that can’t be done better by a less expensive alternative.

    That was true even before the price increase - what still makes me use Pis now and then is that so many people are familiar with them, the standardized form factor with lots of extension modules, and the software support: pretty much any software targeting that kind of use has been tested on Pi variants.

    Nowadays I’d go for the compute modules, though - they’re smaller, and you can get them with onboard flash, which eliminates the SD card problem many Pis had. There are carrier boards for the compute modules in the classic Pi form factor, so you can have the best of both worlds.