  • If you want remote access to your home services behind CGNAT, the best way is with a VPS. This gives you a static public IP that your services connect to, and that you can connect to when out and about.

    If you don’t want the traffic decrypted on the VPS, then tunnel the VPN back to your homelab.
    As the VPN traffic is already encrypted, there is no point re-encrypting it between the VPS and the homelab.

    Rathole https://github.com/rapiz1/rathole is one of the easiest tools I have found for this.
    Or you can do things with SSH tunnels.

    For the VPN itself, WireGuard is very good.
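
    To make that concrete, here is a minimal sketch of a rathole pair forwarding WireGuard’s UDP port through the VPS. The addresses, ports and token are placeholders; check the exact options against rathole’s README:

    ```toml
    # server.toml (runs on the VPS)
    [server]
    bind_addr = "0.0.0.0:2333"          # control channel the homelab client dials

    [server.services.wireguard]
    type = "udp"                        # WireGuard traffic is UDP
    token = "replace_with_a_long_secret"
    bind_addr = "0.0.0.0:51820"         # public endpoint you connect to from outside

    # client.toml (runs on the homelab box)
    [client]
    remote_addr = "vps.example.com:2333"

    [client.services.wireguard]
    type = "udp"
    token = "replace_with_a_long_secret"
    local_addr = "127.0.0.1:51820"      # the homelab WireGuard listener
    ```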


  • Worth reading the article, but for the TL;DR crowd and comment readers:

    • A patent attorney has narrowed down the list of potential candidates that could be central to Nintendo’s lawsuit against Palworld developer Pocketpair to 28 patents.
    • Out of those, one particular intellectual property describing creature-capture mechanics was labeled as a “killer patent” that would be difficult not to infringe when making a game with monster-taming elements.
    • Said property is part of a recently approved patent family consisting of three more patents, all of which were approved mere weeks before Nintendo and The Pokemon Company sued Pocketpair.

  • Like I said, impressive work.
    Converting science to shaders is an art.

    I guess your coding standards follow scientific standards.
    And I guess it depends on your audience.

    I guess the perspective is that science/maths formulae are meant to be manipulated. So writing out descriptive names is only done at the most basic levels of understanding. Most of the workings are done on paper/boards, or manually. Extra letters are not efficient.
    Whereas programming is meant to be understood and adapted. So self-describing code is key! Most workings are done within an IDE with autocomplete. Extra letters don’t matter.

    If you are targeting the science community with this, a paragraph about adapting science to programming will be important.
    Scientists will find your article and go “well yeah, that’s K2”. But explaining why these aren’t named as such will hopefully help them produce useful code in the future.

    The fun of code that spans disciplines!

    Edit:
    On a side note, I am terrible at coding standards when I’m working with a new paradigm.
    First is “make it work”, after which it’s pretty much done.
    Never mind consistent naming conventions and all that.
    The fact you wrote up an article on it is amazing!
    Good work!


    GPT, and the whole AI bs we have at the moment, excels at being convincing. It’s even prepared to back up what it says.
    The problem is that all of it is generated. Not necessarily fact.
    It will generate API methods, entire libraries, sources, legal cases, and science publications.
    And it will be absolutely convincing as it presents and backs up those claims.

    For example, GPT gives you some API function from some library that magically solves your issue. Maybe you aren’t hugely familiar with the library, but you don’t trust GPT - so you research this made-up API method and find the actual way to do it. Except you have GPT insisting it exists and works the way you want it to. So you research more, dig deeper.
    Eventually you end up reading the source code, gain a deeper understanding of the API in general and of how to actually find useful answers (i.e. how to phrase the search query), and end up using the method you found while hunting for the mythical perfect API method.
    I mean, I guess that’s a win? You learned some documentation, you solved the problem… Who cares?

    Maybe I’m just bitter because that was how I first tried any of the new AI things. And I wasted 2-3 hours instead of actually solving the fucking problem by consulting the facts.


  • Reminds me of the story of the old engineer asked to come in and fix some machine in a factory.

    The engineer inspects the machine, marks it with some chalk, then strikes the chalk mark with a hammer.
    The machine works again.
    The company asks for an itemised invoice after seeing the initial invoice for $10k.
    To which they received:

    • Hitting chalk mark with hammer: $1
    • Knowing where to place the chalk mark: $9,999

    GPT suffers from garbage-in garbage-out just as much as a search engine does.
    Knowing how to find search results to fix your specific situation is a skill.
    Utilising GPT for such a task is equally a skill, with the added bonus of GPT randomly pulling the perfect API/library out of its ass.


  • Interesting.
    I love creative applications of shaders. They are very powerful.

    In my opinion only, but willing to discuss.
    And I’ll preface this by saying that if I tried to publish a scientific paper and my formulas used a bunch of made-up symbols that are not standardised, I imagine it would get a lot of corrections in peer review.

    So, from a programming perspective: don’t use abbreviations.
    Basically, work on naming.

    I can read that TAU is the diffusion rate thanks to a comment. Then I dig further into the code trying to figure something out and I encounter tau. Now I have to remember that tau is explained by a comment, instead of by the name of the variable. Why not call it diffusionRate, then have a comment indicating this is TAU?
    A science person will be able to find the comment indicating where it is initialised and be able to adjust it without having to know programming. A programming person will be able to understand what it does without having to know science things.
    Programming is essentially writing code to be read.
    It’s written once and read many times.

    Similar with the K variables.
    K is reactionRate.
    K1 is reactionKillRate.
    K2 is reactionFeedRate.
    Scientists know what these are. But I would only expect to see variables like this in some bizarre nested loop, and I would consider it a code smell.

    The inboundFlow “line” has a lot going on with little explanation (except in comments). The calculation happens inline and the result goes straight into memory. Why not name that memory with variables?
    Things like adjacentFlow and diagonalFlow to essentially name those respective lines.
    Could even have adjacentFlowWeight and diagonalFlowWeight for some of those “magic numbers”.
    Comments shouldn’t explain what is happening, but why it’s happening.
    The code already explains what is happening.
    So have a comment indicating what the overall formula is and how it relates to the variables used; the variables then explain what each part of it is.
    If a line is getting too complicated to be easily understood, then parting it out into further variables (or even a function call, though not applicable here) will help.
    I’ve put a rough sketch of what I mean below.
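
    This is only an illustration of the naming ideas above, assuming a Gray-Scott style reaction-diffusion update; the identifiers and weights are hypothetical, not your shader’s actual code:

    ```glsl
    // Hypothetical renames -- illustrating the suggestions, not the real code.
    const float diffusionRate    = 1.0;   // was: TAU
    const float reactionKillRate = 0.062; // was: K1
    const float reactionFeedRate = 0.055; // was: K2

    const float adjacentFlowWeight = 0.2;  // previously a magic number
    const float diagonalFlowWeight = 0.05; // previously a magic number

    // Discrete Laplacian: weighted inflow from neighbours minus the centre cell.
    float inboundFlow(float centre, float adjacent[4], float diagonal[4])
    {
        float adjacentFlow = adjacent[0] + adjacent[1] + adjacent[2] + adjacent[3];
        float diagonalFlow = diagonal[0] + diagonal[1] + diagonal[2] + diagonal[3];
        return adjacentFlowWeight * adjacentFlow
             + diagonalFlowWeight * diagonalFlow
             - centre;
    }
    ```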

    A final style note, however I’m not certain on this.
    I presume 1. and 1.0 are identical, both representing the float value 1.0?
    In which case, standardise on 1.0.
    There are instances of 2.0 and 2.
    While both are functionally identical, with something like (1.0, 1.0, 1.0) it’s easier to see that these are floats, and to spot typos/stray commas, compared to (1., 1., 1.,).
    IMO, at least



  • At the homelab scale, Proxmox is great.
    Create a VM, install Docker and use docker-compose for various services.
    Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
    Have Proxmox take regular backups of the VMs.
    Every now and then, copy those backups onto an external USB hard drive.
    Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest backup onto an external USB drive once you are happy with the tinkering.
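
    As a concrete sketch, a manual backup and copy on the Proxmox host might look something like this; the VM ID, storage name and mount point are placeholders:

    ```sh
    # Back up VM 100 while it keeps running; vzdump is Proxmox's backup tool
    vzdump 100 --mode snapshot --storage local --compress zstd

    # Copy the resulting dumps to a mounted USB drive
    rsync -av /var/lib/vz/dump/ /mnt/usb-backup/
    ```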

    Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
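
    A compose file in that repo doesn’t need to be much; a minimal sketch, where the service, image and paths are examples only:

    ```yaml
    # docker-compose.yml -- hypothetical service to show the layout
    services:
      jellyfin:
        image: jellyfin/jellyfin:latest
        restart: unless-stopped
        ports:
          - "8096:8096"
        volumes:
          - ./config:/config
          - /mnt/media:/media:ro
    ```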

    Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

    That’s all you really need to do.
    At some point, you will run into an issue or limitation. Then you have to solve that problem, and update your VMs, compose files, config files, readmes and git repo.
    Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

    When to automate any of the above will become apparent: when tinkering stops being fun.

    The best way to learn all these services is to comb the documentation, read GitHub issues, and browse the source a bit.


  • Bitwarden is cheap enough, and I trust them as a company enough, that I have no interest in self-hosting Vaultwarden.

    However, all these hoops you have had to jump through are excellent learning experiences that you can apply to more of your self-hosted setup.

    Reverse proxies are the backbone of hosting and services these days.
    Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
    Learning how to set up more useful/granular backups, beyond a basic VM snapshot in Proxmox, can be applied to any install anywhere.
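
    On the reverse proxy point, it can be as small as this; a hypothetical Caddyfile, with the domain and upstream port as placeholders:

    ```
    # Caddy fetches TLS certificates automatically and
    # forwards requests to the Vaultwarden container.
    vault.example.com {
        reverse_proxy localhost:8080
    }
    ```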

    The most annoying thing about a lot of these is that tutorials are “minimum viable setup” sorta things.
    Like “now you have it set up, make sure you tune it for production”, and it just ends.
    And the tutorials that do talk about the next step, getting things production-ready, often reference outdated versions or assume a different core setup, so they don’t quite apply.

    I understand your frustrations.