
  • So, if you just use the system API, then this means logging with syslog(3). Learn how to use it.

    This is old advice. These days, just log to stdout; there is no need for your process to understand syslog. systemd, containers, and other modern systems capture stdout and forward it wherever it needs to go. Then all applications can stay simple and it is up to the system to handle them in a consistent way.

    NOTICE level: this will certainly be the level at which the program will run when in production

    I have never seen anyone use this log level. Most people use or default to Info or Warn. Even the author later says

    I run my server code at level INFO usually, but my desktop programs run at level DEBUG.

    If your message uses a special charset or even UTF-8, it might not render correctly at the end, but worst it could be corrupted in transit and become unreadable.

    I don’t know if this is still true. UTF-8 is ubiquitous these days and I would be surprised if any modern logging system could not handle it. I am very tempted to start adding some emoji to my logs to find out though.

    User 54543 successfully registered e-mail user@domain.com

    Now that is a big no no. Never ever log PII if you don’t want a world of hurt later on.

    2013-01-12 17:49:37,656 [T1] INFO c.d.g.UserRequest User plays {‘user’:1334563, ‘card’:‘4 of spade’, ‘game’:23425656}

    I do not like that at all. The message should not contain JSON. Most logging libraries let you add context fields in a consistent way and can output the whole log line as JSON. Having escaped JSON inside JSON because you decided to add JSON manually is a pain; just use the tools you are given properly.
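    As a sketch of what I mean, here is structured logging with Rust’s tracing and tracing-subscriber crates (my choice of library, not something from the article; the field values are taken from the example line above). It also logs to stdout, which fits the earlier point:

        // Assumes the tracing and tracing-subscriber crates, with
        // tracing-subscriber's "json" feature enabled.
        use tracing::info;

        fn main() {
            // Write every log line to stdout as one JSON object.
            tracing_subscriber::fmt().json().init();

            // Context goes in as key/value fields, not pasted into the
            // message text, so nothing needs manual escaping.
            info!(user = 1334563, card = "4 of spades", game = 23425656, "User plays");
        }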

    Add timestamps either in UTC or local time plus offset

    Never log in local time. DST fucks shit up when you do that. Use UTC for everything, convert for display if needed, but always store timestamps in UTC.

    Think of Your Audience

    Very much this. I have seen far too many error messages that give fuck all context about the problem and require diving through source code to figure out what the hell went wrong. Think about how logs will be read without the source code at hand.



  • Whatever language you choose, you might also want to look at the htmx JS library. It lets you add interactivity to your HTML without actually writing JS. For example, when an element is clicked it can make a request to your server and replace some other element with the contents your server responds with - all through attributes on HTML tags instead of JS. This lets you keep all the state on the backend and write more backend logic, without relying only on full page refreshes to update small sections of the page.
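    A minimal sketch of that click-to-swap pattern (the /api/like endpoint and element IDs are made up; hx-post, hx-target and hx-swap are real htmx attributes):

        <!-- Clicking the button POSTs to the server and swaps the
             response into the #like-count element - no JS written. -->
        <button hx-post="/api/like" hx-target="#like-count" hx-swap="innerHTML">
          Like
        </button>
        <span id="like-count">0</span>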

    For a backend language I would use Rust, as that is what I am most familiar with now and enjoy using the most. Most languages are adequate at serving backend code though, so it is hard to go wrong with anything you enjoy using. That said, with Rust I tend to find I have fewer issues when I deploy something, as opposed to other languages that can hit all sorts of runtime errors because they let you ignore the error paths by default.
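    To illustrate what I mean by not being able to ignore the error paths, a small sketch (the config filename is made up):

        use std::fs;

        // Reading a file returns a Result; callers cannot silently
        // pretend the failure case does not exist.
        fn read_config() -> Result<String, std::io::Error> {
            fs::read_to_string("config.toml")
        }

        fn main() {
            // The compiler pushes you to handle both paths up front,
            // rather than discovering the failure at runtime in production.
            match read_config() {
                Ok(cfg) => println!("loaded {} bytes of config", cfg.len()),
                Err(e) => eprintln!("could not read config: {e}"),
            }
        }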



  • Yup, this is part of what’s lead me to advocate for SRP (the single responsibility principle).

    Even that gets overused and abused. My big problem with it is defining what counts as a single responsibility. It is poorly defined, and that leads people to think the smallest possible thing is one responsibility. But when people think like that they create thousands of one-to-three-line functions, which just ends up losing sight of what the program is trying to do. Following logic through deeply nested function calls is, IMO, just as bad if not worse than having everything in a single function (there is a small sketch of what I mean at the end of this comment).

    There is a nice middle ground where SRP makes sense, but like all patterns, nobody ever says where that line is. Overuse of any pattern, methodology or principle is a bad thing, and it is very easy to do if you don’t think about what it is trying to achieve and notice when applying it no longer fits that goal.

    Basically, everything in moderation and never lean on a single thing.
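    The over-splitting sketch I promised (the pricing rules are invented):

        // The "every line is a responsibility" style: reading total_price
        // means chasing three trivial helpers around the file.
        fn apply_discount(p: f64) -> f64 { p * 0.9 }
        fn add_tax(p: f64) -> f64 { p * 1.2 }
        fn round_cents(p: f64) -> f64 { (p * 100.0).round() / 100.0 }

        fn total_price_fragmented(price: f64) -> f64 {
            round_cents(add_tax(apply_discount(price)))
        }

        // The middle ground: one function with one sensible responsibility
        // (pricing), readable top to bottom.
        fn total_price(price: f64) -> f64 {
            let discounted = price * 0.9;       // 10% discount
            let taxed = discounted * 1.2;       // 20% tax
            (taxed * 100.0).round() / 100.0     // round to cents
        }

        fn main() {
            assert_eq!(total_price_fragmented(10.0), total_price(10.0));
        }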


  • Refactoring should not be a separate task that a boss can deny. You need to do feature X; feature X benefits from reworking some abstraction a bit; so you rework that abstraction before starting on feature X. Then maybe refactor a bit more after feature X, now that you know what it looks like. None of that should take substantially longer, and treating it as part of the feature work saves vast amounts of time later on.

    You can occasionally squeeze in a feature without reworking things first if time is tight, but you will run into problems if you do this too often and start thinking of refactoring as a task separate from feature work.


  • “Best practices” might help you to avoid writing worse code.

    TBH I am not sure about this. I have seen many “best practices” make code worse, not better. Not because the rules themselves are bad, but because people take them as religious gospel and apply them to every situation in hopes of improving their code, without actually checking whether they are improving it.

    For instance, I see this a lot with DRY. While the rule itself is useful to know and apply, it is too easily over-applied, removing any benefit it originally gave and resulting in overly abstract code. I have lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.
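    A sketch of that failure mode (the validation rules here are invented):

        // DRY taken too far: two similar-looking checks forced through one
        // abstraction, which then sprouts a flag for every difference.
        fn validate(input: &str, allow_empty: bool, max_len: usize, trim: bool) -> bool {
            let s = if trim { input.trim() } else { input };
            (allow_empty || !s.is_empty()) && s.len() <= max_len
        }

        // Duplication added back: each caller owns its own small rule and
        // can now evolve independently.
        fn validate_username(name: &str) -> bool {
            let name = name.trim();
            !name.is_empty() && name.len() <= 32
        }

        fn validate_comment(body: &str) -> bool {
            body.len() <= 10_000 // empty comments are fine, no trimming wanted
        }

        fn main() {
            assert!(validate_username("nous"));
            assert!(validate_comment(""));
            // The shared version works too, but every call site must
            // remember which flag combination encodes its rule.
            assert!(validate("nous", false, 32, true));
        }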

    Suddenly requirements change and now it’s bad code.

    This only leads to bad code when people get too afraid to refactor things in light of the new requirements. Which sadly happens far too often. People like to keep what is already there and follow existing patterns well after they stop being suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I wrote before and others have added to over time.



  • And they never get into the git worktree subcommand, and probably don’t know you can have multiple things checked out at once in different directories while still all pointing at the same local git clone.

    Also, with worktrees you can have a bare repo with multiple worktrees at once. I have been working this way for a few months now and it is very nice when switching branches, as that becomes a simple cd instead of a git switch or checkout - which means you don’t need to stash any pending changes before you switch branch.
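    A minimal sketch of that setup (the URL, paths and branch names are made up):

        # One bare clone, then one directory per branch.
        git clone --bare git@example.com:me/project.git project.git
        cd project.git
        git worktree add ../project-main main
        git worktree add ../project-feature feature-x

        # Switching branches is now just a cd - no stashing needed.
        cd ../project-feature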



  • Re: Introducing Steam Families (posted to Steam@lemmy.ml)

    If a family member gets banned for cheating while playing your copy of a game, you will also be banned in that game.

    This sucks.

    Yeah, but I can see why, as it would be easy to abuse. You would only need one copy of the game, and you could cycle accounts that never owned the game out of the family share when they get banned.

    There might be other ways to limit that, but they would likely need more restrictions on the feature, which might be more annoying.


  • Any field in a DB can be vulnerable to SQL injection. Filtering out characters is a terrible way to mitigate that attack; you should be using prepared queries, where it does not matter what chars are in the username or password. You should never form a query with string concatenation.
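    For example, a sketch with Rust’s rusqlite crate (my choice of library; the table layout is made up). The driver binds the values separately from the SQL, so quotes or semicolons in the input cannot change the query:

        use rusqlite::{params, Connection, Result};

        fn insert_user(conn: &Connection, username: &str, password_hash: &str) -> Result<()> {
            // ?1 and ?2 are placeholders; the values are never spliced
            // into the SQL string itself.
            conn.execute(
                "INSERT INTO users (username, password_hash) VALUES (?1, ?2)",
                params![username, password_hash],
            )?;
            Ok(())
        }

        fn main() -> Result<()> {
            let conn = Connection::open_in_memory()?;
            conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)", [])?;
            // Even a hostile username is stored verbatim, not executed.
            insert_user(&conn, "nous'); DROP TABLE users;--", "hashed-password")?;
            Ok(())
        }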

    You may want to limit the chars in a username to ones allowed in URLs (or even ones that don’t need escaping) if you ever want it to appear in a URL, or wherever else the username might be used - but as an entry in a DB it should not matter.



  • I suspect the answer for most people is habit. It is what they are used to and have always done.

    Personally, I lean towards a more linear history, as it creates a cleaner commit graph and is close enough to what really happened; the exact details of the small, meaningless commits and merges created during development don’t matter that much. If anything, they add a lot of noise to commit graphs and make it harder to see what actually happened and when. I have seen code bases where the git commit graph takes up more than a full screen to render because of all the merging, and it is not fun to navigate when trying to understand what happened at some point.

    It is far nicer to just step back through a mostly linear history than to jump back and forth across various branches and merge commits, even if that is not technically what happened during development - generally I only care about what happened when something was merged into master, not on individual branches.

    But I also prefer to work on small feature sets at a time and frequently merge back into master, not spend weeks or months on a branch that contains loads of changes not yet in master. My go-to command for pulling from upstream is git pull --rebase=interactive --autostash origin master (no need to fetch with a pull, though my local master tends to get behind origin - not that it matters), and I will quite often git commit --amend or squash/reorder/edit things to give a simpler history before creating a PR.
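    The whole flow, as a sketch (the branch name is made up):

        # Rebase local work onto the latest master, stashing as needed.
        git pull --rebase=interactive --autostash origin master
        # In the rebase todo list, mark commits as squash/fixup/edit and
        # reorder them to tell a cleaner story.

        git commit --amend                            # touch up the latest commit
        git push --force-with-lease origin feature-x  # update the PR branch safely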

    But at the same time, on smaller projects it really does not matter that much. Follow the conventions of the projects you work on and do whatever you want on your own projects.




  • I would start by learning Rust at a user level via the Rust book, to get familiar with the language without the extra layers that embedded systems tend to add on top. Keep in mind that the embedded space in Rust is still maturing, though quite quickly. One of the biggest limitations ATM is the number of architectures available - if you need to target one that is not supported then you cannot use Rust yet (though there are quite a few projects bringing in support for more architectures).

    If you only need architectures that Rust supports, then once you have the basics of Rust down take a look at the Embedded Book, the Discovery book and the Embedonomicon. Then there are various crates for different boards, such as ruduino for the Arduino Uno or rp-pico for the Raspberry Pi Pico. There are also higher- and lower-level crates - some specific to a dev board, others specific to a chipset.
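    For a feel of what those layers sit on top of, here is the minimal shape of a bare-metal Rust binary (a sketch assuming the cortex-m-rt and panic-halt crates and a supported Cortex-M target; the books above walk through this properly):

        #![no_std]
        #![no_main]

        use cortex_m_rt::entry;
        use panic_halt as _; // no_std code must supply a panic handler

        #[entry]
        fn main() -> ! {
            // Board-specific setup (clocks, GPIO) would come from a HAL
            // or board crate such as rp-pico here.
            loop {
                // Firmware never returns, hence the `-> !` signature.
            }
        }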

    Lastly, there are embedded frameworks like Embassy that are helpful for more complex applications.


  • I’m still forgetting things I learned 3 or even 4 times like how to do a for each loop.

    I have been programming for decades now and still have to look up how an if statement works in bash - or other similar things, especially when switching between languages. It takes 5 seconds to look up, so I would not worry about it. It is far better to know when you need an if or a for loop and quickly look up the syntax than to know the syntax but not when to use it.
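    (For reference, the bash if in question - the filename is made up:)

        if [ -f config.toml ]; then   # does a file called config.toml exist?
            echo "config exists"
        fi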

    I see tutorials say to make a tic tac toe game or a calculator or to contribute to open source code. Which is good I suppose but all of it feels too advanced and I get lost on how to begin.

    Break problems down into simpler problems, then break those down into even simpler problems, until you have a trivial problem you can solve; then build up from there. Take a tic-tac-toe game: lots of things to consider that can all be dealt with in isolation. Rendering the game to the screen is one problem you can start with, and it can be broken down further into drawing a grid, which again breaks down into drawing a box or a line.
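    As a sketch of that build-up (in Rust, my choice of language and board representation):

        // Smallest problem first: render one row of the board.
        fn render_row(row: &[char; 3]) -> String {
            format!(" {} | {} | {} ", row[0], row[1], row[2])
        }

        // Build up: render the whole grid from rows.
        fn render_board(board: &[[char; 3]; 3]) -> String {
            let rows: Vec<String> = board.iter().map(render_row).collect();
            rows.join("\n---+---+---\n")
        }

        fn main() {
            let board = [
                ['X', 'O', ' '],
                [' ', 'X', ' '],
                [' ', ' ', 'O'],
            ];
            println!("{}", render_board(&board));
        }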

    You might even want to look at the book “Think Like a Programmer: An Introduction to Creative Problem Solving” by V. Anton Spraul, which goes into this way of thinking in more detail.