• 0 Posts
  • 40 Comments
Joined 1 year ago
Cake day: February 6th, 2025

  • The course I’m in uses Algorithms (Fourth Edition) by Sedgewick and Wayne[1], and I consider it pretty good. A large focus is on clear implementations that demonstrate the core parts of each algorithm without getting bogged down in specialization, which I appreciate. The book also has very good visualizations (they call them traces) if you learn better visually. The only real downside is that the material is entirely Java-oriented, but since you’re working with C#, that probably isn’t a deal breaker.

    The other recommendation in the thread is Introduction to Algorithms, which I’ve read chapters of (used it as a reference) — personally I think it’s OK: definitely more abstract and math-heavy, so if that’s something you want or appreciate, it’s a good option.

    There’s also The Art of Computer Programming by Knuth, which to me is grad-level stuff: very, very math-heavy, but also brilliant if you can keep up.


    1. There’s a book, supplemental video courses, and example implementations: https://algs4.cs.princeton.edu/home ↩︎


  • Yeah, I mean that’s true of any social space, though: if you say something agreeable, you’re (definitionally) going to get agreement. If you view upvoting as consensus building (i.e. “I like this” / “I agree”), it’s just a more concise representation of a reply saying as much.

    But that is scrutable.

    What becomes a problem is content getting surfaced or buried on non-scrutable metrics (typically engagement). Ragebait isn’t anything new, online or off, but when algorithms target content that gets engagement, ragebait is naturally surfaced in higher proportions. Oftentimes such platforms completely bury content, or make it impossible to find anything not explicitly surfaced (YouTube search, for example, is widely known to be terrible here, and FB rabidly buries comments on posts).

    WRT communities, there definitely are instances and communities with very different rules, values and expected behaviors. Federation allows communities to pick and choose what other communities they think they’ll get along with. This includes banning individual remote users if they don’t follow local rules, or defederating entirely if other instances have drastically different values.

    The federation model as described does well by my metrics. I can pick an instance that shares my values, participate in communities (in the Lemmy technical sense) that share them as well — and largely avoid or choose not to engage with people from communities (in the instance sense) that I don’t share values with. This is extending “freedom of association” to online spaces in a way that large platforms largely cannot and willingly do not enable.


  • > I would say scrutability in itself doesn’t automatically make an algorithm good. “Demote everything that doesn’t support Trump” is perfectly scrutable but leads to a skewed discussion.

    This is mostly getting into normative vs. descriptive philosophy. If it’s scrutable that a site/instance is demoting everything non-aligned with a worldview, then on the Fediverse it’s the users’ choice to leave (and part of ‘community values’).

    > In fact I would say any content boosting algorithm at all leads to skew and what you call sycophancy. That includes upvotes/downvotes that affect what posts users see first. So I would get rid of all that stuff and just show purely chronologically.

    To some degree, yes. New Reddit is particularly bad about this: it actively buries unpopular replies (and it goes further than just using upvotes). Software like Lemmy is better — you can easily set Sort by New or Sort by Top as the default, and there’s no ‘Karma’ system that propagates across the site.

    Sycophancy is a human trait, so it’ll always emerge in social systems; but normatively, our systems shouldn’t cater to these negative traits (the way, e.g., Twitter does).


  • For algorithms: anything that isn’t a straightforward, scrutable way of presenting user content is bad, IMO.
    Algorithms that promote engagement, monetization, and sycophancy are bad.

    As for a community of communities, that’s how the Fediverse works: you have a home instance which communicates with other instances. An instance (nominally) has rules and expected conduct, and is often centered around a particular interest (game dev, programming, cities or countries, etc.), and these instances then interact with each other.

    Having home instances with shared values and a subset of the entire userbase allows for recognizing and connecting with other “local” users, the same way people trust their immediate neighbors more than random people from the city over. It helps form webs of trust and establish natural networks.
    This is how human society functioned up until very recently — it’s what the brain evolved to do.

    We can see the consequences of systems that don’t respect that fact. Sites that try to cater to everyone and put us all in the same tent destroy social regulation: you cannot possibly hope to explain yourself to tens of thousands of angry people on the Internet, nor should anyone be exposed to such vitriol.


  • It’s not the point of the article, but I think it nonetheless speaks to the power that the community-of-communities model provides.

    The algorithmic content-surfacing models are what primarily rot online interaction. Having all-encompassing sites is another cause. Letting people join communities with shared values, with those communities collectively deciding who they interact with, has been the fundamental working model of human societies since prehistory.


  • This has been an extolled benefit of the new Hall-effect/TMR keyboard switches.

    Because they report a continuous activation level, you can define in software where in the key travel the “press down” signal fires, including releasing the press the moment the key stops traveling down and re-firing it when it reverses; effectively eliminating pre-travel.
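    As a rough illustration, the firmware logic amounts to tracking the last direction reversal rather than a fixed actuation point. This is a hypothetical sketch (vendor implementations differ; the class name and threshold are made up here):

```python
class RapidTrigger:
    """Hypothetical sketch of rapid-trigger logic on an analog (Hall/TMR) switch.

    Instead of a fixed actuation point, press/release fire relative to the
    key's most recent turning point, so any sufficient upward motion releases
    and any sufficient downward motion re-presses.
    """

    def __init__(self, sensitivity_mm=0.1):
        self.sensitivity = sensitivity_mm  # travel delta needed to change state
        self.pressed = False
        self.extreme = 0.0                 # turning point since last state change

    def update(self, depth_mm):
        """Feed one analog sample (0.0 = rest, larger = deeper). Returns events."""
        events = []
        if not self.pressed:
            # Track the shallowest point; press once the key dips far enough below it.
            self.extreme = min(self.extreme, depth_mm)
            if depth_mm - self.extreme >= self.sensitivity:
                self.pressed, self.extreme = True, depth_mm
                events.append("press")
        else:
            # Track the deepest point; release once the key rises far enough above it.
            self.extreme = max(self.extreme, depth_mm)
            if self.extreme - depth_mm >= self.sensitivity:
                self.pressed, self.extreme = False, depth_mm
                events.append("release")
        return events
```

    The point of the relative threshold is that a small reversal anywhere in the travel toggles the key, without waiting for it to cross a fixed depth on the way back.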

    From what I’ve heard, these boards have even started getting banned in competitive play. Caveat emptor: I’m not into the comp gaming scene.


  • My experience as well.

    I’ve been writing Java lately (not my choice), which has boilerplate, but it’s never been an issue for me because the Java IDEs have all had tools that eliminate it (and have for a decade-plus). Class generation, main methods, method stubs, default implementations, and interface stubs can all be done easily in, for example, Eclipse.

    Same for tooling around (de)serialization and class/struct definitions. I see that being touted as a use case for LLMs, but tools have existed[1] for doing that since before LLMs, and they’re deterministic and computationally free compared to neural nets.


    1. e.g. https://transform.tools/json-to-java ↩︎
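    To make the point concrete, a deterministic generator of this kind is only a few lines. This is a toy sketch (hypothetical, far simpler than the linked tooling) that maps a JSON-shaped dict to a Java class skeleton:

```python
def json_to_java_class(class_name, obj):
    """Toy deterministic JSON-to-Java generator: same input, same output,
    no neural net required. Real tools also handle nesting, lists, naming
    conventions, and accessor generation."""
    java_type = {str: "String", bool: "boolean", int: "int", float: "double"}
    lines = [f"public class {class_name} {{"]
    for field, value in obj.items():
        jtype = java_type.get(type(value), "Object")
        lines.append(f"    private {jtype} {field};")
    lines.append("}")
    return "\n".join(lines)
```

    Running it on `{"name": "alice", "age": 30}` yields a `User` class with a `String name` and an `int age` field, identically every time.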


  • I typically just specify the height of the video and let the browser figure out the width and aspect ratio. The most annoying layout shift is the vertical kind anyway, so that solves it to my satisfaction.

    That said, I also use the poster attribute of the video tag and set preload to none. This makes pages load vastly faster, since images are a fast path compared to the browser fetching a video chunk and decoding it just to display a cover image. I have a set of scripts that generate the poster images for me: I just specify the frame number I want from the video, and ffmpeg produces an AVIF.
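    The scripts themselves aren’t shown, so this is only a guess at their shape; the ffmpeg flags below are standard, but the AV1 encoder available varies by build:

```python
def poster_cmd(video, frame_number, out_path):
    """Build an ffmpeg argv that extracts one frame as an AVIF poster image.
    Hypothetical reconstruction of the kind of script described above."""
    return [
        "ffmpeg", "-y",
        "-i", video,
        # Select exactly the requested frame number (comma escaped for the
        # filtergraph parser), and emit a single output frame.
        "-vf", f"select=eq(n\\,{frame_number})",
        "-frames:v", "1",
        "-c:v", "libaom-av1",  # AVIF is AV1 imagery in a HEIF container
        out_path,
    ]

# e.g. subprocess.run(poster_cmd("clip.mp4", 30, "poster.avif"), check=True)
```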


  • Multi-cloud is far from trivial, which is why most companies… don’t.

    Even if you are multi-cloud, you will be egressing data from one platform to another and racking up large bills (imagine putting CloudFront in front of a GCS endpoint, lmao), so you are incentivized to stick to a single platform. I don’t blame anyone for being single-cloud, given the barriers providers put up and how difficult maintaining your own infrastructure is.

    Once you get large enough to afford tape libraries, then yeah, running your own offsite for large backups makes a lot of sense; but otherwise the convenience and reliability (when AWS isn’t nuking your account) of managed storage is hard to beat — cold HDDs are not great, and M-DISC is pricey.


  • In this guy’s specific case, it may be financially feasible to back up onto other cloud solutions, for the reasons you stated.

    However, the public cloud is used for a ton of different things. If you have 4 TiB of data in Glacier, you will pay through the absolute nose pulling that data down into another cloud; highway-robbery prices.

    Further, as soon as you’re talking about something more than just code (say UGC, assets, databases), the amount of data needing to be “egressed” from the cloud balloons, as does the price.
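    Back-of-envelope, using illustrative rates (roughly $0.09/GB for internet egress and $0.01/GB for Glacier standard retrieval; actual prices vary by region and tier, and change over time):

```python
# Rough cost of moving 4 TiB out of AWS once, at illustrative example rates.
data_gb = 4 * 1024                 # 4 TiB expressed in GiB
egress_cost = data_gb * 0.09       # data transfer out to the internet
retrieval_cost = data_gb * 0.01    # Glacier standard retrieval fee
total = egress_cost + retrieval_cost
print(f"~${total:,.2f} just to move the data once")  # ~$409.60
```

    Hundreds of dollars before you’ve stored a single byte on the destination side, and the egress line alone dwarfs what the same data costs to keep in Glacier for months.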