
  • Universal Paperclips is one of the best clicker games.

    In particular: because it isn’t a clicker game. It only starts off as one. There are only about two “clicker” sections, IIRC: the start (before the auto-clippers kick in), and then the quantum computer.

    I guess you have to launch your first 20 or 30 probes at the space stage, and that’s done one click at a time… but I don’t think that counts as a “clicker” game, since it’s so few clicks in the grand scheme of things. At no other point is rapid clicking that useful.


    I had a pretty standard linear-list scan initially. Each time the program started, I’d check the list for some values. The list, of course, grew each time the program started. I capped the list size at something like 2MB (I forget the exact figure), but it was in the millions of entries and therefore in the MB range. I figured it was too small for me to care about optimization.

    I was somewhat correct: even when I simulated a full-sized list, the program booted faster than I could react, so I didn’t care.


    Later, I wrote some test code that exhaustively tested startup conditions. Instead of just running the startup once, I was running it millions of times. Suddenly I cared about startup speed, so I replaced the list with a hash table so that my test code would finish within 10 minutes (instead of taking a projected 3 days to exhaustively test all startup conditions).


    Honestly, I’m more impressed at the opposite. This is perhaps one of the few times I’ve actually taken a linear list and optimized it into a hash table. Almost all the other linear lists I’ve used in the last 10 years of my professional coding life remain just that: a linear scan, with no one caring about performance. I’ve got linear lists doing some crazy things, even with MBs of data, and no one has ever come back to me and said they need optimization.

    Do not underestimate the power of std::vector. It’s probably faster than you expect, even with O(n^2) algorithms all over the place. std::map and std::unordered_map certainly have their uses, but there are a lot of situations where std::vector is far, far easier to think about, so it’s my preferred solution rather than prematurely optimizing to std::map.
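
    To make that concrete, here’s a minimal sketch of the two lookup strategies. The function names and the std::string key type are illustrative assumptions; the original code’s element type isn’t stated.

    ```cpp
    #include <algorithm>
    #include <string>
    #include <unordered_set>
    #include <vector>

    // Linear scan: O(n) per lookup, but contiguous memory keeps it fast in practice.
    bool seen_linear(const std::vector<std::string>& list, const std::string& key) {
        return std::find(list.begin(), list.end(), key) != list.end();
    }

    // Hash table: O(1) average per lookup; the win only shows up once lookups
    // number in the millions (e.g., an exhaustive test harness).
    bool seen_hashed(const std::unordered_set<std::string>& seen, const std::string& key) {
        return seen.count(key) != 0;
    }
    ```

    One startup doing a handful of lookups never notices the O(n) cost; a test harness running millions of startups multiplies that cost into the hours-to-days range, which is when the hash table earns its keep.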



  • How many layers does the Orange Pi Zero PCB have?

    Answer: Good luck finding out. That’s not documented. But based on the layout and what I can see from screenshots, it’s far more than 4 layers.


    A schematic alone is kind of worthless. Knowing whether a BGA is designed for 6, 8, or 10 layers makes a big difference. Seeing a reference PCB implementation with exactly that layer count, so the EE knows how to modify the design for themselves, is key to customization. There’s all sorts of EMI mitigation and trace-length matching that needs to happen to get that CPU-to-DDR connection up and running.

    Proving that a 4-layer layout like this exists is a big deal. It means that a relative beginner can work with the SAM9x60’s DDR interface on cheap 4-layer PCBs (though, as I said earlier, 6 layers offer more room and are available at OSHPark, so I’d recommend a beginner work with 6 instead).


    With regard to the SAM9x60D1G-I/LZB SOM vs the Orange Pi Zero: the SOM provides you with access to all 152 remaining pins of the SAM9x60, meaning a full development board with access to every feature. It’s a fundamentally different purpose. The SOM is a learning tool and a development tool for customization.




  • That’s not what storage engineers mean when they say “bitrot”.

    “Bitrot”, in the scope of ZFS and BTRFS, means the situation where a hard drive’s “0” gets randomly flipped to “1” (or vice versa) during storage. It is a well-known problem and can happen within months. Especially since a 20 TB drive these days is a collection of 160 trillion bits, there’s a high chance that at least some of those bits malfunction over a period of ~double-digit months.
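
    As a back-of-the-envelope sketch of why scale matters here (the per-bit flip probability below is a made-up illustrative number, not a published drive spec):

    ```cpp
    #include <cstdio>

    int main() {
        // A 20 TB drive is 20e12 bytes * 8 = 160e12 bits: the "160 trillion" above.
        const double bits = 20e12 * 8;

        // Purely illustrative assumption: each bit independently flips with
        // probability p over one scrub interval.
        const double p = 1e-13;

        // With n bits, the expected number of flipped bits is n * p.
        std::printf("expected flipped bits per interval: %.0f\n", bits * p); // 16
        return 0;
    }
    ```

    Even a vanishingly small per-bit probability, multiplied by 160 trillion bits, leaves you expecting some flips; that’s the whole reason scrubs exist.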

    Each problem has a solution. In this case, bitrot is “solved” by the above procedure because:

    1. Bitrot usually doesn’t happen within single-digit months, so regular ~6-month scrubs nearly guarantee that any bitrot you find will be limited in scope: just a few bits at most.

    2. Filesystems like ZFS or BTRFS are designed to handle many bits of bitrot safely.

    3. Scrubbing is the process where you read, and if necessary restore, any files where bitrot has been detected.

    Of course, if hard drives are of noticeably worse quality than expected (ex: a large number of failures in a short time frame), or if you’re not using the right filesystem, or if you go too long between your checks (ex: taking 25 months to scrub for bitrot instead of just 6), then you might lose data. But we can only plan for the “expected” kinds of bitrot: the kinds that happen within 25 months, or 50 months, or so.

    If you’ve gotten screwed by a hard drive (or SSD) that bitrots away in like 5 days or something awful (maybe someone dropped the hard drive and the head scratched a ton of the data away), then there’s nothing you can really do about that.


  • If you have a NAS, then just put iSCSI disks on the NAS and share those fake disks over the network to your mini-PCs.

    iSCSI is “pretend to be a hard drive over the network”. The iSCSI layer can sit on top of ZFS or BTRFS, meaning your scrubs / scans will still fix any issues. So your mini-PC can have a small local C: drive but keep most of its data on the D: iSCSI network drive.

    iSCSI is very low-level. Windows literally thinks it’s dealing with a (slow) hard drive over the network. As such, it works even in complex situations like Steam installations, albeit at network speeds (it has to talk to the NAS before the data comes in) rather than direct-attached hard drive (or SSD) speeds.


    Bitrot is a solved problem. It is solved by using bitrot-resilient filesystems with regular scans / scrubs. You build everything on top of solved problems, so that you never have to worry about the problem ever again.
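
    For intuition, here’s a toy sketch of what a scrub conceptually does on a mirrored setup: read each copy, verify it against a stored checksum, and rewrite a bad copy from a good one. The struct and the simplified checksum are invented for illustration; real ZFS/BTRFS use per-block checksums (e.g., fletcher4 or sha256) in a tree of metadata, not this.

    ```cpp
    #include <array>
    #include <cstdint>
    #include <cstdio>

    // Toy model: each logical block is stored twice (a mirror) with a checksum.
    struct Block {
        std::array<uint8_t, 4096> data{};
        uint32_t checksum = 0;
    };

    // Simplified Fletcher-style checksum, for illustration only.
    uint32_t toy_checksum(const std::array<uint8_t, 4096>& d) {
        uint32_t a = 0, b = 0;
        for (uint8_t byte : d) { a = (a + byte) % 65535; b = (b + a) % 65535; }
        return (b << 16) | a;
    }

    // One scrub step: verify both copies, repair a bad copy from a good one.
    void scrub(Block& primary, Block& mirror) {
        const bool p_ok = toy_checksum(primary.data) == primary.checksum;
        const bool m_ok = toy_checksum(mirror.data) == mirror.checksum;
        if (p_ok && !m_ok)       { mirror = primary; std::puts("repaired mirror"); }
        else if (!p_ok && m_ok)  { primary = mirror; std::puts("repaired primary"); }
        else if (!p_ok && !m_ok) { std::puts("unrecoverable: both copies rotted"); }
    }

    int main() {
        Block a;
        a.data[0] = 42;
        a.checksum = toy_checksum(a.data);
        Block b = a;       // mirror starts identical
        b.data[7] ^= 1;    // simulate a single flipped bit of rot
        scrub(a, b);       // prints "repaired mirror"
        return 0;
    }
    ```

    Note that the repair needs a second good copy to work from; with only one copy, a scrub can detect the rot but generally can’t fix it.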



  • Wait, what’s wrong with issuing “ZFS Scan” every 3 to 6 months or so? If it detects bitrot, it immediately fixes it. As long as the bitrot wasn’t too much, most of your data should be fixed. EDIT: I’m a dumb-dumb. The term was “ZFS scrub”, not scan.

    If you’re playing with multiple computers, “choosing” one to be a NAS and being extremely careful with the data it’s storing makes sense. Regularly scanning all files and attempting repairs (which is just a few clicks with most NAS software) is incredibly easy, and probably could be automated.


  • Meanwhile in Ubuntu-land, a Python2 script probably just straight up doesn’t work at all.

    “At least the .NET code continues to run today.” And in practice you can set up a 20-year-old developer VM running VS2008 and code “the old way” to continue maintaining the old code (which still runs on today’s machines). Meanwhile, in Ubuntu-land you’re FORCED to migrate the Python2 stuff due to a litany of incompatible changes to systemd, X.org, Python2-vs-3 issues, and more.


    Not just Python2, but also Bash scripts. (Weird changes to netcat, or ifconfig, or other tools that utterly bork old scripts.)

    Microsoft isn’t as good at backwards compatibility as it used to be. But they’re still leagues ahead of the OSS community on this.