Lua.
Don’t call the ambulance, it’s too late for me
I know it won’t happen, but I keep imagining CDDA getting the “Steam” treatment à la Dwarf Fortress. Both games are amazing… Should play CDDA again myself. Always such a pain to remember HOW to play though, and by the time my muscle memory gets good enough that I can actually play the game fluidly instead of staring at keybinds, I’ve kinda run out of gas 😄
You are totally correct, but I feel like pointing out that a surprising number of games use the 4k texture nomenclature in a totally illogical way; they label it 4k because it's meant to look good on a 4k screen, not because the texture itself is at that resolution (or any loosely related resolution).
Which is itself really annoying. But I guess a less savvy crowd might not actually understand what ‘real’ 4k textures even refer to?
Man. That was a good series. Not sure if I can watch it again now, though…
Seeing as it’s their river and they are operating it, not sure what exactly you want as evidence.
Bvckup (not a typo)
Made by a little Swiss company, extremely light but very competent. Stays completely out of your way unless it absolutely must get your attention (which is usually never).
I think it’s paid-only, but it’s very reasonably priced. Works great in intermittent situations, i.e. it won’t blow up if it tries to run a scheduled backup while the source or target is disconnected, etc… It has worked very well for me for a decade.
Personally, I’m not so sure we’re missing that much; I think it’s more just sheer scale, as well as the complexity of the input and output connections (I guess unlike machine learning networks, living things tend to have a much more ‘fuzzy’ sense of inputs and outputs). And of course sheer computational speed; our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!
While it’s true that the x nm nomenclature doesn’t match physical feature size anymore, it’s definitely not just marketing BS. The process nodes are very significant and very (very) challenging technological steps that usually yield power-efficiency gains of tens of percent.
To me, what is surprising is that people refuse to see the similarity between how our brains work and how neural networks work. I mean, it’s in the name. We are fundamentally the same, just on different scales. I believe we work exactly like that, just with way more inputs and outputs and a much deeper network, but the fundamental principles, I think, are the same.
They do also use an antireflective coating/paint on the satellites now, which has helped quite a lot.
I can’t give an authoritative answer (not my domain), but I think there are two ways these types of things are done.
First is just observing the page or service as an external entity: basically requesting a page or hitting an endpoint and tracking whether you get a response (if not, it must be down), or, to measure load in a very naive way, tracking the response time. This is easy in the sense that you need no special access to the target, but it’s also limited in its accuracy.
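A rough sketch of what that first approach boils down to (the URL and timeout here are just placeholders, not anything any real status page uses):

```python
# Minimal external "is it up, and how fast" probe.
import time
import urllib.request
import urllib.error

def check_endpoint(url: str, timeout: float = 5.0) -> dict:
    """Request a URL and report whether it responded, and how long it took."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed = time.monotonic() - start
            return {"up": resp.status == 200, "response_time_s": elapsed}
    except (urllib.error.URLError, TimeoutError):
        # No (valid) response within the timeout: treat the target as down.
        return {"up": False, "response_time_s": None}

# Example: poll a hypothetical health endpoint from a scheduler/cron job.
# print(check_endpoint("https://example.com/health"))
```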
Second way, like what your GitHub example is doing, is having access to special API endpoints for (or direct access to) performance metrics. Since the GitHub status page is literally run by GitHub, they obviously have easy access to any metric they could want. They probably (certainly) run services whose entire job is to produce reliable data for their status page.
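As a toy illustration of that second approach (all names here are hypothetical, and the real services behind something like GitHub’s status page are obviously far more involved), a service could record its own request outcomes and expose a small summary endpoint for the status page to read:

```python
# Toy "internal metrics" sketch: the service tracks recent request outcomes
# and serves an aggregated view over HTTP.
from collections import deque
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

recent_outcomes = deque(maxlen=1000)  # True = success, False = error

def record_request(success: bool) -> None:
    """Called by the service's own request-handling code after each request."""
    recent_outcomes.append(success)

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        total = len(recent_outcomes)
        errors = total - sum(recent_outcomes)
        body = json.dumps({
            "requests_seen": total,
            "error_rate": (errors / total) if total else 0.0,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# The status page (or a collector) would then poll this endpoint:
# HTTPServer(("", 8080), StatusHandler).serve_forever()
```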
The minute details of each of these options are pretty open-ended; there are many ways to do it.
Just my 5¢ as a non-web developer.