• 0 Posts
  • 56 Comments
Joined 2 years ago
Cake day: June 20th, 2023


  • I’ve had fewer problems with GoG + Lutris on Linux than I’ve had with Steam on Linux, to the point that I had to pirate one of my Steam games just to be able to run it on Linux (the pirated version runs just fine).

    Mind you, I get the impression that older AAA games are the most problematic ones, though that’s maybe because I don’t run anything with kernel-level anti-cheat and nowadays don’t really do online gaming (in fact all my games in Lutris run inside a firejail sandbox with network access disabled - see the sketch below).
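
    For the curious, the sandboxing bit is nothing exotic - roughly something like this (the game slug is made up and the exact Lutris invocation may differ from my actual setup):

    ```python
    # Launch a Lutris-managed game inside a firejail sandbox with no network.
    # "some-gog-game" is a made-up slug; firejail's --net=none puts the process
    # in an unconnected network namespace, so the game gets no network access.
    import subprocess
    import sys

    GAME_SLUG = "some-gog-game"  # hypothetical Lutris game slug

    def run_sandboxed(slug: str) -> int:
        cmd = ["firejail", "--net=none", "lutris", f"lutris:rungame/{slug}"]
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(run_sandboxed(GAME_SLUG))
    ```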




  • If only they were just locks.

    I think a better metaphor is that they removed all the windows, made the walls 2m-thick concrete and replaced the door with a 10-inch-thick heavy steel one.

    Absolutely, that makes it very well protected against unauthorized outsiders just coming in … at the cost of living in a bunker with no natural sunlight, stale air, mold, and having to push a 2-ton door to get in or out.

    Now, some people might be ok with living in such a bunker for their own personal protection, but very few are ok with living in a bunker just to protect the software on the computer inside that bunker from being copied.

    People are pissed because Denuvo makes their life harder whilst having literally zero upsides for them personally.


  • I read it as meaning particularly good at it, since everybody does indeed do pattern matching and can spot details.

    That format of presentation - especially when the choice was clearly made to go for more points rather than more depth per point - is unsuitable for precise, detailed explanations, so expecting otherwise isn’t exactly logical.

    As somebody who, judging by everything else in there, matches that particular part of the spectrum (though never formally diagnosed), I’ve always had an eye for detail and am big on figuring things out via pattern matching (i.e. noticing that certain combinations of things tend to go along with certain other combinations of things or outcomes). That’s also what powers the “skip” thinking: you can jump directly to a list of possible explanations by recognizing that something shares a pattern with something else whose explanation you already have, and then work backwards from there to confirm whether one of those possible explanations is indeed the correct one.

    I’ve studied and worked in highly intellectual areas (Science and Technology) and have seldom come across others with a similar style of thinking, so to me it makes sense that those things show up in that graph in the sense of more/better than most.






  • Liberalism isn’t the same as Left. It’s not even on the same political axis.

    You can’t really read “more liberal” as being the same as “more leftist”.

    Left would be something like: “I want the greatest good for the greatest number”.

    Liberalism would be something like: “I want people to have the most freedom to do whatever they want”.

    You might notice that these two things collide in things like the existence of the super-rich, where for a liberal that’s a good thing (they have maximum freedom) whilst for a Leftie it’s a bad thing (wealth concentration reduces access to resources for the many, hence it goes directly against the greatest good for the greatest number).

    Similarly, centralizing control of part or the whole of the Economy (which decreases freedom of trade) in order to achieve greater equality is absolutely valid within Leftwing principles and entirely against Liberal principles.

    It’s only in places like the US, where the entirety of the Leftwing is about 4 congressmen, that Liberalism gets confused with Leftwing.


  • Yeah, the 1 in 4 billion seemed exaggerated on the low end when I read it. I went ahead with it anyway since, even if there are 1000 people with an IQ at or above 200, that by itself would not pull the curve upwards much (because it’s 1000 out of 8 billion people) and hence your original claim that the mean is not the same as the median “because the distribution is skewed as IQs can be higher than 200 but not negative” was bollocks.

    My point stands untouched: the justification you originally gave to back your claim that the IQ mean is not the same as the median was mathematically unsupported or, as you so colourfully put it, “opinion dressed as fact”.

    As for this paper you linked, it curiously doesn’t back your claim either. From the abstract, we get that whilst the mean is 100 and the mode is indeed 105, the statistical distribution of IQs is NOT a Normal Distribution but rather a mixture (sum) of TWO Normal Distributions. This means that you can’t in fact make claims about the median from the mode (as you could for a normal distribution, where mean = median = mode), because a mixture of two normal distributions can have TWO peaks, so you can perfectly well have one at 105 and another one below that, which can yield a median equal to or even below the mean (see the quick numerical sketch at the end of this comment).

    Again from the abstract, those two distributions are “one reflecting normal variation in general intelligence and one reflecting normal variation in effects of genetic and environmental conditions involving mental retardation”, which seems to imply that the second has its peak at an IQ value below the first.

    That said, I don’t even disagree that your claim that the median is above the mean might be right. What I have yet to see from you is anything other than “opinion dressed as fact” or the quoting of papers which don’t mathematically back your point.
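
    To make the mixture point concrete, here’s a quick numerical sketch. The weights, means and spreads below are made up purely for illustration (they are NOT the paper’s values); the only point is that mean, median and mode separate, and which side of the mean the median falls on depends entirely on the mixture’s parameters:

    ```python
    # Two-component normal mixtures with invented parameters, to show that the
    # mode alone tells you nothing about where the median sits relative to the mean.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    def mixture(w_low, mu_hi, sd_hi, mu_lo, sd_lo):
        """Sample a mixture: weight (1 - w_low) on the higher component,
        weight w_low on the lower one."""
        hi = rng.normal(mu_hi, sd_hi, int(n * (1 - w_low)))
        lo = rng.normal(mu_lo, sd_lo, int(n * w_low))
        return np.concatenate([hi, lo])

    # Small lump far below the main peak at 105: median comes out ABOVE the mean.
    a = mixture(w_low=0.05, mu_hi=105, sd_hi=14, mu_lo=55, sd_lo=10)
    # Heavy lower component, peaks at 105 and 70: median comes out BELOW the mean.
    b = mixture(w_low=0.55, mu_hi=105, sd_hi=5, mu_lo=70, sd_lo=5)

    for name, x in (("a", a), ("b", b)):
        print(f"{name}: mean={x.mean():6.1f}  median={np.median(x):6.1f}")
    ```

    With the first set of parameters the median lands a bit above the mean; with the second it lands well below it - so a peak at 105 on its own pins down neither.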



  • Above a certain level of seniority (in the sense of real breadth and depth of experience rather than merely a high count of work years), one’s increased productivity is mainly in making others more productive.

    You can only be so productive at writing code yourself, but you can certainly make others more productive through better software design, better software architecture, libraries properly designed for productivity, bug reduction and future extensibility, development processes adjusted to the specifics of the business the software is being made for, proper technical and requirements analysis done before time has been wasted on coding, mentorship, the experience to foresee future needs and potential pitfalls at all levels (from requirements through systems design down to the code itself), and so on.

    Don’t pay for that and then be surprised at just how much work turns out to have been wasted doing the wrong things, how much trouble people have with integration, how many “unexpected” things delay deliveries, how fast your code base ages and how brittle it seems, how often whole applications and systems have to be rewritten, how badly the software mismatches the needs of its users, and how mistrusting and even adversarial the developer-user relationship ends up being.

    From the outside (and also from having known people on the inside) it’s actually pretty easy to deduce that plenty of Tech companies (Google being a prime example) haven’t learned the lesson that there are more forms of value in the software development process than merely “works 14h/day, is young and intelligent (but clearly not wise)”.



  • Making a mistake once in a while on something one does all the time is to be expected - even somebody with a 0.1% rate of mistakes will fuck up once in a while if they do the thing frequently enough (see the quick calculation at the end of this comment), especially if they’re too time-constrained to validate their work.

    Making a mistake on something you do just once, such as setting up the process for pushing virus definition files to millions of computers in such a way that they’re not checked in-house before they go into Production, is a 100% rate of mistakes.

    A 0.1% rate of mistakes is generally not incompetence (it depends on how simple the process is and how much you’re paying for that person’s work), whilst a 100% rate definitely is.

    The point being that those designing processes, who have lots of time to do it, check it and cross-check it, and who generally only do it once per place they work (maybe twice), really have no excuse for failing the one thing they had to do with all the time in the world, whilst those who do the same thing again and again under strict time constraints definitely have a valid excuse to make a mistake once in a blue moon.
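
    For the “frequently enough” part, the back-of-the-envelope arithmetic (assuming the repetitions are independent) looks like this:

    ```python
    # Probability of at least one mistake over n repetitions of an action with a
    # per-action error rate p, assuming independent repetitions.
    p = 0.001  # a 0.1% chance of getting any single repetition wrong

    for n in (100, 1_000, 10_000):
        at_least_one = 1 - (1 - p) ** n
        print(f"{n:>6} repetitions -> {at_least_one:.1%} chance of at least one mistake")
    ```

    Roughly 10% after a hundred repetitions and about 63% after a thousand - so even a very low per-action error rate guarantees the occasional fuck-up for anyone doing the thing day in, day out.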


  • If your system depends on a human never making a mistake, your system is shit.

    It’s not by chance that, for example, accountants have since forever had something they call reconciliation, where the transaction data entered from invoices and the like gets cross-checked against something produced differently, for example bank account transactions - their system is designed with the expectation that humans make mistakes, hence there’s a cross-check process to catch them.

    Clearly Crowdstrike did not have a secondary part of the process designed to validate what’s produced by the primary one (in software development that would usually be Integration Testing - something along the lines of the sketch at the end of this comment), so their process was shit.

    Blaming the human who made a mistake for essentially being human and hence making mistakes, rather than blaming the process around them for not having been designed to catch human failure and stop it from having nasty consequences, is the kind of simplistic, ignorant “logic” that only somebody who has never worked on making anything that has to be reliable could have.

    My bet, from decades of working in the industry, is that some higher-up in Crowdstrike didn’t want to pay for the manpower needed for a secondary process checking the primary one before pushing stuff out to production because “it’s never needed”, and then the one time it was needed it wasn’t there, things really blew up massively, and here we are today.
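
    For what it’s worth, that secondary check doesn’t have to be anything fancy. Here’s a minimal sketch of the idea - the file format, field names and deploy hook are all invented for illustration, this is not anybody’s actual pipeline:

    ```python
    # Pre-production gate: load the content/definition file the same way the
    # consumer would and refuse to ship it if it doesn't parse or is obviously
    # malformed. Schema and deploy integration are hypothetical.
    import json
    import sys
    from pathlib import Path

    REQUIRED_FIELDS = {"version", "signatures"}  # made-up schema for the example

    def validate(path: Path) -> list[str]:
        """Return a list of problems; an empty list means the file looks shippable."""
        try:
            data = json.loads(path.read_text())
        except (OSError, json.JSONDecodeError) as exc:
            return [f"file does not even load: {exc}"]
        if not isinstance(data, dict):
            return ["top-level structure is not an object"]
        problems = []
        missing = REQUIRED_FIELDS - data.keys()
        if missing:
            problems.append(f"missing fields: {sorted(missing)}")
        if not data.get("signatures"):
            problems.append("empty signature list")
        return problems

    if __name__ == "__main__":
        issues = validate(Path(sys.argv[1]))
        if issues:
            print("NOT shipping:", *issues, sep="\n  - ")
            sys.exit(1)  # a non-zero exit tells the deploy job not to push the file
        print("looks sane, handing over to the deploy step")
    ```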