• 0 Posts
  • 39 Comments
Joined 9 months ago
Cake day: October 18th, 2023


  • I don’t consider myself a never nester, but looking at my code now, I extract all the time and rarely go 4 tabs deep. It just makes the code easier to maintain. I also like the idea of putting the failure conditions first (there’s a rough sketch of what that looks like below). I haven’t looked at this yet, but I’m sure there are times I can use it.

    Sure, sometimes you might not have a choice, but I do think there is a lot of value in what they’re saying. It goes hand in hand with the standard “functions should do one thing” paradigm.
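
    For what it’s worth, here’s a minimal sketch of what “failure conditions first” (guard clauses) looks like compared to nesting. The `Order`/`ship` names are made up purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    items: list = field(default_factory=list)
    paid: bool = False

def ship(order: Order) -> str:
    return f"shipped {len(order.items)} item(s)"

# Nested version: the happy path ends up three levels deep.
def process_order_nested(order):
    if order is not None:
        if order.items:
            if order.paid:
                return ship(order)
            else:
                raise ValueError("order not paid")
        else:
            raise ValueError("order has no items")
    else:
        raise ValueError("no order given")

# Failure conditions first: each check bails out immediately,
# so the happy path stays at the top indentation level.
def process_order_guarded(order):
    if order is None:
        raise ValueError("no order given")
    if not order.items:
        raise ValueError("order has no items")
    if not order.paid:
        raise ValueError("order not paid")
    return ship(order)

print(process_order_guarded(Order(items=["book"], paid=True)))  # shipped 1 item(s)
```

    Both versions do the same thing; the second just never makes the reader track which else belongs to which if.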

  • I guess it comes down to a philosophical question as to what “know” actually means.

    But from my perspective, it certainly knows some things. It knows how to determine what I’m asking, and it clearly knows how to formulate a response by stitching together information. Is it perfect? No. But neither are humans; we mistakenly believe we know things all the time, and miscommunications are quite common.

    But this is why I asked the follow-up question: what’s the effective difference? Don’t get me wrong, they clearly have a lot of flaws right now. But my 8-year-old has a lot of flaws too, and I assume both will get better with age.

  • My question to you is: how is it different than a human in this regard? I would go to class, study the material, and hope to retain it, so I could then apply that knowledge on the test.

    The AI is trained on the data, “hopes” to retain it, so it can apply it on the test. It’s not storing the book, so what’s the actual difference?

    And if you have an answer to that, my follow-up would be “what’s the effective difference?” If we stick an AI and a human in a closed room and give them a test, why do the intricacies of how they store and recall the data matter?

  • A crowd of 10,000 people means fuck all compared to 158,429,631.

    I agree that it would be a bad data set, but not because it is too small. A sample of that size would actually give you a pretty good result if it were sufficiently random (a quick margin-of-error calculation, sketched below, shows why). Which is, of course, the problem.

    But you’re missing the point: just because something is obvious to you does not mean it’s actually true. The model could have been trained in a way that isn’t biased by our number preferences and instead behaves pseudo-randomly. Is it surprising that it turned out this way? No. But thinking your assumption doesn’t need to be proven, in a case like this, is almost equivalent to thinking a Trump rally is a good data sample for determining the opinion of the general public.
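
    Rough sketch of the sample-size point (assuming a simple random sample, which is exactly the assumption that fails at a rally): the margin of error depends on the sample size, not on the population size.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of size n.

    Standard normal approximation z * sqrt(p * (1 - p) / n); the population
    size (158 million or otherwise) doesn't appear, because the finite-
    population correction is negligible for populations that large.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=10,000: +/- {margin_of_error(10_000):.2%}")  # about +/- 1 point
print(f"n=1,000:  +/- {margin_of_error(1_000):.2%}")   # about +/- 3 points
```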

  • “We don’t need to prove the 2020 election was stolen, it’s implied because Trump had bigger crowds at his rallies!” -90% of Trump supporters

    Another good example is the Monty Hall “paradox”, where 99% of people will confidently (and incorrectly) tell you the chance is 50% because they took math and that’s how it works. Switching actually wins two-thirds of the time, and a quick simulation (below) bears that out.

    Just because something seems obvious to you doesn’t mean it is correct. Always a good idea to test your hypothesis.
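
    A minimal simulation sketch (door labels and trial count are arbitrary), just to make the “test your hypothesis” point concrete:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player ends up with the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~{wins / trials:.3f}")
# Typical output: switch=False -> ~0.333, switch=True -> ~0.667 (not 0.5)
```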