• lloram239@feddit.de
    1 year ago

    it couldn’t even begin to make sense of the question “has this poem feline or canine qualities”

    Which is obviously false, as a quick try will show. Poems are just language, and LLMs understand language very well. That LLMs don’t have any idea what cats actually look like or how they move, beyond what they can gather from textbooks, is irrelevant here; they aren’t tasked with painting a picture (which the upcoming multi-modal models can do anyway).

    Now there can of course be problems that can be expressed in language but not solved in the realm of language. But I find those to be incredibly rare, rare enough that I’ve never really seen a good example. ChatGPT captures an enormous amount of knowledge about the world, and humans have written about a lot of stuff. Coming up with questions that would be trivial for any human to answer, but impossible for ChatGPT, is quite tricky.

    And that’s the tip of the iceberg.

    Have you ever actually seen an iceberg, or just read about them?

    It comes with rules for how to learn; it doesn’t come with rules enabling it to learn how to learn

    ChatGPT doesn’t learn. It’s a completely static model that doesn’t change. All the learning happened in a separate step back when it was created; it doesn’t happen when you interact with it. The illusion comes from the text prompt, which includes both your text and its output, getting fed back into the model as input. But outside that text prompt, it’s just static.
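    The point about the prompt can be sketched in a few lines. This is a toy illustration, not a real API: `model()` here is a hypothetical stand-in for a frozen LLM, a pure function of its input. The apparent "memory" of a chat exists only because the whole transcript is re-fed as input on every turn.

```python
# Sketch (hypothetical model() function, not a real API) of why a chat
# with a static model *feels* like it learns: every turn, the entire
# transcript -- your messages plus its earlier replies -- is fed back in
# as one input. The model's weights never change between calls.

def model(prompt: str) -> str:
    # Stand-in for a frozen LLM: a pure function of its input.
    # Same prompt in, same output out; nothing is stored inside.
    return f"[reply to {len(prompt)} chars of context]"

transcript = ""
for user_msg in ["What is an iceberg?", "And how big do they get?"]:
    transcript += f"User: {user_msg}\nAssistant: "
    reply = model(transcript)      # whole history goes in every time
    transcript += reply + "\n"     # "memory" lives only in this string

print(transcript)
```

    Delete the `transcript` string and the "learning" is gone, which is the whole argument: the state lives in the text, not in the model.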

    “oh I’m not sure about this so let me mull it over”.

    That’s because it fundamentally can’t mull it over. It’s a feed-forward neural network, meaning everything that goes in on one side comes out on the other in a fixed amount of time. It can’t do loops by itself, and it has no hidden internal monologue. The only dynamic part is the prompt, which is also why its problem-solving improves quite a bit when you require it to do the steps individually instead of just presenting the answer, as that allows the prompt to be its “internal monologue”.
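    The fixed-depth point can also be sketched with a toy example (not a real transformer, just an illustration): a feed-forward net applies the same finite stack of layers to every input, so it cannot “think longer” on a hard question. Any iteration has to happen in a loop *outside* the model, with each output appended back onto the input, which is exactly what step-by-step prompting exploits.

```python
# Toy illustration of fixed-depth computation: always exactly `layers`
# steps per call, no matter how hard the input is. No internal loops.

def feed_forward(x: float, layers: int = 3) -> float:
    for _ in range(layers):
        x = 2 * x + 1            # stand-in for one layer's transform
    return x

# External loop: the "monologue" lives in the growing prompt, not the net.
prompt = [1.0]
for _ in range(4):
    prompt.append(feed_forward(prompt[-1]))  # feed output back as input

print(prompt)
```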

    • barsoap@lemm.ee
      1 year ago

      Coming up with questions that would be trivial to answer for any human, but impossible for ChatGPT is quite tricky.

      Which is why I came up with the “feline poetry” example. It’s quite a simple concept for a human, even one not particularly poetry-inclined, yet if no one has ever written about the concept it’s going to be an uphill battle for ChatGPT. And no, I haven’t tried. I also didn’t mean it as some kind of dick-measuring contest; I simply wanted to explain what kind of thing ChatGPT really has trouble with.

      Have you ever actually seen an iceberg, or just read about them?

      As a matter of fact, yes, I have. North Cape, Norway.

      ChatGPT doesn’t learn. It’s a completely static model that doesn’t change.

      ChatGPT is also its training procedure, if you ask me, same as humanity is also its evolution.