• shnizmuffinA · 6 months ago

    Multiple prompts lead to the same response. No variance.

    • neoman4426@fedia.io · 6 months ago

      I tried it a few days ago and got some variance … but it was still essentially the same instructions, just a first-person summary rather than the verbatim second-person text.

    • DarkThoughts@fedia.io · 6 months ago

      The cool thing with models that can do this is that you can kind of talk to the LLM behind whatever it's supposed to represent and change things dynamically (within the limits of its context size, of course). Not all models can do that, unfortunately.
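
      A minimal sketch of what that can look like in practice, assuming an OpenAI-compatible chat API; the model name, persona, and out-of-character instruction below are just placeholders:

      ```python
      # Sketch: "talking to the LLM behind the persona" by injecting an
      # out-of-character instruction mid-conversation. Everything lives in the
      # same context window, so very long chats eventually lose older turns.
      from openai import OpenAI

      client = OpenAI()          # reads OPENAI_API_KEY from the environment
      MODEL = "gpt-4o-mini"      # placeholder model name

      messages = [
          {"role": "system", "content": "You are Captain Reyes, a gruff starship pilot."},
          {"role": "user", "content": "Captain, what's our heading?"},
      ]

      reply = client.chat.completions.create(model=MODEL, messages=messages)
      messages.append({"role": "assistant", "content": reply.choices[0].message.content})

      # Step outside the roleplay and change the scenario on the fly; a model
      # that handles this well will keep the persona but adopt the new state.
      messages.append({
          "role": "user",
          "content": "(OOC: the ship has just lost main power; play the captain as panicked from now on.)",
      })

      reply = client.chat.completions.create(model=MODEL, messages=messages)
      print(reply.choices[0].message.content)
      ```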