• shnizmuffinA · 35 points · 7 months ago

    Multiple prompts lead to the same response. No variance.

    • neoman4426@fedia.io · 5 points · 7 months ago

      I tried it a few days ago and got some variance … but it was still essentially the same instructions, just a first-person summary rather than the second-person text verbatim.

    • DarkThoughts@fedia.io · 1 point · 7 months ago

      The cool thing with models that can do this is that you can kind of talk to the LLM behind whatever it's supposed to represent and change things dynamically (within its context size, of course). Not all models can do that, unfortunately.