• xthexder@l.sw0.com
    This graph actually shows a bit more about what’s happening with the randomness, or “temperature”, of the LLM.
    The model is actually predicting a probability for every word (token) in its vocabulary coming next, all at once.
    The temperature then controls how random it is when picking from that list of probable next words. A temperature of 0 means it always picks the single most likely next word, which in this case ends up being 42.
    As the temperature increases, the choice gets more random (but you can see it still isn’t a perfectly uniform distribution, even at higher temperature values).
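
    If anyone’s curious how that works mechanically, here’s a minimal Python sketch of temperature sampling. It’s illustrative, not any particular model’s code: the `logits` values and the “vocabulary” of number tokens are made up, and real LLMs do this over tens of thousands of tokens.

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Pick a next-token index from raw model scores (logits).

        temperature == 0 -> greedy: always the most likely token.
        temperature > 0  -> scale logits, softmax, then sample.
        """
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=np.float64)
        if temperature == 0:
            # Deterministic: the single highest-scoring token wins every time.
            return int(np.argmax(logits))
        # Dividing by temperature flattens (T > 1) or sharpens (T < 1)
        # the distribution before the softmax.
        scaled = logits / temperature
        scaled -= scaled.max()          # for numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    # Toy example: a "vocabulary" of number tokens where 42 scores highest.
    vocab = ["7", "13", "42", "69", "100"]
    logits = [1.0, 1.5, 4.0, 2.0, 0.5]

    print(vocab[sample_next_token(logits, temperature=0)])    # always "42"
    print(vocab[sample_next_token(logits, temperature=1.5)])  # usually "42", sometimes others
    ```

    At temperature 0 you get “42” every run; crank the temperature up and the other tokens start showing up, but “42” still dominates because its underlying probability is highest, which is exactly the skew the graph shows.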