• Iconoclast@feddit.uk
    35 points · 10 hours ago

    It’s a Large Language Model, designed to generate natural-sounding language from statistical probabilities and patterns rather than knowledge or understanding (see the sketch at the end of this comment). It doesn’t “lie” and it doesn’t have the capability to explain itself. It just talks.

    That speech being coherent is by design; the accuracy of the content is not.

    This isn’t the model failing. It’s just being used for something it was never intended for.
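
    To make the “statistical probabilities and patterns” point concrete, here is a minimal sketch of a single generation step, assuming a toy vocabulary and made-up probabilities (hypothetical numbers, not any real model’s output). The only thing the loop ever does is sample from a distribution over next tokens; nothing in it checks whether the result is true.

    ```python
    import random

    # Hypothetical next-token distribution for the context "The capital of France is".
    # A real model computes this from learned patterns; the numbers here are made up.
    next_token_probs = {
        "Paris": 0.62,   # statistically likely continuation
        "Lyon": 0.21,
        "Berlin": 0.12,  # wrong, but still carries probability mass
        "purple": 0.05,
    }

    def sample_next_token(probs: dict[str, float]) -> str:
        """Pick a token in proportion to its probability; truth is never consulted."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    print("The capital of France is", sample_next_token(next_token_probs))
    ```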

    • THB@lemmy.world
      18 points · 10 hours ago

      I puke a little in my mouth every time an article humanizes LLMs, even when it’s critical. Exactly as you said, they do not “lie”, nor are they “trying” to do anything. It’s literally word salad that’s organized to look like language.