• sin_free_for_00_days@sopuli.xyz

    This belongs alongside articles with titles like “I hit myself in the nuts every day for a month. It hurt and this happened” or “I tried using condiments as clothing for a month. I now have a rash and/or chemical burns, and this AMAZING thing happened”.

  • Lost_My_Mind@lemmy.worldM

    I used AI chatbots as a source of news for a month, and they were unreliable and erroneous

    blink blink

    Oh, I’m sorry. Were you expecting me to be surprised? Was I supposed to act shocked when you said that?

    Ok, ok. Hold on. Let me get my shocked face ready…

    shocked pikachu

    • Instigate@aussie.zone

      The article really isn’t for those of us who already know how terrible ‘AI’ is - it’s for those who treat it like it’s the infallible holy grail of all the answers in the world. Sadly, I’ve met some such people for whom this article might be illuminating.

  • deliriousdreams@fedia.io

    “I used a hammer as a saw for a month and found that it was too dull to get the job done”. That’s what this sounds like. Nobody needed to use AI chatbots as a news source to know that they’re unreliable. The people who do this already know and don’t care. This article isn’t gonna change their minds. They like it.

  • TrackinDaKraken@lemmy.world

    I’m interested in how they were wrong. Specifically, was there a Right/MAGA/Nazi bias or trend apparent in the errors? Or is it just stupid?

    Because “wrong” is just a mistake, but lying is intentional.

    • deliriousdreams@fedia.io

      It’s fair to wonder about the specifics, but the word lying implies that the LLM knows it’s presenting a falsehood as fact. It doesn’t know. Its “lies” are either hallucinations (where it doesn’t have the information in its data set and can’t return what was asked for, so it produces incorrect information because that output is statistically as close as the thing can get), or it provides incorrect information because the guardrails set by the company engineering it said to provide Y when queried with X.

      There is no thought involved in what the LLM does.
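
      To make the same point with a toy sketch (purely illustrative Python, not a description of any real chatbot’s internals; the function name and probabilities are made up for illustration): the model just emits whichever continuation is statistically most likely, and nothing in that loop checks whether the output is true.

      ```python
      import random

      # Toy next-token picker. The "answer" is whatever scores highest statistically;
      # there is no step that checks whether it is actually true.
      next_token_probs = {
          "2021": 0.46,          # common in stale training data
          "2023": 0.31,
          "2025": 0.18,
          "I don't know": 0.05,  # admitting ignorance is rarely the likeliest continuation
      }

      def pick_next_token(probs: dict[str, float]) -> str:
          tokens = list(probs)
          weights = [probs[t] for t in tokens]
          # Weighted random choice: "confidence" here reflects frequency in the
          # training data, not correctness.
          return random.choices(tokens, weights=weights, k=1)[0]

      print("Q: What year is it?")
      print("A:", pick_next_token(next_token_probs))
      ```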

  • fox2263@lemmy.world

    The AI’s built-in data will be months out of date, and even if it bothers to grab the latest headlines, it can and will cherry-pick depending on how it’s been programmed for bias. Grok would probably tell you the world is ending because of “the left”.