• jj4211@lemmy.world · 3 days ago

    The problem is that whoever is checking the result in this case had to do the work anyway, and in that case… why bother with an LLM that can’t be trusted to pull the data in the first place?

    I suppose they could take the facts and figures that a human pulled and have an LLM verbose it up for people who, for whatever reason, want needlessly verbose BS. Or maybe an LLM could review the human-generated report to help identify awkward writing or inconsistencies. But delegating work that you have to do anyway just to double-check the result seems pointless.

    • pseudo@jlai.lu · 3 days ago

      Like someone here said, “trust is also a thing”. Once you’ve checked a few times that the process is right and the results are right, you don’t need to check more than occasionally. Unfortunately, that’s not what happened in this story.