• zeca@lemmy.ml · 21 days ago

    By truthful, I meant generating truthful new knowledge, not just performing calculations that we implemented and know well. I agree that I could have phrased this better…

    • aesthelete@lemmy.world · 20 days ago

      It’s amazing that you’re trying to pretend this thing will do what it cannot do.

      AI in general? Sure, maybe at some point.

      LLMs? Nope. Sorry. They’re basically an echo of sorts.

      (As, you know, the study you’re posting under is showing.)

      • zeca@lemmy.ml · 19 days ago

        It’s amazing that you’re trying to pretend this thing will do what it cannot do.

        What is it I’m pretending that LLMs can do? I think you may be misreading me.

          • zeca@lemmy.ml · 16 days ago

            Did I say LLMs do generate truthful new knowledge? Of course they don’t do that.

            • aesthelete@lemmy.world · 16 days ago

              Dude, I quoted you.

              If you’re trying to dance around and say that you were saying “AI would do that” instead of LLMs… 1) why? The AI=LLMs train has left the station, and 2) are you lost? Because the article you’re replying under is talking about LLMs.

              • zeca@lemmy.ml · 15 days ago (edited)

                You’re completely misreading me. You quoted me saying the words “truthful new knowledge”, but not me saying AI/LLMs generate that. What I was saying at the beginning is that it can’t do that and be useful at the same time, because it can only be reliable when severely controlled, and in that case it loses its utility. Knowledge that can be reliably and automatically verified is not truly novel or interesting; it’s just old knowledge rehashed.