• anus@lemmy.world · 1 month ago

    Are you saying that humans don’t parrot what seems like what they heard before?

    • milicent_bystandr@lemm.ee · 1 month ago

      Oh we absolutely do. And we tell lies, and we misunderstand, and miscommunicate.

      But not all the time, and not everyone. So if you ask your friend whether they’d like dinner, you expect the answer to be true to what they want, not just whatever sounds good to the general population. If you read a scientific journal, you expect the scientists to represent the facts and even the meaning of their research, not parrot some ideas from a half-forgotten textbook. And if you see a professional counsellor, you expect them to have a good understanding of human nature, to genuinely empathise with your situation, and to have good ways to help you out.

      And of course all three of those examples fail sometimes, which is why as part of life we learn who we can trust and to what extent.

      • anus@lemmy.world · 1 month ago

        I would argue that all of the cases you presented fail at a rate comparable to foundational LLMs

          • anus@lemmy.world · 1 month ago

            I would argue that you’ve clearly formed your opinion without spending significant time giving foundational LLMs a chance

            • milicent_bystandr@lemm.ee · 1 month ago

              Nah, more that I forget how dumb people can be sometimes: I was reminded recently that there are plenty of examples of people spouting LLM-like answers; but I still contend that most people, trusted in their proper areas, talk with meaning and comprehension.

              As to LLMs, perhaps I haven’t given them enough of a chance. But I have experimented a while myself, read reports from others, and delved into how their mathematical models work. So I’m not exactly clueless.

              • anus@lemmy.world · 1 month ago

                That’s impressive for someone who seems clueless

                I would encourage you to give foundational large models a chance

                I think you’ll find that (barring intentionally subversive inputs) the largest and most powerful models basically don’t hallucinate

                O1 in particular is better than humans in my experience