It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.

  • mansfield@lemmy.world · 7 months ago

    This stuff is literally a bullshit machine. How can you fix it without making something else entirely?

    • tinwhiskers@lemmy.world · edited · 7 months ago

      When they hallucinate, they don’t do it consistently, so one option is to run the same query multiple times (with different “expert” base prompts), or through different LLMs, and return “I don’t know” if there’s too much disagreement between the answers (see the sketch below). The Q* approach is similar, but baked in. This should dramatically reduce hallucinations.

      Edit: added bit about different experts
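
      A minimal sketch of that consensus check, assuming a hypothetical `ask_llm(query, system_prompt)` wrapper around whatever model(s) you query (not a real API). The agreement test here is a plain majority vote over lightly normalized answers, which is one simple way to implement the idea:

      ```python
      from collections import Counter

      # Hypothetical stand-in for a call to whatever LLM backend(s) you use;
      # not a real library API.
      def ask_llm(query: str, system_prompt: str) -> str:
          raise NotImplementedError("wire this to your model of choice")

      # Different "expert" base prompts, per the comment above; could also
      # be the same prompt sent to different LLMs.
      EXPERT_PROMPTS = [
          "You are a careful fact-checker. Answer in one short sentence.",
          "You are a domain expert. Answer in one short sentence.",
          "You are a skeptical reviewer. Answer in one short sentence.",
      ]

      def consensus_answer(query: str, min_agreement: float = 0.6) -> str:
          # Run the same query once per expert prompt.
          answers = [ask_llm(query, p) for p in EXPERT_PROMPTS]
          # Light normalization so trivial formatting differences still agree.
          votes = Counter(a.strip().lower() for a in answers)
          best, count = votes.most_common(1)[0]
          # Too much disagreement -> treat the answer as a likely hallucination.
          if count / len(answers) < min_agreement:
              return "I don't know"
          return best
      ```

      Exact string matching is a crude agreement test; in practice you’d want a semantic comparison (embedding similarity or an entailment model), since two runs can state the same fact in different words.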