
solanaceous

Sure, it’s hard to say whether a computer program can “know” anything, or what that even means. But the paper isn’t arguing that. It assumes very little about how LLMs actually work, and it defines “hallucination” as “not giving the right answer”, with no option for the machine to answer “I don’t know”. The proof then follows basically from the fact that the LLM-or-whatever can’t know everything.
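
Roughly, the shape of that argument looks like this (my own sketch and notation; the diagonalization step is my guess at how such proofs usually go, not necessarily the paper’s exact construction):

```latex
% Sketch only: my own formalization of the argument, not the paper's notation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $h_1, h_2, \dots$ enumerate the candidate models (assume countably many,
e.g.\ all computable ones), let $f$ be the ground-truth answer function, and
say $h_i$ ``hallucinates'' on prompt $x$ when $h_i(x) \neq f(x)$, with no
option to abstain. Then a diagonal choice of ground truth defeats every model:
\[
  f(x_i) := \text{any answer other than } h_i(x_i)
  \quad\Longrightarrow\quad
  \forall i \;\exists x_i :\; h_i(x_i) \neq f(x_i).
\]
\end{document}
```

In other words, whatever fixed answering procedure you pick, there is some question it gets wrong, as long as it isn’t allowed to say “I don’t know”.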

The result is not very surprising, and saying that it means hallucination is inevitable oversells it. It’s possible that hallucinations, or at least wrong answers, are inevitable for different reasons, though.
