
atrielienz ,

I understand the gist, but I don't mean that it's actively looking up facts. I mean that it uses bad information to produce a result: if the training data says 1+1=5, it will give that answer because that's what the data contained. The hallucinations, as the people studying them call them, aren't that. They happen when the training data has no answer for 1+1, so the LLM can't determine that the most likely next word is 2. It has no grounded result at all, but it is built to produce one anyway, so it outputs nonsense.
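The distinction above can be sketched with a toy next-token predictor. This is a hypothetical illustration (the class name, corpus, and fallback rule are all made up for this example), not how any real LLM works, but it shows both failure modes: confidently repeating bad training data, and emitting filler when the context was never seen.

```python
from collections import Counter, defaultdict

class ToyLM:
    """Toy model: predicts the most frequent continuation seen
    after a given context in its training data."""

    def __init__(self, corpus):
        self.next_counts = defaultdict(Counter)  # context -> continuation counts
        self.all_counts = Counter()              # global token counts (fallback)
        for context, continuation in corpus:
            self.next_counts[context][continuation] += 1
            self.all_counts[continuation] += 1

    def predict(self, context):
        if context in self.next_counts:
            # Bad training data yields a confidently wrong answer.
            return self.next_counts[context].most_common(1)[0][0]
        # Unseen context: no grounded answer exists, but the model is
        # still forced to emit *something* -- the "hallucination" case.
        return self.all_counts.most_common(1)[0][0]

corpus = [("1+1=", "5"), ("1+1=", "5"), ("the sky is", "blue")]
lm = ToyLM(corpus)
print(lm.predict("1+1="))  # "5" -- wrong, because the data was wrong
print(lm.predict("7*8="))  # "5" -- unseen context, nonsense fallback
```

The first call reproduces the bad-data case; the second shows the model inventing an answer for a question its data never covered.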
