
jj4211,

You just run the same query a bunch of times and see how consistent the answer is.
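That consistency check can be sketched in a few lines. This is a hypothetical illustration, not any particular tool's API: `ask_llm` is a stand-in stub for a real model call, and the sampling-plus-majority-vote logic is the part the comment describes.

```python
from collections import Counter

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real (nondeterministic) model call,
    # stubbed with a fixed answer so the sketch runs on its own.
    return "Paris"

def consistency_check(prompt: str, n: int = 5) -> tuple[str, float]:
    """Sample the same prompt n times; return the most common answer
    and the fraction of samples that agreed with it."""
    answers = [ask_llm(prompt) for _ in range(n)]
    top_answer, count = Counter(answers).most_common(1)[0]
    return top_answer, count / n

answer, agreement = consistency_check("What is the capital of France?")
```

A low agreement fraction would flag the answer as suspect, which is exactly where the approach gets shaky for the reasons below.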

A lot of people are developing what I'd call superstitions about ways to overcome LLM limitations. I remember someone swearing they fixed the problem by appending "Ensure the response does not contain hallucinations" to every prompt.

In my experience, what you describe is not a reliable method. Sometimes the model is genuinely attached to the same mistake for a given query. I've seen it double down: when told that a facet of the answer was incorrect and asked to revise, several times I'd get "sorry for the incorrect information", followed by the exact same mistake. On the flip side, to the extent this "works", it works on valid responses too, meaning an extra pass meant to ward off "hallucinations" ends up gaslighting the model, and it changes a previously correct answer as if it were a hallucination.
