
OpenStars (@OpenStars@discuss.online)

I am not sure what you mean. For example, https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) says:

In natural language processing, a hallucination is often defined as "generated content that appears factual but is ungrounded". The main cause of hallucination from data is source-reference divergence... When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to the provided source.

E.g., I continued your example in which "socks are edible" is a band name, but the output ended up in a cooking context.

There is a section on https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#Terminologies, but the issue seems far from settled that "hallucination" is somehow a bad word. And it is not entirely illogical, since AI, like humans, necessarily has a similar tension between novelty and creativity - i.e. going beyond one's training to deal with new circumstances.

I suspect that the term is here to stay. But I am nowhere close to an authority and could definitely be wrong :-). Mostly I am saying that you seem to be arguing a niche viewpoint - not entirely without merit, obviously, but one that we here in the Fediverse may not be equipped to banter back and forth on except in the most basic of capacities. :-)
