
nevemsenki,

LLMs don’t do this though; they don’t do a lookup of past SAT questions they’ve seen and answer from that, they use some process of “reasoning” to do it.

The "reasoning" in an LLM is literally the statistical probability of which word follows which. It has no real concept of what it talks about beyond the pre-built relationship matrices between words and language rules. That's why LLMs confidently hallucinate obvious bullshit from time to time - to them there's no difference between truthful and absolutely bonkers text, it's just words that probably follow each other.
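To make the point concrete, here's a deliberately tiny sketch (not a real LLM - real models use learned neural networks over tokens, not hand-written tables): a bigram "model" that only knows which word tends to follow which, and picks the next word purely by probability, with no notion of whether the output is true.

```python
import random

# Toy word-to-next-word probability table (the "relationship matrix").
# All words and probabilities here are made up for illustration.
bigram_probs = {
    "the": {"cat": 0.6, "moon": 0.4},
    "cat": {"sat": 0.7, "is": 0.3},
    "sat": {"on": 1.0},
    "on": {"the": 1.0},
}

def generate(start, length, rng=random.Random(0)):
    words = [start]
    for _ in range(length):
        probs = bigram_probs.get(words[-1])
        if not probs:
            break  # the model has never seen this word; nothing follows it
        # Sample the next word by probability alone -- the model has no idea
        # whether the resulting sentence is true, only that it is "likely".
        nxt = rng.choices(list(probs), weights=list(probs.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the", 5))
```

Whatever sequence comes out, it's "plausible" only in the sense that each step was statistically likely; nothing in the loop checks the result against reality, which is the mechanism behind confident hallucination.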
