
kaffiene,

I find this extraordinarily unconvincing. Firstly, it's based on the idea that random graphs are a great model for LLMs because they share a single superficial similarity. That's not science, that's poetry.
Secondly, the researchers completely misunderstand how LLMs work. Showing that a sentence could not have appeared in the training set proves nothing: producing novel sentences is exactly the expected behaviour.
"stochastic parrot" wasn't supposed to mean that it only regurgitates text that it's already seen, rather that the text is a statistically plausible response to the input text based on very high dimensional feature vectors. Those features definitely could relate to what we think of as meaning or concepts, but they're meaning or concepts that were inherent in the training material.
