blindsight,

That's not how LLMs work.

Super short version is that LLMs probabilistically determine the next word most likely to occur in a sequence. They do this using Statistical Models (like what your cell phone's autocomplete uses); Transformers (which rate the importance of preceding words, so the model can "focus" on the most important ones); and Relatedness (a measure of how closely linked different words/phrases are to each other in meaning).
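
To make the "next word" part concrete, here's a toy sketch in Python. The words and probabilities are invented for illustration; a real LLM derives them from billions of learned parameters, but the final step really is picking from a weighted distribution like this:

```python
import random

# Toy next-word prediction for the context "The cat sat on the ..."
# These probabilities are made up for illustration; a real model
# computes them from its learned weights.
next_word_probs = {
    "mat": 0.55,
    "couch": 0.25,
    "roof": 0.15,
    "piano": 0.05,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling (rather than always taking the top word) is why the same
# prompt can produce different completions each time.
print(random.choices(words, weights=weights, k=1)[0])
```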

With increasingly large models, LLMs can build a more accurate representation of Relatedness across a wider range of topics. With enough examples, an LLM can generate endless content that is closely Related to a query.
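
Relatedness is typically implemented by representing words as vectors (embeddings) and comparing them with cosine similarity. A minimal sketch, with made-up 3-dimensional vectors standing in for real embeddings (which have hundreds or thousands of learned dimensions):

```python
import math

# Invented vectors for illustration; real embeddings are learned from data.
embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "toast": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    # Related words point in similar directions, giving a value near 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high
print(cosine_similarity(embeddings["king"], embeddings["toast"]))  # low
```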

So a small LLM can make sentences that follow writing conventions but are nonsense. A larger LLM can write intelligibly about topics that are frequently included in the training materials. Huge LLMs can do increasingly nuanced things like "explain" jokes.

LLMs are not capable of evaluating truth or facts. It's not part of the algorithm. And it doesn't matter how big they get. At best, with enough examples to build a stronger Relatedness dataset, they are more likely to "stay on topic" and return results that are actually similar to what is being asked.
