General_Effort,

This makes some very strong assumptions about what's going on inside the model. We don't know whether concepts are internally represented at all, or whether any such internal representations would make sense to humans.

Suppose a model sometimes seems to confuse the concept. There will be wrong examples in the training data. For all we know, it may have learned to do this whenever there was an odd number of words since the last punctuation mark.

To feed text into an LLM, it has to be encoded. The usual encodings (ASCII, Unicode code points) serve other purposes and aren't suitable. Instead, the text is broken down into tokens. A token can be a single character or an emoji, part of a word, or even more than one word. Each token is represented by a number, its ID, and those IDs are what the model takes as input and gives as output. Inside the model, each token ID is then mapped onto a vector of numbers called an embedding.
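
If it helps to see that first step concretely, here's a rough sketch using the tiktoken library. The library and the "cl100k_base" encoding are just one example I'm picking; other models ship their own tokenizers, so the exact splits and IDs will differ:

```python
# Rough sketch of tokenization with tiktoken (one example tokenizer,
# not *the* tokenizer; install with `pip install tiktoken`).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "LLMs don't see letters, they see tokens."
token_ids = enc.encode(text)      # text -> list of integer IDs
print(token_ids)                  # the model only ever sees these numbers

# Each ID maps back to a chunk of text: a whole word, part of a word,
# punctuation, or part of an emoji's byte sequence.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))

# Decoding the IDs reproduces the original text.
assert enc.decode(token_ids) == text
```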

The process of turning text into tokens and embeddings is quite involved. The tokenizer is built separately from the model (with schemes like byte-pair encoding), and the model contains a learned lookup table that turns each token ID into an embedding vector; those vectors end up relating to meaning. Because tokenizers are trained mostly on English text, English words are often a single token, while words from other languages are dissected into smaller parts.
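
And a toy sketch of the lookup that turns IDs into embedding vectors, assuming a PyTorch-style embedding layer. The sizes and the token IDs below are made up for illustration; real models have vocabularies of tens of thousands of tokens, much wider vectors, and table values learned during training rather than random ones:

```python
# Toy sketch of an embedding lookup (made-up sizes, random values).
import torch
import torch.nn as nn

vocab_size, embed_dim = 50_000, 8            # illustrative toy sizes
embedding = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([4178, 22365, 1464])  # made-up IDs, as if from a tokenizer
vectors = embedding(token_ids)               # each ID indexes a row of the table
print(vectors.shape)                         # torch.Size([3, 8])
```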

If an LLM "thinks" in tokens, then that's something it has learned. If it "knows" that a token has a language, then it has learned that.
