
kromem (edited)

It's really so much worse than this article even suggests.

For example, one of the things it doesn't really touch on is the unexpected result, emerging over the last year, that a trillion-parameter network can develop capabilities which are then passed on to a network less than a hundredth its size simply by generating synthetic data from the larger model and training the smaller one on it. (I doubt even a double-digit percentage of researchers would have expected that result before it showed up.)
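The teacher-to-student transfer described above can be sketched in a few lines. The `Teacher` and `Student` classes here are toy stand-ins of my own, not any real API; the point is just the shape of the loop: the large model writes the data, the small model trains on it.

```python
class Teacher:
    """Stand-in for a large model; here it just returns a canned completion."""
    def generate(self, prompt):
        return f"answer to: {prompt}"

class Student:
    """Stand-in for a much smaller model; it simply records its training pairs."""
    def __init__(self):
        self.training_pairs = []
    def fine_tune(self, pairs):
        self.training_pairs.extend(pairs)

def distill(teacher, student, prompts):
    # 1. The teacher generates synthetic completions for a pool of prompts.
    synthetic = [(p, teacher.generate(p)) for p in prompts]
    # 2. The student is fine-tuned on those pairs, inheriting capabilities
    #    it never saw demonstrated in its own original training data.
    student.fine_tune(synthetic)
    return student
```

In a real setup the prompt pool and the filtering of the teacher's outputs matter enormously; this only shows the data flow.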

Even weirder was a result showing that using chain-of-thought (CoT) prompting to improve a model's answers, then training a new model on just the questions and final answers, without the intermediate 'chain' from the CoT, still teaches the second network the content of that chain.
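Concretely, the training pairs for the second model are built by throwing the reasoning away. A minimal sketch, assuming the chain ends with a line like `Answer: ...` (that marker is my assumption, not something any particular paper mandates):

```python
def strip_chain(question, cot_output, answer_marker="Answer:"):
    """Keep only the final answer from a chain-of-thought completion.

    Everything before the marker (the intermediate reasoning steps)
    is discarded, so the distilled training pair contains no chain.
    """
    # Take the text after the LAST occurrence of the marker, in case
    # the chain itself happens to mention the word "Answer:".
    final = cot_output.rsplit(answer_marker, 1)[-1].strip()
    return (question, final)
```

The surprising part of the result is that models trained only on these `(question, final)` pairs still pick up behavior from the discarded reasoning.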

The degree to which very subtle details in the training data are ending up modeled seems to go beyond even some of the wilder expectations by researchers right now. Just this past week I saw a subtle psychological phenomenon I used to present about appearing very clearly, and very by the book, in GPT-4 outputs given the correct social context. I didn't expect that to be the case for at least another generation or two of models, and hadn't expected the current SotA models to replicate it at all.

For the first time, two weeks ago, I saw an LLM code-switch to a different language when there was a more fitting translation for the concept being discussed. There's no way the statistically most likely continuation of a discussion of motivations in English was to drop into a language barely represented in English-speaking countries. This was with the new Gemini, which also seems to have internalized a bias towards symbolic representations in its generation, to the point that they appear to be filtering out emojis (in the past I've found examples where switching from nouns to emojis improves the critical reasoning abilities of models, as it breaks token similarity patterns in favor of more abstracted capabilities).
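The noun-to-emoji substitution I mention is just a prompt transform. A minimal sketch, where the mapping table is purely illustrative (my own picks, not any standard vocabulary):

```python
# Illustrative noun → emoji mapping; the choice of entries is an assumption.
EMOJI_MAP = {"dog": "🐕", "cat": "🐈", "fish": "🐟"}

def emojify(prompt, mapping=EMOJI_MAP):
    """Replace mapped nouns with emojis, leaving other words untouched.

    The idea is to break surface-level token similarity between the
    prompt and memorized text while preserving the abstract concepts.
    """
    return " ".join(mapping.get(word, word) for word in prompt.split())
```

Whether this helps for a given task is an empirical question; in my experience it did for some reasoning prompts.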

Adding the transformer's self-attention to diffusion models has suddenly resulted in Sora's video generation correctly simulating things like fluid dynamics and physics.

We're only just starting to unravel some of the nuances of self-attention, such as recognizing the attention sinks in the first tokens and the importance of preserving them across larger sliding context windows.
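The attention-sink observation leads to a very simple cache-eviction policy (this is the StreamingLLM idea): always keep the first few tokens, then a sliding window of the most recent ones. A toy sketch over token IDs, with the sink count and window size as free parameters:

```python
def sink_window(token_ids, num_sinks=4, window=8):
    """StreamingLLM-style eviction: preserve the first `num_sinks` tokens
    (the attention sinks) plus a sliding window of the most recent tokens,
    dropping everything in between once the sequence outgrows the budget."""
    token_ids = list(token_ids)
    if len(token_ids) <= num_sinks + window:
        return token_ids
    return token_ids[:num_sinks] + token_ids[-window:]
```

The counterintuitive part is that evicting those first tokens, which carry almost no semantic content, is what degrades generation; keeping them stabilizes attention over arbitrarily long streams.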

For the last year at least, especially after GPT-4 leapfrogged expectations, it's very much been feeling, as the article states, like this field is eerily similar to early 20th-century physics, when experimental results were regularly turning half a century of accepted theory on its head and fringe theories that had generally been dismissed were suddenly being validated by multiple replicated results.
