
General_Effort,

Yes, it's BS, like most of the AI takes here.

The kernel of truth is scaling laws:

> [T]he Chinchilla scaling law for training Transformer language models suggests that when given an increased budget (in FLOPs), to achieve compute-optimal, the number of model parameters (N) and the number of tokens for training the model (D) should scale in approximately equal proportions.
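
To make "equal proportions" concrete, here's a rough back-of-the-envelope sketch. It uses the common approximation C ≈ 6·N·D for training FLOPs and the Chinchilla paper's ~20-tokens-per-parameter rule of thumb; the exact coefficients are assumptions, the point is the scaling behavior:

```python
import math

def chinchilla_optimal(flops_budget: float, tokens_per_param: float = 20.0):
    """Estimate a compute-optimal (params, tokens) pair for a FLOP budget.

    Uses C ~= 6 * N * D and the rule of thumb D ~= 20 * N.
    Solving C = 6 * N * (20 * N) gives N = sqrt(C / 120), so both
    N and D grow as sqrt(C): "approximately equal proportions".
    """
    n_params = math.sqrt(flops_budget / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# 10x the compute buys ~3.16x the params and ~3.16x the tokens,
# not 10x of either one.
for c in (1e21, 1e22, 1e23):
    n, d = chinchilla_optimal(c)
    print(f"C={c:.0e} FLOPs -> N~{n:.2e} params, D~{d:.2e} tokens")
```

As a sanity check, plugging in Chinchilla's reported budget (~5.8e23 FLOPs) gives roughly 7e10 params and 1.4e12 tokens, matching the paper's 70B-parameter / 1.4T-token configuration.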
