BaroqueInMind,

Ollama is so fucking slow. Even with a 16-core overclocked Intel CPU, 64 GB of RAM, and an Nvidia RTX 3080 with 10 GB of VRAM, running a 22B-parameter model, generating a simple haiku takes 20 minutes.
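
For anyone wanting to put a number on "slow", here is a minimal sketch that times a short generation through Ollama's local REST API and reports tokens per second. It assumes the default endpoint at localhost:11434 and uses a hypothetical 22B model tag ("codestral:22b"); swap in whatever model you actually have pulled.

```python
# Rough sketch: measure generation speed against a local Ollama server.
# Assumes the default endpoint http://localhost:11434 and that the model
# tag below (hypothetical) has already been pulled with `ollama pull`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama API endpoint

payload = {
    "model": "codestral:22b",               # hypothetical 22B model tag
    "prompt": "Write a haiku about GPUs.",
    "stream": False,                        # single JSON reply with timing stats
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The API reports eval_count (generated tokens) and eval_duration (nanoseconds);
# their ratio is the effective generation speed.
tokens = result["eval_count"]
seconds = result["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.2f} tok/s")
print(result["response"])
```

A single-digit tok/s figure on this kind of hardware usually means most of the model is running on the CPU rather than the GPU; `ollama ps` shows the CPU/GPU split for a loaded model.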
