
barsoap,

I don't think any inference engines have actually been optimised to run on CPUs. You're stuck with 32-bit floats, but OTOH that just means you can do gigantic Winograd transformations with the excess precision, needing far fewer fmuladds in total (see the sketch below), and CPUs are better at dealing with the memory access patterns that come with transforming the convolution. Most people have at least around 1 TFLOP of compute in their CPU (e.g. a Ryzen 3600 has that much) that's never seeing the light of day. That's about a fifth of what an RX 570 has: a difference, but not an order of magnitude, and you can run SDXL on that class of card (maybe not the 570 itself, I don't know about software support, but a 5500 works, despite AMD's best efforts to cripple ROCm).
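To make the Winograd point concrete, here's a minimal sketch (my own illustration, not anything from an actual inference engine) of the classic F(2,3) 1D tile: two outputs of a 3-tap convolution computed with 4 multiplications instead of the naive 6. The 2D F(2×2, 3×3) version used in real engines takes 36 multiplies down to 16.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap convolution
    using 4 multiplies instead of the naive 6.
    d: 4 input samples, g: 3 filter taps."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform: reusable across the whole input,
    # so its cost amortises to nothing.
    G0 = g0
    G1 = (g0 + g1 + g2) / 2
    G2 = (g0 - g1 + g2) / 2
    G3 = g2
    # The only 4 multiplications.
    m1 = (d0 - d2) * G0
    m2 = (d1 + d2) * G1
    m3 = (d2 - d1) * G2
    m4 = (d1 - d3) * G3
    # Inverse transform: nothing but adds.
    return (m1 + m2 + m3, m2 - m3 - m4)

def naive(d, g):
    """Direct convolution for comparison: 6 multiplies."""
    return (d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
            d[1]*g[0] + d[2]*g[1] + d[3]*g[2])

d, g = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 0.25]
assert winograd_f23(d, g) == naive(d, g)
```

The catch is that the transforms amplify rounding error, and bigger tiles amplify it more, which is exactly where fp32's excess precision on CPUs buys you larger tiles (and thus fewer multiplies) than fp16 would tolerate. As for the 1 TFLOP figure, rough back-of-envelope: a 6-core Zen 2 doing two 256-bit FMAs per core per cycle is 6 × 32 FLOPs/cycle × ~4 GHz ≈ 0.77 TFLOPS fp32, so "around 1 TFLOP" is the right ballpark.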

Also, from what I gather, they're more or less doing a summary bot for your browsing history; that's not a ChatGPT- or Llama-style giant model you can talk with.

Also, to all those people complaining: there's already AI in Firefox; the translation models are about 17 MB per language pair, gzipped.
