moonpiedumplings,

The tl;dr as I understand it is that Apple's M1/M2 devices are unique in that the GPU shares the same physical memory as the CPU (unified memory), so there is no separate VRAM pool. That sharing lets LLMs run on the GPU of those chips with most of the system RAM available as "VRAM", allowing you to run bigger models on smaller devices.

Llama.cpp was the software users originally did this with. I can't find the original guide/article I looked at, but here is a GitHub gist where the commenters have posted benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0
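If you want to try it yourself, here's a minimal sketch using the llama-cpp-python bindings rather than the raw llama.cpp CLI the gist uses (my assumption; the model path and settings are placeholders, any GGUF model file should work):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# On Apple Silicon the library builds with Metal support, so layers
# offloaded to the "GPU" actually live in the same unified RAM.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder GGUF file
    n_gpu_layers=-1,  # -1 = offload all layers to the (unified-memory) GPU
    n_ctx=2048,       # context window; larger contexts eat more shared RAM
)

out = llm("Q: Why can an M1 Mac run large models? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

The key knob is `n_gpu_layers`: on a discrete GPU you'd be limited by its VRAM, but on M1/M2 the ceiling is effectively your system RAM.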
