d416, 1 month ago
The easiest way to run local LLMs on older hardware is Llamafile: https://github.com/Mozilla-Ocho/llamafile
For non-Nvidia GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama
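To sketch the Llamafile workflow: a llamafile is a single self-contained executable bundling llama.cpp and model weights, so running it is just download, mark executable, and launch. The model filename below is a placeholder, not a specific release; check the repo's README for current downloads.

```shell
# Download a llamafile from the project's releases (filename is illustrative)
# then make it executable and run it.
chmod +x model.llamafile

# Launch the built-in web UI / OpenAI-compatible server on localhost:8080
./model.llamafile --server --port 8080

# On some Linux setups you may need to run it via sh if binfmt
# rejects the polyglot binary format:
sh -c ./model.llamafile
```

One binary works across Linux, macOS, Windows, and the BSDs, which is what makes it attractive for older or unusual hardware.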