
trevron,

If you just want to run a local LLM, something like GPT4All is probably the easiest. Oobabooga or llama.cpp if you want a more advanced route.

I use Ollama with Llama 3 on my MacBook with Open WebUI and it works really nicely. Mistral 7B is another one I like. On my PC I've been using Oobabooga with models I get from Hugging Face, and I use it as an API for hobby projects.
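For anyone curious what "using it as an API" can look like, here's a rough sketch of hitting Ollama's local REST endpoint from Python. It assumes Ollama is running on its default port 11434 and that you've already pulled the llama3 model; the function name and prompt are just made up for the example.

```python
# Minimal sketch: querying a locally running model through Ollama's REST API.
# Assumes Ollama is serving on the default port 11434 and `ollama pull llama3`
# has already been run.
import json
import urllib.request


def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one complete response instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_local_llm("Give me a name for a cozy incremental game."))
```

Oobabooga's text-generation-webui can expose a similar local HTTP API, so the same pattern works there with a different URL and payload shape.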

I have never trained models; I don't have the VRAM. My GPU is pretty old, so I just use these for random gamedev and webdev projects and for messing around with RP in SillyTavern.
