
OpticalMoose,

Probably better to ask on !localllama. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.

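To make that concrete, here is a minimal sketch of how Ollama plus naive RAG hang together. It assumes a local Ollama server on the default port (11434) and that you have already pulled an embedding model and a chat model; `nomic-embed-text` and `llama3` are just placeholders, swap in whatever you actually use. The chunking and retrieval are deliberately simplistic.

```python
import requests
import numpy as np

OLLAMA = "http://localhost:11434"          # default local Ollama endpoint
EMBED_MODEL = "nomic-embed-text"           # assumption: any embedding model you have pulled
CHAT_MODEL = "llama3"                      # assumption: any chat model you have pulled

def embed(text: str) -> np.ndarray:
    # Ollama's embeddings endpoint returns one vector for the given prompt.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": EMBED_MODEL, "prompt": text})
    return np.array(r.json()["embedding"])

def retrieve(query: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank your document chunks by cosine similarity to the query embedding.
    q = embed(query)
    scored = []
    for chunk in chunks:
        v = embed(chunk)
        score = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, chunk))
    scored.sort(reverse=True)
    return [c for _, c in scored[:k]]

def answer(query: str, chunks: list[str]) -> str:
    # Stuff the retrieved chunks into the prompt so the model answers from your data.
    context = "\n\n".join(retrieve(query, chunks))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    r = requests.post(f"{OLLAMA}/api/generate",
                      json={"model": CHAT_MODEL, "prompt": prompt, "stream": False})
    return r.json()["response"]

if __name__ == "__main__":
    docs = ["Your dataset, split into paragraph-sized chunks.",
            "Each chunk gets embedded once and compared against the question."]
    print(answer("What is each chunk compared against?", docs))
```

In a real setup you'd embed the chunks once and keep them in a vector store instead of re-embedding on every query, but the flow is the same: embed, retrieve, stuff into the prompt.
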
The only issue is that you asked for a smart model, which usually means a larger one, and the RAG portion consumes memory on top of that, which may be more than a typical laptop can handle. Smaller models also have a higher tendency to hallucinate, that is, to produce incorrect answers.

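For a rough sense of scale, a 4-bit quantized model needs about half a byte per parameter, so a 7B model lands in the 3.5-4 GB range before you count the context window and the RAG index. These are back-of-the-envelope figures, not benchmarks:

```python
def model_ram_gb(params_billion: float, bytes_per_param: float = 0.5) -> float:
    # ~0.5 bytes/param for 4-bit quantization; use 2.0 for fp16 weights.
    return params_billion * bytes_per_param

for size in (3, 7, 13):
    print(f"{size}B model, 4-bit: ~{model_ram_gb(size):.1f} GB plus cache/overhead")
```
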
Short answer - yes, you can do it. It's just a matter of how much RAM you have available and how long you're willing to wait for an answer.
