
h3ndrik , (edited )

It depends on the exact specs of your old laptop, especially the amount of RAM and the VRAM on the graphics card. It's probably not enough to run any reasonably capable LLM, aside from maybe one of Microsoft's small "Phi" models.

So unless it's a gaming machine with 6GB+ of VRAM, the graphics card probably won't help at all, and without it, inference is going to be slow. For that kind of computer, I recommend projects that are based on llama.cpp or use it as a backend; it's the best/fastest way to do inference on slow machines and plain CPUs.
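If you want to see what that looks like in practice, here's a minimal sketch using the llama-cpp-python bindings to run a small quantized model entirely on the CPU. The model filename is just a placeholder; you'd download a small quantized GGUF file (e.g. a Phi variant) yourself, and the thread/context settings are assumptions you'd tune for your hardware:

```python
from llama_cpp import Llama

# Load a small quantized GGUF model from disk (placeholder filename).
# n_ctx is the context window; n_threads is how many CPU cores to use.
llm = Llama(
    model_path="phi-3-mini-4k-instruct-q4.gguf",
    n_ctx=2048,
    n_threads=4,
)

# Run a simple completion on the CPU and print the generated text.
output = llm("Q: Why is the sky blue? A:", max_tokens=128, stop=["Q:"])
print(output["choices"][0]["text"])
```

On an old CPU-only laptop this will still be slow (think a few tokens per second at best with a small quantized model), but it will run.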

Alternatively, you could use online services or rent a cloud machine with a beefy graphics card by the hour (or minute).
