
passepartout

@passepartout@feddit.de


passepartout ,

Have a look at self-hosted alternatives like Ollama in combination with Open WebUI. It can be a hassle to set up, or even excruciatingly painful if you've never touched a computer before, but it could be worth a try. I use it daily and honestly like it much more than ChatGPT.

passepartout ,

I have my gaming PC running as an Ollama host when I need it (RX 6700XT with ROCm doing the heavy lifting). The PC idles at ~50W and draws up to 200W when generating an answer. It is plenty fast though.

My mini PC home server runs Open WebUI with access to this Ollama instance, but also to OpenAI's API for when I just need a quick answer and therefore don't turn on my PC.

passepartout ,

If you're lucky you just set it to the wrong version; mine uses 10.3.0 (see below).

I tried running the Docker container first as well, but gave up since there are separate versions for CUDA and ROCm, which come packaged with it and therefore make the image unnecessarily big.

I am running it on Fedora natively. I installed it with the setup script from the top of the docs:

curl -fsSL https://ollama.com/install.sh | sh

After that I created a service file (also described in the linked docs) so that it starts at boot (so I can just boot my PC and forget it without needing to log in).

The crucial part for the GPU in question (RX 6700XT) was this line under the [Service] section:

Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"

As you stated, this sets the environment variable for ROCm.
Also, to be able to reach it from outside of localhost (for my server):

Environment="OLLAMA_HOST=0.0.0.0"
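Putting the pieces together, a minimal unit file might look roughly like this (a sketch based on the install-script defaults; the exact paths and user in your setup may differ, so check the file the installer created):

```ini
# /etc/systemd/system/ollama.service -- sketch, adapt to your installation
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Restart=always
# RX 6700XT (gfx1031) is not officially supported by ROCm, so override:
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
# Listen on all interfaces so other machines can reach it:
Environment="OLLAMA_HOST=0.0.0.0"

[Install]
WantedBy=multi-user.target
```

After editing, `sudo systemctl daemon-reload` and `sudo systemctl enable --now ollama` make it start at boot.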

passepartout ,

You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to do bullet lists and sometimes tell me where they got their information from). There are models specifically trained / fine-tuned for different tasks (mostly coding, but also writing stories, answering medical questions, telling me what is in a picture, speaking different languages, running on smaller / bigger hardware, etc.). Have a look at Ollama's library of models, which is outright tiny compared to e.g. Hugging Face.

Also, I don't trust OpenAI and others to be confidential with the company data or code snippets from work that I feed them.

passepartout ,

Glad i could help ;)

passepartout ,

also probably for the planet and a lot of animals.

passepartout ,

It is great that you had the curiosity to bust the soy myth.

There is no moral consumption of animal products. Many people a lot smarter than both of us (or at least more dedicated / better funded / doing it as their job) have done the research and come to this conclusion as well.

Most people who oppose this fact feel attacked at first, because it can't coexist with their own behaviour. It is the same as with every debate where emotion gets brought up as reasoning (e.g. refugees, climate change, homeopathy, etc.).

passepartout ,

Is that Florida in your atlantic ocean or are you just happy to see me?

passepartout OP ,

I hope it went well :) I was completely ready to go back and change the image tag to v2, but didn't need to.

passepartout ,

If you're ready to tinker a bit, I can recommend Ollama for the backend and Open WebUI for the frontend. They can both run on the same machine.

The advantage is that you can use your GPU for the computation, which is a lot faster.
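For reference, the two-on-one-machine setup can be sketched as a compose file (image tags, port mapping and volume name here are my assumptions; GPU passthrough is left out because it differs between NVIDIA and AMD):

```yaml
# compose.yaml -- sketch: Ollama backend + Open WebUI frontend on one host
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama   # keep downloaded models across restarts

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"            # web UI reachable on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```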

passepartout ,

I think the whole "men don't know where the clitoris is" in reality means either

  • they don't know what to do with it
  • they don't care
passepartout ,

Yes, because

  • developers can't be bothered to optimize the game if management breathes down their necks
  • DRM like Denuvo tanks performance, ironically only for the people who aren't pirating and are thus paying for the game
passepartout ,

This is what current implementations like ReVanced do. The endgame will be full-blown DRM. Until then, it will be a cat-and-mouse game.

passepartout ,

Vanced died because they tried to generate revenue from it and made themselves vulnerable.

Also, unlike Vanced, ReVanced doesn't distribute modded YouTube APKs itself.

passepartout ,

Kind of funny if you read it like that, and while it certainly doesn't make them immortal, it may at least make them last a while longer, I hope.

passepartout ,

Coming to Germany as well, if you believe our minister of transport. But he only says this to scare the public.

passepartout ,

Where does the ...media get sourced from? Looks like Pornhub GIFs. You could think of an integration for some NSFW subreddits as well (if you can get past the API barrier they built up, like redlib does).

passepartout ,

I had a friend come over to my place to fix her laptop's wifi. After about an hour of searching for any setting in Windows that I could have missed, I coincidentally found a forum post where someone pointed out that this could be due to a hardware wifi switch...

passepartout ,

Bricked my PC twice because of the bootloader and couldn't repair it. From now on I just nuke my system if something is fucky and have a shell script do the installing of packages etc.

passepartout ,

I'm so looking forward to this. When I tried to use tmpfs / a ramdisk, the transcoding would simply stop because there was no space left.

passepartout ,

I tried Hugging Face TGI yesterday, but all of the reasonable models need at least 16 gigs of VRAM. The only model I got working (on a desktop machine with an AMD 6700XT GPU) was Microsoft's phi-2.

passepartout ,

Hugging Face TGI is just a piece of software for serving the models, like gpt4all. Here is a list of models officially supported by TGI, although they state that you can try other ones as well. You follow the link and look at the files section. The size of the model files (safetensors or pickle binaries) gives a good estimate of how much VRAM you will need. Sadly this is more than most consumer graphics cards have, except for santacoder and Microsoft phi.
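As a rough rule of thumb (my own back-of-the-envelope numbers, not anything from TGI's docs): the weights alone take roughly parameter count times bytes per weight, which matches the file-size trick above. For example:

```shell
# Back-of-the-envelope VRAM estimate for the weights alone (sketch)
params_b=7          # model size in billions of parameters, e.g. a 7B model
bytes_per_weight=2  # fp16 / bf16; 4-bit quantization would be roughly 0.5
echo "~$((params_b * bytes_per_weight)) GB for weights, plus KV cache and runtime overhead"
```

So a 7B model in fp16 already wants about 14 GB before any overhead, which is why most consumer cards fall short.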

passepartout ,

Yes, since we have similar GPUs, you could try the following to run it in a Docker container on Linux, taken from here and slightly modified:

#!/bin/bash

model=microsoft/phi-2
# share a volume with the Docker container to avoid downloading weights every run
volume=<path-to-your-data-directory>/data

docker run -e HSA_OVERRIDE_GFX_VERSION=10.3.0 -e PYTORCH_ROCM_ARCH="gfx1031" --device /dev/kfd --device /dev/dri --shm-size 1g -p 8080:80 -v "$volume":/data ghcr.io/huggingface/text-generation-inference:1.4-rocm --model-id "$model"

Note how the ROCm version has a different image tag, and that you need to mount your GPU devices into the container. The two environment variables are specific to my (and maybe also your) GPU architecture. It will take a while to download though.

alvaro , to Self Hosted - Self-hosting your services.

I have a disk for local backups (that is the disk's only purpose). I was wondering what would make it last longer:

  • Keep it mounted on my server permanently (current solution)
  • Keep it unmounted most of the time, mount it when I'm going to do a backup (either daily or every 3 days, I don't mind changing that) and unmount it after the backup is done.

What would be the best strategy?

cc @selfhost @selfhosted

passepartout ,

I recently bought a second external drive to do a backup of the first one. In the process I'm going to switch to btrfs. It can do data scrubbing, which detects corrupted data and, given a redundant copy (e.g. DUP metadata or RAID1), repairs it; such corruption can occur if you leave a drive unpowered in a closet for some years.
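For reference, a scrub is kicked off manually or via a systemd timer / cron job, roughly like this (the mount point is a placeholder for wherever the backup drive lives):

```shell
# Sketch: verify checksums on the backup volume, repairing where a good copy exists
sudo btrfs scrub start /mnt/backup      # runs in the background
sudo btrfs scrub status /mnt/backup     # check progress and error counts
```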

passepartout ,

Meanwhile a 4 TB SATA SSD is 300€ in Germany

passepartout , (edited )

It's true that you shouldn't open ports to the internet. If you still want your services to be accessible from outside the local network, you can install a WireGuard server on your thin client that has access to the services you want. And if you really want to harden it, you can block the WireGuard clients from SSH and other admin things.

You will need to open one port on the router for your WireGuard server though. But WireGuard is UDP and silently drops packets without an established session, so attackers will not even know there is an open port on your router.

Edit: tailscale and zerotier are good external solutions to this as well without needing to open a port at all.
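A minimal server-side config might look like this (a sketch; the addresses and port are arbitrary choices, and the key placeholders need to be filled in with real keys from `wg genkey` / `wg pubkey`):

```ini
# /etc/wireguard/wg0.conf -- sketch of the server side
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820          # the one UDP port to forward on the router
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32    # restrict this peer to a single tunnel address
```

Bring it up with `wg-quick up wg0`; the AllowedIPs line is also where you would start locking clients down to specific services.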

passepartout ,

Shoutout to this guy for maintaining my mainboard's temperature sensors and PWM fan headers: https://github.com/Fred78290/nct6687d

Without this and https://github.com/codifryed/coolercontrol my PC was either a jet engine, noise-wise, or a nuclear reactor, heat-wise.

passepartout ,

Wow, that's just plain stupid. I hope someone forked it.

passepartout ,

They are developing it, but it's slow because there are not as many people contributing as they would need, I think.

Anyways, if you want a more recent version, they are preparing an App store launch. One of the developers publishes more recent builds on his fork, see this comment.

https://github.com/Goooler/LawnchairRelease/releases
