
Audalin

@Audalin@lemmy.world

This profile is from a federated server and may be incomplete. Browse more on the original instance.

Audalin , to Selfhosted in Cloudflare is bad. Youre right.

It would. But it's a good option when you have computationally heavy tasks and communication is relatively light.

Audalin , to Selfhosted in Cloudflare is bad. Youre right.

Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don't have to trust any specific third party in this case.
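A minimal hidden-service configuration is just two lines in torrc; the paths below are the Debian-style defaults and the local port is a placeholder for whatever service you're exposing:

```
# /etc/tor/torrc — forward the .onion address's port 80
# to a web server listening locally on 8080
HiddenServiceDir /var/lib/tor/my_service/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated .onion hostname appears in `HiddenServiceDir/hostname`.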

Audalin , to Technology in Researchers claim GPT-4 passed the Turing test

If the config prompt is used as the system prompt, hijacking it works more often than not. The creators of a prompt injection game (https://tensortrust.ai/) have discovered that system/user roles don't matter too much in determining the final behaviour: see appendix H in https://arxiv.org/abs/2311.01011.
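The comparison amounts to placing the same instruction in the "system" role versus prepending it to the "user" message. A sketch of the two framings, using the common chat-completion message convention (no real API call; the function name is illustrative):

```python
def build_messages(instructions, user_input, as_system=True):
    """Frame the same defence instructions either as a system message
    or as a prefix inside the user message."""
    if as_system:
        return [{"role": "system", "content": instructions},
                {"role": "user", "content": user_input}]
    return [{"role": "user", "content": instructions + "\n\n" + user_input}]
```

The paper's finding is that injection success rates look similar under both framings.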

Audalin , to Comic Strips in Heathcliff without Heathcliff 6/10/2024
Audalin , to Technology in Chrome: 72 hours to update or delete your browser.

xkcd.com is best viewed with Netscape Navigator 4.0 or below on a Pentium 3±1 emulated in Javascript on an Apple IIGS at a screen resolution of 1024x1. Please enable your ad blockers, disable high-heat drying, and remove your device from Airplane Mode and set it to Boat Mode. For security reasons, please leave caps lock on while browsing.

Audalin , to Technology in Chrome: 72 hours to update or delete your browser.

CVEs are constantly found in complex software; that's why security updates are important. If not these, it would have been others a couple of weeks or months later. And government users can't exactly opt out of security updates, even if they come with feature regressions.

You also shouldn't keep using software with known vulnerabilities. You can find a maintained fork of Chromium with continued Manifest V2 support or choose another browser like Firefox.

Audalin , to Technology in ‘Let yourself be monitored’: EU governments to agree on Chat Control with user “consent” [updated]

Very cool and impressive, but I'd rather be able to share arbitrary files.

And it looks like you can only send images in DMs, not in groups/forums.

Audalin , to Selfhosted in Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)

If your CPU isn't ancient, it's mostly about memory speed.
VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
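A back-of-envelope way to see why: generating one token streams essentially every weight through memory once, so memory bandwidth divided by model size gives a rough throughput ceiling. The bandwidth figures in the comments are illustrative ballpark numbers, not measurements:

```python
def est_tokens_per_sec(model_gb, bandwidth_gb_per_s):
    """Crude upper bound on generation speed: each token reads
    roughly all weights once, so speed ~ bandwidth / model size."""
    return bandwidth_gb_per_s / model_gb

# For a ~5.5 GB quantised 8B model (ballpark bandwidths):
#   GPU VRAM  ~450 GB/s -> ~80 tok/s ceiling
#   DDR5      ~60 GB/s  -> ~11 tok/s
#   NVMe swap ~5 GB/s   -> ~1 tok/s
```

Real speeds come in below the ceiling, but the ordering matches what you see in practice.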

8x7B is mixtral, yeah.

Audalin , (edited ) to Selfhosted in Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)

Mostly via terminal, yeah. It's convenient when you're used to it - I am.

Let's see, my inference speed now is:

  • ~60-65 tok/s for a 8B model in Q_5_K/Q6_K (entirely in VRAM);
  • ~36 tok/s for a 14B model in Q6_K (entirely in VRAM);
  • ~4.5 tok/s for a 35B model in Q5_K_M (16/41 layers in VRAM);
  • ~12.5 tok/s for a 8x7B model in Q4_K_M (18/33 layers in VRAM);
  • ~4.5 tok/s for a 70B model in Q2_K (44/81 layers in VRAM);
  • ~2.5 tok/s for a 70B model in Q3_K_L (28/81 layers in VRAM).

As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don't see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
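The VRAM splits above follow from file size: parameters times bits-per-weight, divided by 8. The bits-per-weight figures below are approximate community numbers for llama.cpp k-quants, not exact format specs:

```python
# Approximate effective bits per weight for common llama.cpp quants
BPW = {"Q2_K": 2.6, "Q4_K_M": 4.8, "Q5_K_M": 5.5, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0}

def gguf_size_gb(params_billion, quant):
    """Rough model file size in GB: 1e9 * params * bpw bits / 8 / 1e9 bytes."""
    return params_billion * BPW[quant] / 8

# A 70B model is ~23 GB at Q2_K but ~48 GB at Q5_K_M -
# hence only a fraction of its layers fit in 16 GB of VRAM.
```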

Audalin , to Selfhosted in Any of you have a self-hosted AI "hub"? (e.g. for LLM, stable-diffusion, ...)

Have been using llama.cpp, whisper.cpp, and Stable Diffusion for a long while (most often the first one). My "hub" is a collection of bash scripts and an SSH server.
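A hypothetical wrapper of the kind such a script collection might contain - the model path, layer count, and defaults are placeholders, and the flags (`-m`, `-ngl`, `-c`, `-p`) are llama.cpp's `llama-cli` options:

```shell
run_llm() {
  # Placeholders: override MODEL/NGL/CTX via environment variables.
  local model="${MODEL:-$HOME/models/llama3-8b.Q5_K_M.gguf}"
  local cmd=(llama-cli -m "$model" -ngl "${NGL:-99}" -c "${CTX:-4096}" -p "$1")
  if [ -n "${DRY_RUN:-}" ] || ! command -v llama-cli >/dev/null 2>&1; then
    # Dry-run when requested or when the binary is absent.
    printf 'would run: %s\n' "${cmd[*]}"
  else
    "${cmd[@]}"
  fi
}
```

Called over SSH, e.g. `ssh host 'run_llm "translate this sentence"'`, it keeps all data on the local machine.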

I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.

I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don't rent hardware - don't want any data to leave my machine.

My use isn't intensive enough to warrant measuring energy costs.

Audalin , to Technology in Why mathematics is set to be revolutionized by AI

The article isn't about automatic proofs, but it'd be interesting to see an LLM that can write formal proofs in Coq/Lean/whatever and call external computer algebra systems like SageMath or Mathematica.
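For a sense of the target format, this is the kind of machine-checkable statement such a system would need to emit (a toy Lean 4 example, closing the goal with a lemma from the core library):

```lean
-- Commutativity of natural-number addition, proved by a library lemma.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The proof checker accepts or rejects the term outright, which is what makes formal output so attractive for LLMs: hallucinated proofs simply fail to compile.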

Audalin , to Technology in Motherboard makers apparently to blame for high-end Intel Core i9 CPU failures | Ars Technica

I see, thanks. Will check. I just thought perhaps you figured out something other than those from your experience.

Audalin , to Technology in Motherboard makers apparently to blame for high-end Intel Core i9 CPU failures | Ars Technica

Any guidance on choosing appropriate conservative settings for an i7-13700K? I may be hit with the same as you in the future (sometimes I have to do heavy multithreaded combinatorial computations which run for several days at 100°C, using all cores). The motherboard has options for customising pretty much everything there is, but I didn't touch anything overclocking-related, so I have Asus defaults.

Audalin , to Technology in Reddit embracing all out enshittification

I'm still waiting for the day when actual ads across the internet drown in AI-generated advertisements pointing to no real product or service. Perhaps that'll make the attention industry collapse?

If you're looking for a side project idea, here's one.
