
xcjs

@xcjs@programming.dev


xcjs ,

Google was working on a feature that would do just that, but I can't recall the name of it.

They backed down for now due to public outcry, but I expect they're just biding their time.

xcjs ,

Thank you! I was struggling to remember the proposal name.

xcjs ,

Not with this announcement, but it was.

xcjs ,

It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.
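A rough sizing rule of thumb (my own back-of-the-envelope arithmetic, not from any particular runtime) helps explain why these small models fit modest hardware: a quantized model's weights need roughly params × bits-per-weight ÷ 8 of memory.

```python
# Back-of-the-envelope memory estimate for a quantized model's weights.
# Illustrative only: real runtimes add KV-cache and runtime overhead.
def approx_model_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB (1B params at 8 bits ~= 1 GB)."""
    return params_billions * bits_per_weight / 8

print(approx_model_gb(7, 4))   # ~3.5 GB: a 4-bit 7B model fits most machines
print(approx_model_gb(70, 4))  # ~35 GB: why 70B models crawl on CPUs/iGPUs
```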

xcjs ,

No offense intended, but are you sure it's using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

On my RTX 3060, I generally get responses in seconds.

xcjs ,

Unfortunately, I don't expect it to remain free forever.

xcjs ,

Ok, so using my "older" 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

I'm fairly certain that you're using your CPU or having another issue. Would you like to try and debug your configuration together?

xcjs ,

It should be split between VRAM and regular RAM, at least if it's a GGUF model. Maybe it's not, and that's what's wrong?
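For illustration, the split can be reasoned about with simple arithmetic (a hypothetical sketch, not any loader's actual allocator): divide the model's size by its layer count, see how many layers fit in the card's memory, and the remaining layers stay in system RAM.

```python
# Hypothetical layer-offload estimate for a GGUF-style VRAM/RAM split.
# Numbers are illustrative; real loaders also reserve VRAM for context.
def layers_on_gpu(vram_gb: float, n_layers: int, model_gb: float,
                  reserve_gb: float = 1.0) -> int:
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - reserve_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. an ~40 GB 4-bit 70B model with 80 layers on a 12 GB card:
print(layers_on_gpu(vram_gb=12, n_layers=80, model_gb=40))  # → 22
```

Everything past those 22 layers runs on the CPU, which is where the minutes-long response times come from.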

xcjs ,

Good luck! I'm definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)

xcjs , (edited)

I think there was a special process to get Nvidia working in WSL. Let me check... (I'm running natively on Linux, so my experience doing it with WSL is limited.)

https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I'm sure you've followed this already, but according to this, it looks like you don't want to install the Nvidia drivers, and only want to install the cuda-toolkit metapackage. I'd follow the instructions from that link closely.
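As a quick sanity check (my own sketch, not a step from the NVIDIA guide): when WSL GPU support is wired up, the Windows driver exposes nvidia-smi inside the guest, so finding it on PATH and having it exit cleanly is a reasonable first test.

```python
# Sanity check: is nvidia-smi reachable and working in this environment?
# Returns False on machines without the NVIDIA driver exposed.
import shutil
import subprocess

def gpu_visible() -> bool:
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    return subprocess.run([smi], capture_output=True).returncode == 0

print(gpu_visible())
```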

You may also run into performance issues within WSL due to the virtual machine overhead.

xcjs ,

We all mess up! I hope that helps - let me know if you see improvements!

maegul , to Fediverse

Nice demonstration of why mastodon's dominance is problematic

See the conversations here:
https://github.com/LemmyNet/lemmy/pull/4628
and
https://socialhub.activitypub.rocks/t/federating-the-content-of-posts-note-articles-and-character-limits/4087

AFAICT, mastodon's decisions, which are arguably problematic (on which see: https://lemmy.ml/post/14973403), are literally trickling down to other platforms and infecting how they federate with each other as they dance around mastodon's quirks in different ways.

It seems like masto is ruining "the standard" with its gravity.


@fediverse

xcjs ,

It's a W3C-managed standard, but there's a ton of behavior not spelled out in the specification that platforms can choose to impose.

The standard doesn't mandate a 500-character limit, but there's also nothing that says there can't be one.

xcjs ,

Or maybe just let me focus on who I choose to follow? I'm not there for content discovery, though I know that's why most people are.

xcjs ,

I was reflecting on this myself the other day. For all my criticisms of Zuckerberg/Meta (which are very valid), they really didn't have to release anything concerning LLaMA. They're practically the only reason we have viable open source weights/models and an engine.

xcjs ,

That's the funny thing about UI/UX - sometimes changing non-functional colors can hurt things.

How to drop files from Android to home server?

I'm looking for an easy way to upload files from my Android smartphone to my home server. Is there a - ideally dockerized - solution for that? Some simple web GUI where I can click on "Upload" and the files will be saved to a certain directory on my home server?...

xcjs ,

My go-to solution for this is the Android FolderSync app with an SFTP connection.
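For the "dockerized web GUI" part of the original ask, a minimal sketch using the filebrowser/filebrowser image could look like the fragment below (the host path and port mapping are my assumptions; verify the defaults against the project's own docs):

```yaml
# docker-compose.yml sketch: a web UI with an "Upload" button that writes
# into a host directory. Paths and ports are illustrative assumptions.
services:
  filebrowser:
    image: filebrowser/filebrowser
    ports:
      - "8080:80"          # browse/upload at http://<server>:8080
    volumes:
      - /srv/uploads:/srv  # uploaded files land here on the host
```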

xcjs ,

With UI decisions like the shortcut bar, they really don't. I switched to another SMS app because I couldn't stand it.

xcjs ,

On Android, it moved SMS messages from the shared SMS store to Signal's own database upon receipt, which was slightly more secure.

xcjs ,

I...do not miss XP, but I understand the nostalgia attached to it.

I learned a lot of technical skills on XP, but that's what made me appreciate the architectural decisions behind UNIX-likes all the more.
