
model_tar_gz

@model_tar_gz@lemmy.world


model_tar_gz ,

Because starting with ‘X’ does not guarantee the ‘sh’ sound. See ‘xylophone’, ‘Xavier’, ‘Xenon’.

Xitter looks like ‘exiter’ to me.

model_tar_gz ,

Xean Connery hates this simple trick.

model_tar_gz ,

A true Debian user would never tell us that they use Debian. They would say they use ‘Debian Testing’. BTW.

model_tar_gz ,

X was already a thing in Linux before Elon had a dream.

Fucking go ahead and take it though, Elon. Wayland for the win.

model_tar_gz ,

His newest social media venture is focused on a niche of extreme exploration, named the Xtreme Network for Xtraordinary Xploration. You can find it at xnxx.com.

model_tar_gz ,

Add that to the 3000 browser tabs I have open, two instances of VS Code, the multithreaded Python app I’m running and developing, and the several-gigabytes-large dataset that’s active in memory.

Some days, even 64 GB isn’t enough.

model_tar_gz ,

Well it’s only slightly more than one-third of almost 9000!

model_tar_gz ,

Hail.

model_tar_gz ,

There is a “tool library” sort of service (for profit) operating in my area. The prices are absurd—people are charging like $20/day for a tool that would cost $100 new, or half that used on craigslist. My projects often span multiple days, especially if there’s an unforeseen delay—which there always is because I’m a good engineer but a shitty carpenter.

I don’t use the service. I’m all for communal ownership but it still has to make sense.

model_tar_gz ,

It’s a for-profit service that people use to rent out and rent in their tools. Not a true library, so to speak, but it seeks to accomplish the same thing. Except that people charging $20/day to rent out their battery-powered Ryobi drill is absurd.

model_tar_gz ,

It’s not a fair comparison then, is it? $80/hr is expensive, but not outrageously so, for a handyman; plus they have their own tools to purchase and maintain, and other business operating overhead (fuel, vehicle maintenance, etc.).

DIY—if you’re able—is always less expensive.

model_tar_gz ,

Agreed.

model_tar_gz ,

And now there is precedent for Rust components in the Linux kernel.

model_tar_gz ,

Looks like SoCal fire season. Welcome to the haze.

model_tar_gz ,

I really love this webcomic series. I identify strongly with the Techno-Mage character.

model_tar_gz ,

But that’s not what the article is getting at.

Here’s an honest take. Let me preface this with some credentials: I’m an AI Engineer with many years in the field. Right now I’m directly working on multiple projects that augment and automate code generation, documentation, completion, and even system design/understanding. We’re not there yet. But the pace at which we are improving our code-AI is astounding. Exponential growth in capability, accuracy, and utility.

As an anecdotal example: a few years ago I decided I would try to learn Rust (the programming language), because it seemed interesting and we had a practical use case for a performant, memory-efficient compiled language. It didn’t really work out for me, tbh. I just didn’t have the time to get fluent enough with it to be effective.

Now I’m on a project which also uses Rust. But with ChatGPT and some other models I’ve deployed (Mixtral is really good!) I was basically writing correct, effective Rust code within a week—accepted and merged to main.

I’m actively using AI code models to write code to train, fine-tune, and deploy AI code models. See where this is going? That’s exponential growth.

I honestly don’t know if I’d recommend programming as a career to my young kids now, even though it has been very lucrative for me and will take me to my retirement just fine. It excites me and scares me at the same time.

model_tar_gz ,

Yeah, I get it 100%. But that’s what I’m saying. I’m already working on and with models that have entire-codebase-level fine-tuning and understanding. The company I work at is not the first pioneer in this space. As for problem understanding and interpretation (all of what you said is true): there are causal models being developed to address that side of software engineering; I am aware of one team in my company doing exactly that.

So. I don’t think we are really disagreeing here. Yes, clearly AI models aren’t eliminating humans from software today; but I also really don’t think that day is all that far away. And there will always be a need for humans to build systems that serve humans; but the way we do it is going to change so fundamentally that “learn C, learn Rust, learn Python” will all be obsolete sentiments of a bygone era.

model_tar_gz ,

Also by design. Tech companies collude like this all the fucking time.

model_tar_gz ,

He threatened to walk away from Tesla if they didn’t give him twice his stake in the company (because he sold half his stake to buy Twitter).

model_tar_gz , (edited)

ONNX Runtime is actually decently well optimized to run on CPUs, even with large models. However, the simple truth is that there’s really no escaping that billion-plus-parameter models need to be quantized, and even pruned heavily, to fit in memory and not saturate the CPU cache, so that inferences/generations don’t take forever. That’s a reduction in accuracy, so the quality of the generations isn’t great.
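For a concrete picture, here’s a minimal sketch of post-training dynamic quantization with ONNX Runtime’s quantization tooling; the model paths are placeholders, but `quantize_dynamic` is the real entry point. Storing weights as INT8 cuts the footprint roughly 4x versus FP32, which is exactly the size/accuracy trade-off described above:

```python
# Minimal sketch: post-training dynamic quantization with ONNX Runtime.
# "model.onnx" / "model.int8.onnx" are placeholder paths.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",        # FP32 source model (placeholder)
    model_output="model.int8.onnx",  # quantized INT8 output (placeholder)
    weight_type=QuantType.QInt8,     # store weights as 8-bit integers
)
```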

There is a lot of really interesting research and development being done right now on smart quantization and pruning. Model serving technologies are improving rapidly too. Paged attention is a really cool technique (for transformer-based models) for effectively leveraging tensor-core hardware; I don’t think that’s supported on CPU yet, but it’s probably not that far off.
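To make the paged-attention idea concrete, here’s a toy, hypothetical sketch of just the bookkeeping (in the spirit of vLLM’s KV-cache manager, not its actual implementation): the KV cache is carved into fixed-size blocks, and each sequence keeps a block table mapping token positions to physical blocks, so memory is claimed on demand instead of being reserved up front for the maximum sequence length:

```python
# Toy, hypothetical sketch of paged-attention KV-cache bookkeeping.
# Real servers do this on the GPU with fused attention kernels.
class PagedKVCache:
    def __init__(self, num_blocks: int, block_size: int = 16):
        self.block_size = block_size
        self.free_blocks = list(range(num_blocks))    # physical block ids
        self.block_tables: dict[int, list[int]] = {}  # seq_id -> its blocks
        self.seq_len: dict[int, int] = {}             # seq_id -> token count

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve cache space for one new token; returns (block, offset)."""
        n = self.seq_len.get(seq_id, 0)
        table = self.block_tables.setdefault(seq_id, [])
        if n % self.block_size == 0:              # current block is full
            table.append(self.free_blocks.pop())  # allocate a block on demand
        self.seq_len[seq_id] = n + 1
        return table[-1], n % self.block_size

    def free_sequence(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the shared free pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.seq_len.pop(seq_id, None)
```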

It’s a really active field, and there’s just as much interest in running huge models on huge hardware as there is in running big models on small hardware. I recently heard of layerwise inference for CPUs: load each layer of the network into the CPU cache on demand. That’s typically a bottleneck operation on GPUs, but CPU memory is so bloody fast that it might actually work fine. I haven’t played with it myself, or read the paper all that deeply, so I can’t really comment beyond saying it’s an interesting idea.
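As a rough illustration of that layerwise idea (hypothetical: the per-layer file layout and the assumption that each layer was saved as its own PyTorch module are mine, not from any particular paper):

```python
# Hypothetical sketch of layerwise inference on CPU with PyTorch:
# only one layer's weights are resident at a time, loaded on demand.
# The "layer_{i}.pt" per-layer file layout is an assumption.
import torch

def layerwise_forward(hidden: torch.Tensor, num_layers: int, weights_dir: str) -> torch.Tensor:
    with torch.no_grad():
        for i in range(num_layers):
            # Stream this layer's weights from disk into RAM on demand.
            layer = torch.load(f"{weights_dir}/layer_{i}.pt",
                               map_location="cpu",
                               weights_only=False)  # layers saved as full modules (assumption)
            hidden = layer(hidden)  # run the layer...
            del layer               # ...then drop its weights before the next load
    return hidden
```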

model_tar_gz ,

I guess that explains why they’re always throwing up so much. Into their kids’ mouths, even. ODs here, there, everywhere, all the time.

model_tar_gz ,

Serve ads inside the ads. It’s more power efficient—kill two birds with one stone?

model_tar_gz ,

Stop giving so many fucks about what other people think about your fashion. You do you, fam.
