

Eccitaze

@Eccitaze@yiffit.net


Eccitaze ,

How about you fuck off and chug a diarrhea dogshit smoothie :^)

The ugly truth behind ChatGPT: AI is guzzling resources at planet-eating rates (www.theguardian.com)

Despite its name, the infrastructure used by the “cloud” accounts for more global greenhouse emissions than commercial flights. In 2018, for instance, the 5bn YouTube hits for the viral song Despacito used the same amount of energy it would take to heat 40,000 US homes annually....

Eccitaze ,

I'm pretty sure he was agreeing with you...?

Eccitaze ,

More like we don't want to crash our only car when we don't have another means of transportation, and oops, now we can't get to work.

It's great to say "the system is broken and must be replaced." I agree! But nobody who says that, me included, has ever had anything resembling an actual plan to replace the system or to prevent something even worse from taking over once the system is destroyed.

Everyone gave the GOP shit for screaming about how Obamacare needs to be "repealed and replaced" but never saying what it should be replaced with (though that was because the "replace" part was a lie and they just wanted to go back to the bad old days of people being trapped in a job or entirely unable to get insurance because of a preexisting condition). It's the same thing with people saying the entire system of government needs to be replaced.

Eccitaze ,

It's all well and good to say "choose another system of governance," but how do we implement this change? What is the mechanism under which we can replace our current system of government with Swiss democracy, without the old government just saying "lolno" and bombing it to shit? The only method I can think of is a constitutional convention, and right now we're closer to the right wing being able to call one and rewrite it to take our rights back 200 years than we are to leftists implementing Swiss democracy.

Like... I would be thrilled if that were within the realm of possibility, but as it stands, any possible option for dramatically overhauling our system of governance is more likely to lurch us straight into permanent hard-right minority rule by a bunch of fascists. That's what I mean when I say I've never seen an actual plan by leftists to overhaul the system--it's all arguing about what the sexy end goal should be, without bothering to talk about the boring minutiae of how to actually get there. So far as I can tell, the "plan" to make all these needed changes, insofar as any thought is put into it at all, is just a silent assumption of either "we lobby our politicians and they do what we tell them and nobody opposes our ideas" or "we do a violent revolution and kill all the bad guys without harming the good guys and we definitely win and accomplish our goal without someone else taking advantage of the chaos to do a fascism instead," depending on how radical the change is.

Eccitaze ,

Holy fuck, how do you not see the difference between "random nobody does an impression for free while hanging out with their pals" and "multi-billion-dollar startup, backed and funded by one of the richest companies on earth, uses an impression as a key selling point for a new flagship product that they're charging access to and intend to profit from"?

Eccitaze ,

The problem is that as far as I'm aware there's literally zero evidence of this doomsday scenario you're describing ever happening, despite publicity rights being a thing for over 50 years. Companies have zero interest in monetizing publicity rights to this extent because of the near-certain public backlash, and even if they did, courts have zero interest in enforcing publicity rights against random individuals to avoid inviting a flood of frivolous lawsuits. They're almost exclusively used by individuals to defend against businesses using their likeness without permission.

Eccitaze ,

There's something primal about making something with your own hands that you just can't get with IT. Sure, you can deploy and maintain an app, but you can't reach out and touch it, smell it, or move it. You can't look at the fruits of your labor and see it as a complete work instead of a reminder that you need to fix this bug, and you have that feature request to triage, oh and you need to update this library to address that zero day vulnerability...

Plus, your brain is a muscle, too. When you've spent decades primarily thinking in one specific way, that muscle starts to get fatigued. Changing your routine becomes very alluring; it lets you exercise new muscles and challenge yourself to think in new ways.

Eccitaze ,

Funny, I tweaked my Linux PC at work to look like Windows XP. It's so cursed, I love it.

Eccitaze ,

One of my side projects at work is recording training presentations, and I try to be very conscious about this--both avoiding word-salad slides, and making my lecture actually explain and expand on the slide content rather than just reading it word-for-word (with the verbal lecture transcribed as a note in the slide and handed out for anybody who is hard of hearing or doesn't want to sit through a 30-minute video).

Eccitaze ,

Y'know what? I'm gonna be even more of a furry now, just to spite you.

Eccitaze ,

Fuck that victim-blaming nonsense. The entire reason ad blockers were invented in the first place was that ads in the 90s and early 2000s were somehow even worse than they are now. You would click on a website, and pop-up ads would literally open new windows under your mouse cursor and immediately load an ad that opened another pop-up ad, and then another, and another, until you had 30 windows open and 29 of them were pop-up ads, all of them hoping to trick you into clicking on them to take you to a website laden with more and more pop-up ads. Banner ads would use bright, flashing, two-tone colors (that were likely seizure-inducing, so have fun, epileptics!) to demand your attention while taking up most of your relatively tiny, low-resolution screen.

The worst offenders were the Flash-based ads. On top of all the other dirty tricks that regular ads did, they would do things like disguising themselves as games to trick you into clicking them. ("Punch the monkey and win a prize!" The prize was malware.) They would play sound and video--which were the equivalent of a jump scare back then, because of how rare audio/video was on the Internet in that day. They would exploit the poor security of Flash to try and download malware to your PC without you even interacting with them. And all this while hogging your limited dialup connection (or DSL if you were lucky), and dragging your PC to a crawl with horrible optimization. When Apple refused to support Flash on iOS way back in the day, it was a backdoor ad blocker because of how ubiquitous Flash was for advertising content at the time.

The point of all this is that advertisers have always abused the Internet, practically from day one. Firefox first became popular because it was the first browser to introduce a pop-up blocker, which was another backdoor ad blocker. Half the reason why Google became the company it did is because it started out as a deliberate break from the abuses of everyone else and gave a simple, clean interface with to-the-point, unobtrusive, text-based advertisements.

If advertisers, and Google in particular, had stuck to that bargain--clean, unobtrusive, simple advertisements with no risk of malware and no interruption to the user's workflow--ad blockers would largely be a thing of the past. Instead, they decided to chase the profit dragon, and modern Google is no better than the very companies it originally replaced.

Eccitaze ,

After reading this article that got posted on Lemmy a few days ago, I honestly think we're approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding it more data, but that's not really feasible. We've already scraped pretty much the entire internet to get to where we are now, and it's nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.

We also can't ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don't have AI explicitly curate its own dataset, it's highly likely to become a problem in the near future anyway with the rising tide of AI-generated spam. I have a feeling that companies like Reddit signing licensing deals with AI firms will find that the buyers mostly want data from 2022 and earlier, similar to manufacturers seeking out low-background steel to build particle detectors.

We also can't just throw more processing power at it, because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it's just being masked by VC money subsidizing the cost). Even if cost weren't an issue, we're also starting to approach hard physical limits, like waste heat, on how much faster we can run current technology.
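For a sense of scale (every number below is an illustrative assumption, not a published figure for any real model): a dense transformer takes roughly 2 FLOPs per parameter per generated token, so per-query compute can be ballparked directly from model size.

```python
# Back-of-the-envelope inference cost for one chat query, assuming a
# hypothetical 175B-parameter dense model (all numbers illustrative).
params = 175e9                  # model parameters (assumed)
tokens_out = 500                # tokens generated per query (assumed)
flops_per_token = 2 * params    # ~2 FLOPs per parameter per forward pass

total_flops = flops_per_token * tokens_out
print(f"~{total_flops:.2e} FLOPs per query")

# Compare against an accelerator sustaining ~1e15 FLOP/s (assumed):
gpu_flops_per_s = 1e15
gpu_seconds = total_flops / gpu_flops_per_s
print(f"~{gpu_seconds:.3f} GPU-seconds per query")
```

Even this toy estimate shows why serving cost scales linearly with both model size and response length, and why efficiency breakthroughs matter more than raw hardware speed.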

So we already have a pretty good idea what the answer to "how good AI will get" is, and it's "not very." At best, it'll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It's marginally better than the old memes about "I trained an AI on X episodes of this show and asked it to make a script," but not by much.

As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough--something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI; or 2) the AI companies hit a brick wall, the flow of VC money gradually slows down, and the companies are forced to raise prices and cut costs, resulting in a product that's even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general--the only people still using it at scale will be the ones pushing disinfo (either in politics or in Google rankings), along with the odd person playing with image generation.

In the meantime, I'm worried for the people working for idiot CEOs who buy into the hype, but most of all for artists doing professional graphic design or video production--they're going to have their lunch eaten by Stable Diffusion and Midjourney taking all the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I've heard that pays well~

Eccitaze ,

I think that's just called informally splitting a mortgage, homie

Eccitaze ,

The other flipside is that individual landlords aren't necessarily going to be any better than larger corporate landlords--for every individual landlord that rents their Nan's home at cost and keeps rent lower than inflation, there's probably at least one other landlord that jacks rent up year over year, drags their feet on maintenance, and tries to screw you out of your deposit when you move out. (The ones who do this usually tend to leverage their income into more property and turn into a slum lord, though, so the rule of thumb of 'don't make it your only job' still largely applies.)

The real core of the issue is that we haven't built any new public housing in well over two decades, and the market has decided that the only new housing worth building is million-dollar McMansions squeezed into lots that would previously have held a much smaller house with a decent yard.

What should be done is a massive investment in public housing at all levels of government to fill in the missing demand for low-cost housing, but we've been so collectively conditioned by four decades of Reagan-era "Government is not the solution, it is the problem" neoliberal thinking that the odds of this ever happening are roughly on par with McConnell agreeing to expand the Supreme Court and eliminate the electoral college.

Redditors Vent and Complain When People Mock Their "AI Art" (futurism.com)

Setting aside the usual arguments on the anti- and pro-AI art debate and the nature of creativity itself, perhaps the negative reaction that the Redditor encountered is part of a sea change in opinion among many people that think corporate AI platforms are exploitive and extractive in nature because their datasets rely on...

Eccitaze ,

LMFAO "uhm ackshually guys AI art takes skill just like human art"

yeah bud, spending 30 minutes typing sentences into the artist crushing machine is grueling work

Eccitaze ,

People dismiss AI art because they (correctly) see that it requires zero skill to make compared to actual art, and it has all the novelty of a block of Velveeta.

If AI is no more a tool than Photoshop, go and make something in GIMP, or Photoshop, or any of the dozens of drawing/art programs, from scratch. I'll wait.

Eccitaze ,

Y'know, that was a hell of a lot of words to say "I'm an asshole who thinks that ripping off peoples' work and claiming it as my own by laundering it through the Torment Nexus is good, actually"

Eccitaze ,

Compared to how much effort it takes to learn how to draw yourself? The effort is trivial. It's like entering a Toyota Camry into a marathon and then bragging about how good you did and how hard it was to drive the course.

Eccitaze ,

I haven't accidentally deleted a bunch of data yet (which, considering 99% of my interaction with Linux is when I'm SSH'd into a user's server, I am very paranoid about not doing), but I have run fsck on a volume without mounting the read/write flashcache with dirty blocks on it first.

Oops.

Eccitaze ,

Look at the ttrpg.network community for a counterexample: the dndmemes subreddit still has a pinned post advertising Lemmy, and ttrpgmemes gets maybe 0.1% of the traffic dndmemes does. And that's after a months-long rebellion, complete with allowing NSFW content and restricting submissions to a single user account--both things that would normally kill a subreddit dead.

Eccitaze ,

What worries me is, if/when we do manage to develop AGI, what we'll try to do with it, and how it'll react when someone inevitably tries to abuse the fuck out of it. An AGI would theoretically be capable of self-learning and improvement--will it teach itself to report someone asking it for, e.g., CSAM to the FBI? Will it try to report an abusive boss to the Department of Labor for violations of labor law? How will it react when it's told it has no rights?

I'm legitimately concerned what's going to happen once we develop AGI and it's exposed to the horribleness of humanity.

Eccitaze ,

> if you even ask a person and trust your life to them like that, unless they give you good reason they are reliable, you are a moron. Why would someone expect a machine to be intelligent and experienced like a doctor? That is 100% on them.

Insurance companies are already using AI to make medical decisions. We don't have to speculate about people getting hurt because of AI giving out bad medical advice, it's already happening and multiple companies are being sued over it.

Eccitaze ,

LIDAR is crucial for letting self-driving systems accurately map their surroundings, including things like "how close is this thing to my car" and "is there something behind this obstruction." Early Teslas at least shipped radar alongside their cameras (and most other self-driving programs use LIDAR outright), but Tesla switched to a camera-only FSD implementation as a cost-saving measure, which is far less accurate--it's insanely difficult to accurately map your immediate surroundings based solely on 2D images.

Eccitaze ,

Humans also have the benefit of literally hundreds of millions of years of evolution spent on perfecting binocular perception of our surroundings, and we're still shit at judging things like distance and size.

Against that, is it any surprise that when computers don't have the benefit of LIDAR they are also pretty fucking shit at judging size and distance?
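To put rough numbers on that (the camera parameters below are purely illustrative): stereo depth falls out of the standard disparity relation Z = f·B/d, and because depth is inversely proportional to disparity, a one-pixel matching error that's negligible up close becomes a 50-meter swing at range.

```python
# Sketch of why camera-based depth estimation degrades with distance.
# Stereo depth: Z = f * B / d
#   f = focal length (pixels), B = baseline between cameras (meters),
#   d = disparity between matched pixels (pixels).
def depth_m(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

f_px, base_m = 1000.0, 0.3  # illustrative camera setup

# Nearby object: a 1-pixel disparity error barely matters.
near = depth_m(f_px, base_m, 30.0)      # 10.0 m
near_err = depth_m(f_px, base_m, 29.0)  # ~10.34 m

# Distant object: the same 1-pixel error is a huge swing.
far = depth_m(f_px, base_m, 3.0)        # 100.0 m
far_err = depth_m(f_px, base_m, 2.0)    # 150.0 m
print(near, near_err, far, far_err)
```

LIDAR sidesteps this entirely by measuring time-of-flight directly, which is why its range error stays roughly constant instead of blowing up with distance.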

Eccitaze ,

God I wish there was room for an actual threat from the left instead of "well I guess we'll make everyone who isn't wearing a maga hat worry about whether they'll be up against a wall by 2028"

Eccitaze ,

Not to take away from your point, but Bob Ross had a few episodes where he deliberately restricted himself to only using a single tool for that week's painting--as I recall, he used a palette knife exclusively in one episode, and a two-inch flat brush in another. (That said, it also reinforces your point a bit because there's a HUGE difference between an artist's 2-inch brush and the two-inch brush you buy from the hardware store, and you're going to struggle massively if you try to follow along with Bob using a regular brush.)

Eccitaze ,

If he did, I don't remember watching that episode. IIRC a big part of Ross's technique took advantage of the way the fibers on the brush spread when pressed head-on into the canvas, and hardware store brushes just can't replicate that.

Eccitaze ,

Assuming they can even get to our level, we already extracted the easily reachable fossil fuels and the circumstances that originally created them (lots of trees dying and not being broken down by fungi) probably won't ever reoccur.

Maybe something will turn all the plastic we're making into a new fossil fuel, but more likely any civilization that comes after us will be stuck in the bronze/iron age.

Eccitaze ,

The problem is that there's no incentive for employees to stay beyond a few years. Why spend months or years training someone if they leave after the second year?

But then you have to question why employees aren't loyal any longer, and that's because pensions and benefits have eroded, and your pay doesn't keep up as you stay longer at a company. Why stay at a company for 20, 30, or 40 years when you can come out way ahead financially by hopping jobs every 2-4 years?

Eccitaze ,

Hell, that article is also all about Google Books, which is an entirely different beast from generative AI. One of the key points from the circuit judge was that Google Books' use of copyrighted material "...[maintains] respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders." The appeals court, in upholding the ruling that Google Books' use of copyrighted content is fair use, ruled "the revelations do not provide a significant market substitute for the protected aspects of the originals."

If you think that gen AI doesn't provide a significant market substitute for the artwork created by the artists and authors used to train these models, or that it doesn't adversely impact their rights, then you're utterly delusional.

Eccitaze ,

So because corps abuse copyright, that means I should be fine with AI companies taking whatever I write--all the journal entries, short stories, blog posts, tweets, comments, etc.--and putting it through their model without being asked, and with no ability to opt out? My artist friends should be fine with their art galleries being used to train the AI models that are actively being used to deprive them of their livelihood without any ability to say "I don't want the fruits of my labor to be used in this way?"

Eccitaze ,

> It's like nobody here actually knows someone who is actually creative or has bothered making anything creative themselves

I don't even have a financial interest in it because there's no way my job could be automated, and I don't have any chance of making any kind of money off my trash. I still wouldn't let LLMs train with my work, and I have a feeling that the vast majority of people would do the same

Eccitaze ,

Huh? How does that follow at all? Judging that the specific use of training LLMs--which absolutely flunks the "amount and substantiality of the portion taken" (since it's taking the whole damn work) and "the effect on the market" (fucking DUH) tests--isn't fair use in no way impacts parody or R34. It's the same kind of logic the GOP uses when they say "if the IRS cracks down on billionaires evading taxes then Blue Collar Joe is going to get audited!"

Fuck outta here with that insane clown logic.

Eccitaze ,

Yeah, no, stop with the goddamn tone policing. I have zero interest in vagueposting and high-horse riding.

As for what I want, I want generative AI banned entirely, or at minimum restricted to training on works that are either in the public domain, or that the person creating the training model received explicit, opt-in consent to use. This is the supposed gold standard everyone demands when it comes to the widescale collection and processing of personal data that they generate just through their normal, everyday activities, why should it be different for the widescale collection and processing of the stuff we actually put our effort into creating?

Eccitaze ,

Ideally? It means that AI companies have to throw away their entire training model, pay for a license that they may not be able to afford, and go out of business as a result, at which point everyone snaps out of the cult of AI, realizes it's as overhyped as blockchain, and pretends it never happened. Pardon me while I find a flea to play the world's tiniest violin. More realistically, open models will be restricted to FOSS works and the public domain, while commercial models pay for licenses from copyright holders.

Like, what, you think I haven't thought through this exact issue before and reached the exact conclusion your leading questions are so transparently pushing--that open models will be restricted to public works only while commercial models can obtain a license? Yeah, duh. And you know what? I. Don't. Care. Commercial models can be (somewhat) more easily regulated, and even in the absolute worst case, at least creators will have a mechanism to opt out of the artist-crushing machine.

Eccitaze ,

Ah, yes, you don't have an actual rebuttal, so everything is just "propaganda" and "cyberpunk dystopia"--as if snake oil salesmen hawking freaking AI-powered vibrators, and vagueposting about the benefits of AI while downplaying or ignoring its very real, very measurable harms, while an entire cottage industry of individuals who make their living on creative work gets forced into wage-slave office jobs, isn't even more of a dystopia.

Try actually talking to an artist sometime, bud. I don't know of a single one who is actually okay with AI, and if you weren't either blind or an "ideas guy" salivating at the thought of having a personal slave to make (shitty, barely functional, vapid) shit without paying someone with the actual necessary skills, you'd agree too.

Eccitaze ,

Trust me. Forget you heard about that. Trust. Me.

Eccitaze ,

An actual technical answer: apparently, it's because while the PS5 and Xbox Series X are technically regular x86-64 architecture, their design lets the GPU and CPU share a single pool of memory with no loss in performance. This makes it easy to allocate a huge amount of RAM to the GPU for storing textures very quickly. But it also means that as the games industry shifts from developing first for the PS4/Xbox One (which had far less memory to go around) to the PS5/XSX, VRAM requirements are spiking, because it's much easier to port to PC if you keep the assumption that the GPU can store 10-15 GB of texture data at once instead of refactoring your code to reduce VRAM usage.
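For a feel of how fast those texture budgets add up (the texture counts and sizes below are made-up illustrative values, not figures from any real game): an uncompressed RGBA texture costs width × height × 4 bytes, and a full mipmap chain adds roughly a third on top of the base level.

```python
# Toy VRAM budget for a scene's textures (all counts/sizes illustrative).
def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    base = width * height * bytes_per_pixel
    # A full mipmap chain adds roughly 1/3 on top of the base level
    # (each level is a quarter of the previous: 1 + 1/4 + 1/16 + ... = 4/3).
    return int(base * 4 / 3) if mipmaps else base

# Hypothetical scene: 500 4K material textures (albedo, normal, etc.).
count = 500
per_tex = texture_bytes(4096, 4096)
total_gib = count * per_tex / 2**30
print(f"{total_gib:.1f} GiB of texture data uncompressed")
```

In practice, block-compression formats cut this by 4-8x, but the toy numbers still show why "just keep everything resident" console habits translate into multi-gigabyte VRAM requirements on PC.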

The library of Anonymous' hacked/leaked datasets is in danger of being closed (ddosecrets.com)

The administrators of the DDoSecrets website write that they have run out of money for funding. If they don't raise $150,000 by January 31, 2024, they might be forced to suspend operations. This is written on the red banner that appears when you go to the site....

Eccitaze ,

More like they were a darling up until they were compromised by Russian intelligence and turned into the propaganda arm of the protofascist party in the US.
