

MudMan

@MudMan@fedia.io


MudMan ,
@MudMan@fedia.io avatar

Presumably to minimize exposure while they add the announced security band-aids?

So... while I have you guys here, how do we feel about iOS having just announced basically the same feature? We angy about that one too or nah?

I mean, joking aside, I'm genuinely curious about what the reaction is going to be. On paper it's a very similar concept, but it feels like routing it through Siri and not surfacing the stored data will legitimately kill some of the creepy factor even if what's happening behind the scenes is very similar.

MudMan ,

Well, yeah, but the baseline for outrage was with the feature existing, not with it being secure or not. There were a lot of people making the case that anybody who can open your computer because they have your password (abusive partners included) could then have a lot of access to your activity. That seems to carry over to this feature, too.

So I guess the question is, is there a "doing this right" version of this or not? You seem to implicitly be on the yes side, I'll be curious to find out if that's the majority.

MudMan ,

There is a screenshot of the opt-in screen in the article. There is no default, just two buttons to say yes or no.

I swear, outrage should only be allowed based on the amount of work one is willing to put in before expressing it. If you don't do the reading, you don't get to be publicly angry. It'd save us all so much trouble.

For the record, the feature was always optional, as per the original announcement. Presumably the change is it is now part of the setup flow where it was going to be a settings toggle instead.

Which is, incidentally, how this used to work the first time Windows had this feature, back when it was called "Timeline" in Windows 10.

MudMan ,

It doesn't because that's one of the four or five screens during the initial Windows setup where you opt in and out of all the other spyware features. They all look the same and are prompted in sequence. Unless they're doing something very weird you absolutely have to make a choice on each of them and they are unskippable otherwise.

I mean, you don't have to know; if you don't know Windows you don't have to recognize them. But if you do, it's pretty obvious, so you... you know, could have asked or looked it up.

Or gone through the link, because come on, you didn't. You were obviously just reacting to the headline.

MudMan ,

The exact wording, which, again, is in the article you didn't bother to read before posting, is "Quickly find things you've seen with Recall. Recall helps you find things you've seen on your PC when you allow Windows to save snapshots of your screen every few seconds".

Seriously, I don't even like the feature. I will absolutely turn it off, just like I did Timeline, and I expect it'll be gone in the next version, just like Timeline was.

But I did look at the stupid article before posting. So there's that.

MudMan ,

That is a very long rant to agree with me that you care enough to rant about this online but not enough to read past the headline.

So no, I have no intention to shut off the condescension, there is nothing passive about my aggression and people absolutely don't read the article regardless of how important they feel the issue is. Yesterday this was all about the most important threat to the security of the average consumer, now it's "unimportant sidenote stuff". Somebody should have told MS how unimportant it is, could have saved the devs the crunch to fix it by the time it ships in 10 days.

For the record, you're right about how hard it is to find things sometimes in localized versions of OSs. That's true of all of them, though, and I blame the fact that we're all stuck here speaking the hegemonic language and reading about tech only in English while local journalists struggle to stay relevant, so we learn all the brand names and settings in English despite the software itself being available in localized versions. But that's a whole other conversation.

MudMan ,

Yeah, see, here's how I know I'm not scapegoating you and you also didn't read it.

The article clearly explains they WILL in fact encrypt it and require a passkey to access it once per session.

So yeah, no, my condescension is exactly about you. And others. But also you.

MudMan ,

Yes, I am aware. I read about that yesterday, and yes, I did read it again at the bottom of this piece. It was really bad.

Which is presumably why, a couple of paragraphs above, they explain that:

Microsoft will also require Windows Hello to enable Recall, so you’ll either authenticate with your face, fingerprint, or using a PIN. “In addition, proof of presence is also required to view your timeline and search in Recall,” says Davuluri, so someone won’t be able to start searching through your timeline without authenticating first.

This authentication will also apply to the data protection around the snapshots that Recall creates. “We are adding additional layers of data protection including ‘just in time’ decryption protected by Windows Hello Enhanced Sign-in Security (ESS) so Recall snapshots will only be decrypted and accessible when the user authenticates,” explains Davuluri. “In addition, we encrypted the search index database.”

Here's the thing, it shouldn't take somebody calling you out on it on the Internet and engaging in a defensive back-and-forth driven by pride for you to actually read the thing. Commenting should be secondary to following the link and figuring out what's actually happening. But it's not. That is the part that pisses me off. Not the stupid feature that is still bad even without glaring security holes. Only partially the stupid rooting for commercial products like they're football teams. Fundamentally that our consumption patterns when it comes to information are broken and we think it only affects everybody else but not us.

That part is terrifying and infuriating.

MudMan ,

I mean, no, that's dogmatic weirdness. The feature is secure if the feature that is live is secure. Software isn't magic, it doesn't have karma, it works the way it works.

Now, this is as secure as whatever they ship, but even assuming it's ironclad it's still a bad feature. You do not need an automatic screengrabber to remember what you did yesterday. Every piece of work software you may need to reopen has a recent files list, Windows has a file search function, browsers have a history. You have a brain. You don't lose track of so much stuff that you need to be recording your entire activity just in case. This is a bad gimmick that covers no use case, just like Timeline was. And because it's a bad useless feature the logical thing is to turn it off and forget about it, which is why everybody seems to have memory holed that Timeline ever existed.

You guys really don't need to get weird about it for it to be a bad idea, but since they're railroaded into shipping it, at least it's better to ship it with proper encryption and authorization features. Still turn it off, though.

MudMan ,

Yeeeah, I'm thinking this conversation isn't worth pursuing. My point is already up there.

MudMan ,

This is hilarious for life context reasons that I'm not gonna disclose here.

But good one. I swear, this place sometimes is Dunning-Kruger headquarters. Gotta decide if "this place" means "the whole Internet" or not, one of these days.

MudMan ,

They already had this feature once in Windows 10, and you don't remember because it doesn't work like that.

MudMan ,

They really haven't. Their onboarding flow has included this exact type of forced option for advertising data, location data and bug reports for what now? A decade, give or take? They have a very specific design language for these.

Plus, and I keep reminding people of this and they keep forgetting, they already made this feature once. It was on Windows 10, it was called Timeline, everybody turned it off and they never did much to change that, instead just adding a less intrusive offline version of it and ultimately removing it by the launch of 11 until... well, now.

What I don't understand is why you guys are so set on this specific list of grievances. You don't need to dismiss the improvements they are making. They are improvements and they are a good thing.

If you are set on rooting for or against OSs (and why would you, stop it, that's weird) you can instead just point out that... well, the feature itself is still garbage. Even with a default opt out, even assuming it's fully secure. It just covers no valid use case, unless you're starring in Memento II. It remains a security vulnerability because social engineering and shared computers are a thing. It is exactly as dumb and useless as Timeline was, and there's a reason nobody remembers that happened. The lack of AI search really, really isn't why that failed.

You don't need to come across as a paranoid conspiracy theorist making up slippery slopes to keep criticising this about the things they are actually fixing. There are plenty of valid issues with it at a fundamental design level they are not changing. Being so wildly speculative about the eeeeevil corporate MS lying to us just makes the criticisms sound less valid when the actual thing they are doing is still pretty useless at best, and most likely really bad.

MudMan ,

It is amazing to live in a world where pointing out that a feature is a trainwreck is "shilling for corporations".

That's the part I just don't get. Why you guys need people to be in denial or toeing a certain line, facts be damned. It's not enough to be critical, people have to be critical at all times, of all things in the exact way everybody else is.

The account crap is not a valid counterexample. Windows 11 (Home, at least) was always explicitly presented as requiring an account. The methods to install without it were always an unsupported workaround. It does suck that they went the Apple path and traded up-front price for data mining, I would absolutely prefer the alternative on principle, even if I was already logging in on Win10 for work reasons. If there was a natively compatible Windows alternative without this requirement I'd default to that. My Windows installs have most of the related features disabled, where I can do that. I just recently got to a place where I can disable OneDrive now and I am incredibly happy about it, since we're talking about it.

But it's not a slippery slope, it's them gradually closing the unsupported loophole that was keeping some people from flipping out about it as it becomes clearer that the vast majority of users are, in fact, logging in with a MS account.

This is a datamining feature that is immediately unpopular and they are actively backtracking on it. There is clear precedent for this exact same functionality and it didn't go that way. That's not shilling, that's just how reality worked last time this happened. Literally this. The same feature implemented in a very similar way.

Again, there is plenty of legitimate stuff to complain about here. A lot of it is terrible even after the changes to opt-in and security. You don't need to make up a fictional future scenario where they un-fix the stuff they are fixing. You can dislike the fixed version for actual, good reasons without having to sound like a weird online cultist.

MudMan ,

That's how this works, isn't it? Nobody reads past the headline. Everybody feels about it super strongly, just not strongly enough to actually read about it.

MudMan ,

Yeah, right? The biggest bummer of this entire stupid thing that should never have existed is that it's overshadowing perhaps the most exciting hardware launch on Windowsland since the original Surface. I am VERY interested in seeing if Windows on ARM is viable this time, and as a longtime Windows 2-in-1 user I am incredibly excited about the prospect of a similarly performant version that doesn't need to be plugged in basically at all times.

But because MS can't come up with a feature without shooting itself in the foot with a bazooka we're all here talking about the stopgap they had to implement to save face while they wait to be able to quietly kill this dumb thing for good. I swear, they are incredibly bad at this.

MudMan , (edited )

I'm torn about the marketing, because a) MS clearly wants to own "AI", and they do have the cheapest, best version of multimodal chat at the moment, and b) I do think to normies it's more marketable than "we did the MacBook Air, finally".

On the other hand, I 100% agree with you that I give zero craps about their stupid certification for 40 TOPS on laptops. I already own things with GPUs in them and I use very little in the way of LLMs or image generators, and certainly not offline, so the battery life and the matching improvements in weight are THE feature for me.

I mean, it doesn't really matter either way, the market is what it is, and I get to use the devices the same way regardless of how they're marketed, so sell whatever you have to sell. It's still fascinating and kinda sad to witness the self-sabotage, though.

This Hacker Tool Extracts All the Data Collected by Windows’ New Recall AI (www.wired.com)

When Microsoft CEO Satya Nadella revealed the new Windows AI tool that can answer questions about your web browsing and laptop use, he said one of the “magical” things about it was that the data doesn’t leave your laptop; the Windows Recall system takes screenshots of your activity every five seconds and saves them...

MudMan ,

You do have a point, but it does highlight why Microsoft's framing is bad.

Microsoft is basing their approach to this on the concept that your MS account-secured local machine is itself secure, so whatever is in it is fine, because hey, your confidential work info is probably also in your hard drive and unencrypted, so if a bad actor can steal the pictures of it, then it can also steal the original document.

Which mostly is true, to be clear, but it fundamentally misunderstands how much juicier and easier a target a reliable, searchable database that logs all activity in a consistent location is, as opposed to potentially having to extract everything up front. Plus, even if there are few guardrails on the data inside your system, there are some, and this will likely include info you may keep hidden, password-protected or encrypted both locally and remotely. There's a reason my password manager asks for my credentials manually every time I use it.

MudMan ,

In their defense, my mom hasn't earned that level of trust from me, either.

MudMan ,

I am fascinated by this. I guess when there is no universally recognized ID it feels weirder?

I mean, sure, by all means withhold info from social media platforms, but if it's one where you're going to have your real name and your whole-ass work history on public display, surely verifying your ID is trivial? You could absolutely google the info in a LinkedIn page and find a bunch of additional info anyway.

I get it intellectually, it's a taboo now, just like it's a taboo to have people find out your address or phone number when it used to be publicly listed until a few years ago. It's just weird that it's still a taboo for the services where verifying your ID is presumably a feature, not a bug.

MudMan ,

Well, I guess I'm glad we have fairly secure documentation. Not that fraud doesn't happen, but given how ubiquitous and easy to find that info is the real value is in the document itself, which is minted very much like paper money is and is pretty hard to falsify or forge.

Like many others, this is the kind of issue I find is fairly well solved.

MudMan ,

My question with that is one of usability. Where I live our ID has digital certifications in it and you can theoretically use it for online authentication. It's just a mess of a system, so people tend to pick other options.

I mean, it works... it's just that it's hard enough to use that you often are given alternative tools for verified ID and most people use those because they're more convenient than the solution that is meant to be the convenient standard. It's a once and future XKCD strip.

The authenticated, secure, universal ID card is pretty handy still, though.

MudMan ,

Nnnope. Presumably because I have secure government ID minted like paper money, containing a digital certificate and pretty hard to falsify or forge.

Look, I'm not saying it doesn't happen, I'm saying it's a useful tool to mitigate it. I've definitely shared my official ID online for things. It's even used for preorders sometimes in high demand items or concert tickets to prevent scalping and effectively limit amounts per person. It kinda works.

MudMan , (edited )

Yeah, but that's the point, with universal secure ID all of those actions require showing your universal secure ID. You can't just give people enough information that sounds or is legit and get a loan, you need to provide your ID and have it verified.

And hey, if somebody that holds a copy of your ID leaks it you're only at risk for a bit of time, because these things expire and each new one you get looks different. It's very hard and not worth it to forge these for that reason, and if somebody went to the trouble of doing that you could easily prove it doesn't match the original you hold.

Fraud and identity theft obviously still exist, but it normally involves getting older people to sign things they didn't mean to or getting people to share their information through social engineering. But just finding your info online and generating enough debt to create a massive problem? That seems hard and reversible.

MudMan ,

The tragic irony of the kind of misinformed article this is linking is that the server farms that would be running this stuff are fairly efficient. The water is reused and recycled, the heat is often used for other applications. Because wasting fewer resources is cheaper than wasting more resources.

But all those locally-run models on laptop CPUs and desktop GPUs? That's grid power being turned into heat and vented into a home (probably with air conditioning on).

The weird AI panic, driven by an attempt to repurpose the popular anti-crypto arguments whether they matched the new challenges or not, is going to PR this tech into wasting way more energy than it would otherwise by distributing it over billions of computer devices paid by individual users. And nobody is going to notice or care.

I do hate our media landscape sometimes.

MudMan ,

Oh, absolutely. There are plenty of good reasons to run any application locally, and a generative ML model is just another application. Some will make more sense running from server, some from client. That's not the issue.

My frustration is with the fact that a knee-jerk reaction took all the 100% valid concerns about wasteful power consumption on crypto and copy-pasted them to AI because they had so much fun dunking on cryptobros they didn't have time for nuance. So instead of solving the problem they added incentive for the tech companies owning this stuff to pass the hardware and power cost to the consumer (which they were always going to do) and minimize the perception of "costly water-chugging power-hungry server farms".

It's very dumb. The entire conversation around this has been so dumb from every angle, from the idiot techbros announcing the singularity to the straight-faced arguments that machine learning models are copy-pasting things they find on the Internet to the hyperbolic figures on potential energy and water cost. Every single valid concern or morsel of opportunity has been blown way out of reasonable proportion.

It's a model of how our entire way of interacting with each other and with the world has changed online and I hate it with my entire self.

MudMan ,

No it wasn't. Here's how I know: all the valid concerns that came about how additional regulation would disproportionately stifle open source alternatives were immediately ignored by the vocal online critics (and the corporate techbros overhyping sci-fi apocalypses). And then when open alternatives appeared anyway nobody on the critical side considered them appropriate or even a lesser evil. The narrative didn't move one bit.

Because it wasn't about openness or closeness, it was a tribal fight, like all the tribal fights we keep having, stoked by greed on one end and viral outrage on the other. It's excruciating to watch.

MudMan ,

Honestly, a lot of the effects people attribute to "AI" as understood by this polemic are ongoing and got ignited by algorithmic searches first and then supercharged by social media. If anything, there are some ways in which the moral AI panic is actually triggering regulation that should have existed for ages.

MudMan ,

We're thinking about different "regulation", and that's another place where extreme opinions have nuked the ground into glass.

Absolutely yeah, techbros are playing up the risks because they hope regulators looking for a cheap win will suddenly increase the cost for competitors, lock out open alternatives and grandfather them in as the only valid stewards of this supposedly apocalyptic technology. We probably shouldn't allow that.

But "maybe don't make an app that makes porn out of social media pictures of your underage ex girlfriend at the touch of a button" is probably reasonable, AI or no AI.

Software uses need some regulation like everything else does. Doesn't mean we need to sell the regulation to disingenuous corporations.

MudMan ,

Yeeeeah, you're gonna have to break down that math for me.

Because if an output takes some amount of processing to generate and your energy cost per unit of compute is higher we're either missing something in that equation or we're breaking the laws of thermodynamics.

If the argument is that the industry uses more total energy because they keep the training in-house or because they do more of the compute at this point in time, that doesn't change things much, does it? The more of those tasks that get offloaded to the end user the more the balance will shift for generating outputs. As for training, that's a fixed cost. Technically the more you use a model the more the cost spreads out per query, and it's not like distributing the training load itself among user-level hardware would make its energy cost go down.
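The amortization point can be sketched with a toy calculation (all numbers here are invented for illustration, and `energy_per_query` is a made-up helper, not anything from a real study):

```python
# Toy model of amortized energy per query: the fixed training cost is
# spread across every query the model ever serves, then the marginal
# inference cost of the query itself is added on top.

def energy_per_query(training_kwh, total_queries, inference_kwh):
    """Amortized energy (kWh) attributable to a single query."""
    return training_kwh / total_queries + inference_kwh

early = energy_per_query(1_000_000, 10_000, 0.001)          # few queries served
at_scale = energy_per_query(1_000_000, 1_000_000_000, 0.001)  # heavily used model

# The more a model is used, the closer the per-query cost gets to the
# marginal inference cost alone; the training term shrinks toward zero.
assert at_scale < early
```

Which is just the point above in arithmetic form: training is a fixed cost, so heavier use spreads it thinner, and distributing training across user-level hardware wouldn't lower it anyway.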

The reality of it is that the entire thing was more than a bit demagogic. People are mad at the energy cost of chatbot search and image generation, but not at the same cost of image generation for videogame upscaling or frame interpolation, even if they're using the same methods and hardware. Like I said earlier, it's all carryover from the crypto outrage more than it is anything else.

MudMan ,

No, that's not true at all. That's the exact same argument that the fearmongers are using to claim that traditional copyright already covers the use cases of media as AI training materials and so training is copying.

It's not. We have to acknowledge that there is some novel element to these, and novel elements may require novel frameworks. I think IP and copyright are broken anyway, but if the thing that makes us rethink them is the idea that machines can learn from a piece of media without storing a copy and spit out a similar output... well, we may need to look at that.

And if there is a significant change in how easily accessible, realistic or widespread certain abusive practices are we may need some adjustments there.

But that's not the same as saying that AI is going to get us to Terminator within five years and so Sam Altman is the only savior that can keep the grail of knowledge away from bad actors. Regulation should be effective and enable people to compete in the novel areas where there is opportunity.

Both of those things can be true at the same time. I promise you don't need to take the maximalist approach. You don't even need to take sides at all. That's the frustrating part of this whole thing.

MudMan ,

Oh, man, I do miss being a techno-utopian. It was the nineties, I had just acquired a 28.8k modem in high school, my teachers were warning me about the risks of algorithmically selected, personalized information and I was all "free the information, man" and "people will figure it out" and "the advantages of free information access outweigh the negatives of the technology used to get there".

And then I was so wrong. It's not even funny how wrong I was. Like, sitting on the smoldering corpse of democracy and going "well, that happened" wrong.

But hey, I'm sure we'll mess it up even further so you can get there as well.

For the record, I don't mean to defend the status quo with that. I agree that copyright and intellectual property are broken and should be fundamentally reformulated. Just... not with a libertarian, fully unregulated framework in mind.

MudMan , (edited )

For one thing, it's absolutely not true that what these apps provide is the same as what we had. That's another place where the AI grifters and the AI fearmongers are both lying. This is not a 1:1 replacement for older tech. Not on search, where LLM queries are less reliable at finding facts and being accurate but better at matching fuzzy searches without specific parameters. Not with image generation, obviously. Not with tools like upscaling, frame interpolation and so on.

For another, some of the numbers being thrown around are not realistic or factual, are not presented in context or are part of a power increase trend that was already ongoing with earlier applications. The average high end desktop PC used to run on 250W in the 90s, 500W in the 2000s. Mine now runs at 1000W. Playing a videogame used to burn as much power as a couple of lightbulbs, now it's the equivalent of turning on your microwave oven.

The argument that we are burning more power because we're using more compute for entertainment purposes is not factually incorrect, but it's both hyperbolic (some of the cost estimates being shared virally are deliberate overestimates taken out of context) and not inconsistent with how we use other computer features and have used other computer features for ages.

The only reason you're so mad about me wasting some energy asking an AI to generate a cute picture but not at me using an AI to generate frames for my videogame is that one of those is a viral panic that maps nicely onto the viral panic about crypto people already had, and the other is a frog that has been slow-boiling for three decades, so people don't have a reason to have an opinion about it.

MudMan ,

Well, if this was travel and not a fall down a very long, very dark hole, then one of the stops was learning when to say "I don't know".

I don't have all the answers for copyright. I don't think my problem is primarily with terms. I'm probably closer to thinking perhaps the system should acknowledge where we landed consuetudinarily. Just let people share all materials, acknowledge a right of the original author to be the sole profit holder in for-profit exploitation. That's effectively how most of the Internet works anyway. Even then there's obviously tons of stuff we'd have to sort out. What happens with ownership transfer? What about terms? What about derivative work? Components of larger works? I don't know.

We're talking about reworking some of the biggest markets and industries on the planet from the ground up. It's not a shower thought, it's something a whole bunch of very smart people with different backgrounds should and would have to get together for years to put together. Probably on a global scale.

It's an absurd question to have a locked down opinion about. The gap between being able to tell "yeah, duh, something's not working" and being able to fix it is enormous here. Figuring out that much is probably as far as my trip is gonna take me at this point. And I know even less about patent law.

MudMan ,

Nah, even that won't be. Because most of this workload is going to run on laptops and tablets and phones and it's going to run at lower qualities where the power cost per task is very manageable on hardware accelerated devices that will do it more efficiently.

The heavy load is going to stay on farms because nobody is going to wait half an hour and waste 20% of their battery making a picture of a cute panda eating a sandwich. They'll run heavily quantized language models as interfaces to basic apps and search engines and it'll do basic upscaling for video and other familiar tasks like that.

I'm not trying to be obtusely equidistant, it's just that software developers are neither wizards that will bring about the next industrial revolution because nobody else is smart enough... nor complete morons that can't balance the load of a task across a server and a client.

But it's true that they'll push as much of that compute and energy cost onto the user as possible, as a marketing ploy to sell new devices, if nothing else. And it's true that on the aggregate that will make the tasks less efficient and waste more heat and energy.

Also, I'm not sure how downvoted I am. Interoperable social networks are a great idea in concept, but see above about software developers. I assume the up/downvote comes from rolling a d20 and adding it to whatever the local votes are.

MudMan ,

Much more power than what? What's your benchmark here?

MudMan ,

Yes, no, you said that. But since that is a meaningless statement I was expecting some clarification.

But nope, apparently we have now established that a device existing uses up more power than that device not existing.

Which is... accurate, I suppose, but also true of everything. Turns out, televisions? Also consume less power if they don't exist. Refrigerators. Washing machines? Lots less power by not existing.

So I suppose you're advocating a return to monke situation, but since I do appreciate having a telephone (which would, in fact, save power by not existing), we're going to have to agree to disagree.

MudMan ,

Look, I can suggest you start this thread over and read it from the top, because the ways this doesn't make much sense have been thoroughly explained.

Because this is a long one and if you were going to do that you would have already, I'll at least summarize the headlines: LLMs exist whether you like them or not, they can be quantized down to more reasonable power usage, are running well locally on laptops and tablets burning just a few watts for just a few seconds (NOW, as you put it). They are just one application of ML tech, and are not useless at all (fuzzy searches with few specific parameters, accessibility features, context-rich explanations of out of context images or text), even if their valid uses are misrepresented by both advocates and detractors. They are far from the only commonplace computing task that is now using a lot more power than the equivalent a few years ago, which is a larger issue than just the popularity of ML apps. Granting that LLMs will exist in any case, running them on a data center is more efficient, and the issue isn't just "power consumption" but also how the power is generated and what the reclamation of the waste products (in this case excess heat and used water) is on the other end.

I genuinely would not recommend that we engage in a back and forth breaking that down because, again, that's what this very long thread has been about already and a) I have heard every argument the AI moral panic has put forth (and the ones the dumb techbro singularity peddlers have put forth, too), and b) we'd just go down a circular rabbit hole of repeating what we've already established here over and over again and certainly not convince each other of anything (because see point A).

MudMan ,
@MudMan@fedia.io avatar

Absolutely not true. Regulations are both in place and in development, and none of them seem like they would prevent any of the applications currently in the market. I know the fearmongering side keeps arguing that a copyright case will stop the development of these but, to be clear, that's not going to happen. All it'll take is an extra line in an EULA to mitigate or investing in the dataset of someone who has a line in their EULA (Twitter, Reddit already, more to come for sure). The industry is actually quite fond of copyright-based training restrictions, as their main effect is most likely to be to close off open source alternatives and make it so that only Meta, Google, and MS/OpenAI can afford model training.

These are super not going away. Regulation is needed, but it's not restricting or eliminating these applications in any way that would make a dent on the also poorly understood power consumption costs.

MudMan ,
@MudMan@fedia.io avatar

Yeah, who's saying it doesn't? It prevents the practices it prevents and allows the rest of the practices.

The regulation you're going to see on this does not, in fact, prevent making LLMs or image generators, though. And it does not, in fact, prevent running them and selling them to people.

You guys have gotten it in your head that training data permissions are going to be the roadblock here, and they're absolutely not going to be. There will be common sense options, like opt-outs and opt-out defaults by mandate, just like there are on issues of data privacy under GDPR, but not absolute bans by any means.

So how much did opt-out defaults under GDPR stop social media and advertising companies from running social media and advertising data businesses?

Exactly.

What that will do is make it so you have to own a large set of accessible data, like social media companies do. They are positively salivating at the possibility that AI training will require paying them, since they'll have a user agreement that demands allowing your data to be sold for training. Meanwhile, developers of open alternatives, who are currently running out of a combination of openly accessible online data and monetized datasets put together specifically for research, will face more cost to develop alternatives. Ideally, hope the large AI corporations, too much cost pressure and they will be bullied out of the market, or at least forced to lag behind in quality by several generations.

That's what's currently happening regarding regulation, along with a bunch of more reasonable guardrails about what you should and should not generate and so on. You'll notice I didn't mention anything about power or specific applications there. LLMs and image generators are not going away and their power consumption is not going to be impacted.

MudMan ,
@MudMan@fedia.io avatar

The idea that a feature of this scope would only be gated by having access to your local account is so baffling to me. I've been around my share of bad corporate decisions, and even I genuinely have no idea what they were thinking or how it got this far into development without anybody raising a flag.

For now it's an obvious thing to turn off immediately and tell all your friends and relatives to turn off immediately. And yeah, it's a reason to avoid devices that support it out of the box, at least for less tech-savvy users.

MudMan ,
@MudMan@fedia.io avatar

That's entirely unrealistic and not particularly helpful. I mean, if you can and want to, by all means, that solves most of this immediate issue. In practice, many people just don't want to move, especially if they're not tech-savvy, are locked into that ecosystem for work or have devices that only work well in Windows (hi, that's me).

So for now mitigations matter. Plus this is, on paper, an optional feature only coming to some devices (and that's assuming you believe they won't be forced to back down from this absolute trainwreck). At this point, I'd recommend explaining the issue to Windows users who wouldn't be experienced enough to know. And hey, if you're in a position to do what you do in a computer on a different platform and want to go that way, that's also reasonable. That's just not going to be everybody, or even most people.

I personally moved some of my family members to an Android device for home computing years ago and have never looked back, but I do use Windows on multiple devices and on most I either can't or don't want to switch, so... yeah, mitigation is important for many of my use cases.

MudMan ,
@MudMan@fedia.io avatar

But that's my point, though. You have to be completely detached from reality to think that securing the login to the local device means the local device is secure. People aren't going to get busted for this because they lack drive encryption, they're gonna get busted for this because some nice-sounding gentleman on their phone is going to get them to give them remote access and have a mainline to every single thing they did for the past month. Or because their partner is going to use their shared password and find out some stuff they shouldn't have or whatever.

That's obvious to anybody who thinks about it for more than two seconds. How could it not be obvious to MS?

MudMan ,
@MudMan@fedia.io avatar

Ohhh, let's not relitigate this. You can check my post history for a bunch of people freaking out about it, but that's not the point I'm trying to make here. Let's just let a justified dunking on Microsoft be a justified dunking on Microsoft.

MudMan ,
@MudMan@fedia.io avatar

Look, the reason I dodge it is that I did try to move one of my weirder laptops over and it didn't go so well, but if you dwell on that too much here you get... a very specific type of response, and I'm not into starting that here.

Best I can tell you is if you're curious about Linux there are plenty of ways of giving it a try without stomping over anything on your Windows install. Go get an old external drive somewhere, set it up accordingly and boot from it. If you're gonna hit compatibility or usability roadblocks you'll find out just fine, and you can always move to a more permanent setup later. Just... eh... make sure that you read some documentation in full first and you disable BitLocker or have your recovery keys handy. I wouldn't want my being in a hurry leading you to losing a bunch of data.

But again, not my point here. I'm more interested in the larger impact on a userbase that isn't going to do that anytime soon.

MudMan ,
@MudMan@fedia.io avatar

I'm... not impressed with the concept of disk partitioning, what a weird way to read that.

I'm impressed with the interface smartly picking up on what you're trying to do, shrinking and growing partitions and setting up things automatically to specifically support a non-destructive install to coexist with other OSs because the idea that you'd be just testing a distro alongside Windows or something else is a specifically supported use case.

MudMan ,
@MudMan@fedia.io avatar

Like I told the guy accusing me of trolling, I'm not even trying to "trigger" anyone, it's just that people will walk you through the same three basic troubleshooting options whenever you point out you bumped into a compatibility issue and it gets annoying after a while.

Agreed on the other thing, though. I actively want Linux on desktop/laptop to be better. I actively like many things about it already, which is why I was trying to set it up on this thing in the first place. I use it on other devices that have specific support for it, from SBCs to the Steam Deck. And I definitely also have issues, concerns and pet peeves with Windows, Android variants, MacOS, iOS, iPadOS and every other alternative out there.

I just don't particularly care to stick to a single thing and will use whatever the path of least resistance is for each application. Anything else seems nuts. An OS is a utility, not a sports team. It's like rooting for an AC manufacturer.

MudMan ,
@MudMan@fedia.io avatar

But I do like Linux. That's a really silly thing to say. That's why I was trying to get it in there even though I knew the support wasn't all sorted. Screw that "if you don't like it, leave" attitude.

And no, I won't contribute to fixing the issues because I lack the technical skills to do so and the skills I can contribute they don't need. That's also silly, you can't be arguing for mainstream adoption of a thing and simultaneously saying users should be out there fixing it themselves if they encounter an unaddressed hardware incompatibility.

And yes, it's absolutely down to the manufacturer not making a Linux version of their drivers and dumb dedicated software. Absolutely. What am I supposed to do about that? It's not a niche manufacturer, either, it's a pretty popular one. As far as I know, none of the big corporate laptop manufacturers offer official Linux support (at least not Lenovo, Asus, MSI or Dell, that I know of). In fact, the indie manufacturers tend to offer better support, what with using less custom hardware and software and sometimes offering a built-in Linux install as an option to serve as a workaround for OEM fees.

Look, if you don't want to hear about the issues people encounter with your OS of choice... fine, I guess. I don't know why you're emotionally invested in utilitarian pieces of software, but you do you. But if you hope that you're going to be online having a fanclub about an operating system, of all things, and nobody is ever gonna show up saying "hey, I tried it and it kinda didn't do it for me"... eh... maybe make it a private Discord channel instead, because that's probably not gonna happen otherwise. That's probably a reason why you don't get Windows or MacOS fanclubs out there, because let me be clear, I would have just as many objections to dump into those, albeit for different reasons.
