

Ferk

@Ferk@kbin.social


Ferk ,

Signal is the same in that regard.

Ferk , (edited )

You mean "confidentiality", not privacy.
Even just the metadata showing that you personally (traceable to your full name and address) have a Signal account, and how much you use it, might already be considered a privacy breach, even if the content of the messages is confidential.

Ferk , (edited )

If you don’t like it, vote with your wallet

I'd say more: don't use Youtube if you don't like it.

It's very hypocritical to see how everyone bashes Youtube, Twitter, Facebook, Uber, etc. and yet keeps using them as if life would be hell without the luxury of those completely non-essential brands. If you truly don't like them, just let them die... look for alternatives. Supporting an alternative is what's gonna hurt them the most, if what you actually want is to force them to change.

There are also a lot of videos from rich Youtube creators complaining about Youtube policies, and yet most of them don't even try to set up channels on alternative platforms. Many creators have enough resources to launch their own private video podcast services, and yet very few even attempt anything close to that.

Ferk ,

The thing is... they are not really disagreeing if they are not saying something that conflicts with or challenges the argument.

They just mistakenly believe they disagree when in fact they are agreeing. That's what makes it stupid.

Ferk ,

I wouldn't be surprised if at some point they start doing something like what Twitter did and require a login to view the content.

Ferk ,

Yes. The thing is that then you are no longer using yt-dlp anonymously.
The next step would be trying to detect that case... maybe adding captchas whenever there's even a slight suspicion.
Perhaps even to the point of banning users (and then I hope you did not rely on the same account for Gmail or other services).
It'll be a cat-and-mouse situation. Similar to what happened with Twitter: there were also third-party apps, but many gave up.
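
For context, this is roughly what the non-anonymous usage looks like. A minimal sketch using yt-dlp's Python API, assuming cookies exported from a logged-in browser session (the `cookiefile` option is real yt-dlp; the URL is a placeholder):

```python
# Sketch: logged-in downloads with yt-dlp's Python API.
# 'cookiefile' is a real yt-dlp option; the URL below is a placeholder.
from yt_dlp import YoutubeDL

opts = {
    # Cookies from a logged-in browser session: from here on, every
    # request is tied to that account and is no longer anonymous.
    "cookiefile": "cookies.txt",
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=EXAMPLE"])
```

Once requests carry an account, rate limiting, captchas and bans all become per-user enforcement options.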

Ferk , (edited )

While the result of generating an image through AI is not meant to be "factually" accurate, it does seek to match the provided prompt as accurately as possible. And a prompt like "1943 German Soldier", "US Senator from the 1800s" or "Emperor of China" has implications about which kinds of images would be expected and which wouldn't. Just like you wouldn't expect a lightsaber when asking for "medieval swords".

I'm not convinced that attempting to "balance a biased training dataset" in the way that this is apparently being done is really attainable or worthwhile.

An AI can only work based on biases, and it's impossible to correct/balance the dataset without just introducing a different bias, because the model is just a collection of biases that discriminate between how different descriptions relate to pictures. If there were no bias for the AI to rely on, it would not be able to pick anything to show.

For example, the AI does not know whether the word "Soldier" really corresponds to someone dressed like in the picture; it's just biased to expect that. It can't tell whether an actual soldier might just be wearing pajamas, or whether someone dressed in one of those uniforms might not be an actual soldier.

Describing a picture is, in itself, an exercise in assumptions, biases and appearances, based on pre-conceived notions of what we expect when comparing the picture to our own reality. So the AI needs to show whatever corresponds to those biases in order to match, as accurately as possible, our biased expectations of what those descriptions mean.

If the dataset is complete enough, yet it's biased to predominantly show a particular gender or ethnicity for "1943 German Soldier" because that happens to be the most common image of what a "1943 German Soldier" is, and you want a different ethnicity or gender, then add that ethnicity/gender to the prompt (like you said in the first point), instead of supporting the idea of having the developers force diversity into the results in a direction that contradicts the dataset just because the results aren't politically correct. It would be more honest to add a disclaimer and still show the result as it is, instead of manipulating it in a direction that actively pushes the AI to hallucinate.

Alternatively: expand your dataset with more valuable data in a direction that does not contradict reality (e.g. introduce more pictures of soldiers of different ethnicities from situations that actually occur in our reality). You'll be altering the data, but without distorting the bias unrealistically, since the new examples would be grounded in reality.
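
As a toy illustration of the difference between those two approaches (all names and frequencies below are made up for the sketch; real generators encode such correlations implicitly in their weights, not in a lookup table):

```python
import random

# Toy stand-in for a biased dataset: invented prompt -> output frequencies.
DATASET = {
    "1943 german soldier": {"male european soldier": 0.97, "other": 0.03},
    "1943 german soldier, female": {"female european soldier": 1.0},
}

def generate(prompt: str) -> str:
    """Sample an output according to the dataset's (biased) frequencies."""
    weights = DATASET[prompt.lower()]
    return random.choices(list(weights), weights=list(weights.values()))[0]

def generate_with_hidden_rewrite(prompt: str) -> str:
    """The criticized approach: the frontend silently appends attributes,
    so the output reflects neither the user's prompt nor the dataset."""
    return generate(prompt + ", female")

# Honest route: the user adds the attribute to the prompt themselves.
print(generate("1943 German Soldier"))          # mostly "male european soldier"
print(generate("1943 German Soldier, female"))  # explicitly requested
# Criticized route: the rewrite is hidden from the user.
print(generate_with_hidden_rewrite("1943 German Soldier"))
```

Expanding the dataset would change the frequencies themselves, instead of rewriting the prompt behind the user's back.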

Ferk , (edited )

The word "Nazi" wasn't part of the prompt though.

The prompt was "1943 German Soldier"... so if, like you said, the images are "Dressed as a German style soldier", I'd say it's not too bad.

Ferk ,

From the actual regulation text:

the concept of ‘illegal content’ should broadly reflect the existing rules in the offline environment. In particular, the concept of ‘illegal content’ should be defined broadly to cover information relating to illegal content, products, services and activities. In particular, that concept should be understood to refer to information, irrespective of its form, that under the applicable law is either itself illegal, such as illegal hate speech or terrorist content and unlawful discriminatory content, or that the applicable rules render illegal in view of the fact that it relates to illegal activities. Illustrative examples include the sharing of images depicting child sexual abuse, the unlawful non-consensual sharing of private images, online stalking, the sale of non-compliant or counterfeit products, the sale of products or the provision of services in infringement of consumer protection law, the non-authorised use of copyright protected material, the illegal offer of accommodation services or the illegal sale of live animals. In contrast, an eyewitness video of a potential crime should not be considered to constitute illegal content, merely because it depicts an illegal act, where recording or disseminating such a video to the public is not illegal under national or Union law. In this regard, it is immaterial whether the illegality of the information or activity results from Union law or from national law that is in compliance with Union law and what the precise nature or subject matter is of the law in question.

So, both.

Ferk ,

Yes... honestly, I don't see this approach being worthwhile.

It's better to search for fully open source alternatives, frontend and backend... like Lemmy/kbin for Reddit, PeerTube/LBRY for YouTube, etc.

Ferk ,

Developing a crippled port that is limited/restricted by design due to Apple policies would not really help Mozilla's/Firefox's reputation anyway. Apple fanbois will complain either way.

If those fanbois want a Firefox app on Apple systems, it's Apple they should complain to.
