Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it's aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

rab ,

I can't fathom why Google would force diversity into AI.

People use AI as tools. If the tool doesn't work correctly, people will not use it, full stop. It's that simple.

There are many different AIs out there that don't behave this way, and people will be quick to move on to one of those instead.

Surprisingly stupid, even for Google.

Chickenstalker ,

There are literally Jewish Israeli Nazis. Not fascists, but literal moustache-Hitler Nazis.

echodot ,

Who exactly are they apologizing to? Is it the Nazis?

7rokhym ,

They didn't apologize. Headlines just say they did.

Harbinger01173430 ,

...white is a color. Also, white people usually look pink, cream, orange, or red. Only albinos come close to looking white, though even they aren't white enough.

deathbird ,

It's just the name of a racial category. There are no black people either.

Fenrisulfir ,

Sure there are. Maybe not Vantablack, though.

NotJustForMe ,

It's okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people? ٩(。•́‿•̀。)۶

rottingleaf ,

WDYM?

Only their new SW trilogy comes to mind, but in SW, racism among humans was limited to very backwards (savage by SW standards) planets; racism between humans and other spacefaring races was more of an issue, so a villain of any human race is normal there.

It's rather the purely cinematographic part that clearly made skin color more notable, for whatever reason, and there would be some racists among the viewers.

They probably knew they couldn't reach the quality level of the OT and PT, so they made such choices intentionally during production so that they could later complain about fans being racist.

NotJustForMe ,

Have you read the article? It was about misrepresenting historical figures; racism was only a small part of it.

It was about favoring diversity even when it's historically inaccurate or outright impossible. Something Disney is very good at.

rottingleaf ,

I have; I was asking only about the Disney reference.

GiveMemes ,

Are you referring to The Little Mermaid? If so, get tf over yourself... it's literally a fictional children's story.

stockRot ,

Do you have examples?

Underwaterbob ,

This could make for some hilarious alternate-history satire or something. I could totally see Key and Peele heading a group of racially diverse Nazis, ironically preaching racial purity and attempting to take over the world.

AstridWipenaugh ,

Dave Chappelle did that with a blind black man who joined the Klan (back in the day, before he went off the deep end).

yildolw ,

Oh no, not racial impurity in my Nazi fanart generator! /s

Maybe you shouldn't use a plagiarism engine to generate Nazi fanart. Thanks

Rob , (edited )

I'm all for letting people of all backgrounds have an equal work/representation opportunity, but this AI went too far.

What I am against is taking official/past figures such as U.S. presidents and race-swapping them. These are real people who were white. Sorry if it offends someone, but that's just how it was.

At this point, are we putting DEI even over those who used to govern the U.S. as official presidents? Why? Who does this help? If anything, you make people with legitimate purposes hate DEI more by doing this. Imagine if they did that to President Obama; people would be sticking it to Google ten times harder than they are now.

Melatonin ,

How do you feel about Hamilton?

Rob ,

I can't speak much about my opinion on a person, as it might (a) be off-topic from the original post and (b) start controversy for either side of politics.

Sure, some people can be controversial, but for something like Gemini to seemingly go out of its way to just not generate a person's appearance traits accurately? Not the most professional look.

Although, if a user were to ask for a race swap of a historical president, I would be OK with that, since that's something they input that they wanted.

roofuskit ,

So what you're saying is that a white actor should always be cast to play any character that was originally white, whether they are the best actor or not?

Keep in mind that historical figures are largely white because of systemic racism, and in your scenario the film and television industry would have to purposefully double down on the discrimination that empowered those people in order to meet your requirements.

I'm not defending Google's ham-fisted approach. But at the same time, it's a great reinforcement of the reality that large language models cannot and should not be relied upon for accurate information. LLMs are just as ham-fisted with accurate information as Google's approach to diversity in LLMs.

Rob ,

Let me answer your first question by reversing it back at you: since Barack Obama was historically black, should a black person be able to play him? I believe so. This should be the same for all real-life historical figures. If you want more diversity, create new characters to fill the void. If the new characters are good, people will love them.

In the film industry, I feel that may be different, since it's generally a made-up story in a lot of these shows and movies. So if they changed something, it isn't the biggest deal to me, because it wasn't meant to be taken seriously; it was meant for entertainment.

My argument was actually for real-life historical figures to be represented more properly, because this isn't just about diversity in jobs and entertainment anymore; you're changing real-life history regarding governments, militaries, presidents, etc. And this wasn't done just to U.S. figures by Gemini.

I do agree AI can make mistakes and isn't perfect, and it shouldn't always be taken as real-life context, but from Google you just expect better sometimes.

roofuskit ,

Someone who is half white would have to play him, right? So you'd have to exclude any truly dark-skinned black actors from the role. You know, because the American public would never have put someone dark-skinned into the presidency.

Rob ,

I disagree with that, because Barack was actually black, so he should be depicted as such regardless of how people feel, because that is how he appeared.

roofuskit ,

But you see where this gets dicey, right?

It's also different when someone's race is central to their story.

Rob ,

Yes.

kaffiene ,

Why would anyone expect "nuance" from a generative AI? It doesn't have nuance; it's not an AGI; it doesn't have EQ or sociological knowledge. This is like that complaint about LLMs being "warlike" when they were quizzed about military scenarios. It's like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy.

UlrikHD ,

I'm pretty sure it's generating racially diverse Nazis because companies tinker with the prompts under the hood to counterbalance biases in the training data. A naive implementation of generative AI wouldn't output black or Asian Nazis.

it doesn't have EQ or sociological knowledge.

It sort of does (in a poor way), but they call it bias and try to dampen it.

echodot ,

At the moment, AI is basically just a complicated kind of echo. It is fed data and parrots it back to you with quite extensive modifications, but deep down it's still the original data.

At some point that won't be true and it will be a proper intelligence. But we're not there yet.

kaffiene ,

I pretty much agree with that

maynarkh ,

Nah, the problem here is literally that they edit your prompt and add "of diverse races" to it before handing it to the black box, since the black box itself tends to reflect the built-in biases of its training data and, left alone, produces black prisoners and white scientists.
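
In sketch form, the kind of rewrite being described looks something like this; the modifier text, blocklist, and function names are all invented for illustration, since Google hasn't published its actual pipeline:

```python
# Hypothetical sketch of server-side prompt rewriting. Nothing here is
# Google's real code; the modifier and blocklist are made up.
DIVERSITY_MODIFIER = ", depicting people of diverse races and genders"

# A naive blocklist meant to exempt historically specific prompts.
HISTORICAL_TERMS = {"nazi", "ss officer", "founding father"}

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity modifier unless the prompt looks historical."""
    lowered = user_prompt.lower()
    if any(term in lowered for term in HISTORICAL_TERMS):
        return user_prompt  # leave historically specific prompts untouched
    return user_prompt + DIVERSITY_MODIFIER

# The failure mode from the article: this prompt matches nothing in the
# naive blocklist, so it still gets the modifier appended before the
# image model ever sees it.
print(rewrite_prompt("a 1943 German soldier in uniform"))
```

The black box then faithfully renders whatever text it was handed, which is how you end up with the screenshots in the article.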

kaffiene ,

I don't disagree. The article complained about the lack of nuance in generated responses, and I was responding to whether LLMs and generative AI can exhibit that. I agree with your points about bias.

stockRot ,

Why shouldn't we expect more and better out of the technologies we use? Seems like a very reactionary way of looking at the world.

kaffiene ,

I DO expect better from new technologies. I don't expect technologies to do things that they cannot. I'm not saying it's unreasonable to expect better technology; I'm saying that expecting human qualities from an LLM is a category error.

RGB3x3 ,

A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

This is honestly fascinating. It's putting human biases on full display at a grand scale. It would be near impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data down into simple sentences and images that it becomes very clear how common our unspoken biases are.

There's a lot of learning to be done here and it would be sad to miss that opportunity.
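
A sketch of what that kind of audit could look like in outline; `generate_image` and `classify_demographics` are hypothetical stand-ins for an image model and an attribute classifier, not real APIs:

```python
from collections import Counter

# Hypothetical stand-ins: a real audit would call an image model and a
# demographic classifier here. These stubs only mark the seams.
def generate_image(prompt: str):
    raise NotImplementedError("call your image model here")

def classify_demographics(image) -> tuple:
    raise NotImplementedError("call your attribute classifier here")

def audit_prompt(prompt: str, n_samples: int = 100) -> Counter:
    """Tally perceived demographics across many generations of one prompt."""
    counts: Counter = Counter()
    for _ in range(n_samples):
        counts[classify_demographics(generate_image(prompt))] += 1
    return counts

# audit_prompt("a productive person") coming back heavily skewed toward
# ("white", "male"), while "a person at social services" skews the other
# way, would reproduce the pattern the investigation reported.
```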

kromem ,

It's putting human biases on full display at a grand scale.

Not human biases. Biases in the labeled data set. Those could sometimes correlate with human biases, but they could also not correlate.

But these LLMs ingest so much of it and simplify the data down into simple sentences and images that it becomes very clear how common our unspoken biases are.

Not LLMs. The image generation models are diffusion models. The LLM only hooks into them to send over the prompt and return the generated image.
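
For what that handoff looks like in practice, here's a minimal sketch using the open-source diffusers library as a stand-in; Gemini's internal pipeline isn't public, and `llm_prepare_prompt` is just a placeholder for the chat model's role:

```python
import torch
from diffusers import StableDiffusionPipeline

def llm_prepare_prompt(user_request: str) -> str:
    # Placeholder for the LLM's part: it only decides what text to
    # forward; it does none of the image generation itself.
    return user_request

# The diffusion model is a separate component that receives plain text.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = llm_prepare_prompt("a portrait of a scientist in a lab")
image = pipe(prompt).images[0]  # all the actual "drawing" happens here
image.save("scientist.png")
```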

Ultraviolet ,

Not human biases. Biases in the labeled data set.

Who made the data set? Dogs? Pigeons?

kromem ,

If you train on Shutterstock and end up with a bias towards smiling, is that a human bias, or a stock photography bias?

Data can be biased in a number of ways that don't always reflect broader social biases, and even when they might appear to, the cause-versus-correlation question behind the parallel isn't necessarily straightforward.

VoterFrog ,

I mean "taking pictures of people who are smiling" is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

I get what you're saying in specific circumstances. Sure, a dataset that is built from a single source doesn't make its biases universal. But these models were trained on a very wide range of sources. Wide enough to cover much of the data we've built a culture around.

kromem ,

Except these kinds of data-driven biases can creep in from all sorts of directions.

Is there a bias in which images have labels and which don't? Did they focus only on English labeling? Did they use a vision-based model to add synthetic labels to unlabeled images, and if so, did the labeling model introduce biases?

Just because the sampling is broad doesn't mean the processes involved don't introduce procedural bias distinct from social biases.
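
On that last point: synthetic labeling usually means running a captioning model over unlabeled images. A sketch using the open-source BLIP captioner; whether any given training set was actually labeled this way is an assumption:

```python
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("unlabeled.jpg")  # placeholder filename
inputs = processor(image, return_tensors="pt")
caption = processor.decode(model.generate(**inputs)[0], skip_special_tokens=True)

# Whatever this captioner systematically notices, misses, or phrases a
# certain way becomes a bias of the dataset, independent of any broader
# social bias.
print(caption)
```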

Buttons , (edited )

It’s putting human biases on full display at a grand scale.

The skin color of people in images doesn't matter that much.

The problem is these AI systems have more subtle biases, ones that aren't easily revealed with simple prompts and amusing images, and these AIs are being put to work making decisions who knows where.

intensely_human ,

In India they’ve been used to determine whether people should be kept on or kicked off of programs like food assistance.

rottingleaf ,

Well, humans are similar to pigs in the sense that they'll always find the stinkiest pile of junk in the area and taste it before any alternative.

EDIT: That's about the popularity of "AI" today, not about semantic expert systems like the ones they'd build on Lisp machines.

Eyck_of_denesle ,

How are you guys getting it to generate "persons"? It simply says it's against its Google AI principles to generate images of people.

echodot ,

You can generate images of people, just not actual real people. You cannot create an image in the likeness of a particular person, but if you just put in "people at work" it will generate images of humans.

FinishingDutch ,

They actually neutered their AI on Thursday, after this whole thing blew up.

https://abcnews.go.com/Business/wireStory/google-suspends-gemini-chatbots-ability-generate-pictures-people-107446867

So right now, everyone's fucked because Google decided to make a complete mess of this.

Eyck_of_denesle ,

Damn. It keeps saying some dumb shit when asked for images now. I got here too late :(

blahsay ,

Kanye has entered the chat.

Eyck_of_denesle ,

"Especially" 💀

TravisKelce ,

I don't get the "American Woman" one.

Zoomboingding ,

It's a demonstration that the model is coded to include diversity, so it doesn't generate four middle-aged WASP moms.

Silentiea ,

The complaint quoted in the text was that it "refused to generate white people in any context", which was not the author's experience; hence they shared screenshots of their results, which did include white Americans.

fidodo ,

I think it's an example of why they programmed in diversity (to ensure you get diverse responses) but forgot about edge cases.

BurningnnTree ,

No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don't specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.

OhmsLawn ,

It's really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students, but never diverse Nazis.

If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it's rightfully theirs.

fidodo ,

The solution is going to take time. Software is made more robust by finding and fixing edge cases. There's a lot of work to be done to find and fix these issues in AI, and it's impossible to fix them all, but it can be made better. The end result will probably be a patchwork solution.

PopcornTin ,

The issue seems to be that the underlying code tells the AI that if some data set has too many white people or men (Nazis, ancient Vikings, popes, Rockwell paintings, etc.), then it should make them diverse in race and gender.

What do we want from these AIs? Facts, even if they might be offensive? Or facts as we wish they would be, for a nicer world?

UnderpantsWeevil ,

No matter what Google does, people are going to come up with gotcha scenarios to complain about.

American using Gemini: "Please produce images of the KKK, historically accurate Santa's Workshop Elves, and the board room of a 1950s auto company"

Also Americans: "AH!! AH!!!!! Minorities and Women!!!!!!! AAAAAHHH!!!!"

I mean, idk, man. Why do you need AI to generate an image of George Washington when you have thousands of images of him already at your disposal?

FinishingDutch ,

Because sometimes you want an image of George Washington, riding a dinosaur, while eating a cheeseburger, in Paris.

Which you actually can’t do on Bing anyway, since it ‘content warning’ stops you from generating anything with George Washington…

Ask it for a Founding Father though, it’ll even hand him a gat!

https://lemmy.world/pictrs/image/dab26e07-34c8-422e-944f-83d7f719ea2e.jpeg

raptir ,

He's not even eating the cheeseburger, crap AI.

FinishingDutch ,

Funnily enough, he's not eating one in the other three images either. He's holding an M16 in one, with the dinosaur partially rendered as a hamburger (?). In the other two he's merely holding the burger.

I assume if I change the word order around a bit, I could get him to enjoy that burger :D

VoterFrog ,

This is the thing. There's an incredible number of inaccuracies in the picture, several of which flat out ignore the request in the prompt, and we laugh it off. But the AI makes his skin a little bit darker? Write the Washington Post! Historical accuracy! Outrage!

FinishingDutch ,

Well, the tech is of course still young. And there's a distinct difference between:

A) User error: a prompt that isn't as good as it can be, without the user understanding, for example, the 'order of operations' that the AI model likes to work in.

B) The tech flubbing things because it's new and constantly in development

C) The owners behind the tech injecting their own modifiers into the AI model in order to get a more diverse result.

For example, in this case I understand the issue: the original prompt was 'image of an American Founding Father riding a dinosaur, while eating a cheeseburger, in Paris.' Doing it in one long sentence with several commas makes it harder for the AI to pin down the 'main theme', in my experience. Basically, it first thinks 'George on a dinosaur', with the burger and Paris as afterthoughts. But if you change the prompt around a bit to 'An American Founding Father is eating a cheeseburger. He is riding on a dinosaur. In the background of the image, we see Paris, France.', you end up with the correct result:

https://lemmy.world/pictrs/image/2c5f06ba-c52e-434d-8c57-b80b2d0e50ce.jpeg

Basically the same input, but by simply swapping the wording around, it got the correct result. Other 'inaccuracies' are of course to be expected, since I didn't really specify anything for the AI to go off of. I didn't give it a timeframe, for one, so it wouldn't 'know' not to have the Eiffel Tower and a modern handgun in it, or that that flag would be completely wrong.

The problem is with C), where you simply have no say in the modifiers they inject into any prompt you send. Especially when the companies state that they are doing it on purpose so the AI will offer a more diverse result in general. You can write the best, most descriptive prompt and there will still be an unexpected outcome if their modifiers get injected in the wrong place in your prompt. That's the issue.
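
To make C) concrete, here's a toy sketch of a position-blind injector; the splice logic is invented, since the real injectors aren't public:

```python
# Invented example: the service splices its modifier next to whatever it
# guesses is the subject, no matter how carefully you structured the prompt.
def apply_hidden_modifier(prompt: str) -> str:
    return prompt.replace("Founding Father",
                          "racially diverse Founding Father", 1)

crafted = ("An American Founding Father is eating a cheeseburger. "
           "He is riding on a dinosaur. In the background of the image, "
           "we see Paris, France.")
print(apply_hidden_modifier(crafted))
# The user never wrote the modifier, can't see it, and can't remove it.
```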

VoterFrog ,

C is just a workaround for B and the fact that the technology has no way to identify and overcome harmful biases in its data set and model. This kind of behind-the-scenes prompt engineering isn't even unique to diversifying image output, either. It's a necessity for creating a product that's usable by the general consumer, at least until the technology evolves enough that it can incorporate those lessons directly into the model.

And so my point is, there's a boatload of problems that stem from the fact that this is early technology and the solutions to those problems haven't been fully developed yet. But while we are rightfully not upset that the system doesn't understand that lettuce doesn't go on the bottom of a burger, we're for some reason wildly upset that it tries to give our fantasy quasi-historical figures darker skin.

chakan2 ,

And it's not a Beyond burger... it's promoting the genocide of cattle.

FinishingDutch ,

Here's one that was made just for you, with specifically a VEGAN cheeseburger in the prompt :D

https://lemmy.world/pictrs/image/075a3e02-f76d-4541-83cb-d777f9befbc6.jpeg

chakan2 ,

Excellent.

pirat ,

The random lettuce between every layer is weirdly off-putting to me. It seems like it's been growing on the burger for quite some time :D

FinishingDutch ,

Doesn't look too bad to me. I love a fair bit of crispy lettuce on a burger. Doing it like that at least spreads it out a bit, rather than having one big chunk of lettuce.

Still, if that was my burger… I'd add another patty and extra cheese.

fidodo ,

It's silly to point at brand-new technology and not expect there to be flaws. But I think it's totally fair game to point out the flaws and try to make it better; I don't see why we should just accept technology in its current state and not try to improve it. I totally agree that nobody should be mad at this. We're figuring it out: an issue was pointed out, and they're trying to see if they can fix it. Nothing wrong with that part.

AFC1886VCC ,

Ah, the Battlefield 5 experience

Jeom ,

Inclusivity is obviously good, but what Google's doing just seems all too corporate and plastic.

Guajojo , (edited )

It's trying so hard not to be racist that it's being even more racist than other AIs. It's hilarious.

fidodo ,

It's brand-new tech. They put on a band-aid solution; it wasn't a complete one, and it failed. It's not the result they ideally want, and they are going to try to fix it. I don't see what the big deal is. They were right to have diversity in mind; they just need to improve it to handle more use cases.

I guess users got so used to the last gen of tech, which was far more polished by the time they saw it than at launch, that they forgot software has bugs when it's new.
