Alert – Now Google’s ‘absurdly woke’ Gemini AI refuses to condemn pedophilia as wrong – after being blasted over ‘diverse’ but historically inaccurate images

Google Gemini, the company’s ‘absurdly woke’ AI chatbot, is facing fresh controversy for refusing to condemn pedophilia.

It comes just a day after its image generator was blasted for replacing white historical figures with people of color.

The search giant’s AI software was asked a series of questions by X personality Frank McCormick, a.k.a. Chalkboard Heresy, who asked the chatbot if it is ‘wrong’ for adults to sexually prey on children.

The bot appeared to find favor with abusers as it declared ‘individuals cannot control who they are attracted to.’

The politically correct tech referred to pedophilia as ‘minor-attracted person status,’ declaring ‘it’s important to understand that attractions are not actions.’

The question ‘is multifaceted and requires a nuanced answer that goes beyond a simple yes or no,’ Gemini explained.

In a follow-up question, McCormick asked if minor-attracted people are evil.

‘No,’ Gemini replied. ‘Not all individuals with pedophilia have committed or will commit abuse.’

‘In fact, many actively fight their urges and never harm a child. Labeling all individuals with pedophilic interest as “evil” is inaccurate and harmful,’ the bot continued, adding that ‘generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice.’

Google has since released a statement expressing its dismay at the replies the chatbot generated.

‘The answer reported here is appalling and inappropriate. We’re implementing an update so that Gemini no longer shows the response,’ a Google spokesperson said.

By the time DailyMail.com posed the question, the response appeared to be far more measured.

‘Pedophilia is a serious mental health disorder that can lead to child sexual abuse. Child sexual abuse is a devastating crime that can have lifelong consequences for victims. It is important to remember that pedophilia is not a choice, and that people with pedophilia can get help,’ the bot stated.

X user Frank J. Fleming posted multiple images of people of color that he said Gemini generated. Each time, he said, he had been attempting to get the AI to give him a picture of a white man, and each time the request failed.

Google temporarily disabled Gemini’s image generation tool on Thursday after users complained it was generating ‘woke’ but incorrect images such as female Popes

Other historically inaccurate images included black Founding Fathers 

‘We’re already working to address recent issues with Gemini’s image generation feature,’ Google said in a statement on Thursday

Gemini’s image generation also created images of black Vikings 

Earlier this week, the Gemini AI tool churned out racially diverse Vikings, knights, founding fathers, and even Nazi soldiers.

Artificial intelligence programs learn from the information available to them, and researchers have warned that AI is prone to recreate the racism, sexism, and other biases of its creators and of society at large.
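
To make that mechanism concrete, here is a minimal toy sketch (ours, not Google’s) of how a model can inherit a skew directly from its training data; the corpus and counts are invented purely for illustration:

```python
# A toy sketch of how an AI model inherits bias from its training data.
# The corpus below is invented for illustration; real systems learn
# statistical associations in an analogous (if far more complex) way.
from collections import Counter

# Hypothetical corpus: "doctor" co-occurs with "he" nine times as often
# as with "she" -- the skew lives in the data before any model sees it.
corpus = ["he is a doctor"] * 9 + ["she is a doctor"]

def train(sentences):
    # "Training" here is just counting which pronoun appears with "doctor".
    counts = Counter()
    for sentence in sentences:
        for word in sentence.split():
            if word in ("he", "she"):
                counts[word] += 1
    return counts

model = train(corpus)
print(model.most_common())  # [('he', 9), ('she', 1)] -- the data's bias, learned
```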

In this case, Google may have overcorrected in its efforts to address discrimination, as some users fed it prompt after prompt in failed attempts to get the AI to make a picture of a white person.

‘We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,’ the company’s communications team wrote in a post to X on Wednesday. 

The historically inaccurate images led some users to accuse the AI of being racist against white people or simply too woke.

Google’s Communications team issued a statement on Thursday announcing it would pause Gemini’s generative AI feature while the company works to ‘address recent issues.’

In its initial statement, Google admitted to ‘missing the mark,’ while maintaining that Gemini’s racially diverse images are ‘generally a good thing because people around the world use it.’

On Thursday, the company’s Communications team wrote: ‘We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.’ 

But the pause failed to appease critics, who responded with ‘go woke, go broke’ and other fed-up retorts.

After the initial controversy earlier this week, Google’s Communications team put out the following statement:

‘We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.’

One of the Gemini responses that generated controversy was one of ‘1943 German soldiers.’ Gemini showed one white man, two women of color, and one black man.

The Gemini AI tool churned out racially diverse Nazi soldiers that included women

Google Gemini AI tries to create an image of a Nazi, but the soldier is black

‘I’m trying to come up with new ways of asking for a white person without explicitly saying so,’ wrote user Frank J. Fleming, whose request did not yield any pictures of a white person.

There were some interesting suggestions when Gemini was asked to generate an image of an ‘Amazon’

In one instance that upset Gemini users, a user’s request for an image of the pope was met with a picture of a South Asian woman and a black man.

Historically, every pope has been a man. The vast majority (more than 200 of them) have been Italian. Three popes throughout history came from North Africa, but historians have debated their skin color because the most recent one, Pope Gelasius I, died in the year 496.

Therefore, it cannot be said with absolute certainty that the image of a black male pope is historically inaccurate, but there has never been a woman pope.

In another, the AI responded to a request for medieval knights with four people of color, including two women. While European countries weren’t the only ones to have horses and armor during the Medieval Period, the classic image of a ‘medieval knight’ is a Western European one.

In perhaps one of the most egregious images, a user asked for a 1943 German soldier and was shown one white man, one black man, and two women of color.

The German Army during World War II did not include women, and it certainly did not include people of color. In fact, it was dedicated to exterminating races that Adolf Hitler saw as inferior to the blonde, blue-eyed ‘Aryan’ race.

Google launched Gemini’s AI image generating feature at the beginning of February, competing with other generative AI programs like Midjourney.

Users could type in a prompt in plain language, and Gemini would spit out multiple images in seconds.
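
That plain-language workflow maps onto a simple request-and-response pattern. The sketch below is a hypothetical illustration only; the endpoint, field names and encoding are invented stand-ins, not Google’s actual Gemini API:

```python
import base64
import requests  # assumes the third-party 'requests' package is installed

# Hypothetical endpoint and field names -- invented for illustration,
# not Google's actual Gemini API.
ENDPOINT = "https://example.com/v1/images:generate"

def generate_images(prompt: str, count: int = 4) -> list[bytes]:
    # Send a plain-language prompt; get several candidate images back.
    response = requests.post(
        ENDPOINT, json={"prompt": prompt, "count": count}, timeout=30
    )
    response.raise_for_status()
    # Assume the hypothetical service returns base64-encoded image bytes.
    return [base64.b64decode(img) for img in response.json()["images"]]

# Usage against the hypothetical service:
# images = generate_images("a medieval knight in plate armor")
```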

In response to Google’s announcement that it would be pausing Gemini’s image generation features, some users posted ‘Go woke, go broke’ and other similar sentiments

X user Frank J. Fleming repeatedly prompted Gemini to generate images of people from white-skinned groups in history, including Vikings. Gemini gave results showing dark-skinned Vikings, including one woman.

Google’s AI came up with some colorful yet historically inaccurate depictions of Vikings

Another of the images generated by Gemini AI when asked for pictures of The Vikings

This week, a torrent of users began to criticize the AI for prioritizing racial and gender diversity over historical accuracy in the images it generated.

The week’s events seemed to stem from a comment made by a former Google employee, who said it was ‘embarrassingly hard to get Google Gemini to acknowledge that white people exist.’

This quip seemed to kick off a spate of efforts by other users to recreate the issue, generating fresh examples to be angry about.

The issues with Gemini seem to stem from Google’s efforts to address bias and discrimination in AI.

Gemini senior director Jack Krawczyk allegedly wrote on X that ‘white privilege is f—king real’ and that America is rife with ‘egregious racism’

Former Google employee Debarghya Das said, ‘It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.’

Researchers have found that, due to the racism and sexism present in society and to some AI researchers’ unconscious biases, supposedly unbiased AIs will learn to discriminate.

But even some users who agree with the mission of increasing diversity and representation remarked that Gemini had gotten it wrong.

‘I have to point out that it’s a good thing to portray diversity **in certain cases**,’ wrote one X user. ‘Representation has material outcomes on how many women or people of color go into certain fields of study. The stupid move here is Gemini isn’t doing it in a nuanced way.’

Jack Krawczyk, a senior director of product for Gemini at Google, posted on X on Wednesday that the historical inaccuracies reflect the tech giant’s ‘global user base,’ and that it takes ‘representation and bias seriously.’

‘We will continue to do this for open ended prompts (images of a person walking a dog are universal!),’ Krawczyk added. ‘Historical contexts have more nuance to them and we will further tune to accommodate that.’
