‘We’ll Do It Better’: Why Google Temporarily Stopped Its Latest AI Image Generation Model Gemini
Gemini creates realistic images of people from text descriptions, much like OpenAI’s ChatGPT. But it has not been trained to filter hate content or to introduce diversity into its inputs.

Google is facing backlash over its latest artificial intelligence model, Gemini, after it generated images depicting people of various ethnicities and genders in historically inaccurate contexts.


“We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version,” Google said in a blogpost.

Google says that Gemini Pro is more capable than GPT-3.5 at tasks such as summarizing content, brainstorming and writing.

Gemini is now available in Google products in its Nano and Pro sizes, in the Pixel 8 phone and the Bard chatbot, respectively. Google plans to integrate Gemini into Search, Ads, Chrome, and other services over time.

Inaccuracies Spotted

Google has admitted to “inaccuracies” in the historical depictions that the Gemini AI chatbot was creating. The company said it has developed protections against low-quality information, along with tools to help people learn more about the information they see online.

“In the event of a low-quality/outdated response, we quickly implement improvements. We also offer people easy ways to verify information with our double-check feature, which evaluates whether there’s content on the web to substantiate Gemini’s responses,” it added.

According to DataQuest, the training data used to develop Gemini may have lacked diversity, leaving the model unable to generate images beyond a narrow range. This resulted in a lack of inclusivity and representation in the tool’s outputs.

In a recent Stanford University study that examined responses from three AI models to 200,000 legal queries, researchers found errors in answers to random questions about federal courts: ChatGPT fabricated responses 69% of the time, and Meta’s Llama 2 model did so 88% of the time, as reported by The Financial Times.

Researchers from the University of Washington and Carnegie Mellon University found that AI models exhibit different political biases, depending on how they have been developed.

What do we know about biases in AI?

AI bias refers to systems that produce skewed results rooted in historical and current social inequality. “Businesses cannot benefit from systems that produce distorted results and foster mistrust among people of colour, women, people with disabilities, the LGBTQ community, or other marginalized groups of people,” according to a blogpost by IBM.

AI systems usually learn to make decisions from training data, so it is essential that the sampling of over- or under-represented groups is reviewed properly.

Flawed data will usually produce errors and unfair outcomes, and amplify existing biases. “Algorithmic bias can also be caused by programming errors, such as a developer unfairly weighting factors in algorithm decision-making based on their own conscious or unconscious biases,” the IBM blog added.
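As a rough illustration of the sampling review described above, the sketch below counts how often each labelled group appears in a hypothetical training set and flags any group whose share deviates sharply from an even split. The record format, group labels and tolerance threshold are assumptions made for this example, not part of Google’s or IBM’s actual processes.

```python
# A minimal sketch (assumed example, not any vendor's real pipeline) of
# auditing a labelled training set for over- or under-represented groups.
from collections import Counter

# Hypothetical labelled records; in practice these would come from the
# dataset's metadata or annotations.
records = [
    {"id": 1, "group": "A"},
    {"id": 2, "group": "A"},
    {"id": 3, "group": "B"},
    {"id": 4, "group": "A"},
    {"id": 5, "group": "C"},
]

counts = Counter(r["group"] for r in records)
total = sum(counts.values())

# Flag any group whose share of the data falls outside a chosen tolerance
# around an even split; the 50% tolerance is arbitrary and illustrative.
expected_share = 1 / len(counts)
tolerance = 0.5

for group, count in counts.items():
    share = count / total
    if abs(share - expected_share) > tolerance * expected_share:
        print(f"Group {group}: {share:.0%} of samples "
              f"(expected ~{expected_share:.0%}) -- review sampling")
    else:
        print(f"Group {group}: {share:.0%} of samples -- within tolerance")
```

Running the sketch on this toy data flags group A as over-represented (60% of samples against an expected share of roughly 33%), the kind of imbalance such a review is meant to surface before training.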

Cognitive biases, shaped by a person’s experiences and judgments when processing information, can also lead a system to favour a particular community or group of people.
