

Is AI biased?

Zain Jaffer is a real estate and property tech investor who sold his mobile ad startup Vungle in 2019 to private equity firm Blackstone.

ChatGPT’s growth has been nothing short of spectacular. It was launched in November 2022, toward the tail end of the COVID pandemic. Two months later, in January 2023, it had already reached an estimated 100 million monthly active users, according to Reuters.

Fast forward to 2024, and development in the Artificial Intelligence (AI) space is still blazing ahead. Google’s own AI project, initially called Bard and now called Gemini, has displayed impressive capabilities of its own. Large Language Models (LLMs) from companies like OpenAI, Google, Anthropic, and others now let you hold human-like conversations with them.
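
To make that concrete, here is a minimal sketch of what such a conversation looks like in code, using OpenAI’s Python SDK as one example. The model name and prompts are illustrative assumptions, and the other vendors expose similar chat APIs.

```python
# A minimal sketch of a multi-turn chat with an LLM, using the OpenAI
# Python SDK as one example. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation is just a list of role-tagged messages; the model
# sees the full history on every call, which is what makes the
# exchange feel like a continuous human conversation.
history = [{"role": "system", "content": "You are a helpful assistant."}]

for question in ["What is the Turing Test?", "Who was it named after?"]:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model works here
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```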

On the media front, AI applications like Sora can generate almost photorealistic video snippets from text descriptions like “a gorilla in a suit walking on Fifth Avenue in New York City,” and image generators like Midjourney do the same for still pictures.

There is a concept in computer science called the Turing Test, named after the celebrated British computer scientist and World War II codebreaker Alan Turing. Put simply, if you can no longer tell whether the entity you are speaking with behind a wall or curtain is a person or a machine, then that machine (if it is one) is said to have passed the Turing Test.

This pace of innovation has brought both new possibilities and new fears. There are fears of AIs becoming “sentient” (self-aware) and deciding to get rid of humans, probably borne of watching old films like The Terminator and WarGames. There are also more grounded fears of massive job loss, as AI can now match or exceed human performance at a growing number of tasks.

For example, in February 2024 the Swedish fintech company Klarna said that an adapted version of ChatGPT had, in one month of testing, handled 2.3 million conversations, two thirds of its monthly customer service volume and the equivalent work of 700 human agents. The chatbot resolved customer issues faster and more accurately, could work 24/7, 365 days a year, and could converse in 35 languages.

Whether the fear is of human extinction or of job loss, it may have some basis.

Now a new fear has suddenly turned into reality. When users tested Google’s Gemini AI application, they noticed that it refused to render historical figures like Adolf Hitler and the Founding Fathers as white or Caucasian. Instead, it displayed inaccurate depictions of these figures as people of color, or even of a different gender.

This is a problem. If people cannot get accurate results from an AI, they will most likely stop using it. Even Elon Musk pointed out the same in a March 2024 tweet.

The problem with these powerful, dominant tech giants is that they use their scale and heft to influence society. There is nothing wrong with that if it is done openly and fairly, but altering history to suit an agenda is simply wrong. After this came out in the press, Google acknowledged the problem in a statement:

It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive. We’re grateful for users’ feedback and are sorry the feature didn't work well.

We’ve acknowledged the mistake and temporarily paused image generation of people in Gemini while we work on an improved version.

Still, it is hard to believe that a product of this magnitude and importance from a global tech leader did not undergo management and engineering reviews. The episode may also reveal a cultural bias within the company that outsiders like you and me may not care for. When we search for something on the web, for example, we want accurate answers, not a sermon on what the world ought to be.

One of the problems in the world today is that tech giants like Microsoft, Google, Facebook, and other household tech names control so much of our data that it has become effectively theirs to exploit. It would be good if they acted altruistically, but as this image rendering fiasco suggests, they have an agenda they want to push.

Companies Need To Look At Decentralized Artificial Intelligence

So how can ordinary individuals and small companies ensure that the LLM, chatbot, or AI video generation tool they use does not force its own viewpoint on them?

One solution worth looking at is Decentralized Artificial Intelligence, or Decentralized AI. This simply means that instead of a tech giant owning your data and treating you and your data as the product, communities of people band together and develop community-based AI software. These groups are often coordinated using a crypto token.
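
The core idea is that no single operator controls the answer. Here is a hypothetical sketch of what that could look like in practice: the same prompt is sent to several independently run model endpoints and the answers are compared. The node URLs and the request format are invented for illustration, not any real network’s API; an actual decentralized AI project would define its own discovery and payload conventions.

```python
# Hypothetical sketch: query several independently run AI nodes and
# cross-check their answers. URLs and payload format are illustrative
# assumptions, not a real network's API.
import json
from urllib.request import Request, urlopen

# Imaginary community-run nodes; a real decentralized network would
# discover these on-chain or via the project's registry.
NODES = [
    "https://node-a.example-decentralized-ai.net/v1/completions",
    "https://node-b.example-decentralized-ai.net/v1/completions",
    "https://node-c.example-decentralized-ai.net/v1/completions",
]

def ask(node_url: str, prompt: str) -> str:
    """Send one prompt to one node and return its text answer."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    req = Request(node_url, data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=30) as resp:
        return json.load(resp)["text"]

prompt = "Describe the signing of the US Constitution."
answers = [ask(url, prompt) for url in NODES]

# Because the nodes are independently operated, an outlier answer
# (say, one node's altered version of history) stands out on comparison.
for url, answer in zip(NODES, answers):
    print(f"{url}\n  {answer}\n")
```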

Perhaps this is a call for you to consider these lesser-known AI tools from decentralized communities as an alternative to Big Tech. That is, unless you want to live in an AI world where the Sun revolves around the Earth and the Moon landings never happened.
