Does AI Reduce Human Bias, or Does It Reinforce the Bias Hidden in Data?

AI is often expected to “eliminate human bias and make fair, objective decisions.” This is largely because we tend to view AI as a cool-headed tool, free from emotions or personal preconceptions.
However, in reality, AI learns from data created by humans. If that data contains bias, AI may not reduce it but instead amplify it. In fact, multiple incidents and studies have clearly demonstrated this.
In St. Louis County, Missouri, during a 2021 assault investigation, Mr. G was misidentified as a suspect by facial recognition AI and was subsequently incarcerated for more than 16 months.
This happened despite clear contradictions with DNA evidence and his alibi, because investigators placed too much trust in the AI's output.
Rather than delivering a fair judgment, AI ended up reproducing existing “racial bias” in society—stripping a person of his freedom in the process.
In March 2025, several news outlets reported that OpenAI's video generation tool Sora repeatedly reinforced stereotypes. For example:

- CEOs and professors were depicted as men
- Receptionists and clerical staff were depicted as women
- People with disabilities were always shown in wheelchairs
- Overweight individuals were almost never portrayed as "running"
At first glance, these might appear to be “everyday scenes,” but they do not necessarily reflect reality. Such tendencies risk unconsciously spreading cultural stereotypes.
For developers integrating AI into products and services, bias is a practical issue.
Imagine building a web application that generates content to display to users.
If biased text or images are generated, passing them directly to the UI is dangerous. We need to anticipate such risks in advance and build in safeguards. For example:

- Reviewing AI outputs before showing them to users
- Logging results so problems can be identified and fixed later
- Clearly labeling content: "This content is AI-generated and may not be fully fair or unbiased"
These are not advanced techniques but rather practices similar to validation and error handling—things web developers already do regularly.
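As a minimal sketch of those three practices in a web backend, the Python snippet below screens an AI output before it reaches the UI, logs the result for later auditing, and attaches a disclaimer. The `FLAGGED_TERMS` blocklist and the function names are hypothetical stand-ins; a real system would use human review, a moderation API, or a trained classifier instead of a keyword check.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_content")

# Hypothetical blocklist standing in for a real moderation check.
FLAGGED_TERMS = {"slur_example", "stereotype_example"}

DISCLAIMER = "This content is AI-generated and may not be fully fair or unbiased."

@dataclass
class ReviewedContent:
    text: str            # the content to display (empty if withheld)
    flagged: bool        # whether the review step caught a problem
    disclaimer: str = DISCLAIMER

def review_ai_output(text: str) -> ReviewedContent:
    """Screen AI-generated text before passing it to the UI.

    A naive keyword check stands in for the real review step.
    """
    flagged = any(term in text.lower() for term in FLAGGED_TERMS)
    # Log every result so problems can be identified and fixed later.
    logger.info("ai_output flagged=%s length=%d", flagged, len(text))
    # Withhold flagged content; always carry the disclaimer to the UI.
    return ReviewedContent(text="" if flagged else text, flagged=flagged)
```

The point is not the keyword check itself but the shape of the pipeline: review, log, label. Each step mirrors validation and error handling patterns developers already apply to user input.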
AI does have the potential to reduce bias. For example, in medicine, AI can analyze vast amounts of image data and provide consistent diagnoses, helping reduce missed detections of serious conditions such as cancer or tumors.
At the same time, biases hidden in data—and the tendency of humans to treat AI outputs as “absolute”—pose the risk of amplifying human prejudice.
At least for now, AI is not a magical fairness machine but more like a mirror reflecting society as it is. What shows up in that mirror depends on the data we provide and how we choose to use it.
Neither trusting AI blindly nor doubting it excessively is the answer.
Striking this balance may be the kind of human wisdom we most need in the age of AI-driven decision-making.
The examples and perspectives shared here are just a few among many. I'm still learning myself, but I believe it's important not to assume that AI is always right, and to pause and think critically with our own judgment.