April 10, 2025

Racial Bias in AI: How Algorithms Are Failing Black People

This post was originally published on Defender Network
By ReShonda Tate
Artificial intelligence was once heralded as the great equalizer—promising efficiency, objectivity and progress. But for many African Americans, the growing influence of AI has exposed a much darker reality: algorithms that perpetuate the very racism they were supposed to eliminate.
From facial recognition misfires to discriminatory hiring systems and over-policing through predictive technology, many in the Black community are bearing the brunt of AI’s biases. And experts say it’s not accidental—it’s built into the system.
“AI systems learn from data—and that data reflects our society’s biases,” says Dr. Joy Buolamwini, founder of the Algorithmic Justice League. “If you train an algorithm on a flawed history, it will replicate those injustices.”
AI models are developed using massive datasets, often pulled from historical records, social media and even government databases. But when those sources contain racial disparities—such as disproportionate policing or underrepresentation in high-wage jobs—the AI absorbs and amplifies those inequities.
“These systems are tested in sanitized labs, not real-world environments where racial complexity exists. And when they fail, Black people pay the price,” Buolamwini said.
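To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It uses entirely synthetic data and does not model any real recognition system, vendor, or dataset; it only illustrates how a model trained on data dominated by one group can end up with far higher error rates for an underrepresented group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic features for one demographic group; `shift` stands in for
    # real-world variation that the training data may not cover.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)  # ground-truth labels
    return X, y

# Group A dominates the training data; Group B is badly underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equal-sized test samples from each group.
for name, shift in [("Group A", 0.0), ("Group B", 1.5)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name} error rate: {1.0 - model.score(Xt, yt):.1%}")

Nothing in the sketch singles out either group, yet the printed error rate for the underrepresented group comes out many times higher than for the dominant one. The disparity arises entirely from the imbalance in the training data—the dynamic Buolamwini describes.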
Facial recognition technology is under increasing scrutiny for its alarming inaccuracy in identifying Black individuals—errors that have already led to wrongful arrests and widespread concern.
Detroit resident Robert Williams knows firsthand the devastating impact of faulty facial recognition. 

“A computer said I stole something I had nothing to do with. It turned my life upside down,” said Williams, whose case has been taken up by the ACLU. “I never thought I’d have to explain to my daughters why daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”
A study by the MIT Media Lab revealed that commercial facial recognition systems misidentified darker-skinned individuals, particularly Black women, at rates far higher than for white men. One system misclassified dark-skinned women 34% of the time, compared to just 0.8% for light-skinned men.
Last year, civil rights advocates in Houston raised concerns after the City Council approved a $178,000 contract with Airship AI Holdings, Inc. The deal added a 64-camera network with facial recognition capabilities to the Houston Police Department’s surveillance tools.
Texas Southern University professor Carroll Robinson, a former Houston City Council member, warned of the risks.
“Some innocent person, misidentified, not by a human, but by a camera, ends up in the criminal justice system, incarcerated at the county jail,” Robinson said.
Robinson has called for state legislation to ensure artificial intelligence systems do not perpetuate racial discrimination.
The technology’s failings extend beyond policing. Amazon’s facial recognition system, Rekognition, notoriously misclassified Oprah Winfrey as male and falsely matched 28 members of Congress with criminal mugshots in a test by the ACLU.
A more recent study by the National Institute of Standards and Technology, part of the U.S. Commerce Department, echoed these concerns. It found that facial recognition systems were far more likely to falsely match two different Black faces than two different white faces; error rates for African men and women were orders of magnitude higher than for Eastern Europeans, who had the lowest error rates.
These disparities stem from how AI systems are trained. 
“Algorithms are only as good as the data we feed them,” says Buolamwini. “When those datasets are dominated by white male faces, the systems struggle to identify anyone who doesn’t fit that mold.”
Buolamwini learned this firsthand as a student. While working on a robotics project that used computer vision, she discovered the robot couldn’t detect her face until she put on a white mask.
Activists and civil rights groups are pushing back. Buolamwini’s Algorithmic Justice League is calling for legislation that enforces transparency in AI systems, mandates third-party audits and prohibits the use of certain technologies—like facial recognition—in policing altogether.
There are signs of progress: some local governments have banned facial recognition technology, and some companies are beginning to reevaluate their tools.
While much of the conversation centers on the harm AI causes, Black technologists are also reimagining what equitable AI could look like.
Organizations like Black in AI, Data for Black Lives and the Algorithmic Justice League are creating spaces where Black developers, ethicists and data scientists are taking the lead.
“Our taxpayer dollars should not go toward surveillance technologies that can be abused to harm us, track us wherever we go, and turn us into suspects simply because we got a state ID,” the ACLU said in a statement.

