
Can AI Fix the Racial Wealth Gap?

Learn how companies can mitigate the risks of artificial intelligence-driven bias and even improve corporate diversity.

By Robin Meyerhoff | 8 min read

Artificial intelligence (AI) is often touted as an equalizing technology that removes bias from decisions – but it has not always lived up to its promise. Examples of AI gone awry are easy to find. Google made headlines in 2015 when its Google Photos image-recognition feature labeled photos of Black people as gorillas. And Amazon had to cancel a project to build an AI-driven recruiting engine after it became apparent that the tool preferred men over women.


The problem is that AI is only as good as the data it learns from, and algorithmic bias can be inadvertently introduced at any point in the development process. Historical data sets used to train the tool may be biased to begin with. And outcome validation by testers during the development process can skew the way the AI learns.


But rather than rejecting the use of AI, a community of academics and tech companies are using it to improve corporate diversity by proactively removing cues that may work against traditionally marginalized people. Their goal is to create systems that truly level the playing field, from recruitment through hiring and on to promotion, providing underrepresented groups access to higher-paying jobs commensurate with their skills and education.

The racial wealth gap

Researchers say lack of access to those higher-paying jobs has long-term implications for the underlying U.S. economy. A 2020 Brookings Institution study found that the net worth of a typical white family is nearly 10 times greater than that of a Black family. And a 2019 McKinsey study found that the wealth gap “contributes to intergenerational economic precariousness.” When properly implemented, AI can reduce workplace racial inequality and help narrow unequal access to wealth along racial lines.


More than two-thirds of middle-class Black children, the McKinsey study predicted, are likely to fall out of the middle class as adults. Their loss could cost the U.S. economy as much as US$1.5 trillion in reduced consumption and investment – as much as 6% of the U.S. GDP – by 2028.


Black people still face considerable challenges in corporate America. A study by Coqual, a think tank and consulting group dedicated to workplace equality, found that Black professionals are generally more ambitious in their careers than their white counterparts, yet make up only 8% of all professionals and 3.2% of executives or senior-level managers.

Barriers to professional jobs

The barriers to getting into well-paid, professional jobs are significant. Judith Williams, chief diversity and inclusion officer at SAP, says education isn’t the issue. Black students in the United States complete high school at a nearly equal rate to that of white students – and college graduation is increasing, albeit more slowly. “You need to look at diversity in the talent pipeline – who gets funding and who goes into the [technology] field,” Williams says.


Peter Bergman, professor of economics and education at Columbia University, believes that achieving educational parity helps but doesn’t remove all the barriers to financial advancement. “You could take a Black student who lives next door to a white family. Both parents are high earners making the same amount of money, the children live in the same place and attend the same schools – but there will be a great disparity in their outcomes,” Bergman says.


Social networks, which Bergman points out are not meritocracies, play a critical role in people’s ability to get and keep well-paying jobs or receive promotions. Similarly, algorithms may encode existing bias in new-hire screening processes.




Is AI bias-free?

“A lot of companies using algorithms as part of their screening processes make claims about fairness and lack of bias that aren’t supported by the evidence,” Bergman says. “For example, they’ll say it’s bias-free because gender or race isn’t included. But that doesn’t mean it lacks bias. You could have two people of equal talent from different groups and the hiring manager will likely interview someone from one group over another.”



Bergman says this happens because of how variables are correlated. “If race isn’t indicated as a quality to consider, the algorithm latches onto other available variables like the university you attended, job experience, and location.” Those factors often correlate to race and impact who gets a job interview and who doesn’t.
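The proxy effect Bergman describes can be shown in a minimal sketch on invented synthetic data: even though the model below never sees group membership, a correlated location flag reproduces the disparity. The field names, probabilities, and scoring rule are all assumptions made for illustration, not any vendor’s actual system.

```python
import random

random.seed(0)

def make_applicant():
    # Hypothetical synthetic data: group A applicants are more likely
    # to live in locations the model has historically favored.
    group = random.choice(["A", "B"])
    favored_zip = random.random() < (0.8 if group == "A" else 0.2)
    skill = random.gauss(50, 10)  # identical skill distribution for both groups
    return {"group": group, "favored_zip": favored_zip, "skill": skill}

def screen(applicant):
    # The screening score never uses `group` -- only skill and location.
    # Location acts as a proxy for group, so bias survives the omission.
    return applicant["skill"] + (10 if applicant["favored_zip"] else 0)

pool = [make_applicant() for _ in range(10_000)]
avg = {
    g: sum(screen(a) for a in pool if a["group"] == g)
       / sum(1 for a in pool if a["group"] == g)
    for g in ("A", "B")
}
print(avg)  # group A scores noticeably higher despite equal skill
```

Dropping the protected attribute changes nothing here; the correlated feature carries the same signal, which is exactly why “we don’t use race” is not evidence of fairness.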


To move beyond this effect, companies must consider different data. If a company that hasn’t traditionally hired people of color relies on historical hiring data, that data will likely only exacerbate the problem.


Bergman recommends training algorithms to explore different candidates, similar to how Netflix suggests shows for us to watch. It doesn’t just rely on viewing history – it also offers completely different shows to see if a viewer is open to other types of content.


“There are a lot of parts to hiring beyond the algorithm. So it’s not just about changing it,” Bergman says. “You have to explicitly incorporate exploration into the algorithm and apply that to the hiring framework.”
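One common way to “explicitly incorporate exploration,” in the spirit of Bergman’s Netflix analogy, is an epsilon-greedy slate: most interview slots go to the model’s top-scored candidates, while a fixed fraction is reserved for candidates the ranking would otherwise never surface. This is a hedged sketch under assumed parameters (five slots, epsilon of 0.2), not a description of any real hiring system.

```python
import random

random.seed(1)

def interview_slate(candidates, slots=5, epsilon=0.2):
    """Fill most slots by score (exploitation), but reserve a fraction
    (epsilon) for randomly drawn candidates the model would not have
    surfaced -- analogous to a recommender testing unfamiliar content."""
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    n_explore = max(1, int(slots * epsilon))   # at least one exploratory slot
    slate = ranked[: slots - n_explore]        # top-scored candidates
    slate += random.sample(ranked[slots - n_explore:], n_explore)
    return slate

candidates = [{"id": i, "score": random.random()} for i in range(100)]
slate = interview_slate(candidates)
print([c["id"] for c in slate])
```

The exploratory slots generate new outcome data about candidates the model is uncertain about, which can then feed back into retraining rather than letting historical rankings calcify.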


Bergman believes that approach must also be applied to digital HR systems that track data regarding the entire employee lifecycle, from application and interview through hiring and retention. He says, “That’s what we’re hoping to do – develop an evidence-based approach to creating a diverse workplace culture.”



How to override AI bias

Appen and Writer are two companies that train algorithms to help organizations weed out racism and discrimination in corporate culture.


Sydney-based Appen provides training data for AI that helps many of the world’s biggest technology companies, including Amazon and Microsoft, develop machine learning, speech recognition, and computer vision algorithms.


Appen CEO Mark Brayan says the company aims to eliminate two sources of bias. The first is historical data that tends to exclude certain populations. For example, insurance claim payout models may be based on 100 years’ worth of policies sold to white men – which Appen rectifies by providing a more diverse data set.


The second is bias introduced by the humans that interact with the data. “Humans need to transcribe and annotate data, which brings in their perspective and cultural context,” Brayan says. “People score data’s relevance, which can drift if diverse viewpoints are not represented.” To correct this, Appen uses crowdsourcing to ensure data workers come from a variety of backgrounds, tapping into a network of more than one million contractors globally.


“People are subject to different conditions, and relevance can be impacted by time, culture, etc.,” Brayan says. “Our crowd helps customers map to users’ diversity and demography – and provides unbiased data.”


Writer, a San Francisco-based startup with customers including Twitter, Discovery Channel, and Intuit, builds AI tools that align language with diversity and inclusion efforts. “People have the best of intentions but don’t always know the correct way to speak about or to diverse groups. We help them do what they intend to do,” says May Habib, co-founder and CEO of Writer.



Writer’s tools work by training algorithms on language guidelines created by marginalized groups. “We build on the work done by underrepresented communities. People have spent their careers developing inclusive language and we help it go mainstream,” Habib says. “Alternative suggestions are key and that’s where AI comes in. Writer sits in people’s browsers to make sure that people are being sensitive and creating environments of belonging.”


For example, technology companies have come under fire for not creating inclusive cultures. Product developers use terms like “whitelist” and “blacklist,” “master” and “slave,” and “dummy.” Habib says, “We help product teams say what they mean – we suggest replacing ‘dummy value’ with ‘placeholder value,’ or ‘blacklist’ with ‘deny list.’”
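The kind of suggestion engine Habib describes can be sketched as a simple term lookup. Writer’s actual product is proprietary and far more sophisticated; the mapping below only encodes the example terms quoted in the article, and the function name is invented.

```python
import re

# Term map built from the examples quoted above -- an assumption for
# illustration, not Writer's actual rule set.
SUGGESTIONS = {
    "blacklist": "deny list",
    "whitelist": "allow list",
    "master": "primary",
    "slave": "replica",
    "dummy": "placeholder",
}

def suggest(text):
    """Return (term, replacement) pairs for flagged terms found in `text`."""
    hits = []
    for term, replacement in SUGGESTIONS.items():
        if re.search(rf"\b{term}\b", text, re.IGNORECASE):
            hits.append((term, replacement))
    return hits

print(suggest("Add the host to the blacklist and set a dummy value."))
# -> [('blacklist', 'deny list'), ('dummy', 'placeholder')]
```

A production tool would also need context awareness (for example, “master’s degree” should not be flagged), which is where the machine learning Habib mentions goes beyond a static lookup.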


While AI can foster more inclusive hiring and employment practices, truly addressing the wealth gap requires companies to profoundly shift corporate culture. AI can help, but the rest relies on organizational leaders’ will and appetite to enact long-term, sustainable change.

Gaps still persist

Even Silicon Valley tech companies have a long way to go. A study by researchers at New York University’s Stern School of Business found that Black, Hispanic, and Asian applicants are 8% to 13% less likely than white applicants to receive a callback for positions at tech companies. These gaps persist throughout the interview and offer processes.


Bergman explains: “If people really cared about these issues, they’d build systems that create data about the workplace and rigorously evaluate it. They’d put their money where their mouth is and use that evidence to understand the culture better.”


He adds that culture is multidimensional and complex. Multinational corporations, for example, need to address how Blackness is understood as a social construct in different parts of the world. He recommends that organizations approach culture from multiple angles and understand the various ways it can be measured.


Embracing the approaches recommended by Bergman and others gives companies a path to turn well-meaning support for racial diversity into significant change that can create equal opportunities in professional fields – and do their part towards closing the wealth gap.

Meet the Authors

Robin Meyerhoff
Technology Communications Professional | SAP
