How AI Can End Bias
By Fawn Fitter, Steven T. Hunt
We humans make sense of the world by looking for patterns, filtering them through what we think we already know, and making decisions accordingly. When we talk about handing decisions off to AI, we expect it to do the same, only better.
Machine learning does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information. Artificial intelligence (AI), on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.
In other words, AI has the potential to help us avoid bias in hiring, operations, customer service, and the broader business and social communities – and doing so makes good business sense. For one thing, even the most unintentional discrimination can cost a company significantly, in both money and brand equity. The reputational damage of having to defend against an accusation of bias can linger long after the issue itself is settled.
Beyond managing risk related to legal and regulatory issues, though, there’s a broader argument for tackling bias: in a relentlessly competitive and global economy, no organization can afford to shut itself off from broader input, more varied experiences, a wider range of talent, and larger potential markets.
That said, the algorithms that drive AI don’t reveal pure, objective truth just because they’re mathematical. Humans must tell AI what they consider suitable, teach it which information is relevant, and insist that the best outcomes – ethically, legally, and, of course, financially – are those free from bias, whether conscious or not. That’s the only way AI can help us create systems that are fairer, more productive, and ultimately better for both business and the broader society.
Bias: Bad for business
When people talk about AI and machine learning, they usually mean algorithms that learn over time as they process large data sets. Organizations that have gathered vast amounts of data can apply these algorithms, and the sophisticated mathematical models they produce, to predict future outcomes such as fluctuations in the price of materials or traffic flows around a port facility. Computers are ideally suited to processing these massive data volumes to reveal patterns and interactions that might help organizations get ahead of their competitors. As we gather more types and sources of data with which to train increasingly complex algorithms, interest in AI will become even more intense.
Using AI for automated decision-making is becoming more common, at least for simple tasks, such as recommending additional products at the point of sale based on a customer’s current and past purchases. The hope is that AI will be able to take on the process of making increasingly sophisticated decisions, such as suggesting entirely new markets where a company could be profitable or finding the most qualified candidates for jobs by helping human resources (HR) look beyond the expected demographics.
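To make the simple case concrete, here is a minimal sketch of a co-occurrence recommender of the “customers also bought” variety. The products and baskets are hypothetical, and a production system would work from far larger transaction logs:

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction history: each inner list is one customer's basket.
transactions = [
    ["laptop", "mouse", "laptop bag"],
    ["laptop", "mouse"],
    ["monitor", "mouse", "keyboard"],
    ["laptop", "laptop bag"],
]

# Count how often each pair of products is bought together.
pair_counts = Counter()
for basket in transactions:
    for a, b in combinations(sorted(set(basket)), 2):
        pair_counts[(a, b)] += 1

def recommend(product, top_n=2):
    """Suggest the products most often bought alongside `product`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("laptop"))  # e.g. ['mouse', 'laptop bag']
```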
As AI takes on these increasingly complex decisions, it can help reduce conscious and unconscious bias. By exposing a bias, algorithms allow us to lessen the impact of that bias on our decisions and actions. They enable us to make decisions that reflect objective data instead of untested assumptions, reveal imbalances, and alert people to their cognitive blind spots so they can make more accurate, unbiased decisions.
Imagine, for example, a major company that realizes that its past hiring practices were biased against women and that it would benefit from having more women in its management pipeline. AI can help the company analyze its past job postings for gender-biased language, which might have discouraged some applicants. Future postings could be more gender neutral, increasing the number of female applicants who get past the initial screenings.
AI can also support people in making less-biased decisions. For example, a company is considering two candidates for an influential management position: one man and one woman. The final hiring decision lies with a hiring manager who, when they learn that the female candidate has a small child at home, assumes that she would prefer a part-time schedule.
That assumption may be well intentioned, but it runs counter to the outcome the company is looking for. AI could apply corrective pressure by reminding the hiring manager that, all qualifications being equal, the female candidate is an objectively good choice who meets the company’s criteria. The hope is that the hiring manager will realize their unfounded assumption and remove it from their decision-making process.
At the same time, by tracking the pattern of hiring decisions this manager makes, the AI could alert them – and other people in HR – that the company still has some hidden biases against female candidates to address.
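One common screen for this kind of pattern is the “four-fifths rule” used in U.S. employment guidelines: if one group’s selection rate falls below 80% of the most-favored group’s, the disparity warrants review. A minimal sketch, with a hypothetical log of one manager’s decisions:

```python
from collections import defaultdict

# Hypothetical log of one manager's hiring decisions.
decisions = [
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
]

# Selection rate per group: hires divided by applicants.
totals, hires = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["gender"]] += 1
    hires[d["gender"]] += d["hired"]

rates = {g: hires[g] / totals[g] for g in totals}
best = max(rates.values())

# Four-fifths rule: flag any group whose selection rate is
# below 80% of the most-favored group's rate.
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Review needed: {group} selection rate {rate:.0%} "
              f"vs. best rate {best:.0%}")
```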
Look for where bias already exists
In other words, if we want AI to counter the effects of a biased world, we have to begin by acknowledging that the world is biased. And that starts in a surprisingly low-tech spot: identifying any biases baked into your own organization’s current processes. From there, you can determine how to address those biases and improve outcomes.
There are many scenarios where humans can collaborate with AI to prevent or even reverse bias, says Jason Baldridge, a former associate professor of computational linguistics at the University of Texas at Austin and now a research scientist at Google, where he works on natural language understanding. In the highly regulated financial services industry, for example, Baldridge says banks are required to ensure that their algorithmic choices are not based on input variables that correlate with protected demographic variables (like race and gender). The banks also have to prove to regulators that their mathematical models don’t focus on patterns that disfavor specific demographic groups, he says. What’s more, they have to allow outside data scientists to assess their models for code or data that might have a discriminatory effect. As a result, banks are more evenhanded in their lending.
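A first pass at that kind of check can be as simple as measuring how strongly each input variable correlates with a protected attribute. The sketch below assumes a pandas DataFrame with hypothetical column names and an illustrative review threshold; real compliance audits are far more rigorous:

```python
import pandas as pd

# Hypothetical loan-application data; `gender` is the protected
# attribute and must not leak into the model through proxies.
df = pd.DataFrame({
    "gender":       [0, 0, 0, 1, 1, 1, 0, 1],
    "income":       [40, 55, 62, 48, 51, 45, 70, 44],
    "zip_density":  [0.9, 0.8, 0.85, 0.2, 0.25, 0.3, 0.7, 0.15],
    "years_on_job": [2, 5, 7, 3, 6, 4, 8, 2],
})

protected = "gender"
candidate_features = [c for c in df.columns if c != protected]

# Flag features whose correlation with the protected attribute
# exceeds a (hypothetical) review threshold.
THRESHOLD = 0.5
for feature in candidate_features:
    corr = df[feature].corr(df[protected])
    if abs(corr) > THRESHOLD:
        print(f"{feature}: correlation {corr:+.2f} with {protected} "
              f"- review as a possible proxy variable")
```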
How to prevent artificial intelligence bias
The reason for these checks and balances is clear: the algorithms that drive AI are built by humans, and humans choose the data with which to shape and train the resulting models. Because humans are prone to bias, we have to be careful that we are neither simply confirming existing biases nor introducing new ones when we develop AI models and feed them data.
“From the perspective of a business leader who wants to do the right thing, it’s a design question,” says mathematician Cathy O’Neil, founder of a consulting firm that helps organizations manage and audit their algorithmic risks and author of the best-selling book Weapons of Math Destruction. “You wouldn’t let your company design a car and send it out in the world without knowing whether it’s safe. You have to design it with safety standards in mind,” she says. “By the same token, algorithms have to be designed with fairness and legality in mind, with standards that are understandable to everyone, from the business leader to the people being scored.”
To eliminate bias, you must first make sure that the data you’re using to train the algorithm is itself free of bias – or, rather, that the algorithm can recognize bias in that data and bring it to a human’s attention. For example, companies today know not to include language as overtly discriminatory as “No women need apply,” but, deliberately or otherwise, they still use phrases like “outspoken” and “aggressively pursuing opportunities,” which research shows attract male job applicants and deter female ones, and words like “caring” and “flexible,” which do the opposite.
Once humans categorize this language and feed it into an algorithm, the AI can learn to flag words that imply bias and suggest gender-neutral alternatives. Unfortunately, this de-biasing process currently requires too much human intervention to scale easily, but as the amount of available de-biased data grows, this will become far less of a limitation in developing AI for HR.
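A minimal sketch of that flag-and-suggest pass might look like the following. The word list and substitutions are illustrative stand-ins, not a vetted research lexicon:

```python
import re

# Illustrative word list; a production system would use a vetted,
# research-backed lexicon rather than this toy sample.
GENDER_CODED = {
    "outspoken": "communicates clearly",
    "aggressively": "proactively",
    "dominant": "leading",
    "caring": "supportive",
}

def flag_biased_language(posting: str):
    """Return (word, suggestion) pairs for flagged terms in a posting."""
    findings = []
    for word, suggestion in GENDER_CODED.items():
        if re.search(rf"\b{re.escape(word)}\b", posting, re.IGNORECASE):
            findings.append((word, suggestion))
    return findings

posting = "We want an outspoken self-starter, aggressively pursuing opportunities."
for word, suggestion in flag_biased_language(posting):
    print(f"Flagged '{word}' - consider '{suggestion}'")
```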
Similarly, companies should look for specificity in how their algorithms search for new talent. According to O’Neil, there’s no one-size-fits-all definition of the best engineer; there’s only the best engineer for a particular role or project at a particular time. That’s the needle in the haystack that AI is well suited to find.
Look beyond the obvious
AI could be invaluable in radically reducing deliberate and unconscious discrimination in the workplace. However, the more data your company analyzes, the more likely it is that you will deal with stereotypes, O’Neil says. If you’re looking for math professors, for example, and you load your hiring algorithm with all the data you can find about math professors, your algorithm may give a lower score to a Black female candidate living in Harlem simply because there are fewer Black female mathematicians in your data set. But if that candidate has a PhD in math from Cornell, and if you’ve trained your AI to prioritize that criterion, the algorithm will bump her up the list of candidates rather than summarily ruling out a potentially high-value hire on the spurious basis of race and gender.
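One way to encode that kind of priority is to score candidates only on whitelisted, job-relevant criteria and to weight strong credentials heavily, so that a relevant PhD cannot be drowned out by demographic-correlated noise. The fields and weights below are hypothetical:

```python
# Hypothetical scoring weights: only job-relevant criteria appear;
# demographic fields and proxies (name, ZIP code) are deliberately absent.
WEIGHTS = {
    "phd_in_field": 5.0,
    "peer_reviewed_papers": 0.5,
    "years_teaching": 0.3,
}

def score(candidate: dict) -> float:
    """Weighted sum over whitelisted, job-relevant features only."""
    return sum(WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS)

candidates = [
    {"id": "A", "phd_in_field": 1, "peer_reviewed_papers": 4, "years_teaching": 3},
    {"id": "B", "phd_in_field": 0, "peer_reviewed_papers": 10, "years_teaching": 6},
]

# The PhD holder ranks first despite fewer papers and less tenure.
for c in sorted(candidates, key=score, reverse=True):
    print(c["id"], round(score(c), 1))
```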
To further improve the odds that AI will be useful, companies have to go beyond spotting relationships between data and the outcomes they care about. It doesn’t take sophisticated predictive modeling to determine, for example, that women are disproportionately likely to jump off the corporate ladder at the halfway point because they’re struggling with work-life balance.
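Indeed, a simple cross-tabulation surfaces that kind of pattern without any model at all. The records and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical HR records: career_stage is early / mid / senior.
df = pd.DataFrame({
    "gender":       ["F", "F", "F", "F", "M", "M", "M", "M"],
    "career_stage": ["early", "mid", "mid", "senior",
                     "early", "mid", "mid", "senior"],
    "left_company": [0, 1, 1, 0, 0, 0, 1, 0],
})

# Attrition rate per career stage and gender - no predictive model required.
print(df.groupby(["career_stage", "gender"])["left_company"].mean())
```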
Many companies find it all too easy to conclude that women simply aren’t qualified for middle management when they quit those roles. However, a company committed to smart talent management will instead ask what it is about these positions that makes them incompatible with women’s lives. It will then explore what it can change so that it doesn’t lose talent and institutional knowledge that will cost the company far more to replace than to retain.
That company may even apply a second layer of machine learning that looks at its own suggestions and makes further recommendations: “It looks like you’re trying to achieve X outcome, so consider taking Y approach,” where X might be promoting more women, making the workforce more ethnically diverse, or improving retention statistics, and Y is redefining job responsibilities with greater flexibility, hosting recruiting events in communities of color, or redesigning benefits packages based on what similar companies offer.
AI context matters – and context changes
Even though AI learns – and maybe because it learns – it can never be considered “set it and forget it” technology. To remain both accurate and relevant, it has to be continually trained to account for changes in the market, your company’s needs, and the data itself.
Sources for language analysis, for example, tend to be biased toward standard American English, so if you’re building models to analyze social media posts or conversational language input, Baldridge says, you have to make a deliberate effort to include and correct for slang and nonstandard dialects. Standard English applies the word “sick” to someone having health problems, but it’s also a popular slang term for something good or impressive. Confusing the two could lead to an awkward outcome, to say the least. Correcting for that, or adding more rules to the algorithm, such as “The word ‘sick’ appears in proximity to positive emoji,” takes human oversight.
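A minimal sketch of one such human-written correction rule, treating “sick” as positive slang when it appears near a positive emoji (the emoji set and token window are illustrative choices):

```python
import re

POSITIVE_EMOJI = {"🔥", "😍", "🙌", "👍"}

def sick_is_slang(text: str, window: int = 4) -> bool:
    """Treat 'sick' as positive slang if a positive emoji
    appears within `window` tokens of it."""
    tokens = re.findall(r"\w+|\S", text.lower())
    for i, token in enumerate(tokens):
        if token == "sick":
            nearby = tokens[max(0, i - window): i + window + 1]
            if POSITIVE_EMOJI & set(nearby):
                return True
    return False

print(sick_is_slang("That demo was sick 🔥"))     # True: slang, positive
print(sick_is_slang("I was sick all last week"))  # False: health sense
```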
Today, AI excels at making biased data obvious, but that isn’t the same as eliminating it. It’s up to human beings to pay attention to the existence of bias and enlist AI to help avoid it. That goes beyond simply implementing AI to insisting that it meet benchmarks for positive impact. The business benefits of taking this step are – or soon will be – obvious.
Clearly, organizations that can address this concern explicitly will have a competitive advantage, but simply stating their commitment to using AI for good may not be enough. They may also wish to support academic efforts to research AI and bias, such as the Association for Computing Machinery’s annual Fairness, Accountability, and Transparency (FAT*) global conference, most recently held in January 2020.
O’Neil, who blogs about data science and founded the Lede Program for Data Journalism, an intensive certification program at Columbia University, is going one step further. She is attempting to create an entirely new industry dedicated to auditing and monitoring algorithms to ensure that they not only reveal bias but actively eliminate it. She proposes the formation of groups of data scientists that evaluate supply chains for signs of forced labor, connect children at risk of abuse with resources to support their families, or alert people through a smartphone app when their credit scores are used to evaluate eligibility for something other than a loan.
As we begin to entrust AI with more complex and consequential decisions, organizations may also want to be proactive about ensuring that their algorithms do good – so that their companies can use AI to do well.