
People + Technology: Rewriting the Script on Hiring Automation

By Lauren Bidwell

According to a 2017 survey from Pew Research Center, Americans are more concerned than excited about emerging automation technologies. Many people don’t look forward to a world in which machines do all of the thinking and work.

Of all the technologies considered in the survey, the one people want least is hiring automation technology. In fact, only 22 percent of respondents reported feeling enthusiastic about the development of hiring algorithms, and 76 percent of respondents said they would not want to apply for a job that used a computer program to select applicants, due mostly to beliefs that “computers can’t capture everything about an applicant” and that algorithms are “too impersonal.” This distrust of hiring automation is particularly striking when compared to attitudes toward things like “a future where robots and computers can do many human jobs” (33 percent of respondents were enthusiastic about this) or “the development of robot caregivers for older adults” (44 percent were enthusiastic).

It seems people are more comfortable with the idea of a robot taking care of their grandparents than having a computer interview them for a job. Clearly, there is a personal element to the hiring process that people perceive to be important.

As one survey respondent elaborated, “A lot depends on the sophistication of the program, but I do believe that hiring people requires a fair amount of judgment and intuition that is not well automated. I feel that I have a lot of skills that would be hard to quantify and require that kind of in-depth thinking and judgment.”

This is not an unreasonable evaluation. Machine learning algorithms operate on a set of mathematical rules that are not designed to take “intuition” into account. They are, by definition, “cold, inhuman, calculating machines.” What is interesting is that people would rather be judged by human intuition than by mathematical formulas, even though there are well-documented problems with hiring managers relying on their intuitions during the hiring process. Chief among them: humans have an inherent bias in favor of people who are similar to them.

Psychological research suggests that we favor people who look, think, and/or act like us over people who don’t — a phenomenon known as the “similar-to-me effect.” For example, research has shown that employers are more likely to hire a candidate if the candidate is competent and more culturally similar, and that both black and white raters are willing to give higher performance ratings to employees who share their same race. An increasing emphasis on culture fit as a hiring priority in recent years has also garnered much attention from experts who believe this criterion identifies a fit between the interviewer and interviewee, rather than a fit between the interviewee and the organization.

A major advantage of using technology is the ability to detect these types of bias. However, the strong aversion people feel toward hiring algorithms and workplace automation suggests there is a level of misunderstanding and mistrust regarding this technology that business leaders must address.

Here are a few methods of doing so:

1. Emphasize a ‘people + technology’ philosophy

Could talent decisions that are now the responsibility of people become the responsibility of machines? Probably, but most organizations don’t adopt machine learning in order to replace humans with machines. Rather, they adopt machine learning to give humans better data with which to make better decisions.

Companies should view hiring automation technology as an aid to hiring decisions rather than a replacement for humans, and they should clearly communicate this philosophy to candidates and employees.

2. Recognize that not all hiring decisions should be automated

Automation can save time and increase decision accuracy, but efficiency and accuracy shouldn’t be the only factors considered when determining whether to use hiring automation technology.

As Google’s Head of People Analytics Prasad Setty describes, certain decisions are inherently more meaningful when organizational leaders are accountable for them. After Setty’s team developed a mathematical formula that could reproduce (with 90 percent accuracy) the same promotion decisions that human committees were making, the team expected committee members to be thrilled about the time- and effort-saving solution. Instead, the team found that committees hated the idea of hiding behind a formula. Setty paraphrases the committee response like so: “For such important decisions as promotions, we don’t want to stand behind a black box and say that the formula made me do so. We want to stand behind these decisions.”

Important decisions about people should be made by people. Hiring automation can and should help guide staffing decisions, but it should not be viewed as taking decision-making responsibility away from business leaders. This is critical to ensuring that employees and candidates feel fairly treated. The belief that the process behind decision outcomes is fair is a core component of procedural justice, which research has shown can positively influence job performance, job satisfaction, organizational commitment, and organizational trust.

3. Understand the limitations of automation and put appropriate oversight in place

Technology can be very useful for detecting inequities, but it is important to recognize that machines are not entirely immune from bias. After all, they are built by (sometimes biased) humans.

A 2016 ProPublica story reported on researchers who compared actual reoffending data against the predictions of COMPAS, a risk-assessment tool used by judges in bail and sentencing decisions to forecast the likelihood that an offender will reoffend. They found that black offenders were twice as likely as white offenders to be labeled “high-risk” by the system yet not actually reoffend, while white offenders were much more likely to be labeled “low-risk” yet go on to commit other crimes.
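The disparity the researchers measured is, at its core, a comparison of false positive rates across groups: how often each group is flagged “high-risk” among people who did not go on to reoffend. A minimal sketch of that kind of check, using entirely hypothetical data and a made-up grouping, might look like this:

```python
# Sketch of a per-group false-positive-rate comparison, the kind of
# disparity check described in the ProPublica analysis. All data below
# is hypothetical and for illustration only.

def false_positive_rate(predictions, outcomes):
    """Share of actual non-reoffenders (outcome 0) labeled high-risk (prediction 1)."""
    flagged_negatives = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flagged_negatives:
        return 0.0
    return sum(flagged_negatives) / len(flagged_negatives)

# Hypothetical records: prediction 1 = labeled high-risk, outcome 1 = reoffended.
groups = {
    "group_a": {"pred": [1, 1, 0, 1, 0, 1], "actual": [0, 1, 0, 0, 0, 1]},
    "group_b": {"pred": [0, 1, 0, 0, 0, 1], "actual": [0, 1, 0, 0, 1, 1]},
}

for name, data in groups.items():
    fpr = false_positive_rate(data["pred"], data["actual"])
    print(f"{name}: false positive rate = {fpr:.2f}")
```

With these toy numbers, group_a is flagged incorrectly far more often than group_b even though the overall flag counts look similar, which is exactly why aggregate accuracy alone cannot reveal this kind of bias.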

While this finding is not meant to suggest that hiring algorithms should be avoided because there is a possibility they will be biased, it does emphasize the responsibility organizations have to find an appropriate balance between people, automation technology, and decision oversight.

The emergence of automation technologies in the workplace may be inevitable, but employees’ negative perceptions of these developments don’t have to be. It is up to business leaders to foster understanding and trust among employees regarding the role, value, and limitations of automated technologies and their effect on decision-making processes in the workplace. Automation technology has tremendous potential to improve our world, but it is our responsibility to ensure that world is not one in which all decision-making power is given to machines.

About the Author

Lauren Bidwell, Ph.D.
Research Scientist, Human Capital Management Research

Dr. Lauren Bidwell is an Experimental Psychologist with a specialization in Decision Making research. Her role involves driving innovative thinking and best practices around talent management and the use of technology to support effective talent decisions. Lauren has engaged with dozens of customer organizations around the world and is an active author and presenter.

About SAP HR Research

Our research team advances the art and science of Human Capital Management by studying the relationship between technology, workforce psychology, and business performance. Our researchers and psychologists uncover trends to keep you on the cutting edge of technology and thought leadership – and help shape the design of SAP SuccessFactors HCM software.
