AI ethics rules HR leaders can take to the C-suite

HR leaders can create guardrails to gain the benefits of artificial intelligence while using it responsibly.

The question among HR leaders these days isn’t whether to adopt AI; it’s how. Given the centrality of people to the HR function, it’s imperative to use these tools responsibly.

Artificial intelligence’s value proposition is clear. An AI system can assess the environment “and take actions to maximize the chance of successfully achieving its goals,” with the ability to adapt over time as it interprets data. Generative AI (GenAI) systems can produce text, images, and varied content based on the data they are trained on.

Across the world, business leaders see AI and generative AI as a chance to gain a competitive edge, improve business processes, and maximize efficiency, and HR executives are among those exploring opportunities. A 2023 Gartner survey of 250 chief human resources officers (CHROs) found that 77% had a positive outlook on these tools and that 45% were already using them.

Among the top uses so far: talent acquisition, learning and development, and performance management, according to a 2024 Society for Human Resource Management survey of 2,366 HR professionals. An April 2024 study from Deloitte found that 75% of nearly 2,000 C-suite-level respondents said they expect generative AI to affect their talent strategies within the next two years.

But with the understandable excitement about the possibilities offered by AI, there are clear reasons for caution.

Jennifer Lauro, vice president and group counsel for HR and litigation at The Hanover Insurance Group, Inc., said there is an inherent disconnect between the binary, black-and-white thinking of technologists and the law. Because AI makes decisions based on averages, it fails to consider individual differences, which can become problematic when screening employees—especially employees with disabilities.

“A screening mechanism you might use to screen out job candidates with gaps in employment could screen out someone who has an illness that takes them out of the workplace from time to time,” she says. “All of a sudden your AI tool has put you in a situation where you may be in violation of [Title I of] the [Americans with Disabilities Act].”
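
To make that risk concrete, here is a minimal, hypothetical sketch of such a screening rule (the Candidate class, the six-month threshold, and the applicant data are all invented for illustration). A filter keyed only to gap length cannot distinguish a disability-related medical leave from any other gap:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    employment_gap_months: int  # longest gap on the résumé

# Hypothetical rule: reject anyone with a gap longer than six months.
MAX_GAP_MONTHS = 6

def naive_screen(candidates: list[Candidate]) -> list[Candidate]:
    """Keeps only candidates whose longest gap is within the threshold.
    The rule cannot tell a disability-related medical leave from any
    other gap, which is the source of the ADA exposure described above."""
    return [c for c in candidates if c.employment_gap_months <= MAX_GAP_MONTHS]

applicants = [
    Candidate("A", employment_gap_months=2),
    Candidate("B", employment_gap_months=9),  # gap caused by a medical leave
]
print([c.name for c in naive_screen(applicants)])  # ['A'] -- B is silently rejected
```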

As executives focused on both people and legal compliance, CHROs and other HR leaders are in a strong position to influence their organizations in the profitable, responsible use of AI. There are four priorities when applying AI to HR organizations and helping their enterprises create guardrails:
  • Providing an ethical framework for using AI
  • Assigning responsible leadership for ensuring regulatory compliance
  • Working closely with AI developers to mitigate the risk of bias against individuals
  • Communicating policies and procedures for AI use

Given the momentum of this technology and HR leaders’ responsibility to remain compliant with the law, there is space to experiment with AI in ways that are ethical, lawful, and supportive of a strong culture. This article delves into how.

Placing ethics front and center of AI projects in HR

Ethics is a crucial area for HR, especially given the laws governing employment in the United States and other advanced economies.

Consider candidate assessment. HR’s role is to ensure that the use of AI in assessment and selection aligns with ethical standards, respects individual rights, and promotes fairness and transparency throughout the candidate evaluation and hiring process. At a minimum, a company’s HR department should develop an ethical framework that guides the use of AI in assessment and selection.

“Balance experimentation and risk management to improve how work is done.”
Recommendation from a group of chief human resources officers gathered at Cornell University

Several companies have taken this approach. For example, SAP’s handbook, updated in August 2023, establishes three ethical pillars: human agency and oversight (to “allow human intervention for all automated decision processes”); safeguards to prevent bias that could harm a person’s rights; and transparency and explainability—ensuring that AI systems are identified and their capabilities documented.

CHROs from companies including GE, Intel Corp., Prudential Financial, IBM, and The Estee Lauder Companies published a list of principles for the responsible use of AI, which emerged from a 2024 meeting at Cornell University’s School of Industrial and Labor Relations. The principles assert that “humans should lead the way” in overseeing AI implementations and that the use of AI should “balance experimentation and risk management to improve how work is done.”

In a LinkedIn post about the principles, Christy Pambianchi, executive vice president and chief people officer at Intel, said the sooner we can adopt the best parts of this new technology and thoughtfully build the right constraints to prevent harm, the greater and more positive AI’s effect will be on the world of work.

“With AI, the question shouldn’t be how to restrict its use but how to leverage the potential it offers,” she wrote. “It is important to apply appropriate safeguards to protect both employees and the organization. But equally so is the need to engage and involve staff, provide training, and listen to their ideas and feedback—all while putting the appropriate policies in place, including how to use AI responsibly.”

The AI implementation guidelines published by the high-profile group of CHROs meeting at Cornell refer to an entire organization’s AI practices, but they, together with an accompanying list of suggested best practices, point to the essential role HR leaders should play.

AI implementation best practices

The practices include:
  • Setting up a multidisciplinary steering committee of senior leaders, including HR, to determine AI investment priorities
  • Establishing a central repository of AI initiatives, “to ensure enterprise-wide accountability and organizational learning”
  • Reviewing existing data practices in conjunction with a company’s CIO and CTO, to ensure they are both suited to AI applications and compliant with privacy regulations
  • Developing a change management plan to educate the organization about AI tools and their effect on work processes

Ensure regulatory compliance—before implementation

Just as AI evolves constantly, so do the laws and regulations that govern AI on local, national, and international levels. It’s important that businesses monitor these rules before they even think about implementing AI.

In some cases, this may mean assigning a compliance officer to monitor regulations pertaining to AI. It should also mean an overhaul of data use practices: in the hiring process, for example, reviewing the collection, storage, and processing of candidate data to ensure it complies with relevant privacy regulations.
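
What such a review might check is easier to see concretely. Below is a minimal, hypothetical sketch of one piece of it, a retention check that flags candidate records held past a retention window without consent (the 180-day window and the field names are invented; actual requirements depend on the jurisdiction and regulation):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: purge candidate records after 180 days unless the
# candidate consented to longer retention. Real windows vary by regulation.
RETENTION = timedelta(days=180)

def records_to_purge(records: list[dict], now: datetime) -> list[dict]:
    """Returns records held past the retention window without consent."""
    return [
        r for r in records
        if now - r["collected_at"] > RETENTION and not r["consented_to_retention"]
    ]

now = datetime(2024, 9, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "collected_at": datetime(2024, 1, 5, tzinfo=timezone.utc), "consented_to_retention": False},
    {"id": 2, "collected_at": datetime(2024, 8, 1, tzinfo=timezone.utc), "consented_to_retention": False},
]
print([r["id"] for r in records_to_purge(records, now)])  # [1]
```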

Equally important to these new data protection measures is an initiative to communicate the measures clearly to employees and job candidates.

Shawmut Design and Construction, a $2 billion construction management firm based in Boston, is an example of a company that implemented AI technologies to comply with worker safety rules, improve the employee experience, and provide business benefits – all at the same time.

During the COVID-19 pandemic, Shawmut leaned on the technology to emphasize worker safety. According to Chief People and Administrative Officer Marianne Monte, Shawmut used GPS tracking software on worker phones to keep tabs on the location of every worker on job sites, and an AI engine automatically sent alerts when workers got closer than six feet apart.

Since then, the company has expanded this use case to keep track of whether workers are tied off on buildings (so they don’t fall), and whether workers are using scaffolding properly.
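
Shawmut has not published how its system works, but the core of any such proximity alert is a distance check over GPS fixes. Below is a minimal sketch under that assumption; the worker positions are invented, and the six-foot threshold matches the pandemic-era rule described above:

```python
from itertools import combinations
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000
SIX_FEET_M = 1.83  # six feet, in meters

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two GPS fixes, in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Invented worker positions on a job site: (worker id, latitude, longitude)
positions = [
    ("w1", 42.35181, -71.04772),
    ("w2", 42.35182, -71.04771),
    ("w3", 42.35200, -71.04800),
]

# Check every pair of workers and alert on violations of the threshold.
for (id_a, *a), (id_b, *b) in combinations(positions, 2):
    d = distance_m(*a, *b)
    if d < SIX_FEET_M:
        print(f"ALERT: {id_a} and {id_b} are {d:.2f} m apart")
```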

“These tools feel very benign to me, and they are making our job sites safer,” Monte told SAP Insights. “It’s much less about the height of that ladder than it is about a person’s mental state when they get on it.”

Oversight steps HR leaders can take for AI systems

We recommend that HR leaders develop a series of protocols to guarantee that people remain in control of AI implementations. Specifically, HR leaders should:
  • Institute an internal AI ethics committee with senior leaders across all areas of the business.
  • Design a grievance process for resolving AI-related issues.
  • Ensure leadership demonstrates clear communication about the uses and value of AI.
  • Work closely with product development to balance the human factor.
  • Create quarterly metrics to track the usage of AI, and evolve the metrics over time to gauge changes (a minimal sketch of such tracking follows this list).
  • Define success criteria to evaluate the application of AI and its value.
  • Implement governance to oversee the usage of AI and ensure the organization complies with established ethical principles.
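
As a starting point for the quarterly-metrics item above, here is a minimal sketch of usage tracking (the log schema, department names, and tool names are all invented for illustration):

```python
from collections import Counter
from datetime import date

# Hypothetical usage log: each entry is (date, department, AI tool used)
usage_log = [
    (date(2024, 1, 15), "Recruiting", "resume-screener"),
    (date(2024, 2, 3), "Recruiting", "resume-screener"),
    (date(2024, 2, 20), "L&D", "genai-tutor"),
    (date(2024, 4, 8), "Recruiting", "resume-screener"),
]

def quarter(d: date) -> str:
    """Maps a date to a label like '2024-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Uses of each tool per department, per quarter -- a baseline the
# governance committee can compare quarter over quarter.
usage = Counter((quarter(d), dept, tool) for d, dept, tool in usage_log)
for key, count in sorted(usage.items()):
    print(key, count)
```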

Protecting against bias

AI systems are only as good as the data they are trained on. If the training data contains biases, such as gender or racial biases, the AI system may perpetuate them in the assessment and selection process. To avoid these outcomes, HR should work closely with AI developers and data scientists to identify and mitigate biases in AI algorithms. The goal: ensuring that the AI systems used in a given process, such as candidate assessment and selection, do not discriminate against any group based on protected characteristics like race, gender, or age.
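
One widely used check for this kind of outcome is the four-fifths rule from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if any group’s selection rate falls below 80% of the highest group’s rate, the selection process may be having an adverse impact. A minimal sketch, with invented counts:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applied)."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def adverse_impact(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """Flags groups whose selection rate falls below `threshold` times the
    highest group's rate -- the EEOC's four-fifths rule of thumb."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Invented counts from an AI-assisted screening round: (selected, applied)
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
print(adverse_impact(outcomes))  # {'group_b': 0.625} -> flag for human review
```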

The challenge of applying this principle becomes clear in the case of emotion AI. This application uses biological signals such as vocal tone, facial expressions, and data from wearable devices to detect and predict how employees might be feeling. It is used widely in customer service; many call centers deploy it to monitor what employees say and how they say it.

Scholars in recent years have raised concerns about the science behind emotion AI and the ethics of technology that tracks feelings. They have also warned that this application of AI has the potential to invade privacy and perpetuate bias based on race, gender, disability, and mental health status.

Paying attention to employee reactions is another key concern, notes Nazanin Andalibi, assistant professor of information at the University of Michigan, who has studied emotion AI applications and wrote in The Conversation about workers’ anxiety.

“While some workers noted potential benefits of emotion AI use in the workplace like increased well-being support and workplace safety, mirroring benefits claimed in patent applications, all also expressed concerns,” Andalibi wrote. “They were concerned about harm to their well-being and privacy, harm to their work performance and employment status, and bias and mental health stigma against them.”

While monitoring the use of data and ensuring regulatory compliance are key responsibilities when adopting AI in HR-centered applications, it’s also clear that HR leaders need to maintain their core practice of managing and caring about people.

That task begins and ends with communication.

The vital importance of communicating how AI works

HR executives deal with people—individuals, their concerns, their careers, their lives. They deal with, and help their organizations comply with, the laws that govern work. And they face visions of implementing the latest technology, like AI, to improve how we work. When the three meet, there are bound to be missed connections.

This concern predates the advent of generative AI. For example, in 2022, the U.S. Equal Employment Opportunity Commission (EEOC) published guidelines to help companies manage some of this risk. Among them: ensuring the system interface is accessible to as many people with disabilities as possible, determining whether the algorithm puts disabled individuals at a disadvantage, and taking steps to provide reasonable accommodations including alternative job candidate tests.

Recommendations like those from the EEOC speak to an HR organization’s core function: setting guidelines for how people work. Once an HR department has established policies and procedures for incorporating AI, it’s important to publicize them internally, through blog posts, memos, and all-hands meetings.

Embedded in that communication should be an acknowledgment that in many industries, employees are anxious about AI and the impact it could have on their jobs and the workplace in general. According to a May 2024 study from Microsoft and LinkedIn, nearly half (46%) of 31,000 respondents indicated they are worried AI might replace their jobs. More than half (53%) of respondents in the same study acknowledged that they worry using AI at work might make them look replaceable.

To some extent, these fears are justified: IBM CEO Arvind Krishna made headlines in 2023 when he said his company planned to pause hiring for positions it eventually could replace with AI, and 72% of 135 CHROs surveyed by Gallup in 2023 said they see AI replacing jobs in their companies in the next three years.

Some companies, such as Honeywell, are addressing these concerns by training all employees on AI so the technology becomes a more widely accepted part of the culture, as noted in the Microsoft-LinkedIn study. Honeywell recently launched the GenAI Academy, which educates employees on how to incorporate AI into their jobs, supporting employee growth and development with the aim of developing ambassadors and GenAI power users across the globe.

Other companies are just now embarking on similar strategies, openly incorporating AI into plans for the future.

Constance Noonan Hadley, an organizational psychologist and research associate professor at Boston University Questrom School of Business, said this approach would become more prevalent among large companies in the months and years ahead – and that employers would need to manage the changes that come with their workers’ new capabilities.

“Companies must renegotiate the ‘operational contract’—the how of work—with their employees as AI puts more power into the hands of workers in terms of the way the job gets done,” Hadley told authors of the Microsoft-LinkedIn study.

In the early days of AI, HR’s leadership role is clear

One of the C-level executives quoted in the 2024 Deloitte report on generative AI says it best: “We’re in the first inning of a thousand-inning game, and there’s so much to be figured out.” Overall, HR’s role is to ensure that the use of AI aligns with ethical standards, respects individual rights, and promotes fairness and transparency across departments.

It’s important for leaders to remind employees that many aspects of the HR function—compensation, paid time off, benefits—are administrative processes rather than judgments about individual people. Managing these areas is among the safest tasks for companies to target with AI.

By focusing on these areas and communicating clearly along the way, leaders will set themselves—and their companies—up for success with AI.

This approach also paves the way for adopting the next iterations of AI, whatever those might be. These will undoubtedly create even greater efficiencies in whatever processes humans want to improve. If all goes well, they also should create a more efficient workplace, one where technology and humans work together more seamlessly to improve the bottom line over time.
