How HR can direct use of AI and ChatGPT
As adoption of AI spreads, HR has an opportunity to shape how it is used to benefit the business and its employees.
A new worker is competing for your job. They can compose emails, draft reports, write code, and even conduct performance reviews faster than you. Their performance is constantly, consistently improving. They never take a coffee break, need a day off or, for that matter, sleep. That’s because they’re an algorithm.
Generative artificial intelligence (AI) burst into the news in late 2022, astounding businesses and consumers alike. Pundits call the “chatbot” technology revolutionary. But many people, including technologists, are also worried. A group of the technology’s developers even issued a warning in May 2023 that their creation could pose a societal risk akin to a nuclear war or pandemic. They said that managing it should be a global priority.
People are debating whether AI represents a fundamental transformation in the nature of the world or just another in a long line of “world-changing” technologies. (Thirty years ago, people said the Internet would end in-person shopping and totally transform the retail industry, yet most of us still shop in stores, even as we also buy online.) But it seems clear that generative AI will redefine how we work.

Instead of detecting patterns or recommending choices from various options, as older forms of artificial intelligence do, generative AI can produce new content such as text, images, code, and audio, using models trained on specific data. Based on that data, these models “generate” a document, an image, a computer program, or a sound clip. ChatGPT, a chatbot that debuted in November 2022 and can converse with humans, answer questions, and write text, has garnered the most attention. But there is also DALL-E 2, which creates images from natural language descriptions, and GitHub Copilot, which automatically generates code in dozens of programming languages.
Business use of these tools has picked up fast, including in HR. A February 2023 survey of 1,000 U.S. business leaders by Resume Builder found that 49% of companies already use ChatGPT and an additional 30% plan to in the future. Of the organizations already using ChatGPT, a majority are using it specifically to facilitate hiring: 77% said they use it for writing job descriptions, 66% for drafting interview requisitions, and 65% for automatically responding to applicants.
Machine learning and AI have operated for years in the background in applications such as chatbots, recommendation engines, and common consumer electronics. (Think: smartphone users who unlock their phones with facial recognition.) But this new breed of AI puts the technology front and center. Experts say that generative AI will force people, especially white-collar professionals, to either up their game or risk being replaced. For businesses, the technology has the potential to create competitive advantage, but they will first need to thoroughly understand its limitations and the needs of their employees.
This is where HR must step up, too. HR executives and their teams are certainly familiar with AI; many already employ AI tools such as automated recruitment. But generative AI presents a broader opportunity for HR to play a strategic role in the organization, ensuring the technology is used in ways that add value for both employer and employee.
For example, HR can redesign jobs and workflows to take advantage of the efficiency of AI while freeing more time for human creativity and innovation, likely enhancing the employee experience and increasing job satisfaction. It can train employees to use generative AI productively, accurately, equitably, and ethically. HR can also help guard against the technology's potential to do damage. Using ChatGPT to help write a job description poses little risk; but relying on it to conduct a job interview or performance review, where emotional intelligence and human sensitivity are key, would not only fail to deliver the right information but would likely alienate potential and current employees. (Full disclosure: SAP is actively investing in using generative AI to support recruiters.)
Generative AI’s heightened impact on human work
Generative AI represents a significant inflection point in how AI and humans work together. OpenAI, the creator of ChatGPT, predicts that about 80% of the U.S. workforce could have at least one-tenth of their work tasks affected by the introduction of large language models (the key element in generative AI), and that about 19% of workers could see at least half of their tasks affected. Specifically, work that involves scientific and critical-thinking skills is less likely to be affected by generative AI, while programming and writing skills are “more susceptible to being influenced” by large language models like the one behind ChatGPT.
Generative AI is advancing so quickly that organizations must jump into the fray now if they want to stay ahead of the competition, says Thomas Davenport, a professor of IT and management at Babson College and the author of several books on AI, most recently All in on AI: How Smart Companies Win Big with Artificial Intelligence, co-written with Nitin Mittal.
So far, generative AI has mostly been used to augment human work, “because it requires human intervention up front to figure out what you want to know and to issue a prompt, then requires human intervention at the end to see if it makes sense, to edit it, and address any areas it doesn’t cover,” says Davenport, who also serves as a digital fellow at the MIT Initiative on the Digital Economy. But in the next five years, it could replace many jobs, especially in creative work. For example, even though content would still need editing, the world may need fewer journalists as generative AI writes more articles, he notes. “Not all jobs will go away,” Davenport says, “but in every type of content creation, some will.”
Sounds scary, and potential fodder for a Luddite-style rebellion by workers. But in many cases, AI can help employees instead of replacing them.
Research by MIT Sloan Management Review and Boston Consulting Group (BCG) found that not only do organizations derive value from AI, but their workers do as well. In fact, there appears to be a symbiotic relationship between the two. The report, based on a global survey of 1,741 managers and interviews with 17 executives, found that many workers regard AI as a coworker, not a threat. Some 64% of employees said they personally obtained value from AI, and these workers were 3.4 times more likely to be satisfied in their jobs than employees who said they did not get value from AI. What’s more, 85% of workers who said their organization obtained value from AI also said that they personally obtained value from it.
“What’s interesting is the alignment of the value created both for the individuals and for the companies. It’s [the] opposite of what people think,” says François Candelon, managing director and senior partner at BCG and co-author of the MIT Sloan/BCG report. “It creates a flywheel effect. Your employees get value, so they adopt it and, as they adopt it, the business gets value.”
Companies already are experimenting with melding the talents of humans and the capabilities of generative AI. One emerging approach limits the training of generative AI models to an organization’s internal data. BCG has trained large language models on the company’s own internal, proprietary data, says Candelon. It has launched a pilot in which its management consultants use these models to more easily and quickly find the information they need to better serve clients.
Morgan Stanley also has a pilot based on internal data. Davenport recently wrote in Forbes that the financial services company has fine-tuned GPT-4 on more than 100,000 internal documents containing information on its policies and processes, investment advice, and general business information. As of spring 2023, 300 of Morgan Stanley’s 16,000 financial advisers had experimented with using it.
HR’s role as AI educator
Generative AI has already spread so far and wide that the technology, in and of itself, won’t create an advantage for a company, says Candelon. Competitive advantage, he says, will come from two things. First, the data has to be high quality, accessible, and well organized. (No small feat for many organizations.) The second ingredient, and the one where HR can play a major role, is how well companies teach their employees how to use, and how not to use, this AI. That training entails much more than how to operate the tool: employees need to understand how the technology works, what potential problems it can cause, and how to guard against those problems. Specifically, HR can do the following:
Teach employees to write good prompts. It takes skill to create “prompts”—that is, to formulate questions clearly enough that the AI produces the type and quality of information sought. For example, an HR professional would not just ask ChatGPT to write a how-to guide on company benefits. Rather, she would have to direct the AI with something specific, like this: “Write a 2,000-word summary of our company’s employee benefits, aimed at all salaried employees. Use short declarative sentences and descriptive subheadings. Include examples of how each benefit could be accessed by an employee.”
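The same principle applies when prompts are sent programmatically rather than typed into a chat window: the specificity lives in the prompt itself. Below is a minimal sketch in Python, assuming the official OpenAI client library and an API key; the model name and system message are illustrative, not a prescribed setup.

```python
# A minimal sketch of sending a carefully structured prompt to a chat model.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 2,000-word summary of our company's employee benefits, "
    "aimed at all salaried employees. Use short declarative sentences "
    "and descriptive subheadings. Include examples of how each benefit "
    "could be accessed by an employee."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; substitute your organization's approved model
    messages=[
        # A system message can set standing constraints such as audience and tone.
        {"role": "system", "content": "You write internal HR communications."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```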
Teach them to be skeptical and to check the AI’s work. Even with good prompts, generative AI is prone to “hallucinate,” meaning it can extrapolate information incorrectly yet present it authoritatively as fact. The stakes can be meaningful. The New York Times reported that, in June 2023, a New York judge fined two lawyers $5,000 after they filed a legal brief containing fictional cases and legal citations generated by ChatGPT. In another example, a technology reporter and children’s book author who tested ChatGPT by asking it for her own biography found that its answer was a strange mixture of fact and fiction, with four true statements and three false ones.
That’s why educating employees is so important, notes Candelon. “We need to make sure people understand how to challenge that first draft,” he says.
Teach them what to use in training AI models. Training AI only on carefully curated, internal information, rather than allowing it to range across whatever it finds on the Internet, can limit generative AI's tendency to hallucinate, says Davenport. According to Davenport, Morgan Stanley also restricts the types of prompts its advisers can use with ChatGPT, essentially limiting them to issues relevant to the business.
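One lightweight way to apply that kind of restriction is to screen prompts before they ever reach the model. The sketch below illustrates the general pattern only; it is not Morgan Stanley's implementation, and the topic list and function names are invented for the example.

```python
# Hypothetical prompt guardrail: forward only prompts that touch on
# approved business topics. The topic list and matching rule are
# invented stand-ins for a real, much richer policy.
ALLOWED_TOPICS = {"benefits", "retirement", "compliance", "onboarding"}

def is_allowed(prompt: str) -> bool:
    """Return True if the prompt mentions at least one approved topic."""
    words = set(prompt.lower().replace(".", " ").replace(",", " ").split())
    return bool(words & ALLOWED_TOPICS)

def submit_prompt(prompt: str) -> str:
    """Screen the prompt before it would be sent to the model."""
    if not is_allowed(prompt):
        return "Declined: prompt falls outside approved business topics."
    # In a real system, the vetted prompt would be passed to the model here.
    return f"Forwarding to model: {prompt!r}"

print(submit_prompt("Summarize our onboarding checklist for new hires."))
print(submit_prompt("Write me a poem about pirates."))
```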
Teach them about language limitations. Plenty of material can also get lost in translation, especially when the language is not English, according to Katja Einola, an assistant professor and faculty member at the Center for Responsible Leadership at the Stockholm School of Economics, who researches how AI affects employees. Einola, a native of Finland, has found that, in Finnish, ChatGPT “can produce these really weird statements.” People need to understand how generative AI collects information in order to realize why it can’t necessarily be trusted. “You may have no clue how the machine got a particular result. It’s actually based on patterns and frequencies, not critical thinking,” she explains.
Teach them to guard against bias. Employees also have to learn responsible AI practices, such as how to guard against bias. This work begins with the design of algorithms, ensuring that these human-built systems aren't confirming existing biases or introducing new ones. But employees can also look for signs of bias in the results they receive. For example, in recruiting, it is better to specify the skills a job requires than character traits (such as “outspoken” or “caring”) that could steer a system toward certain types of candidates. (For more on this point, see our article, “How AI Can End Bias.”)
Teach them how to expand and improve AI’s work. Even if the AI output is understandable and fairly accurate, it’s really just a start. “AI systems are fundamentally myopic in ways that humans are not,” says Sam Ransbotham, professor of analytics at the Carroll School of Management at Boston College and co-author of the MIT Sloan–BCG study.
People inherently understand context, experts say. They have a broad knowledge of what’s happening—both in their company and in the world—which needs to be added to what an AI chatbot produces.
With quality data and well-trained people, organizations should be well equipped to use generative AI. Then comes the real operational strategy and opportunity: how to redesign workflows and jobs to make the most of human talent and machine capability. An AI chatbot may be adequate to handle many customer service requests, but when should a human customer service representative step in? Only once the customer is irate, or should the AI be trained to watch for signals that would trigger an earlier handoff?
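One way to make that handoff concrete is to define explicit escalation triggers that the bot checks on every turn. The sketch below is a hypothetical illustration; the trigger phrases and turn limit are invented policy choices, not recommendations from the research cited in this article.

```python
# Hypothetical escalation check for an AI customer service bot.
# The trigger phrases and turn limit are illustrative policy choices.
FRUSTRATION_PHRASES = ("speak to a human", "this is ridiculous", "cancel my account")
MAX_BOT_TURNS = 5  # hand off if the bot hasn't resolved the issue by then

def should_escalate(message: str, bot_turns: int) -> bool:
    """Decide whether to hand the conversation to a human representative."""
    text = message.lower()
    if any(phrase in text for phrase in FRUSTRATION_PHRASES):
        return True  # the customer is irate or explicitly asking for a person
    return bot_turns >= MAX_BOT_TURNS  # the conversation is dragging on

print(should_escalate("This is ridiculous, I want a refund.", bot_turns=2))  # True
print(should_escalate("What are your support hours?", bot_turns=1))          # False
```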
Providing this level of support and education helps ensure that employees are getting value—enabling them to be a part of the “flywheel effect” described by Candelon above. More value means more adoption and business benefit.
HR’s generative AI agenda beyond training
What’s clear about the emerging landscape of deploying generative AI in business is that HR has a bigger role to play. After all, technology does not replace the need for people, but it does change what people need to do. For example, think back to the book reports many of us wrote in school. Often, we would start by summarizing the plot or other content before offering our conclusions about the book’s meaning or message. That first part is what generative AI does: it summarizes existing content. But it cannot offer original opinions or novel conclusions based on that content; that requires a human. Job tasks in which employees were largely just summarizing existing content are being automated by AI. This frees up employees' time, and it raises expectations that they will focus on tasks that depend on uniquely human capabilities such as creativity, sensitivity, and critical thinking.
But the big question is, will HR step up and play that role by helping companies and employees rethink the nature of jobs and work? Experts like Davenport and Einola say HR hasn’t done so thus far. Einola has talked to dozens of companies for her latest research project but adds that, so far, “HR is not in the discussion.” Davenport adds, “I’ve tried to talk with HR people about this, but it seems that they aren’t really doing much. I think they really need to step up.”
It is still early days. Perhaps organizations are focused on the technology because it’s moving so fast. But HR professionals should be proactive and convince their organizations that HR expertise is key to making it all work—for employees as well as for employers.
In addition to the educational role of training employees in how to use generative AI responsibly, appropriately, and productively, there are other opportunities for using human resources expertise. One is in helping managers redesign jobs and workflows to take advantage of what humans and AI do best, respectively. And another is working to inspire employees to be imaginative in using AI and to figure out how AI can make a particular job or task more productive and enjoyable. After all, the best outcome is that AI offloads drudgery and frees up time for people to be more innovative and creative.
There are also safeguards HR can monitor to ensure their organizations avoid ethical or management missteps. For example, companies should not use AI to judge human performance. Recent research by SAP SuccessFactors found that employees were ready to embrace AI in several areas, including to boost productivity and efficiency, but they were clearly disturbed when the technology was used to judge them. This suggests that it’s important to establish usage guidelines for generative AI. Processes that require emotional intelligence and human sensitivity, such as an exit interview, are strictly human work. On the other hand, generative AI might be a good training tool for HR managers. Since performance reviews and exit interviews are notoriously difficult (not to mention dreaded), at least partly because managers don’t do them very often, AI could create simulations in which HR managers can practice conducting them.
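As a concrete illustration of that training use, a role-play brief can turn a chat model into a practice counterpart. The sketch below is a hypothetical example using the OpenAI Python client; the persona, instructions, and model name are invented for the illustration.

```python
# Hypothetical practice simulation: the model plays a departing employee
# so an HR manager can rehearse an exit interview. Assumes the official
# OpenAI Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SIMULATION_BRIEF = (
    "Role-play a software engineer who is resigning after four years, "
    "is leaving for a competitor, and feels underpaid. Answer the "
    "manager's questions in character; stay realistic and professional."
)

history = [{"role": "system", "content": SIMULATION_BRIEF}]

def manager_asks(question: str) -> str:
    """Send the manager's question and return the simulated employee's reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(manager_asks("What prompted your decision to leave?"))
```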
In addition, when designing jobs or workflows, don’t presume to understand how the work is actually done. Ask your people.
Einola finds that companies sometimes think a particular task is straightforward and simple enough to be automated when it’s really not. Even the lowest-level employees may have amassed subtle, tacit knowledge over years of experience that cannot be modeled in AI. “The managers don’t understand the workflow and the people get angry and resentful because they are forced to use AI” that has been badly designed, she says. “I know many people who ended up leaving a particular company because part of the job was automated when what was really needed was augmentation.”
The ideal scenario is AI helping humans and humans helping AI. That may be the next step in this evolution, says Ransbotham. “The first model was humans teach machines. The second is where we are now—combining what humans know with what machines know,” he explains. “The next model is mutual learning—where we are collectively learning together so both the machine and the humans get better.”