
The importance of AI literacy to AI adoption

Gaining AI skills takes training and experience. Workers know that success—theirs and their companies’—depends on it.

In the mid-1990s, when the Internet arrived on the business scene with a screech of modem static, businesses scrambled to get online. They hired programmers fluent in HTML and JavaScript, marketers with e-commerce degrees, consultants who promised to “web-enable” business operations. In the decades that followed—through the emergence of e-commerce, smartphone apps, and more—you could draw a straight line from technical know-how to business value.

Today, however, AI presents a different kind of training challenge. The technology isn’t just new—it’s mutating before our eyes. A year ago, most AI tools could only process text. Today, you can upload a photo of a whiteboard, a chart from a meeting, even a hand-marked PDF—and the model can read it, interpret it, and help you act on it. And this shifting boundary between machine and human capability—the question of what counts as a uniquely human skill—has become a moving target.

One day, prompt engineering is the skill of the future; the next, a large language model update sends that skill requirement the way of the switchboard operator by learning to decipher our intentions from half-formed thoughts and inarticulate grunts—no expert phrasing required. As AI grows more proficient in tasks once considered innately human, business leaders are grappling with a deep uncertainty: How do you invest in people—train them, enable them, develop them—when the very capabilities that made them valuable yesterday might be automated tomorrow?

The answer, or at least the beginning of one, lies in the concept of AI literacy. Not a technical credential, not a job title, but a measurable attribute—a way of thinking about and engaging with AI that equips workers to adapt and thrive. In a series of recent studies with workers, managers, and HR professionals, we found that AI-literate workers aren’t just better at using AI tools. They’re more optimistic, more curious, and more confident about what an AI-enabled future has in store, for them and for their organizations.

The best part? AI literacy can be taught. It can be cultivated, modeled, and shared. In the sections that follow, we’ll explore what AI literacy really means, how to develop it within your workforce, and how to use it as a compass when navigating the strategic, complex decisions that lie ahead.

AI literacy can be taught. It can be cultivated, modeled, and shared.


AI literacy gained through exposure and experience

To understand the AI attitudes and behaviors of workers across industries, we conducted our first global survey in October 2024, gathering responses from 4,023 employees and managers. The questions were broad by design: Had respondents used AI tools at work? How optimistic—or anxious—did they feel about AI’s growing role in the workplace? Were they confident in their own ability to work with these tools?

The answers painted a complex but encouraging picture. As we reported at the time, the single biggest factor shaping how employees felt about AI—whether they were hopeful, fearful, or somewhere in between—was their level of AI literacy. Those with higher literacy were far more likely to expect positive outcomes from AI, and far less likely to feel fear, distress, or apprehension. They were also more likely to express nuanced, mature views on how AI use should (or shouldn’t) influence workplace decisions like promotion and compensation.

Intrigued, we dug deeper. A more recent follow-up survey of 4,030 global employees and managers gathered more detailed data on employees’ current AI literacy, their prior experiences in and preferences for building it (such as through formal training, self-study, or peer learning), and their beliefs about its current and future importance (for example, how much it will affect their prospects for advancement). For the questions measuring AI literacy itself, we used a structured scale drawn from the academic research literature. This scale captures the distinct characteristics of an AI-literate person, such as knowing how to apply AI to reach goals, detecting when you’re interacting with AI, and assessing the limitations and capabilities of AI.
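To make the measurement concrete, scales like this typically ask respondents to rate their agreement with each characteristic and then aggregate those ratings into a single literacy score. The sketch below is purely illustrative—the item names, the 1–5 response range, and the simple averaging are assumptions for demonstration, not the study’s actual instrument or scoring method.

```python
# Illustrative sketch of scoring a Likert-style AI literacy scale.
# Items, the 1-5 range, and averaging are assumed for illustration only.

def literacy_score(responses):
    """Average a respondent's 1-5 agreement ratings across scale items."""
    if not responses:
        raise ValueError("no responses provided")
    if any(not 1 <= r <= 5 for r in responses.values()):
        raise ValueError("responses must be on a 1-5 scale")
    return sum(responses.values()) / len(responses)

# One hypothetical respondent's ratings on three scale characteristics:
respondent = {
    "apply_ai_to_goals": 4,      # "I know how to apply AI to reach my goals"
    "detect_ai_interaction": 3,  # "I can tell when I'm interacting with AI"
    "assess_ai_limits": 5,       # "I can assess AI's capabilities and limits"
}

print(round(literacy_score(respondent), 2))  # → 4.0
```

A composite score like this is what lets researchers split respondents into “higher” and “lower” literacy groups and compare their attitudes, as the survey results below do.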

Answers to these indicators of AI literacy capture something more than just experience or even subjective confidence. The construct taps into a deeper, more reflective level of intellectual engagement with AI. To agree with these statements is to declare a kind of felt sophistication and surefootedness, the kind of intuition that matters in a world where the boundaries of AI’s capability are, at least for now, in a constant state of rapid expansion.

In our research, 70% of AI-literate respondents said they expected positive outcomes from the technology, compared to just 29% of those with low literacy.

It’s no real mystery where AI literacy comes from. It doesn’t magically materialize through top-down directives or mandatory webinars—it first grows out of workers’ personal curiosity, trial and error, and experimentation with AI tools, often outside formal structures or even the work domain. Indeed, a recent McKinsey study found that employees often outpace their managers when it comes to hands-on familiarity with AI. Many are already using these tools—on their own time and often quietly—to boost productivity, generate ideas, or streamline repetitive tasks at work. And the employees who are leaning in of their own accord are the ones who anticipate that good things will come from AI, a key outcome of AI literacy. In fact, in our research, 70% of AI-literate respondents said they expected positive outcomes from the technology, compared to just 29% of those with low literacy.

It’s too early to draw a straight line from that mindset to business performance—but it stands to reason that the employees most comfortable exploring new tools, experimenting with them, and spotting their practical value will also be the ones who find efficiencies, spark innovation, and help drive meaningful returns.

How to build an AI-literate organization

So how do you build that kind of workforce—one that sees AI not as a threat or mystery but as a tool worth understanding, iterating with, and improving for better individual and organizational results? Our research makes one thing clear: Different organizational personas need different kinds of support. Employees, managers, and HR professionals each bring their own starting points with, and aspirations for, AI to the table. But across that diversity of needs and preferences, three core strategies consistently have the biggest effect: experiential exposure, structured training, and the influential norms of an AI-literate organizational culture.

Let’s take those in turn.

Experiential exposure: let them try it out

Our research shows that so far, to improve their understanding of AI, the majority of employees (60%) have used very basic resources like reading articles and watching videos. Those resources have their benefits, but the most effective way to build AI literacy is to let people get their hands dirty. Formal training has its place—and we’ll get to that—but for many employees, building comfort with AI is like learning to drive: manuals and even simulators are simply no substitute for getting behind the wheel and lurching out into real-life traffic.

For organizations, that means giving employees safe, low-stakes ways to experiment with AI. Let them use it to draft e-mails, summarize documents, or mock up project plans. The key is to keep the setting contained—such as internal communications or intramural projects—where mistakes are low-impact, quickly forgiven, and unlikely to reach customers or damage the company’s reputation.

And identifying those low-stakes opportunities is vital. According to a separate study we conducted of HR leaders, most companies have policies that dictate what data are and aren’t appropriate to share with an AI model, but very few offer guidance on appropriate AI usage—such as what types of work, or how much of one’s work, should be accomplished with AI. To make the most of this experimental stage, organizations should prioritize establishing clear AI guardrails for employees. These policies should help shape the sandbox in which workers experiment. And, with the right safeguards in place, this kind of experiential learning builds more than skill. It fosters confidence and good judgment, and confidence in one’s good judgment—all key aspects of AI literacy.

Structured training: help them practice with purpose

Dirty hands will only get you so far, however. At some point all that newfound comfort and curiosity needs the scaffolding of structured training—and when it comes to AI, that’s a challenge. Unlike spreadsheets or CRM systems, AI in all its novel permutations doesn’t come with a decades-old ecosystem of certifications, training manuals, and best practices. Structured training is still possible—essential, we’d argue—but it works best when it’s specific to the tools people use, the jobs they hold, and the tasks they perform.

Start with the tools. Many employees don’t realize that AI is already embedded in the systems they use every day—suggesting replies in Outlook, flagging tone in Grammarly, or auto-summarizing meetings in Zoom and Microsoft Teams. Helping them spot those features—while showing, side-by-side, how much faster a task gets done with AI versus without—can be a lightbulb moment. Immediate results build confidence and a hunger for more.

A good AI training program leaves workers able to perform better at the tasks they’re charged with within an organization.

Then there’s the job-level view. A good AI training program leaves workers able to perform better at the tasks they’re charged with within an organization. For some employees, this may eventually involve learning how models are trained, tuned, and maintained. But for many others, all that’s needed will be practical essentials: how to craft effective prompts, where to find the right inputs, and how to appropriately integrate AI outputs into the offline products or services they’re responsible for producing.

Finally, get down to the task level. Not every task is a good fit for AI, and a strong training program should help employees develop a feel for which parts of their work—and which moments in their day—still call for a human touch. When training helps an employee focus on these specifics and is coupled with those happy accidents and lightbulb moments remembered from the sandbox, employees start to build a foundational understanding that will help them identify the fit-for-purpose uses of AI that optimally benefit their work.

And this is something workers want. Employees may groan at the thought of another compliance training module, but when it comes to AI, eyes are likely to roll a little less. In fact, our research found that formal training was the number one thing workers wanted to improve their AI literacy. It was the top-ranked learning modality, chosen by 44% of our sample—ahead of informal learning, on-the-job training, and social learning.

Off-the-clock experimentation may have gotten a portion of the workforce pretty far. But to bring everyone along on this AI skill-development journey, what workers are asking for—and what they need—are lessons.


Organizational culture: influence them through peer-to-peer learning

Organizational science has long shown the power of company culture and its associated norms in shaping employee attitudes and behaviors. Leaders now have an excellent opportunity to harness these social dynamics—through deliberate peer-to-peer learning initiatives—to foster collective AI literacy across the workforce.

Again, our research suggests that these initiatives won’t face much resistance from employees. People may feel threatened by change in general, but when it comes to AI, workers see what’s coming, and they’re already thinking seriously about their own role in it. Eighty-seven percent believe it’s important to their company that they improve their AI literacy. More than half—57%—see their limited understanding of AI as currently impeding their success at work, and 63% say it’s likely to become an impediment in the future.

This isn’t a case of needing to persuade people that AI matters, in other words. They know it does. What they’re looking for now is help catching up.

Learning to surf

“You can’t control the waves, but you can learn to surf,” wrote mindfulness expert Jon Kabat-Zinn, and his meaning is clear. In a chaotic universe that we can barely understand, let alone manipulate, there is little upside in wasting time and energy trying to control events, people, and the world around us. To put it in business terms: for maximum ROI, a far more sensible approach is to spend that time and energy finding poise and stability within the chaos.

This attitude is exactly what’s needed in the age of AI. The technology will continue to develop, the tools will continue to change, and the use cases will continue to expand. The boundaries between humans and machines will keep blurring, perhaps until they all but disappear. But AI literacy—the mindset that equips people to engage thoughtfully, adapt quickly, and stay upright upon the restless tide—is a durable personal attribute. It won’t stop the waves. But it will help you ride them, and in time turn their momentum into your own.
