
Robot with a humanoid face, it looks hopeful

Prepping the business for an AI world

In this collection of articles, we reveal how AI is changing business, and what it takes to realize its full potential.


Everyone, it seems, is implementing artificial intelligence, whether it’s developing a proof of concept or pilot or moving to full-on adoption. Along the way, leaders are learning valuable lessons about using AI throughout the organization.

You can learn more about how to reap AI’s benefits while avoiding its pitfalls in the articles below.

Woman engineer seated in front of a computer station, working on a robot head next to her.

#RelationshipGoals for AI and humans

Calling all humans. AI—and especially generative AI (GenAI)—is taking on more activities formerly in the human domain, like creating new content. But there are still lots of things it can’t do, like imagine new business models or recognize abstract concepts such as humor or irony.

As AI takes hold, creativity and critical thinking skills will become increasingly valuable because these are things only a human can bring to the table. Uniquely human traits like emotional intelligence, cross-cultural awareness, curiosity, critical thinking, and persistence can and should be taught and cultivated, as opposed to expecting these qualities to develop on their own, as if by magic.

Reevaluate what makes humans human in “The Human Factor in an AI Future.”

Emotion commotion. Meanwhile, a phenomenon called “affective computing” uses machine learning to teach computers how to understand and project human emotions based on input from gestures, facial expressions, text, and tone of voice. This is important for businesses that want to use AI to engage people, as humans are known to respond to emotionally resonant experiences.
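As a rough illustration of the idea, here is a toy sketch of emotion detection from text alone. The emotion lexicon and scoring rule are invented for illustration; real affective computing systems use trained machine learning models over gestures, facial expressions, text, and tone of voice.

```python
# Toy affective-computing sketch: detect a dominant emotion in text.
# The lexicon and scoring below are hypothetical, purely for illustration.

EMOTION_LEXICON = {
    "frustrated": "anger", "angry": "anger", "annoyed": "anger",
    "happy": "joy", "delighted": "joy", "thrilled": "joy",
    "worried": "fear", "afraid": "fear", "nervous": "fear",
}

def detect_emotion(text: str) -> str:
    """Return the most frequent emotion found in the text, or 'neutral'."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        emotion = EMOTION_LEXICON.get(word.strip(".,!?"))
        if emotion:
            counts[emotion] = counts.get(emotion, 0) + 1
    return max(counts, key=counts.get) if counts else "neutral"

print(detect_emotion("I am frustrated and annoyed with this checkout flow"))
# anger
```

A production system would replace the lexicon lookup with a model trained on labeled multimodal data, but the pipeline shape — signals in, an inferred emotional state out — is the same.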

Sensitivity to ethical considerations is key. For instance, the privacy implications of detecting employee frustration with a process or task are very different from the concerns raised when monitoring consumer mood changes in a retail setting.

Read about computerized emotional awareness and how to use this superpower in “Empathy: The Killer App for AI”.

AI, behave yourself!

Warning: bias ahead. Most business leaders know by now that AI tools can unwittingly introduce bias into processes. And when they do, suffering ensues, both for people harmed by bad decisions and for companies whose reputations crater when these decisions become known.

Why is it so difficult to know when AI is biased? Data scientist and author Cathy O’Neil says it comes down to the harmful and widespread misconception that algorithms are neutral. Business leaders need to create processes to identify and mitigate potential bias.

Read our Q&A with O’Neil to discover where bias lurks in “Unmasking Unconscious Bias in AI.”

Here’s the rub: It’s hard to tell what AI is doing. Many companies just set up AI systems and let them run without understanding why a particular decision was made. This is problematic, particularly in regions like the EU, which require businesses to be able to explain the decisions their systems make (such as denying a consumer’s loan application). Business and technology leaders alike need to start treating AI as a new source of risk that needs to be managed.

If you’re not already worried, read “AI Buyer Beware: Know How Your AI Works.”

A woman whose glasses reflect data from the monitors in front of her

And AI won’t exactly curb its own behavior. By neglecting AI risks, you can expose your organization and people to many types of harm—harm that can spiral out of control very quickly. In this Q&A, ethicist and former philosophy professor Reid Blackman gives pointers for what businesses should think about as they define the ethical boundaries for their AI systems and create the processes and quantitative metrics to support and enforce those boundaries.

Read why Blackman says ethical AI is not a technology problem in “Giving AI a Moral Compass.”

The risk is real. When it comes to who would be held accountable both for safeguarding the use of AI and for dealing with its potential negative consequences, most fingers point in one direction: the businesses that choose to use it. But don’t be tempted to drop this on your chief information officer’s or chief risk manager’s desk and call it a day.

Ensuring AI behaves responsibly is a team effort, involving everyone from technology and data executives to legal and business execs, and everyday users of the AI system. The goal is to create a multilayered governance system for AI use that fills all the gaps.

Get started on creating a responsible AI framework with five action items in “No More Excuses for Irresponsible Use of AI.”

From HR to the supply chain, AI is everywhere

AI’s effect on jobs? It’s complicated. HR professionals are the front-line warriors for managing AI-driven changes to the workforce. But there’s a big gray area between job apocalypse and full-employment nirvana. As such, people managers will need to accommodate many different potential outcomes, from increased automation displacing workers to the need for more human-centric roles created in the wake of AI.

Learn more about possible AI workforce scenarios and how HR should respond in “Three HR Strategies for Managing AI Job Disruption.”

HR as AI whisperer. HR leaders need to play a bigger role in how AI tools like ChatGPT are used in the workplace. Let’s start with training. Employees need help understanding how to use generative AI productively, accurately, equitably, and ethically, along with guarding against the potential for the technology to do damage.

Additionally, HR could play a huge role in helping to redesign jobs and workflows to take advantage of AI efficiency gains, thereby enhancing the employee experience and furthering AI adoption. HR professionals need to be more assertive about convincing their organizations that HR expertise is key to making AI work.

Learn about HR’s strategic role with generative AI in “How HR Can Direct Use of AI and ChatGPT.”

Supply and demand shocks, be gone. Long-lasting disruptions to global supply chains (we’re looking at you, COVID-19 pandemic) have forever put an end to traditional demand forecasting, with its reliance on historical data. With the increased frequency, scope, and scale of catastrophes—from global climate change and cyberterrorism to rolling pandemics—businesses are now turning to AI-based predictive modeling.

This method incorporates real-time demand signals (such as point-of-sale data, weather forecasts, social media feeds, competitive intelligence, and macroeconomic indicators) and AI-driven analytics to anticipate and respond to unexpected shifts in demand. If you want a durable supply chain, this is the way to go.
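Conceptually, this kind of forecasting blends a historical baseline with live signals. The sketch below is hypothetical: the signal names and weights are invented for illustration, and in a real planning system the weights would be learned by an ML model from historical demand data rather than hard-coded.

```python
# Hypothetical sketch: adjust a baseline forecast with real-time signals.
# Signal names and weights are invented for illustration only; a real
# system would learn these relationships from historical demand data.

def adjusted_forecast(baseline_units: float,
                      pos_trend: float,      # week-over-week point-of-sale change, e.g. +0.10
                      weather_boost: float,  # expected demand lift from the weather forecast
                      social_buzz: float) -> float:  # normalized social-media signal, 0..1
    # Each signal nudges the baseline up or down; the 0.5/0.3/0.2
    # weights stand in for coefficients an ML model would estimate.
    lift = 1.0 + 0.5 * pos_trend + 0.3 * weather_boost + 0.2 * social_buzz
    return round(baseline_units * lift, 1)

print(adjusted_forecast(1000, pos_trend=0.10, weather_boost=0.05, social_buzz=0.4))
# 1145.0
```

The point of the sketch is the shift in inputs: instead of extrapolating history alone, the forecast reacts to what is happening right now.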

See how companies are moving from forecast to foresight in “The New Era of Demand Planning.”


AI with a corner-office view. AI is even coming to the hallowed halls of the C-suite. At this level, AI will become a tool for inquiry rather than action, helping CXOs prove or dispel hunches and gut instincts with extensive data and analysis. No more relying solely on personal experience or counsel from trusted advisors.

Read how AI and machine learning (ML) will profoundly change how companies are led in “The C-Suite Gets an AI Upgrade.”

Person in a field of tall grass using a digital tablet with a virtual overlay of data.

AI in the wild

Business use of AI is growing fast. Companies are ditching the endless cycle of proofs of concept and are now moving to actual implementation. AI is being used for everything from speeding up drug development and designing toy cars to pollinating crops and increasing efficiency in large-scale manufacturing.

However, there are still as many ways to get it wrong as get it right. Much can be learned from those who’ve rolled up their sleeves and moved forward with AI.

For inspiration and insight into your own AI use cases, read “8 Examples of Artificial Intelligence in Action.”

We can do hard things. It’s all too easy to get your head turned by the glittery object that is ChatGPT. But former Columbia University professor and best-selling author Eric Siegel says the real breakthroughs are tied not to generative AI but to machine learning. All too often, however, businesses get ML wrong because successful ML projects are notoriously difficult.

Siegel says the key is viewing each ML project as a business project, and not becoming enchanted by the technology itself. His book provides a six-part framework for implementing ML to drive operational gains.

Read our Q&A with Siegel to get your AI projects grounded in reality in “A Playbook for Machine Learning Projects That Work.”