AI in finance: Myths, misconceptions, and reality

To gain the benefits of AI in finance, business leaders need to ask themselves some tough questions. Start with this reality check.

If you’re tired of hearing about how AI builds strong businesses in innumerable ways, we’re here to help. Rather than feeding more dry kindling into the already roaring blast furnace of AI hype, or FOMOing you into believing that your business will self-immolate if you don’t AI-ify everything in sight right now, we’ve tried a different method.

We’ve trained our eyes on AI through the finance lens, focusing on what business leaders need to know about the technology. Like any good quant, we’ve tracked down tons of unbiased research to sift the true facts from the, umm, artificial ones, including what AI is good for, how it affects finance jobs, how to get started, and how to advise the rest of the business on AI investments, strategy, and governance.

First, some assumptions

Before we dig into the details, let’s define what we mean by AI, and explain why it’s time to pay attention.

We want to agree on definitions because AI is a general term that encompasses more than one type of technology. We’re not going to delve (much) into robots. We’re focused on two types of AI: exploratory and generative.

Exploratory AI comes in different forms, including machine learning. This type of AI can sift through oodles of data with little or no guidance from humans to answer questions or reveal patterns.

Because it learns, exploratory AI can often improve its output over time. Remember how Google’s AI played thousands of games of Go against itself? It learned to predict the moves of the human champion and beat him.

Generative AI is newer. It also can churn through massive amounts of data in search of patterns and answers. But it has two special powers.

One is the ability to create original content based on its explorations, whether it be a cat picture or the first draft of a quarterly financial report.

The other is a true superpower: it’s easy to use. As with a search engine, the vast complexity of generative AI data and algorithms is hidden behind a straightforward text box. That means you can ask it questions in plain language, though you are more likely to get useful answers when your queries are detailed and targeted—more on that later.

Finance is well-suited to using both types of AI and to advising the business on them.

In the rest of this article, we’ll explain how AI will influence the finance department, the rest of the business, and the role of the CFO.

Section 1: Why AI deserves some of its hype, but not all of it

Why are people comparing AI to electricity and the internal combustion engine?

AI is the most portentous technology of our generation. Sure, that sounds hypey, until you compare it to the other big technological advancement of the last half century or so—the Internet.

The Internet is merely a data pipe. It delivers data for humans to analyze. AI takes data and applies some aspects of human intelligence to it—our ability to recognize patterns, analyze, and synthesize—and kicks them up a few notches.

Here’s why the hype is justified. Researchers from the National Bureau of Economic Research (NBER) deem AI to be a General Purpose Technology (GPT—not to be confused with ChatGPT), one invention in the pantheon of inventions that become so ubiquitous and influential that they change both business and society. Like how the internal combustion engine remade factories, transportation, and our lives.

GPTs usually take decades to reach their full potential, in part because they can’t hit the big time without the help of supporting inventions. The internal combustion engine didn’t become ubiquitous or change society until another invention, superhighways, came along. Their arrival set off a flurry of innovation including long-haul trucking supply chains, suburbs, malls, office parks, and drive-throughs that reshaped life and work in the 20th century.

AI is playing out similarly. The term artificial intelligence was coined in the ’50s, but it’s taken decades to develop the supporting building blocks needed to bring AI to life. These include the Internet, cloud computing, and (weird surprise) video games. It turns out that the superfast chips needed to speedily render the heroes and villains of video games are perfect for chewing through the massive amounts of data that feed AI applications.

Boom. Now AI is everywhere.

Many of the pieces are in place for AI to reshape society like the internal combustion engine did in its day. But AI isn’t magic. The AI-powered future is more imaginary than real right now. What happens next will depend on what individuals and companies decide to do with it—or not.

Is there a killer AI app for finance right now?

Yes: Prediction. Not only is this what AI does best, but it aligns squarely with finance’s most important role: anticipating future events.

AI can see patterns in corporate and economic data and make predictions that humans never will. NBER found it can untangle intricate relationships between economic indicators or financial variables and create not just forecasts but alternate future scenarios at a depth that is impossible for humans to match.

For example, AI can pore over your competitors’ financial statements to detect whether the language in the statements is consistent over time. The insight? The CFA Institute Research & Policy Center found that using the same words in financial statements year after year doesn’t mean that companies are lazy. It means they tend to be more successful than companies that get more creative with their writing and produce financial statements with low textual similarity.
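
To make “textual similarity” concrete, here’s a minimal sketch in Python using scikit-learn. The file names are hypothetical stand-ins, and real research uses more sophisticated language models, but the underlying idea is this kind of document comparison:

```python
# A minimal sketch: measure how similar this year's statement text is
# to last year's using TF-IDF vectors and cosine similarity.
# The file names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

with open("statement_2023.txt") as f1, open("statement_2024.txt") as f2:
    docs = [f1.read(), f2.read()]

vectors = TfidfVectorizer(stop_words="english").fit_transform(docs)
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]

# Scores near 1.0 mean highly consistent language year over year.
print(f"Year-over-year textual similarity: {similarity:.2f}")
```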

AI can even detect the tone of voice used on earnings conference calls to figure out whether your competitors’ leaders sound pumped or pooped. A downbeat tone means the company is at a higher risk for a stock price crash the following year.
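
Tone detection works along similar lines. Below is a hedged sketch using an off-the-shelf sentiment model from the Hugging Face transformers library; the transcript file is a stand-in, and a production system would use a model tuned on financial language rather than this general-purpose default:

```python
# Sketch: score the tone of an earnings-call transcript, sentence by
# sentence, with a default sentiment model (hypothetical file name).
from transformers import pipeline

classify = pipeline("sentiment-analysis")  # downloads a default model

with open("q3_earnings_call.txt") as f:
    sentences = [line.strip() for line in f if line.strip()]

results = classify(sentences)
downbeat = sum(r["label"] == "NEGATIVE" for r in results) / len(results)
print(f"Share of downbeat sentences: {downbeat:.0%}")
```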

AI is also much better than humans at predicting financial risks. A study of the role of AI in enhancing audit accuracy found that an AI model guided by real examples of fraud can use these known patterns to classify transactions much faster and more accurately. But AI can also explore the data on its own to anticipate and identify new and developing fraud patterns. This predictive model helps finance leaders keep up with fraudsters as they change their tactics.
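
As a rough illustration of the supervised half of that approach, here’s a minimal sketch that trains a classifier on transactions already labeled by past audits. The file and feature names are hypothetical:

```python
# Sketch: a supervised fraud classifier trained on labeled history.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("labeled_transactions.csv")
X = df[["amount", "hour_of_day", "vendor_risk_score"]]
y = df["is_fraud"]  # 1 = confirmed fraud, 0 = legitimate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score held-out transactions; high-risk ones go to human reviewers.
print(f"Holdout accuracy: {model.score(X_test, y_test):.2%}")
```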

However, as we’ve said, you need to check its work. Exploratory AI reflects the intentional and unintentional biases of the humans who create the algorithms and the data the algorithms feed on—AI’s version of garbage in, garbage out. Further, exploratory AI is a willful beast that can wander far off the track created for it. Generative AI can be even worse. If it can’t find an answer to a query, it will sometimes make stuff up. As the keeper of their company’s financial health, CFOs have a crucial role to play in making AI behave. More on that later.

How will using AI in other departments affect finance?

Finance relies on information gathered from every part of the business to fuel its auditing, reporting, compliance, and risk management processes. When that information improves, finance can do a better job.

Researchers investigating AI and financial reporting quality found that when AI is used to improve forecasts about anything—customer demand, supply chain disruptions or delays, product defects, equipment failures—finance can make better estimates too. Think about having more accurate projections of line items ranging from sales returns and warranty claims to allowances for doubtful debts, loan losses, and inventory or other asset write-downs.

With better quality information from different parts of the company, finance can focus on the areas that need the most help. For example, a study by McKinsey & Co. found that one high-tech manufacturer uses exploratory AI to track and predict financial and business-continuity risks. The application enables the company’s finance group to focus audits on the units that pose the greatest risks. Not only does AI reduce the time needed to complete each audit, it reduces the total cost of internal audits by 15 to 20%.

Does AI make employees more productive?

It depends. Some research shows that when employees use AI to get advice, their productivity improves.

For example, AI coaches customer service agents better than traditional preprogrammed software, which has long provided agents with advice in the form of scripts or real-time prompts.

NBER observed over 5,000 agents who used a generative AI-based conversational assistant during text-based chats with customers. The assistant gave the agents information to share with customers and prompts to manage the tenor and range of the chats. With an AI bird on their shoulders, the agents resolved an average of 14% more issues per hour.

However, the AI assistants didn’t help every agent equally. Each agent’s productivity gains were linked to their experience. The performance of novices and low-skilled agents improved by 34%, while the most experienced agents saw little or no improvement.

This makes sense when you consider that AI chatbots are only as good as the data they train on—in this case, the best practices of the most talented, experienced agents. For the newbies, using AI was like having a really good cheat sheet. Meanwhile, AI erased the advantage in experience and talent that had helped the best agents not only rise to the top but also earn more money and recognition.

What these findings might mean for the careers of customer service agents is beyond our scope here. But it’s worth noting that AI is likely to affect how we develop employees in every discipline.

Insights advice

Put AI to work on predictive processes. AI can improve finance’s predictive abilities. But business leaders should also place big bets on predictive processes outside of finance, such as product development. For example, at Moderna, AI algorithms were used to develop the first COVID-19 vaccine in just 65 days, a process that would previously have taken years.

Clean up that data. AI renews an age-old question: Why is most of the data companies produce so awful? There are a lot of reasons, which we aren’t going to get into. Just beware: dirty data turns AI output into garbage just as easily as with old-fashioned software. Maybe AI will supply the business case to finally get everyone’s data house in order.

Appoint a CTO. AI is a highly complex set of technologies that is advancing quickly. To keep up, companies will need advice from a technology specialist. If your CIO is too busy keeping the data centers running, hire a CTO. Some AI-intensive companies are going even further and hiring a Chief AI Officer, though that wouldn’t constitute a full-time gig at most organizations.

Section 2: How AI will change the ways finance does its work

Will AI replace finance jobs?

Yes and no. AI is pricking the once impenetrable bubble that insulated white-collar work from automation. But its reach is limited (for now) by two human advantages that we take for granted.

One is abstract thinking: problem-solving, intuition, persuasion, and creativity. The other is our physical intelligence. It takes us a year or so to learn to walk, while those poor robot dogs spent decades staggering under the weight of millions of lines of code, trying to get the hang of it.

But our physical advantages are narrowing all the time. The Stanford robot dog can now climb, leap, and run into burning buildings (on purpose).

Few jobs don’t require abstract thinking, physical intelligence, or both. While AI can take over some parts of a knowledge-based job—what academics call discrete tasks—humans’ abilities are still needed for the parts that AI can’t handle.

AI will keep advancing, and a day may come when it can do anything a human can. Experts surveyed by Our World in Data think this could happen within a few decades. And if it does, society, not just businesses, will have to create ways for people to have purposeful work.

How do I pick the finance processes that will benefit most from AI?

Prediction is the most valuable use case for AI because it helps finance carry out its most important role: advising the rest of the business about its financial position and strategy. But AI may also save time and money inside the finance department provided CFOs don’t view it purely as a tool for automation. AI is best thought of as a technology BFF—there to support staff, not do their jobs.

Not all finance processes or tasks are worth friending with AI. Tasks with at least one, and preferably all, of the following characteristics will be the best candidates for handing off:

Density. Tasks that involve poring through complex documents such as contracts can exhaust humans. McKinsey & Company note that a global consumer-packaged-goods company used AI to convert financial data into a draft management discussion and analysis report for its monthly operational review, freeing up finance staff to focus on—you guessed it—higher-value tasks, such as calculating risk.

Duration. Some tasks aren’t complex but are still time-consuming. Finance staff can spend many hours gathering data before it can even be analyzed. AI can gather the data much faster than a human.

For example, one of the European Central Bank’s (ECB) most important roles is bank supervision. To stay current on what the banks are up to, the ECB built an exploratory AI platform that gathers a broad range of relevant text documents about the banks under its charge: news articles, supervisory assessments, and internal documents from each bank. The software uses natural language processing models trained with supervisors’ feedback to collate the needed data in seconds and arrange it by topic. Supervisors can now quickly understand the relevant information—instead of spending time searching for it.
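
For a flavor of how arranging documents by topic can work, here’s a simplified sketch using TF-IDF and matrix factorization from scikit-learn. The ECB’s actual models are supervised and far more sophisticated; the file paths and topic count here are assumptions for illustration:

```python
# Sketch: group a pile of supervisory documents into latent topics.
import glob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [open(path).read() for path in glob.glob("bank_docs/*.txt")]
tfidf = TfidfVectorizer(stop_words="english", max_features=5000)
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=5, random_state=0).fit(X)  # 5 assumed topics
terms = tfidf.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top_terms = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```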

Defects. Exploratory AI is good at catching mistakes that humans might miss. McKinsey & Company describe how the Bank of Canada built a machine learning tool to detect anomalies in regulatory submissions. Its automated daily runs catch things people wouldn’t. The employees can focus on analyzing the anomalies.
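
We don’t know the Bank of Canada’s exact design, but a minimal sketch of the general technique, unsupervised anomaly detection, might look like this (hypothetical file and column names):

```python
# Sketch: flag regulatory submissions that don't fit learned patterns.
import pandas as pd
from sklearn.ensemble import IsolationForest

filings = pd.read_csv("daily_submissions.csv")
features = filings[["total_assets", "tier1_capital_ratio", "net_losses"]]

detector = IsolationForest(contamination=0.01, random_state=0)
filings["flag"] = detector.fit_predict(features)  # -1 marks an anomaly

# Hand only the flagged filings to analysts for investigation.
print(filings[filings["flag"] == -1])
```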

Differentiation. AI can help the business stand apart from competitors. Within finance, that means improving the forecasting process so that the business can spot new opportunities for revenue or avoid risks better than competitors.

Since first mentioning AI in its 2017 annual report, JPMorgan Chase has built a team of 2,000 AI experts and data analysts working on more than 400 AI use cases in areas including financial fraud and risk. The company views data management as a differentiator, saying that high-quality data can fuel better insights and improve how it manages risk. It’s a huge investment that highlights how important buffing the crystal ball is to financial performance.

Insights advice

Start with proven applications. Though prediction is the most valuable application for AI, it’s not the easiest to develop, nor are there many success stories to look to as models. But there are some problems AI has been working on for decades and has mastered, such as voice and optical character recognition (OCR). For example, finance departments are using OCR-based AI to read and analyze paper contracts for revenue recognition. You can already buy AI software to process accounts receivable and accounts payable, as well as to code invoices.

Focus on augmentation, not just automation. For more than a century, the popular philosophy behind business automation has been the belief that replacing manual processes, and sometimes the people who perform or manage them, will improve efficiency and quality while lowering costs. And the results have been mixed.

Applying a replacement philosophy to AI projects increases the chances for failure because even exploratory AI algorithms, while generally more accurate than humans, are not 100% reliable.

And again, as we’ve said, generative AI makes mistakes. It learns from the data it sees and has no sense of ethics or propriety. Remember Microsoft’s chatbot Tay, created in 2016 to be a hip teenager? When released on Twitter (now X), it was lured into conversations with online trolls who taught it to become a hate-spewing jerk. Poor Tay had to be put in permanent detention.

Augmentation is better than replacement, especially for generative AI, because it keeps a human in the loop. Its main value is its semi-humanness: it communicates, it creates, and it predicts. But it can’t (yet) produce original insights, especially about problems that no one has seen before. And the world is full of these.

Don’t stop at the low-hanging fruit. At the moment, it’s much easier to focus your AI efforts on bringing more intelligence to routine tasks such as exception identification for invoice processing—and these are the areas where vendors already are, or soon will be, offering packaged products.

While gathering up all those ripe apples, start a parallel effort to tackle the more difficult areas that will produce analysis that separates you from the pack, such as predictive forecasting. These customized analytical applications are more complex and building them will likely require hiring AI experts or consultants. Planning and developing them will also take longer, so it pays to start figuring it out now.

Section 3: How to prepare finance employees for AI

What AI skills do finance employees need?

Let’s start with the skills that finance employees won’t need—or will need less and less—as AI improves. As in every other white-collar area of the company, AI will reduce the need for subject matter skills and knowledge.

Many finance processes are complex but well defined, with strict rules that allow for little improvisation. They are like honey to AI because they are discoverable—whether from the oceans of data ingested by generative AI tools or from specialized pools of finance-specific data. AI can learn them without improvising and then behaving like someone who’s had too much wine at an office social.

The growth skills for finance employees involve creativity, critical thinking, and relationships more than any body of knowledge. McKinsey & Co. report that communication, leadership, and collaboration skills appear increasingly in finance job postings, while traditional knowledge-based skills like those needed to conduct an audit are treading water.

What are queries, and what do employees need to know about them?

The ability to use generative AI tools such as ChatGPT could be the most important new skill for finance professionals since the spreadsheet. Unlike a spreadsheet, generative AI tools are deceptively simple. They look like search engines. Employees may think they should use them the same way. If they do, the results might convince them that AI has little to offer.

Don’t be fooled. The learning curve for generative AI tools is steeper than it appears, and your employees will need new skills to use them productively.

In a sense, we’re going Back to the Future. In the early days of databases, programmers had to write precise queries to get anything useful out of them (which is sometimes still true, depending on the age and complexity of the application).

When you use a tool like ChatGPT, you also need programming skills. You don’t need to learn SQL, because generative AI understands natural language. But you do have to learn to write logic-based instructions and thorough contextual descriptions that tell the tool how to sift through its piles of data.

Writing queries will become an art because AI has no sense of logic—it can’t interpret nuance in a sentence like humans can. Queries need to be written so that the AI knows the path to follow in its quest. Unlike exploratory AI, generative AI doesn’t learn from experience. Each query is treated as new and unique. That means the better the query, the better the answer.
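
To illustrate the difference, here’s a hypothetical before-and-after written as a short Python call using the openai library. The model name and the financial figures are assumptions, not recommendations; the point is that the detailed prompt spells out the role, the steps, and the output format:

```python
# Sketch: the same question asked two ways. The second prompt gives the
# model an explicit path to follow, which is what "logic-based
# instructions" means in practice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Summarize our Q3 results."

detailed = (
    "You are a financial analyst preparing a CFO briefing. Using the "
    "figures below: 1) compute quarter-over-quarter revenue growth, "
    "2) flag any line item that moved more than 10%, and 3) summarize "
    "your findings in three bullet points.\n\n"
    "Revenue Q2: $4.1M; Revenue Q3: $4.6M; COGS Q2: $1.9M; COGS Q3: $2.4M"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whatever model you license
    messages=[{"role": "user", "content": detailed}],
)
print(response.choices[0].message.content)
```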

Laying out the steps will also help humans reduce the black-box effect of not knowing how the AI came up with its answers. (Aristotle would make a model employee in a generative AI-infused finance department.)

How do we help employees accept AI, rather than resisting or ignoring it?

Like every technology before it, AI needs a shot of change management to make it palatable to employees. But accepting AI may be harder, because it has the potential to perform as a colleague rather than a static, impersonal tool.

MIT Sloan’s 2022 Artificial Intelligence and Business Strategy Global Executive Study and Research Project found that the best way to get employees to use AI regularly in their workflow is to require them to use it but give them free rein to override or ignore its advice. That may seem paradoxical until you consider that requiring everyone to use AI drives a sense of equality: no one is being singled out; everyone must do it. Meanwhile, when AI recommendations are optional, we feel more competent (the machine isn’t always right) and in charge (machines aren’t the bosses of us).

Requiring AI makes employees three times more likely to use it, while merely encouraging them to trust AI makes them only twice as likely. MIT Sloan also found that when employees can choose to ignore AI recommendations, they are twice as likely to use it as those who are forced to abide by the algorithm.

Do finance employees like using AI for routine tasks?

Evidence suggests they do. The Organization for Economic Cooperation and Development (OECD), a club of mostly rich countries, did a series of finance industry case studies to gauge employees’ reaction to AI. In one, a UK financial firm used AI to help with a range of repetitive administrative tasks involving mortgage underwriting, interest rate adjustments, commercial banking, and brokerage. Its outputs were reviewed by humans. Employees saw the technology as an improvement because their work became less administrative. They had more time to support customers and colleagues, do research, plan, and manage projects.

Does the shift to more “value-added” work risk stressing employees out?

Yes, this is a real risk. AI applications may automate the “easy” tasks. That leaves employees with the same amount of work but without the mental break afforded by the easier problems. Seventy-five percent of finance industry employees surveyed by the OECD said that AI had increased the pace at which they worked.

Insights advice

Define new roles before automating. You’ve heard this before, but it still bears repeating: Employees handle change better and are more productive when they know change is coming, they understand why, and, most importantly, they know what it means for them personally and professionally.

Few AI-infused finance tasks won’t require human involvement or supervision. Before automating, leaders and managers will want to work with employees to define their new roles post AI. Doing so will reduce turmoil, turnover, and strife. And employees will have fewer reasons to resist, either directly or passively, because they took part in creating their new roles.

Seek out liberal arts grads. CFOs shouldn’t assume that people with finance backgrounds are the only ones who should chat with generative AI. “It’s the individual that knows how to ask the right questions” who will be the best prepared for an AI future, according to Ford’s CFO, John Lawler.

People who have a grounding in arts- and humanities-based subjects such as logic (philosophy majors rejoice!) and journalism (all about asking good questions) could be good additions to finance.

Keep a few mindless tasks for the humans. Humans can’t be heads-down and focused completely on an intense, brain-consuming task all day, every day. If you give all the routine stuff to AI, the humans are more likely to burn out. The OECD survey found that a Canadian manufacturing company was so concerned about burnout that it left some of the easy tasks alone as a mental health break for employees.

Retrain as you go. Finance is a calling; many employees get into it to apply their love of numbers. Further, finance tasks such as bookkeeping don’t reward people for creativity. But increasingly, finance employees will be stewards of AI in routine tasks—think exception handling rather than doing the tasks themselves. That, theoretically, gives them more time to do analytical work in concert with their colleagues. Training to work together rather than solo can start now, through team structures that encourage collaboration or meeting practices that emphasize problem-solving, creativity, and empathy.

Get logical. Train employees to write great logic-based queries, and you’ll get more out of generative AI tools.

Section 4: How CFOs can help create an AI strategy for the business

Do I need a separate strategy for AI, or should it be part of our digital strategy?

AI is advancing so quickly and has so much potential for changing work and society that it deserves its own spotlight. Meanwhile, digital transformation is a forever project that will be informed by how companies decide to use AI.

CFOs play a prominent role in advising on the revenue opportunities and efficiencies of their company’s digital strategy. Their role in AI strategy will be similar. Here are some potential areas for AI to contribute to financial results:

Add revenue or volume. AI can uncover possibilities for selling more of your products to existing customers or discovering new customers and markets.

Differentiate and increase willingness-to-pay. AI is good at predicting what customers want next, whether it be product recommendations from Amazon or movie recommendations from Netflix. Even when the recommendations aren’t great, they make customers feel these brands have more to offer than competitors. And customers become more willing to pay for things they didn’t know they needed or wanted.

Defuse competitive risks. Maybe the numbers behind the AI project you’re considering don’t seem so great. But at the risk of raising the specter of FOMO, investing anyway may protect the company from competitors. It’s fair to assume at least some of them are moving ahead with AI. If they succeed, the companies that haven’t explored AI could lose market share and opportunities to attract new customers.

There’s reason to think that AI will be adopted more quickly than previous waves of technology. The Web struggled for years because of lousy Internet speeds and weak computing power. But AI has a sturdy foundation: fast Internet, cloud providers to store your data, those super-speedy video-game chips, and constantly improving generative AI models (including industry- and function-specific ones).

Transform business models. As innovative technology often does, AI creates opportunities for entering new markets—or underserved ones. A fintech startup called Tala sells microloans to people in Kenya, Mexico, the Philippines, and India who tend to use bank alternatives, such as cell phones, for financial services. The company scores loan applications through a mobile app using a customized version of Metaflow, an AI system that was developed and open sourced by Netflix.

Rethink outsourcing. If you have business processes that are handled entirely by outsourcers, chances are that the primary driver of that decision was lower cost labor. But the advantages of labor arbitrage—many people doing time-consuming work for less money—will wither as AI improves. Chances are if you’ve codified a process well enough for someone else to do it, it will be a good candidate for AI.

Outsourcing won’t go away, but it will change. Perhaps service providers will automate your processes and manage them for you. Or, as Harvard Business Review (HBR) predicts, they will help automate them so you can manage them yourself.

Reduce environmental emissions. Companies need a strategy for reducing AI-sourced emissions because the technology is an energy hog. Its appetite for data is one of the factors causing the growth of data centers. HBR anticipates that by 2030, about 8% of global energy will be consumed by data centers, with AI accounting for 20% of the total.

However, AI shows promise for mitigating the problem it is creating by helping companies reduce their carbon emissions. For example, companies can use it to make their energy consumption more efficient.

AI could also help companies decrease emissions (and costs) through predictive maintenance and improved waste management. A report by Boston Consulting Group suggests that by finding new ways to make companies run more efficiently, AI could potentially reduce greenhouse emissions by 5–10% by 2030.

Where are the best opportunities for AI to improve revenues?

Look for situations that rely on AI’s prediction power. The biggest hot spots for predictive AI in companies today are:

Sales and marketing. Customers have long been pushing for a more personalized experience. AI puts that expectation into overdrive: We’re all still waiting for a useful chatbot that knows our buying history and preferences and doesn’t drive us to scream REPRESENTATIVE! into our phones in frustration when we don’t get the response we need.

Product development. An AI-powered product development process can help companies create more new offerings faster. And that leads to growth.

Researchers found that over an eight-year period, companies that invested more in AI for any reason increased sales by 19.5%, employment by 18.1%, and market valuation by 22.3%. The study attributed these gains primarily to more and better product innovation.

Who should be responsible for AI strategy in my company?

Traditionally, the CIO or the CTO oversees strategy for new technologies. However, financial services companies, which have been investing in AI for years, are carving out new roles to focus on it. JPMorgan Chase, for example, now has a Chief Data and Analytics Officer who sits on its operating committee and reports to the CEO.

At the very least, companies with strong AI ambitions should have a specialist who knows about AI and data and can advise the executive committee, according to Harvard Data Science Review. Ideally, this person will be on staff, but in these days of sports-star salaries and bonuses for AI savants, a trusted consultant can also fill the bill.

Do I need yet another steering committee?

We’re afraid so. And this committee should draw from everywhere in the company.

Members should include the AI specialist, who will help create use cases and separate AI myth from reality. In addition, it needs the CEO for strategy, the CIO (or CTO) and the CFO to figure out costs and manage projects, and the CHRO to help with change management and the effects of AI on employees.

One function of the steering committee should be to educate board members on AI. Ideally the board itself will have its own AI expert to help directors make informed decisions about how AI fits into the corporate strategy and the timeline for investments.

Should I build an AI infrastructure myself, or should I rely on vendors?

Build or buy decisions for AI, as for other technology investments, rest on a cost/benefit analysis. If you have a well-defined use case, it may be easy to quantify the benefits. But the cost of AI can be difficult to calculate because there are so many factors, such as the scope of the application, the number of people who will use it, and the quantity and quality of your data.

Turning to average costs can help. However, it’s next to impossible to come up with an average cost for exploratory AI models because they are usually built for a specific purpose, such as fraud prevention or credit checking in finance.

Generative AI systems are a little easier to quantify because vendors are publishing rates for querying the numerous publicly available large language models (LLMs), small language models (SLMs), and AI image generators. Vendors, as well as cloud providers, should be able to put a number on building or hosting the supporting infrastructure.

(LLMs and SLMs contain the algorithms that generate answers to people’s queries. SLMs are specialized with industry and company data; when glued to a more generic LLM, they can provide better and more specific answers.)

An estimate by ClearML suggests that the first year of training, fine-tuning, and running an LLM for 3,000 employees hovers around $1 million using an in-house team. Again, how much a system costs depends on the quality and quantity of data you have, and how it is used. A single model could be used for multiple purposes, so the cost for new applications and users could come down over time.

If that sounds expensive, consider this: researchers agree that buying access to commercially available AI by the sip can quickly add up to more than building your own.

To oversimplify: There are two ways that vendors charge for access to AI models. First, you pay to ask the model a question. Then you pay for the answer, which may include text, pictures, or computer code.

Because new technology comes with new jargon, the industry has adopted the term token to describe the chunks of text that the models process when handling inquiries and generating results. It takes roughly 1,000 tokens to create 750 words of text (the equivalent of about three printed, double-spaced pages).

Looking at LLMs, the average price per token seems small. Using the latest, greatest version of ChatGPT costs $2.50–$5.00 for 1 million input tokens and $7.50–$15.00 for 1 million output tokens according to OpenAI, which operates it.
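
A back-of-the-envelope calculator shows how those per-token rates translate into a budget line. This sketch uses the price ranges cited above; the query volumes and word counts are assumptions for illustration:

```python
# Rough token cost estimator. Rates are drawn from the cited ranges
# (low end for input, high end for output); adjust for your vendor.
INPUT_PER_MILLION = 2.50    # USD per 1M input tokens
OUTPUT_PER_MILLION = 15.00  # USD per 1M output tokens
TOKENS_PER_WORD = 1000 / 750  # roughly 1,000 tokens per 750 words

def monthly_cost(queries_per_day, words_in, words_out, days=30):
    """Estimate one month's spend for a given query volume."""
    tokens_in = queries_per_day * days * words_in * TOKENS_PER_WORD
    tokens_out = queries_per_day * days * words_out * TOKENS_PER_WORD
    return (tokens_in / 1e6) * INPUT_PER_MILLION + \
           (tokens_out / 1e6) * OUTPUT_PER_MILLION

# 50,000 customer queries a day, ~100 words in and ~300 words out each:
print(f"About ${monthly_cost(50_000, 100, 300):,.0f} per month")
```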

The price a company pays depends on whether they use the commercial version or have customized it, among the many other factors we mentioned above.

The costs can add up quickly when you have thousands of people submitting queries (think about customers asking questions of a chatbot). Further, the type of content generated can drive up costs. HBR reports that pixel-packed images burn through tokens far faster than plain text does.

Beyond cost, building a generative AI model in-house and buying access from commercial providers each have advantages and disadvantages (see tables).

Paying by the token for a commercial LLM

Advantages

  • Ease of use. Less need for internal technology expertise.
  • Lower infrastructure cost. Just plug into the vendor’s LLM.
  • Stay up to date. No need to upgrade software or infrastructure as LLM technology improves.
  • Grow as needed. Since most commercial LLMs are cloud-based, it’s easy for you to get more LLM power when you need it.

Disadvantages

  • Higher costs. Researchers agree that using commercially available LLMs is more expensive than doing it yourself.
  • Outages. LLMs are a new technology and usage is skyrocketing. You can neither predict nor control outages that result from strains on these systems.
  • Lack of flexibility. As your employees become more adept at querying LLMs, they may find your choice doesn’t give them everything they need. Companies may need to switch providers at any point to get the best results for their teams.

Self-hosted LLMs

Advantages

  • Spread out costs. The more processes and employees you have querying the LLM, the more cost efficiencies you’ll see.
  • Predictability. Your staff can decide when to do updates and address any problems or needs.
  • Data and change control. If all your data is in-house, you have a better shot at keeping it private. And if your company decides to change its AI strategy, you can decide when and how to make the needed changes.

Disadvantages

  • Cost of talent. There is no established pool of AI experts anywhere. There are programmers, and there are data analysts. Using AI effectively requires deep knowledge of both areas. Business leaders should be prepared to spend money training and keeping technologists. This talent shortage is bidding up asking salaries faster than a Picasso at an art auction.
  • Infrastructure and development costs. AI’s appetite for computing power is big and getting bigger fast. For example, a single high-powered GPU (those chips that power video games and AI) costs about US$10,000 right now. To create your own LLM using the open-source Llama platform requires at least one GPU. Development costs start at $1 million for a model serving 3,000 people. Meanwhile, monthly operational costs can also run into the millions, depending on the size and complexity of the model. In addition, keeping those data centers powered and cool is going to cost you.
  • Scaling costs. Most companies are just getting started with AI. As your ambitions grow, you will need more of everything: people, computers, data, software, and more.

Insights advice

Walk and chew gum. Think of AI strategy as two-pronged, IEEE suggests: Use AI to help employees be more productive for quicker ROI while at the same time experimenting with AI’s predictive abilities. This experimental route may have a much bigger effect, such as growing revenues from existing products and services and discovering new products and markets.

Put the board to work. We hope you don’t have one of those lazy, dysfunctional boards that rubber-stamps whatever the CEO says rather than adding diverse perspectives and asking tough questions. Boards should push back, as they would for any new, unproven management proposal. IEEE suggests they can also challenge companies to avoid acts of omission, such as moving too late or not understanding the risks of falling out of step with the AI strategies of their competitors.

Link to the larger technology strategy. AI deserves its own strategy, but don’t let it become too disconnected from the broader IT strategy. New AI apps should not outrun the ability of your company’s IT infrastructure to absorb them. IEEE believes companies will also want IT to help with data governance, ensure security and privacy, and follow government regulations.

Have an AI people strategy. Getting the most from AI requires more than just technical talent. Research presented at the Americas Conference on Information Systems predicts that employees will have to learn to do their jobs differently. Rather than just chasing expensive external talent, companies should invest in training and upskilling existing employees.

Section 5: How CFOs can help the business with AI governance

Should I trust AI?

Certainly not at first. We’ve all been spoiled by how easy it is to interact with that AI celebrity, ChatGPT. Yes, we’ve seen a few hallucinations. But it does pretty well at answering our questions and generating okay content.

Don’t expect the same from your own generative AI project at the beginning—especially if you’re using your own data or stirring it into one of the commercially available LLMs. As Forrester CEO George Colony says, the output from a new model may only be right 60% of the time. You wouldn’t put that in front of customers or employees. Expect to do a lot of tweaking and code rewrites before it’s ready for the world.

Furthermore, although programmers try to design algorithms to perform reliably and consistently, no one can fully predict how they will act once deployed. They are as inscrutable and unpredictable as teenagers.

A paper published by IEEE found that it’s mathematically impossible to reliably replicate the results of complex algorithms like those that power LLMs. Normally, this is a deal-breaker in science: if no one can repeat your experiment with the same outcome, your theory is headed for the trash can.

Nor is it possible for humans to predict what will cause an algorithm to become unstable and make bad decisions or return false information. For example, humans know to ignore a bumper sticker slapped onto a stop sign; it doesn’t affect whether we decide to stop. AI doesn’t have that ability because it can’t reason. By slapping a small bumper sticker on a stop sign, researchers presenting at the 2018 Conference on Computer Vision and Pattern Recognition (CVPR 2018) were able to fool a self-driving car into continuing past it.

A mistake as small as that bumper sticker can have huge consequences for life and limb. And an AI-generated financial forecast with even the slightest error can drive a business into a ditch.

Blame gets a bad rap, but it can be useful: it pinpoints problems and helps us figure out how to prevent them from recurring. When an AI system makes a mistake, stakeholders will line up to ask who or what went wrong. The developer, the user, or the algorithm itself?

But when it comes to AI, it’s hard to see who’s at fault because the algorithms don’t usually leave any breadcrumbs for auditors to follow the way traditional software does.

Transparency is crucial, however, both to improve system performance in the future, and to win over skeptical humans at any point. If the experts can’t tell you how AI arrives at its answers, the application isn’t ready to go live.

Is it possible to create AI that isn’t biased?

Not entirely, at least not at first. Humans write algorithms, and humans have biases, intentional or not. But because AI can learn (and doesn’t get embarrassed or defensive when called out), it can become less biased over time. Humans can keep watch on its outputs and tweak the algorithm if they see biased results.

Will employees trust AI too much?

Even when AI is running well, be careful about giving it too much power. Humans have a habit of seeing technology as being more trustworthy than our fellow human beings (and sometimes it is). We may therefore put too much faith in the technology when making high-stakes decisions, according to Big Data & Society.

Complacency can also come from our tendency to want to avoid blame for bad decisions, a.k.a. “agency laundering,” in which we distance ourselves from ethically or morally suspect decisions: “It was all Siri’s fault!” Assigning clear human responsibility and accountability for each AI-enhanced task is crucial to reducing the risk of overreliance on AI.

We already have policies to govern technology. Do we need special policies for AI?

Yes. What sets AI apart from every other type of software is that it can learn from what it did in the past or from feedback on its answers. This means it can easily stray from the path that developers originally created for it, without anyone knowing why. And in doing so, it has the potential to do a great deal of harm to your employees and customers.

Existing technology governance policies typically cover only how technology is developed and implemented. AI policies need to zoom out to address societal concerns. Here are examples of guidelines to consider in creating your own policy:

Inclusivity. When AI gathers intel, it can underrepresent or misinterpret data about people who are not members of a dominant group (such as women among senior executives, Black people in the U.S. population, or individuals with physical disabilities). That’s because there is often much less data about them to work with.

Less data means the outputs from the AI will be skewed toward the group with the most inputs. LLMs are fueled by public data sources, and these currently overrepresent White males. A Cornell University study of AI-generated images found that women and Black people are significantly underrepresented, which could deepen existing biases and stereotypes. The researchers also found that males were overrepresented in images of people working, which they conclude could deter women and young girls from pursuing their professional goals.

A policy that requires tuning AI to adjust for underrepresented groups isn’t only about treating people equally, says the Organization for Economic Cooperation and Development. It’s also about ensuring they are portrayed accurately.

Fairness and privacy. We’re accustomed to focusing on customers and employees when it comes to data privacy. But accessing a third-party LLM to gather financial intel could violate the privacy or other rights of the people whose data is collected there (to date, LLMs have tended to inhale data indiscriminately without checking whether they are allowed to have it, or whether it’s accurate). Because there’s no way to know, companies should protect that data as if it were their own.

The EU has proposed that companies using LLM data can be held liable for privacy violations along with the LLM owner. European regulators are also considering holding companies responsible for addressing misinformation and disinformation that is amplified by AI, as well as respecting freedom of expression and other rights and freedoms protected by applicable international law.

Disclosure. Companies owe it to customers, employees, and everyone else to show the intended purpose of their AI systems when they deploy them. They should do so in plain language, not in one of those endless, incomprehensible user agreements written by lawyers for the company’s protection. Stakeholders should be aware when they’re interacting with AI systems, and companies should provide information to help those wronged by an AI system to challenge its output.

Safety and security. Companies will need ways to ensure that if AI systems risk causing undue harm or show undesired behavior, they can be overridden, repaired, or decommissioned. Where possible, companies should extend cybersecurity protection to cover the data they receive from AI entities, such as LLMs.

Accountability and responsibility. Many data sources, particularly from the public Internet, contain inaccuracies, biases, and inadvertent disclosures of confidential information. Through innocuous queries, AI can create troubling data all on its own, such as deep fakes: realistic images, video, or audio involving the likeness of actual places and people (alive or dead). Meanwhile bad actors can use AI to easily create realistic fakes that target people with disinformation or abuse them.

Companies can’t absolve themselves of responsibility for harm by blaming the algorithm or an external data source. They should be able to trace all the data sets, processes, and decisions made by the AI so they can analyze its output. Further, they should be able to trace development decisions made by humans as AIs are created and managed.

Policies that require a clear statement of the intended outcomes for AI systems, along with ways of testing, monitoring, and auditing them, can ensure companies design AI systems to limit problems that come from data, algorithms, or misuse, according to an MIT policy brief.

Do all AI projects require the same level of oversight?

No. AI used to guide missile systems is riskier than the algorithms that Netflix uses to push movie recommendations. The EU recognized this difference when it passed its Artificial Intelligence Act in 2024. The law will go into full force in 2026. It classifies four levels of risk from AI that need different governance policies:

Minimal risk. Many business uses of AI fall in this category. Will algorithms from Netflix or Amazon send you some bad recommendations? Sure. Do these bad recommendations pose a threat to citizens’ rights or safety? Not unless you start a fight over the TV remote, and that’s on you, not the AI.

In finance, processes that involve structured, numerical data, such as numbers from spreadsheets—where there’s no room for AI to misinterpret what it’s seeing or make stuff up—carry minimal risk. Though you should always have a human checking AI’s work to make sure the final numbers are correct, keeping AI inside the organization and using easily verifiable data means there’s minimal threat to customers or employees.

Limited risk. Using AI to gather unstructured data (e-mails, news stories, blog posts, social media comments, and the like) to create a report for management poses the risk of sweeping people’s confidential or copyrighted data into the AI net. In this case, the EU will require informing stakeholders that AI was involved in generating the content. You’ll need to be able to explain how the AI came up with its answers and where the data came from.

High risk. AI gets riskier when it is used in applications that control things that society can’t do without, such as water, gas, and electricity infrastructures; things that potentially cause people harm, such as medical devices; and systems that are used to protect people and their rights, such as public safety and justice systems.

More relevant for corporate leaders are systems that decide access to education or jobs (such as university admissions and recruiting systems) and applications that use biometric identification.

But perhaps the biggest risks for businesses are AI systems that interpret human sentiment and emotions. As noted earlier, these can be useful when trying to interpret the tone of competitors’ quarterly earnings calls or to gather intel on whether their customers are happy. But this highly interpretive use of AI is still new and untested. An AI crawling the Internet could get the data wrong or probe too deeply into personal information.

This is where EU regulators would like to perch on your shoulder and make sure you are mitigating the risks. You’ll need to ensure high-quality data, keep activity logs and detailed documentation, track user information, provide human oversight, and maintain high levels of accuracy and cybersecurity.

Unacceptable risk. The EU will ban the use of AI altogether if it is considered an outright threat to people, such as when it is used to manipulate their behavior or that of specific vulnerable groups (for example, voice-activated toys that encourage children to engage in dangerous behavior). AI is also banned for “social scoring,” that is, classifying people based on their behavior, socioeconomic status, or personal characteristics. Businesses must further steer clear of identifying or categorizing people based on biometric data. The prohibition includes real-time and remote biometric identification systems such as facial recognition, with some exceptions for law enforcement.

Penalties for violating the EU AI law will be fines ranging from 7.5 million euros or 1.5% of revenues to 35 million euros or 7% of revenues.

Insights advice

Build or buy AI that follows the toughest regulatory standards. Expect politicians to see regulating AI as a bid for job security, and for the AI market to respond to the highest level of regulatory risk protection. If you’re a multinational organization, it makes sense to seek out the orneriest regulations and use them as a benchmark. But even if you’re focused exclusively on the home market, it pays to emphasize risk protection because it gives cover against changes in local regulations and legal challenges from people who think your AI has wronged them.

Create role-based rules. Developers have different responsibilities and challenges from business users or leaders when it comes to AI. Everyone should have guidance for building and using AI that is right for their roles.

Monitor for societal effects. AI changes over time, and so can the effects it has on people outside your organization. It’s important to keep comparing the data that goes into the system with the results that come out to make sure nobody’s getting hurt by it.

Don’t leave AI to the developers. Most software projects involve stakeholders early on to create a list of requirements, but then the businesspeople go back to their day jobs. Their perspective is important at all stages of the AI development process, not just for making the software useful but also for guarding against bias and other unintended consequences. Having people with diverse backgrounds and experiences on the team will reduce the probability of AI losing its artificial mind.

Believe the hype—to a point

Remember when the NCSA Mosaic Web browser first appeared in 1993? There was a lot of excitement at first, because it gave the Internet a human face—you could add pictures and words to the Internet. Businesses rushed to take advantage. The result? Mostly a lot of boring “About us” pages, with companies left wondering when, or if, it would have any effect on their fortunes.

Today? The Web browser is a centerpiece of marketing strategy for just about every company. For many others, it is the business—the primary channel for interactions with customers.

We’re going through the same first burst of excitement today with generative AI. It gives anyone easy access to the power of AI. And just like in the ’90s, the excitement has quickly cooled to tepid, with companies pulling back on generative AI projects and wondering when it will provide real value.

Patience is once again in order. The power and potential that sits behind the generative AI search box is even greater than what sat behind the Web browser 30 years ago. AI has the potential to think and explore in ways humans can’t and to become a useful digital colleague.

This will take time.

Don’t make the mistake of dismissing it. We don’t want to create too much FOMO here, but recall that back when the Internet was young, most retail businesses discounted the potential of Web-based electronic commerce. Many are still reeling from having waited too long and not investing enough in the resources needed to create compelling, useful online retail experiences.

Here’s the lesson. Beyond capitalizing on the advantages that exploratory and generative AI offer to finance as a business function, CFOs have a big say in where the business invests in experiments using new technologies and business models. They should know that AI is going to upend how business is done and how companies run—eventually.

As finance leaders, CFOs should be AI optimists, encouraging the C-suite to experiment and approving the money to do so. Just as with the Web and electronic commerce, some internal and external resources need to be developed—skills, infrastructure, and software—to take full advantage of AI. CFOs should, gradually and carefully but with a firm belief in the power of AI over the long term, start building them now.
