Even as A.I. goes mainstream, the majority of leaders still struggle to use it to its full potential.
According to Deloitte’s 2022 State of AI in the Enterprise report, which surveyed more than 2,600 global business leaders, more than three-quarters of respondents had fully deployed three or more types of A.I. at their firms, but many companies have yet to see the value.
In fact, compared to the prior year’s study, there was a sizeable increase (29%) in the number of respondents who identified their A.I. tools as underachieving.
When it comes to scaling A.I., the top challenges cited were managing the risk associated with A.I., maintaining and supporting initiatives after the launch of A.I. systems, and a lack of executive commitment to A.I.
Clearly, knowing that you “should” adopt A.I. doesn’t easily translate to knowing how to do so.
In Power and Prediction: The Disruptive Economics of Artificial Intelligence, economists Joshua Gans, Ajay Agrawal, and Avi Goldfarb argue that while leveraging the predictive capabilities of A.I. could vastly improve decision-making at scale, it has to be done with the consequences in mind.
The book couldn’t be more timely. “Artificial intelligence is increasingly being used to make workplace decisions–but human intelligence remains vital” reads a recent headline on a commentary in Fortune. Some “85% of business leaders would let a robot make their decisions and avoid the challenges posed,” reports another recent survey. An article in Forbes even offers “5 Strategic Principles For Using AI To Make Better Decisions.”
A.I. is increasingly helping leaders make decisions, and to help them take full advantage of A.I., the authors of Power and Prediction developed a framework called the “A.I. Systems Discovery Canvas.”
Below is an abbreviated exercise the authors outline in their book to help leaders determine if their firm is ready to adopt A.I. technology to guide decision-making.
1. Are you clear on your organization’s mission?
Though clarifying your mission might seem easy, co-author Goldfarb tells The Workback that it’s actually not uncommon for this to be the hardest step for businesses, which makes the entire exercise a challenge.
Goldfarb isn’t alone in identifying this issue. A previous edition of Deloitte’s State of A.I. in the Enterprise report states that all too often, business leaders focus too narrowly on use cases or isolate A.I. strategy within IT or data science teams instead of weaving it into the overall strategy.
“The strongest A.I. strategies tend to begin without ever mentioning A.I.,” write the report authors. “Instead, they should begin with the organization’s North Star: The core business strategy.”
Goldfarb adds that sometimes, leaders run into issues with certain players not wanting to get on board for their own personal reasons: “The people who benefit from the way the current system operates are likely to resist change. With a clear, well-communicated mission, identifying the core decisions is a matter of attention and effort, and resistance can be overcome by referring to what the organization is trying to accomplish.”
2. Can you boil down your business to the fewest possible decisions you’d need to achieve your mission?
While there may be infinite micro-decisions involved in any business, the authors argue that to best utilize A.I., leaders have to cut through the fog.
They use the example of a home insurance company to illustrate this exercise. Within the marketing department, core decisions are: Which customers do we target, and how should we allocate resources to reach them?
3. Can you identify which predictions would help you make better decisions—and what would happen if those are wrong?
Now, think about how being able to make better predictions about the outcome of key decisions could help serve your mission. The trick is to think beyond what you can do with the data you currently have and to imagine what you could do if you had super-powerful A.I. tools.
For example, the authors argue, if a home insurance company could use A.I. to make better predictions about a customer’s risk profile—which the authors call “a near-perfect application” for A.I.—the marketing department could target customers who have a lower likelihood of filing a claim and thus cost the company less over time.
However, what if a company were to think even more outside the box and not only predict which customers might have losses but actually help their members lower their risk? Indeed, the insurance industry may very well be “on the verge of a seismic, tech-driven shift,” in the words of a 2021 McKinsey report, which states that we’re moving in a direction in which “insurance will shift from its current state of ‘detect and repair’ to ‘predict and prevent,’ transforming every aspect of the industry in the process.”
As Agrawal explains to The Workback, if an insurance company were able to make better predictions about which homes are at greater risk of dangers, such as an electrical fire, it could create a whole new approach: offer those customers the option of installing devices in their homes that use sensors to collect data about their risk of fire. This data can then be used to make early warning predictions that enable interventions. The company would share the information it collects with customers to help them avoid fires and offer them a lower premium in exchange for their cooperation.
While devices like Ting can already monitor fire risk, linking such a tool to a person’s insurance premium would be a major development for the industry — effectively shifting from transferring risk to mitigating it. It would allow a company to attract more customers, pay fewer claims, and enjoy higher profits while providing a valuable benefit to society.
Of course, no prediction is perfect, so you have to consider what will happen if the one you’re working with is wrong. That’s where human judgment comes in. Put simply, if the prediction we generate through A.I. is wrong, how bad is that outcome?
“Judgment is your assessment of the cost in the case of a mistake in the prediction,” explains Agrawal.
A basic example of this judgment step would be using a weather report to decide whether to wear a rain jacket. If you do carry one based on a report of rain, but the report turns out to be wrong, you may have worn a hot rain jacket all day for no reason. How “costly” that decision was is subjective: It depends on how annoying you find it to sweat through a raincoat on a humid day.
Agrawal explains that if you’re predicting the risk of fire for a particular home, you might:
- Wrongly predict their risk is high, and spend unnecessary money on sending them a monitoring device.
- Wrongly predict their risk is low, and choose not to send them a device. If their house does catch on fire, you might have to pay a claim that could’ve been avoided.
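The trade-off between those two mistakes can be sketched as a simple expected-cost comparison. The numbers below (device cost, average claim payout, the share of fires a device prevents) are purely illustrative assumptions for the sketch, not figures from the book:

```python
# Hypothetical expected-cost sketch of the two mistakes described above.
# All numbers are illustrative assumptions, not figures from the book.

DEVICE_COST = 300        # cost of shipping and installing a monitoring device
AVG_CLAIM = 80_000       # average payout on a house-fire claim
PREVENTION_RATE = 0.6    # assumed share of fires the device helps prevent

def expected_cost(p_fire: float, send_device: bool) -> float:
    """Expected cost to the insurer for one home, given the fire probability."""
    if send_device:
        # Pay for the device; fires still happen, but less often.
        return DEVICE_COST + p_fire * (1 - PREVENTION_RATE) * AVG_CLAIM
    # No device: bear the full expected claim cost.
    return p_fire * AVG_CLAIM

def should_send_device(p_fire: float) -> bool:
    """Send the device whenever it lowers the expected cost."""
    return expected_cost(p_fire, True) < expected_cost(p_fire, False)

# The device pays off when p_fire * PREVENTION_RATE * AVG_CLAIM > DEVICE_COST,
# so the break-even fire probability is:
break_even = DEVICE_COST / (PREVENTION_RATE * AVG_CLAIM)

print(f"break-even fire probability: {break_even:.4%}")  # 0.6250%
print(should_send_device(0.001))   # low-risk home  -> False
print(should_send_device(0.02))    # high-risk home -> True
```

The point of the sketch is the break-even line: a wrong “high risk” prediction costs only the device, while a wrong “low risk” prediction risks the full claim, so even a modest predicted probability of fire can justify intervening.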
In the future, such monitoring could become even more sophisticated and extend to behavioral risks, like frying with oil (currently a major cause of house fires that insurance companies have no clear way of predicting or preventing), though each extension would raise its own questions and possibilities.
Ultimately, the goal of the exercise is to figure out whether the benefits of an A.I.-generated prediction being right are higher than the costs of it being wrong. With a clear-eyed understanding of that gamble, you can move confidently into uncharted territory. Companies that do so well will define the future of their industries.
Regardless of the business you’re in, the authors of Power and Prediction say the exercise will work best if you adopt a “blank slate” approach. In other words, try to imagine how things could be if you were starting from scratch.
Only then can you see the true potential of A.I. and work through how to implement it wisely.