
'Don't fire the humans.' Why people are still needed through AI's boom

Artificial intelligence tools are capable of ingesting massive amounts of data to make forecasts and recommendations that would otherwise be impossible, but it’s still important to take those insights with a grain of salt.

That’s according to research conducted by a team of Rotman School of Management professors. In their paper “Algorithmic Decision-Making Safeguarded by Human Knowledge,” the researchers found that business outcomes — namely optimizing profitability — were best achieved when AI was paired with a critical human eye, and a few practical guardrails.

“Don’t fire the human decision-makers yet because you need someone with good business sense and intuition to moderate or safeguard AI decision making,” says one of the study’s co-authors, Ningyuan Chen. “At least for the next few decades, the combination of business sense and intuition with the AI algorithm will [produce the best outcomes].”

That recommendation applies anywhere software is tasked with using data to make decisions, but Chen and co-researchers Ming Hu and Wenhao Li began with a very specific use case in mind: gas stations. Stations are often owned and operated by large retail chains, many of which lean on AI to determine prices at the pump. Station managers, however, often have insights about the local market that are hard to incorporate into an algorithm.

So what happens when the AI’s recommended price of fuel clashes with the operator’s intuition?

“[AI] is a black box, and [users] don't really know what's going on inside, but they have their own intuition, so how can they reconcile the two when there's a conflict?” says Chen. “Usually they have the authority to override the AI, but they're not sure if they're more right than the AI, and this was the starting motivation of our project.”

To determine just how much stock human operators should put in AI decisions that go against their intuition, the researchers identified three common pitfalls of the technology. The first arises when the algorithm is used in a competitive market but is unable to incorporate real-time competitor data.

“The AI will assume that it’s pricing as a monopoly, when in fact it's actually competing with someone else,” explains Chen. “Because of this, the price provided by the AI will have an upward bias, meaning the price set by AI is higher than what the optimal price should be under a competitive scenario.”
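
To make that upward bias concrete, here is a toy sketch that is not drawn from the paper: demand falls linearly with price and drops off faster once the price climbs above a rival’s. The demand function, unit cost and competitor price are all invented for illustration.

```python
# Toy model (invented numbers): an algorithm that assumes it is a
# monopolist recommends a higher price than is optimal under competition.

def demand(price, competitor_price=None):
    """Units sold at a given price; extra share is lost above the rival's price."""
    base = 100 - 4 * price
    if competitor_price is not None:
        base -= 6 * max(0.0, price - competitor_price)
    return max(0.0, base)

def best_price(competitor_price=None, cost=10.0):
    """Grid-search the profit-maximizing price between $10 and $30."""
    candidates = [10 + 0.01 * i for i in range(2000)]
    return max(candidates, key=lambda p: (p - cost) * demand(p, competitor_price))

print(best_price())                       # ~17.50: the "monopoly" recommendation
print(best_price(competitor_price=16.0))  # ~16.00: optimal against a $16 rival
```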

Chen adds that while it’s possible to incorporate competitor pricing into the AI’s decision-making process, it’s often not practical. Employees can collect competitor pricing information and feed it into the software manually, he says, but the AI would also need to be programmed to understand how much weight to give that information compared with other data inputs, and that can be highly specific to individual use cases.

Instead, Chen says it’s often more efficient to check the AI price against the competition manually and implement a simple rule. “If the AI is below the market average, take the AI price; if the AI is above it, take the market price,” he says. “This way the AI doesn't have to know the market price.”
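
In code, that rule is simply a cap at the market average. The sketch below is a minimal rendering of the safeguard Chen describes; the function name and prices are illustrative.

```python
# Minimal sketch: cap the AI's recommendation at the observed market
# average, so the algorithm never needs to know competitor prices itself.

def safeguarded_price(ai_price: float, market_average: float) -> float:
    """Take the AI price when it is at or below the market average;
    otherwise fall back to the market price."""
    return min(ai_price, market_average)

print(safeguarded_price(1.52, 1.58))  # 1.52: AI is competitive, keep it
print(safeguarded_price(1.63, 1.58))  # 1.58: AI shows upward bias, cap it
```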

The second common pitfall, one that is difficult for the software to correct on its own, relates to demand elasticity. Chen says AI programs are typically designed to use a relatively simple equation to measure how price affects demand, but the reality can be far more complex. For example, the software might assume a one per cent increase in price reduces demand by three per cent, but the relationship between the two is often more nuanced and less consistent.

“AI has to simplify it a little bit, and using a linear model means that if a true relationship is not linear there's going to be an error,” Chen says. “This is where the human comes into the picture.”
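
One way to see the danger is a toy simulation with invented numbers, rather than anything from the study: if the true price-demand relationship has constant elasticity but the software fits a straight line, the fit looks fine near the historical prices and drifts badly once the model extrapolates beyond them.

```python
# Toy simulation of the elasticity pitfall (all numbers invented): the
# "true" demand curve is nonlinear, but the model fits a straight line.
import numpy as np

rng = np.random.default_rng(0)
prices = rng.uniform(1.40, 1.60, size=200)            # historical pump prices
true_demand = 5000 * prices ** -3.0                   # nonlinear: elasticity of -3
observed = true_demand + rng.normal(0, 20, size=200)  # noisy sales records

slope, intercept = np.polyfit(prices, observed, deg=1)  # the AI's linear model

for p in (1.50, 1.80):
    linear = slope * p + intercept
    actual = 5000 * p ** -3.0
    print(f"price {p:.2f}: linear model {linear:.0f}, true demand {actual:.0f}")
# Inside the historical range the fit is close; at 1.80 it is far off.
```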

This scenario offers another example of why it’s always best to impose guardrails on AI decisions, Chen says, especially when it comes to pricing models. For example, if the human operator determines that the price should be somewhere between three per cent and seven per cent higher than yesterday, but the AI suggests a price increase of 10 per cent, the research suggests the price should be set at the human-imposed guardrail of seven per cent.

“In this scenario the AI uses a model to output a price, and the human uses his or her own belief to construct an interval, pulling the AI price towards the interval if it falls outside,” says Chen.
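
That pull-towards-the-interval operation is a simple clamp. The sketch below, with made-up prices, renders the example above: the operator’s three-to-seven-per-cent belief becomes an interval, and the AI’s 10 per cent increase is pulled back to its upper edge.

```python
# Minimal sketch of the interval guardrail, with made-up prices: the AI
# price is pulled to the nearest edge of the operator's interval.

def clamp_to_interval(ai_price: float, low: float, high: float) -> float:
    """Pull the AI price into [low, high] if it falls outside."""
    return min(max(ai_price, low), high)

yesterday = 1.50
low, high = yesterday * 1.03, yesterday * 1.07  # operator's 3%-7% belief
ai_price = yesterday * 1.10                     # AI suggests a 10% increase

print(round(clamp_to_interval(ai_price, low, high), 3))  # 1.605: capped at +7%
```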

The third and final pitfall, according to the study, stems from data errors, which Chen says are also common. When data is collected or entered incorrectly, it can introduce outliers that lead to unreliable results.

“If the data is contaminated, the outcome is also corrupted,” says Chen. While AI might be better at making sense of large data sets, he explains, humans are still better at judging what constitutes reliable data and spotting extreme outliers.
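
A small invented example makes the point: a single mis-keyed sales record drags the average demand estimate far from reality, while a human-style plausibility screen recovers it. The 5,000-litre cutoff below is a hypothetical judgment call, not a rule from the study.

```python
# Invented example of the contamination pitfall: one mis-keyed record
# corrupts the naive average; a simple plausibility screen recovers it.

clean_sales = [620, 598, 640, 611, 633, 605, 628, 617]  # litres sold per day
contaminated = clean_sales + [61700]                    # decimal-point entry error

naive_estimate = sum(contaminated) / len(contaminated)

plausible = [s for s in contaminated if s < 5000]       # a human knows 61,700 is impossible here
screened_estimate = sum(plausible) / len(plausible)

print(round(naive_estimate), round(screened_estimate))  # 7406 vs 619
```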

Bad data offers one more argument in favour of human intervention in AI decision-making, says Chen. In each of the three cases, imposing simple safeguards and interventions can correct the most common mistakes of algorithmic decision-making.

As artificial intelligence is tasked with informing more of our decisions, Chen says it’s important to maintain those human-imposed guardrails. The study ultimately confirms that, across a range of use cases, combining the powerful data-processing capabilities of AI with the common sense and intuition of humans results in better outcomes than either could achieve alone.


Ningyuan Chen is an assistant professor in the department of management at the University of Toronto Mississauga, with a cross-appointment to the Rotman School of Management.