
Why the proposed AI development pause may do more harm than good

Concerns about the rapid development of AI technology aren’t unfounded, but a six-month pause proposed by tech leaders would do very little to address real problems and could even create unintended negative consequences of its own.

That’s according to Kevin Bryan, an associate professor of strategic management at the Rotman School of Management. He believes the release of ChatGPT by OpenAI on Nov. 30, 2022 marked the most important tech development since the creation of the internet. He also agrees that such powerful technology needs certain guardrails to prevent unintended consequences.

But Bryan is skeptical of an open letter calling for a six-month pause on the development of AI technology. The letter, released on March 22, 2023, also calls on AI developers to use those six months to work with regulators on better policies to govern AI, to create broader oversight systems and to develop watermarking technology, among other guardrails.

“The letter is targeted at one person, [OpenAI CEO] Sam Altman, because OpenAI is so far ahead of everyone else,” he says. “The letter is essentially saying Sam Altman needs to pause the release of GPT-5 because we don’t know enough about how to guardrail these systems, and we don’t understand how they work well enough to be able to do that.”

Signed by more than 1,000 members of the tech development community — including Elon Musk and Steve Wozniak — the letter is already having a chilling effect on the industry. In late March 2023, Italy announced a ban on ChatGPT, while Germany openly considered doing the same.

Such concerns are not unfounded, according to Bryan, nor are fears about the long-term implications of AI for humanity and its potential for unintended consequences. From the Luddites of 19th-century England to more recent calls to ban apps like Uber and Airbnb, Bryan says it’s not uncommon for concerned citizens to demand a slowdown in technological development for fear of those consequences.

At the same time, Bryan says the proposed pause would do little to address the concerns outlined in the letter, and could even hinder our ability to solve them. For one thing, he says a six-month pause would do very little to advance the regulatory landscape, as the letter suggests — though it would give some of OpenAI’s competitors time to catch up.

“The idea that you give a six-month pause and regulators can do anything other than name people for a committee is a complete misunderstanding of public policy. That is not how these things work,” he says.

Bryan points to the well-intended efforts by the European Union to protect online data privacy through the General Data Protection Regulation (GDPR) — first proposed in 2012, adopted in 2016 and enforced beginning in 2018 — as an example of how long such an effort would take, and the likely result.

“The GDPR has some good ideas in it, but the main way you interact with [the policy] is that when you use the internet, even when you’re not in Europe, you have to click ‘accept cookies’ 17 times a day,” he says. “Everybody knows this is pointless — it does nothing to help privacy — it just makes the internet more annoying, and despite years of complaints, the regulation has not been updated.”

Not only did GDPR fail to deliver on the promise of data protection, but it may also have inflicted damage on the E.U.’s digital economy, and Bryan is convinced pausing AI development would have similar consequences. In fact, he says that if Canada were to join Italy in banning the technology, it would do little more than set off an exodus of talent to other jurisdictions.

Furthermore, each successive iteration of OpenAI’s GPT technology is smarter than the last, and potentially better able to align itself with human values and principles, he says. Hindering the development of AI would only slow our ability to manage the unintended consequences of earlier iterations. “If [more advanced AI] is actually the solution, then the pause isn’t neutral; it’s outright negative,” he says.

The real threat of AI technology as it exists today has more to do with issues of disinformation like “deepfakes,” hacking and cybercrime than with a Terminator-like apocalypse, Bryan says. He and others are skeptical of the signatories’ self-proclaimed aim to reduce the harms of AI innovation, given that the letter doesn’t cite those current, real challenges.

Bryan also faults the letter for ignoring some of the undeniably positive promises of the technology. 

“For example, automobiles kill something like 1.2 million people a year — all completely preventable with a sufficiently good AI,” he says. “That’s 1.2 million people, from one use case, like how worried do you have to be about AI development to think ‘let’s slow that down?’”

One positive outcome of the letter, and the media frenzy that followed it, is a broader awareness of the potential harms of the technology, which Bryan says will add pressure on the industry to proactively reduce the risks.

“OpenAI and companies that are developing these kinds of models are going to do more in terms of releasing early versions of the models to researchers than they otherwise would have, which is probably not a bad thing,” he says. “If we can make it easier to prevent those malicious things by letting the white hats have access to this stuff just a little earlier to make sure we’re not missing anything, that just seems like a good thing; and a thing a company from a pure internal profit perspective may not have done otherwise.”


Kevin Bryan is an associate professor of strategic management. His work primarily consists of applied theoretical and empirical analyses of innovation and entrepreneurship.