A leader's guide to ethically implementing AI at work

When the dot-com bubble burst, tech stocks cratered, and a small, money-losing DVD rental company called Netflix got caught up in the fray. In 2000, Netflix was far from a household name. The California-based start-up had not yet gone public, and market conditions were not looking favourable for it to do so.

Netflix’s co-founders approached the market-leading video rental chain Blockbuster with an offer: buy Netflix for US$50 million. Blockbuster declined, believing digital video was just a fad. It could not have been more wrong. Today, Netflix has more than 260 million paying subscribers and a market capitalization above US$200 billion, while Blockbuster, which filed for bankruptcy in 2010, is down to a single location in Bend, Oregon.

“Even well-managed companies often resist change,” says Walid Hejazi, a professor of economic analysis and policy at the Rotman School of Management. “But we live in a world of constant change. Leaders are there to help their organizations navigate it.”

Generative artificial intelligence (AI) is the next big change and, like streaming before it, it has the power to disrupt entire industries. The companies that emerge strongest from this period of transition will not necessarily be today’s market leaders; they will be the ones that find innovative ways to apply AI, Hejazi says.

Leaders will need to make the case for how the technology will strengthen their organization. Ensuring that AI adoption is ethical will be essential to overcoming internal opposition and protecting an organization from risk.

“Generative AI is here. And it's big. The traditional way of doing things will be left behind,” says Hejazi. “You have to embrace it, but you need to do it right. There are unbelievable opportunities, but AI has got to be transparent. It has to be ethical, and it needs to have guardrails.”

One challenge for leaders is the opacity of the technology. Generative AI draws on enormous data sets and uses the patterns in that data to predict the text or images a user is asking for. But exactly how it makes connections within those large-scale data sets is a bit of a mystery, and this carries significant risk. The data sets AI programs are trained on often come from the internet and reflect the biases of the web users who created that content in the first place. And when generative AI is asked to identify efficiencies, it will not necessarily consider the consequences of the actions it recommends.
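
To make the prediction mechanism concrete, here is a toy sketch in Python. Real generative models are neural networks trained on billions of tokens, not simple word counts, but the core idea carries over: the model predicts what comes next from patterns in its training data, so whatever those data over-represent, the output will over-represent too. The tiny corpus below is fabricated for illustration.

    from collections import Counter, defaultdict

    # A toy bigram language model: it "generates" text purely by predicting
    # the most likely next word given the previous one, based on counts in
    # its training data. This corpus is a stand-in for web-scale data.
    corpus = (
        "the engineer fixed the server . "
        "the engineer wrote the code . "
        "the nurse comforted the patient ."
    ).split()

    # Count how often each word follows each other word.
    next_word_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_word_counts[prev][nxt] += 1

    def generate(start, length=6):
        """Greedily predict the next word, one step at a time."""
        words = [start]
        for _ in range(length):
            candidates = next_word_counts.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("the"))
    # Whatever patterns dominate the training data dominate the output --
    # which is exactly how biases in web text resurface in model behaviour.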

Take the example of Amazon, which, like many organizations, deployed AI in its hiring process to help sift through the thousands of applications it received.

The AI sought out attributes in resumes that resembled those of existing successful employees, and screened out candidates who had solid credentials but slightly different backgrounds. For example, candidates with degrees from prestigious universities with largely female or Black student bodies were filtered out because Amazon’s existing workforce included few alumni from those schools. As a result, the candidates put through to the next round of interviews were predominantly white and male. When Amazon learned of the bias, it scrapped the algorithm.
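
To see how such proxy bias arises, consider a deliberately simplified sketch (fabricated data, not Amazon's actual system, whose details were never made public). A screening model is trained on historical hiring decisions in which nearly every past hire came from one university; the model then favours that university even for candidates with identical experience.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Features: [years_of_experience, attended_university_A]
    # Historically, every hire came from University A, so the "university"
    # column is a stand-in for whoever was hired before, even if experience
    # is what actually predicts job performance.
    X = np.array([
        [5, 1], [6, 1], [4, 1], [7, 1],   # past hires, all from University A
        [6, 0], [7, 0], [5, 0], [8, 0],   # equally qualified, rejected
    ])
    y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = hired in the past

    model = LogisticRegression().fit(X, y)

    # Two new candidates with identical experience, different universities:
    candidates = np.array([[6, 1], [6, 0]])
    print(model.predict_proba(candidates)[:, 1])
    # The University A candidate scores far higher: the model has learned
    # the historical hiring pattern, not merit. Auditing feature weights
    # (model.coef_) is one way such bias can be caught before deployment.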

With more powerful generative AI technologies, the stakes are even greater. 

Getting organizational governance right is essential — and so is communicating it effectively, says Hejazi. Data science teams that work directly with the technology should understand their organization’s goals, ethics and values, so they don’t inadvertently instruct an algorithm to take actions that are inconsistent with them.

“An algorithm could help a company achieve its financial objectives, but hurt it in the long run,” says Hejazi. “Some things that might drive profits are unethical, like nudging kids to smoke, or to eat unhealthy foods. Senior management needs to set the guardrails about how AI will be used. You need to ask questions like what data will be used, and for what purposes? What permissions will be obtained, and how will data be secured? When you apply an algorithm, you need to make sure it is acting in ways that are consistent with the values of the company.”
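
Those questions translate naturally into a lightweight review gate. The sketch below is a hypothetical illustration, not a standard framework: each AI use case must answer the guardrail questions (what data, for what purpose, with what permissions, secured how) before it can be deployed.

    from dataclasses import dataclass, field

    @dataclass
    class AIUseCaseReview:
        """Hypothetical record of the guardrail questions for one AI use case."""
        name: str
        data_sources: list = field(default_factory=list)       # what data will be used?
        purpose: str = ""                                      # for what purposes?
        consent_obtained: bool = False                         # what permissions?
        security_controls: list = field(default_factory=list)  # how is data secured?
        aligned_with_company_values: bool = False

        def approved(self) -> bool:
            """Block deployment until every guardrail question has an answer."""
            return all([
                self.data_sources,
                self.purpose,
                self.consent_obtained,
                self.security_controls,
                self.aligned_with_company_values,
            ])

    review = AIUseCaseReview(
        name="resume-screening-pilot",
        data_sources=["applicant resumes"],
        purpose="shortlist candidates for recruiter review",
    )
    print(review.approved())  # False: permissions and security are still unresolved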

The type of guidance that will be needed from senior management does not require technical knowledge of the inner workings of AI algorithms, but does demand a general sense of how AI operates and what risks are involved.

“Senior leadership needs to be able to clearly communicate what data they are taking in, how they are using it, and what their objectives are,” says Hejazi. “All of that needs to be explainable.”

As companies look towards the future, management also needs to ask how the organization can embrace AI to achieve its vision in a way that is responsible, transparent and fair.

“The last thing you want is to see your organization’s name on the front page of The Globe and Mail because a freedom-of-information request revealed that the company has been doing something unethical,” Hejazi says. “Senior leaders don’t need to know the technical details, but they do need to understand the right questions to ask.”

A new course offered by Rotman Executive Programs will help leaders navigate the transition. Generative AI and Organizational Transformation will demonstrate what AI can do – and what it can’t. The three-day course will build understanding of the implications of generative AI and help leaders develop a strategy to leverage the once-in-a-generation opportunities that are only beginning to present themselves. The course will be held from October 30 to November 1, 2024.


Walid Hejazi is a professor of economic analysis and policy at the Rotman School of Management.