
Leadership forum: creative destruction in healthcare


Alistair Erskine, Marzyeh Ghassemi, Avi Goldfarb, Jennifer Gibson, Tom Lawry

Alistair Erskine

Chief digital officer, Partners HealthCare

My colleague, best-selling author Atul Gawande, recently wrote an article for the New Yorker titled “Why Doctors Hate Their Computers.” In it, he points to a number of workflow-related issues that are problematic for physicians. Some come down to fragmented workflows caused by partial digitization, and some stem from the fact that the computer feels like it’s ‘in the way’ between the patient and the provider. The thing I hear from clinicians again and again is that between login screens and access to different systems, it takes far too many clicks. Clearly, there are opportunities for improvement.

When it comes to AI in healthcare, some practical applications are already in use. A computer can now match a dermatologist in interpreting certain kinds of moles for melanoma; images of the back of the retina can be processed by AI to identify retinopathy in patients with diabetes; and a machine can now interpret and diagnose a mammogram better than a human radiologist.

AI can even save lives: If an 86-year-old patient comes in complaining of dizziness and gets a CAT scan on a Thursday indicating a 96 per cent chance that she will have a stroke within hours or days, an algorithm can place her results at the very front of the line to be dealt with immediately — instead of a week or more. For the radiologist and her workflow, nothing changes — she still grabs the next film in the queue; but regardless of when the test was done, the AI literally reorganizes the stack. 
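
To make the queue-reordering mechanism concrete, here is a minimal sketch in Python. It is purely illustrative: the study IDs, risk scores and function names are invented, and in a real system the score would come from an imaging model.

```python
# Minimal sketch of AI-assisted worklist triage (hypothetical data and names).
# A model's predicted risk reorders the reading queue; the radiologist's
# workflow is unchanged -- she still grabs the next study in line.
import heapq

worklist = []  # max-priority queue, implemented by negating the score


def enqueue(study_id: str, predicted_risk: float) -> None:
    """Add a study; higher predicted risk moves it toward the front."""
    heapq.heappush(worklist, (-predicted_risk, study_id))


def next_study() -> str:
    """The next study to read, now ordered by urgency rather than arrival."""
    _, study_id = heapq.heappop(worklist)
    return study_id


enqueue("CT-1041", 0.12)  # routine follow-up
enqueue("CT-1042", 0.96)  # 96 per cent predicted stroke risk: jumps the queue
enqueue("CT-1043", 0.31)

print(next_study())  # CT-1042 is read first, regardless of when it arrived
```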

All of these new tools are augmenting, not replacing what clinicians do, and they are creating a new currency: data. The secondary use of healthcare data is becoming incredibly useful to biotech, to pharma and to researchers. Increasingly, people accept that all of the digital information being collected will be shared between hospitals and other enterprises. 

In the past, quality and cost were the key value-drivers for patients, but we need to add digital innovation to that list. People want access to care, and they want it online. They want a caring team around them at all times. Even if the cardiologist herself isn’t the one reaching out digitally to them, somebody can — a care navigator or other member of the team.

Relationship management is going to change dramatically. Imagine that a health team member enters a phone number and the AI calls up the patient’s record. Immediately, he will see that the patient prefers to be called Jenny; that she is scared of hospitals; and that he shouldn’t book her on a Tuesday, because she never shows up on a Tuesday. This frees the agent up to figure out, ‘What else can I do for Jenny today?’ These sorts of transactional systems have been used in other industries to provide a better customer experience, and the time has come to adopt them in healthcare.
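
The lookup Erskine describes can be sketched in a few lines. Everything here is hypothetical (the phone number, record fields and preferences are invented), but it shows how a simple keyed record can surface the context an agent needs at call time.

```python
# A minimal sketch of the call-time lookup described above.
# All records and field names are hypothetical.
from dataclasses import dataclass


@dataclass
class PatientProfile:
    preferred_name: str
    notes: list        # free-text flags for the agent
    avoid_days: set    # days the patient tends not to show up


profiles = {
    "555-0142": PatientProfile(
        preferred_name="Jenny",
        notes=["scared of hospitals"],
        avoid_days={"Tuesday"},
    ),
}


def on_incoming_call(phone: str) -> None:
    """Surface the caller's preferences the moment the call connects."""
    profile = profiles.get(phone)
    if profile is None:
        print("New caller: create a record.")
        return
    print(f"Call her {profile.preferred_name}.")
    for note in profile.notes:
        print(f"Note: {note}")
    if profile.avoid_days:
        print("Avoid booking on:", ", ".join(sorted(profile.avoid_days)))


on_incoming_call("555-0142")
```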


Marzyeh Ghassemi

Faculty member, Vector Institute, and assistant professor, departments of computer science and medicine, University of Toronto

I was motivated to do my PhD in machine learning for health (at MIT) because I believe we can improve healthcare with machine learning and AI. During my program, I would shadow physicians at morning rounds in the ICU, and I would often notice that two patients who looked very similar on paper would get very different treatments. At other times, the same patient would get very different treatment recommendations from two different doctors. When I asked my colleagues, ‘What is going on here?’, I was shocked to be told that doctors often have to make choices in the absence of evidence.

Traditionally, what we have used as evidence in healthcare has been the results of randomized controlled trials (RCTs), which entail recruiting a study population. The question I’m concerned with — and that we should all be concerned with — is, do these studies generalize? And if so, for whom? Even with the best intentions during recruitment, only certain kinds of people see and respond to RCT advertisements. As a result, in practice, much of what we have learned about medicine has related to particular sub-groups of the population. And that translates into a lack of relevant evidence at care time. To make matters worse, healthcare professionals disagree — not just with each other, but with themselves. For radiologists, inter-rater reliability on the same image is around 67 per cent. If we don’t know for certain how to label our data, how can we use it to make predictions?
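
That 67 per cent figure is raw inter-rater agreement: the fraction of cases on which two readers assign the same label. A minimal sketch with made-up labels shows why this makes training data noisy.

```python
# Raw inter-rater agreement on the same images (labels are made up).
rater_a = ["abnormal", "normal", "normal", "abnormal", "normal", "normal"]
rater_b = ["abnormal", "abnormal", "normal", "normal", "normal", "normal"]

# Fraction of images on which the two readers agree.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Raw agreement: {agreement:.0%}")  # 67% -- a noisy 'ground truth'
```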

Looking ahead two years, I think the biggest impact of AI will be the automation of processes that are currently cumbersome and inefficient for people; things like order sets, or checking whether medications conflict with one another, or scheduling. Automating these things is going to make a big difference.
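
The medication-conflict check, for instance, reduces at its simplest to a pairwise lookup against a table of known interactions. The sketch below is a toy: the interaction table is illustrative, not clinical guidance.

```python
# A minimal sketch of an automated interaction check.
# The pairs below are illustrative only, not clinical guidance.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),
    frozenset({"sildenafil", "nitroglycerin"}),
}


def conflicts(med_list):
    """Return every pair on the patient's list with a known interaction."""
    found = []
    for i, first in enumerate(med_list):
        for second in med_list[i + 1:]:
            if frozenset({first, second}) in INTERACTIONS:
                found.append((first, second))
    return found


print(conflicts(["warfarin", "metformin", "aspirin"]))
# [('warfarin', 'aspirin')]
```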

In 10 years, I believe AI can provide doctors with new superpowers. For example, one thing that is often missed by the healthcare system is domestic violence. This is a really hard thing to recognize if you’re not looking for it. A patient shows up to the clinic with a small fracture here, a bruise there. In order to catch it early, the physician would need to sit down and review the patient’s entire record, have time to think about it, and have some insight into the person’s home situation. Given the extraordinary constraints on clinical staff’s time and attention, that is not a likely scenario. But these kinds of patterns could be detected with algorithms, and used to direct attention as needed.

A clinician once told me that he felt the top 10 per cent of doctors operate in lock-step: If you give them a similar patient, they will probably have almost exactly the same recommendation, because they’ve all read exactly the same articles and seen the same volume of patients with the same range of severities. If variation in care comes from inexperience or lack of access, reducing that variation would be another huge win for machine learning. We should focus on all the tasks that are not valuable for a human to do — all the things that we could train a machine to be very good at. That way, people will be freed up to make good judgment calls about what should happen next — and to talk to patients in a compassionate way about it.


Avi Goldfarb

Rotman chair in AI and healthcare, professor of marketing and chief data scientist, Creative Destruction Lab, Rotman School of Management; co-author, Prediction Machines: The Simple Economics of Artificial Intelligence

The reason we hear so much about artificial intelligence today — in healthcare and elsewhere — is that a very particular technology has gotten much, much better: prediction technology. When I say prediction, I don’t necessarily mean predictions about the future. I’m talking about using information that you have to fill in information that you don’t have. And every organization can benefit from that.
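
That definition maps directly onto code. In the sketch below, a one-nearest-neighbour lookup stands in for any prediction machine; the records and labels are made up, but the pattern is the same: fields we have fill in a field we lack.

```python
# Prediction as filling in missing information (all data is made up).
# A 1-nearest-neighbour lookup stands in for any prediction machine.

# (age, systolic blood pressure) -> known outcome label
known = [
    ((54, 150), "high risk"),
    ((61, 165), "high risk"),
    ((35, 118), "low risk"),
    ((42, 122), "low risk"),
]


def predict(age, systolic_bp):
    """Fill in the missing label from the most similar known record."""
    def distance(record):
        (a, bp), _ = record
        return (a - age) ** 2 + (bp - systolic_bp) ** 2

    _, label = min(known, key=distance)
    return label


# Information we have (age, blood pressure) fills in what we lack (the label).
print(predict(58, 160))  # "high risk"
```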

Recent advances in AI can be seen as better, faster, cheaper prediction. To understand why cheap prediction may be transformative, consider an earlier technology. Think about your computer for a moment. It might seem like it does all sorts of things, but it really only does one thing, and it does that thing so well that over the years we have found all sorts of uses for it. What your computer does is arithmetic. That’s it. And because arithmetic has become so cheap and so instant, we have found endless applications for it that we might not otherwise have thought of. 

The first applications for machine arithmetic were good old-fashioned arithmetic problems. In World War II we had cannons that shot cannonballs, and it was a very difficult arithmetic problem to figure out exactly where those cannonballs were going to land. So, we had teams of humans figuring out the trajectory. The movie Hidden Figures was all about these teams of humans with the job title Computer. But before long, machine arithmetic came along that was better, faster and cheaper than the humans. Over the years, arithmetic would continue to get cheaper and cheaper, and we would find all sorts of new applications for it. For instance, it turned out that games, mail and photographs were arithmetic problems.

This is Economics 101. On day one, the first thing we teach is that when the price of something falls, we buy more of it. Once you know what has gotten cheaper and what has gotten better, you can map out all sorts of consequences. So when arithmetic got cheaper, we used more arithmetic. With recent advances in artificial intelligence, as the price of prediction falls, we are going to do more and more prediction. Cheap prediction means we can use prediction in new ways, such as medical diagnosis through image recognition.

The consequences of cheap prediction don’t stop there. On day two of Economics 101, you learn that when the price of coffee falls, we buy less tea. Tea and coffee are substitutes, so when coffee gets cheaper, people buy more coffee and less tea. Likewise, human prediction and machine prediction are substitutes: When machine prediction gets cheap, the prediction aspects of your job will increasingly be done by a machine.

So what is left for humans? Thankfully, on day two of Economics 101 you also learn that when the price of coffee falls, we buy more cream and sugar — the ‘complements’ to coffee. Complements become more valuable as the thing they complement gets cheaper. The question for AI is, what are the cream and sugar to prediction? The answer: the other aspects of decision-making, most notably the judgment to know which predictions to make and what to do with those predictions once we have them.

If you map out the workflow for any organization, you will find that it contains a series of decisions. Once you figure out which of those decisions involve predictions at their core, you can drop in a prediction machine to handle the decision. This will incrementally improve productivity and help the organization. But importantly, sometimes prediction can do much more than this. Better prediction can change strategy; it might even change what your organization does and lead to entirely new opportunities.


Dr. Jennifer Gibson

Director, Joint Centre for Bioethics, and Sun Life Financial Chair in Bioethics, University of Toronto

We need to be asking broad questions like, who stands to benefit from AI? And who is at risk of being burdened or potentially harmed by these technologies? How do we avoid, or at least minimize, bias in the data? We all know that the quality of our data isn’t great. To state the obvious, if we’re working with data that is not of great quality, we are going to see AI outputs that are not of great quality.

Lots of questions have also surfaced around the nature of the patient-provider relationship. For instance, is AI going to reshape this relationship in the direction that we want it to? And who has the right to determine what that reshaping will look like? Who will be displaced? And what about unforeseen consequences? These are all urgent questions.

When I’m teaching, I often use the spread of electricity as an analogy for what is happening with AI. I will put a photo of a lightbulb on the overhead screen and ask my students, ‘What has the lightbulb offered us?’ Often what comes up are things like, ‘it means we can work 24 hours a day’; ‘we can relax in the evening’; ‘we have safer homes because we’re not worried about fires’; or ‘they make our streets safer’.

All of this is true; but I wonder, how many of you find yourself searching Google for ‘how to deal with insomnia’? Yes, we can work longer thanks to electric light, but we’re also working longer in a way that is making us less productive. Too much light actually causes harm. And it harms the environment as well, in terms of light pollution and the impact on certain animal species. Clearly, there have been many unintended consequences to this technology.

Today we see electric light as a tool that allows us to get tasks done. But I would argue that it is also a complex social phenomenon. The lightbulb is useful because it is so widely used. We have come together and mobilized around it, and by virtue of its use, it has reshaped our lives in profound ways. We need to see AI through a similar lens — as a form of innovation and a new set of tools, but also as a complex social phenomenon — and perhaps even something that could reshape how we relate with each other.

How can we, on the one hand, take advantage of the benefits of AI-enabled technologies and, on the other, ensure that we’re continuing to care? What would that world look like? How can we maintain the reason why we came into medicine in the first place — because we care about people — and ensure that we don’t inadvertently lose that?

The optimistic view is that, by moving some tasks off clinicians’ desks and away from their screens, we can create and sustain more space for caring. AI could free up time for what matters most. That’s the aspiration, at least. The question we need to ask is, what are the enabling conditions for this to be realized?


Tom Lawry

Director, Worldwide Health, Microsoft

We are all new to AI right now. You’re new to it, Microsoft is new to it — and anyone who says they’ve got it figured out is either naïve or making liberal use of the new cannabis laws in Canada. In my role, I serve as a strategic advisor to organizations around the world. Put simply, I get to work with smart people trying to figure out old problems with new technology. I’ve been able to see firsthand what works and what doesn’t, and it’s enabled me to come up with some unofficial rules for strategic AI leadership.

First, we must all familiarize ourselves with the power of data. Everyone working in healthcare today can have superpowers if they choose to embrace data and use it the way it can be used. If you look at all the information in the world today, 90 per cent of it has been generated in the last two years. If I had been a newly minted physician in 1950, I would have spent my entire career waiting for medical knowledge to double. By 1980, the doubling time was down to seven years. By 2010, it was 3.5 years. Any guess what it will be in the next year? Seventy-three days. Think about it: How can even the most committed doctor keep up with that?
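
Those figures are worth a quick back-of-the-envelope check. Assuming a constant 73-day doubling time:

```python
# With a 73-day doubling time, knowledge multiplies 2^(365/73) = 2^5 = 32-fold
# in a single year -- far beyond what any individual reader can absorb.
doubling_time_days = 73
growth_per_year = 2 ** (365 / doubling_time_days)
print(f"{growth_per_year:.0f}x growth per year")  # 32x
```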

Here’s an example of what is possible. A good friend of mine, physician and researcher Eric Horvitz, has developed deep learning algorithms and applied them to people doing general searches on the Internet. In doing so, he was able to identify a group of internet users with pancreatic cancer — before they had been diagnosed. Eric’s theory is that the data is ‘whispering’ things to us all the time; we just aren’t listening, or we don’t have the tools yet to listen. If Eric could accomplish this with one data set, imagine what your organization’s data might be able to do.

Second, AI requires all of us — especially leaders — to think and act differently. I’ve yet to meet a leader who is against innovation. But most of them say, ‘I’m all for change; but you go first’. We have been trained to keep the bar very low when it comes to data and what we do with it, and that must change. Third, work is going to change, and we need to be honest: There will be jobs that are automated out of existence. We need to own that right now, and figure out how to deal with it. On the upside, for all of the knowledge workers out there — many of whom are doing a lot of low-level, repetitive things — just imagine the ability to reduce or eliminate that kind of work. We can actually free people up to work at a much higher level and make them happier, as well. That’s the challenge and the value proposition of AI.

We need to get really smart about what AI does well and what it doesn’t do well — and perhaps never will. When I think about the care process, I think about reasoning, judgment, imagination, empathy and problem-solving skills. A machine will never be able to do those things. Recognizing how to manage what AI does well and what it doesn’t is going to be key for leaders everywhere.

Bias is another important issue. There is bias in data, and there is also bias created by the human beings setting up the machine learning system. And if we are simply seeking the shortest path to something, there will always be bias. The problem is, too many people are racing ahead trying to create value, and they’re not having these important conversations.


This article originally appeared in the Winter 2020 issue of Rotman Management magazine.

Alistair Erskine, MD, is the chief digital health officer at Partners HealthCare, which includes Brigham and Women’s Hospital and Massachusetts General Hospital, teaching hospitals for Harvard Medical School.
Marzyeh Ghassemi is a visiting researcher at Verily/Google and an assistant professor in computer science and medicine at the University of Toronto.
Avi Goldfarb is the Rotman chair in artificial intelligence and healthcare, and professor of marketing at Rotman. Avi is also chief data scientist at the Creative Destruction Lab, senior editor at Marketing Science, and a research associate at the National Bureau of Economic Research.
Dr. Jennifer Gibson is the director of the University of Toronto Joint Centre for Bioethics (JCB) and the Sun Life Financial Chair in Bioethics. From 2007 to 2013, she was the associate director of partnerships and strategy at the JCB. She is also an associate professor in the Institute of Health Policy, Management and Evaluation.
Tom Lawry is the national director for artificial intelligence – health & life sciences at Microsoft, U.S. In this role he works with providers and life science organizations in planning and implementing innovative analytical solutions that improve the quality and efficiency of health services delivered around the globe.