Rotman Insights Hub | University of Toronto - Rotman School of Management
Groundbreaking ideas and research for engaged leaders

How to thrive in the age of AI

Ravi Bapna

Your book — Thrive: Maximizing Well-Being in the Age of AI — debunks many of the doom prophecies associated with AI. Why are you and your co-author such AI optimists?

My co-author, NYU Professor Anindya Ghose, and I have spent the last 20 years helping companies convert their data into an asset. With 40 years of collective experience in this space, we’ve found there are always two sides to every new technology. You can go back all the way to the kitchen knife, which can be used to inflict harm or to create beautiful sushi. AI is a very powerful, general-purpose technology that is more complex and layered than anything we’ve seen in the past. It is also intangible, leaving room for voices that are either dystopian or utopian. But, like all technology, it is up to society to make it work for us.

Like it or not, AI is here to stay, and we wrote this book to demystify it and give people more agency. Once people understand it, they can start using it to improve their daily lives. The book explains how individuals and organizations can use AI to improve productivity and create new business processes and sources of value. So, yes, we’re very optimistic and excited about AI’s future.

Can you describe your “house of AI” concept?

The ‘four pillars’ of the House of AI are four types of machine learning: descriptive, predictive, causal and prescriptive. Data engineering is the foundation for all of them. Once data is clean, the first pillar (descriptive) uses advanced analytics to go beyond drawing pictures and reporting, or finding interesting co-purchasing patterns. There is a lot of business value in this pillar. Anomaly detection, for example, is a very popular technique: it’s what banks use to flag suspicious transactions.
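The descriptive idea behind bank-style anomaly detection can be sketched in a few lines. This is a minimal, hypothetical illustration using a simple z-score rule on made-up amounts, not how any particular bank’s system actually works:

```python
# Flag transactions whose amount is unusually far from a customer's
# typical spending, using a simple z-score rule.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of amounts more than `threshold` standard
    deviations away from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

history = [42.0, 38.5, 51.0, 45.2, 39.9, 47.3, 44.1, 2500.0]
print(flag_anomalies(history))  # only the 2500.00 charge is flagged
```

Production systems use far richer features (merchant, location, time of day) and learned models, but the underlying principle — score how far a data point sits from the normal pattern — is the same.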

The second pillar is predictive. Amazon has been doing this for 25 years. If you buy our book Thrive and have also bought Smart Rivals [by Feng Zhu and Bonnie Yining Cao], Amazon will suggest other books you might like. The third pillar is causal — the idea of separating correlation from causation. A lot of bad decisions (like investments) are made when people can’t separate these two things. Does X truly cause Y? This type of machine learning addresses that.

The final pillar is prescriptive, taking into account organizational constraints. This includes making sure that the algorithms we deploy are fair, so we can optimize certain goals. The House of AI concept provides a comprehensive overview of AI’s different components as they play out in our day-to-day lives.

Can you talk a bit about how AI has revolutionized everyday life?

One prime example is dating. Before online dating, matches were a product of where we lived, what school or college we went to, our neighbourhood bar or our workplace. We had limited choice sets. Sites like Match, OkCupid and Tinder use the full suite of modern digital capabilities, including expanded search and the idea of self-expression in words or pictures. They basically lower the ‘search cost’ of finding people, a search that was otherwise constrained by the physical world.

Historically, a lot of social frictions have existed in dating. In heterosexual dating, for example, men send eight times more messages than women, and women are also less likely to make the first move. So, Bumble was designed to empower women to message their matches first. This gives them agency: they are no longer back-seat drivers. Today, close to 60 per cent of matches start online.

What are the dangers of “coded bias” in AI?

Years ago, the hottest topic at AI conferences was résumé screening in the hiring funnel. A company like Amazon gets 1,000 résumés for a single job posting. So, based on past data of high-performing employees, they built models to detect potential top candidates and invite them to the interview stage. Then the New York Times and Wall Street Journal came out with stories about how Amazon’s AI screener was biased against women for tech jobs. Essentially, the training data being used was based on past hiring practices and was inherently biased. Historically, a relatively small fraction of women majored in STEM fields in college, so the patterns the dataset was generating were flawed and unfair.

Now we use reinforcement learning — a very advanced form of machine learning that scores candidates on their likelihood of being high performers, but also strategically decides to ‘exploit and explore’ at the same time. This strategy has been shown to remove coded bias, which can also exist in medical data, education and other applications. A lot of AI is now being developed to correct for bias — and that is really good news.
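The ‘exploit and explore’ idea can be sketched with an epsilon-greedy rule, the simplest version of this strategy. The candidates and scores below are hypothetical, and this is a pedagogical sketch rather than any company’s actual screening system:

```python
# Mostly interview the top-scored candidate (exploit), but with probability
# epsilon pick one at random (explore), so the model keeps gathering
# evidence outside the patterns its biased training data already favours.
import random

def pick_candidate(scores, epsilon=0.1, rng=random):
    """scores: dict mapping candidate -> predicted performance score."""
    if rng.random() < epsilon:                 # explore
        return rng.choice(list(scores))
    return max(scores, key=scores.get)         # exploit

random.seed(1)
scores = {"A": 0.91, "B": 0.78, "C": 0.85}
picks = [pick_candidate(scores, epsilon=0.2) for _ in range(1000)]
print(picks.count("A"), picks.count("B"), picks.count("C"))
```

Candidate A is chosen most of the time, but B and C are still interviewed occasionally — and if they turn out to perform well, their scores rise and the historical bias gets corrected rather than reinforced.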

Somewhat counterintuitively, you argue that AI fosters greater human connection. How?

In 2022, TIME magazine’s person of the year was Sarah Friar, CEO of Nextdoor, the social networking platform for neighbourhoods. Nextdoor built models to detect hate speech in real time. It alerts people and gives them reason to pause when an online chat starts to get unsavoury. This real-time ‘kindness-checker’ became a powerful feature that actually improves the way we form relationships.

Platforms like Airbnb are very advanced users of many different types of AI. Last summer, my wife and I rented an Airbnb in San Sebastian, Spain. There is a lot going on behind the scenes on platforms like Airbnb to allow this kind of matching to happen at scale, with minimal risk. It creates a level of trust between strangers. Our host’s restaurant recommendations were so personal and superior to anything we found on Google. These algorithms are creating better human connections and improving our lives.

Can you describe how AI is being used to optimize our health and well-being?

Devices like the Apple Watch enable us to track multiple metrics about our daily habits and access granular data so we can avoid behaviours that may lead to, say, heart disease. For example, certain heart-rate variability patterns can predict premature birth. Almost 30 per cent of U.S. counties are ‘maternity deserts’ with no local access to obstetric care. If you’re in rural North Dakota and your WHOOP tracker says your baby is coming two weeks early, you had better drive 100 miles to Fargo to see your obstetrician before the baby arrives.

The idea of an AI-monitored pregnancy or health coach is a really powerful concept that can extend to many other things, giving us real-time nudges to look after our health and well-being. But we must embrace AI in healthcare more broadly. Deep-learning models can detect retinal disease in premature babies more accurately than a panel of highly trained physicians. Other health applications include a clinic in Budapest that uses AI to scan mammograms. There were 22 instances where AI detected cancer that oncologists and radiologists had missed — literally saving 22 lives. Our book highlights many more examples of how healthcare is being significantly augmented by AI.

As managers, executives and leaders, how can we successfully deploy AI?

We use ‘climbing Mount Everest’ as a metaphor: first, educate yourself on the art of the possible and understand all the uses of the different components in the House of AI. Then, within your organization, prioritize how you could use it to improve internal processes. Do you want it to create better marketing messaging? Better content? Or do you want to use it to augment existing products or services? Is there a way to power your e-mail with AI? Superhuman, for example, is amazing at decluttering my inbox.

What are you going to use AI for? If you understand the discipline, you can prioritize these projects, manage the risk and reward, and manifest the change needed within your company. AI shouldn’t come at the cost of jobs; it presents us with tools that enable us to do bigger and better things.

You liken the AI-enhanced home to the futuristic smart home in the old Jetsons cartoon, only different. What benefits and possible risks lie ahead?

The low-hanging fruit is that many homes currently consume more energy than they should and aren’t optimized for their inhabitants. We’re already at the stage where sensors can monitor the physical functions of our homes and make the environment smarter. I can take a photograph of my refrigerator and AI can basically create a shopping list for me, based on what is missing in there. Then we can have an agent go out and execute that order online and have it delivered.

Devices like Alexa and others are already embedded in many homes. If we use this technology carefully, we can augment human capacity to allow us to do bigger and better things, and avoid the kind of Orwellian ‘surveillance capitalism’ people worry about. Anindya and I want to emphasize that we have opportunities to use AI to solve problems that we can’t solve on our own, and to make our lives better.

The power of AI rests in the hands of its developers — OpenAI, Google, Meta, etc. Yet your book suggests that part of the responsibility for optimizing AI for social good ultimately rests with individuals. Shouldn’t developers be more accountable?

It’s definitely a joint responsibility — not an ‘us versus them’ story. It is in Big Tech’s best interest to develop AI in a way that augments both human potential and organizational capabilities. Right now, the tables are skewed. The average person on the street has no real understanding of AI, and when people don’t understand something, there is room for manipulation and misinformation. That was our motivation for writing the book: to help create a level playing field where people can become smart consumers, talk to their legislators and advocate for regulation in a smart way. Good regulation is contingent on people understanding the fundamentals.

So, you want to see citizens at the centre of the regulatory debate?

Yes. It’s the age-old paradigm around which we hope our liberal democracy moves forward. Nobody would dispute that when citizens are better informed, they will hold lawmakers to account. And I think that will go a long way toward ensuring we have regulation that allows us to harness the power of AI while minimizing the risks. We’re not downplaying the risks; we just want to shine a light on the many positive advances and tangible benefits of AI in our everyday lives — now and in the future.

This article originally appeared in the Spring 2025 issue of Rotman Management magazine. If you enjoyed this article, consider subscribing to the magazine or to the Rotman Insights Hub bi-weekly newsletter.


Ravi Bapna is the academic director of the Carlson Analytics Lab at the University of Minnesota’s Carlson School of Management. He is co-author of Thrive: Maximizing Well-Being in the Age of AI (MIT Press, 2024).