Ratings game: How social learning impacts our online choices

Jingqi Yu

How do you define "social learning," and why is it so fundamental to our success?

Social learning simply means that we learn different kinds of social behaviour by observing and imitating other people. A classic example is kids. They learn from watching and listening to their parents, their peers and their teachers throughout their early years. From the beginning of time, we have watched and imitated other people.

Does this habit continue once we become adults?

Absolutely. Social learning takes place everywhere, and it never really stops. As we progress into adulthood, it takes place mainly in the workplace and helps us understand the "right way to behave" in order to succeed.

The internet has had an interesting impact on social learning. Can you describe it for us?

Like a lot of people, I love to shop, and it is getting increasingly difficult to decide between products online. There are just so many alternatives and so much information to sort through. One key way that the internet has impacted social learning is that we are able to take cues from our fellow shoppers by reading their reviews and feedback. We can basically learn about a product from people we’ve never met. As I mentioned before, when we are kids we learn through direct observation, but today people can freely share their ideas and experiences and learn from each other. Essentially, the internet has expanded the stage on which social learning can occur.

In a recent paper you looked at how to make optimal use of online reviews. What were your key findings?

There are three important takeaways from my research. The first is that "one person’s trash is another person’s treasure" — and that has never been more true than it is online. When we look at ratings, we just see numbers; we don’t consider that people might have given something five stars for entirely different reasons than we would. I don’t like cheese, so if I want to order from a pizza place and I see that people are giving it five stars, but raving about how cheesy the pizza is, it is definitely not for me, despite the rating. It’s important to understand the reasons behind the numbers.

Second, not only are people giving ratings for different reasons, but those ratings mean different things to different people. For some people, four out of five stars is really good, because in their culture, a five means perfection and people rarely give that rating. But for other people, five might just mean the product is good, and four is considered mediocre. Variations in how people use a scale may result in miscommunication and thus complicate decisions.
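
As a rough illustration of how one might adjust for these differences in scale use (this is a generic normalization sketch, not a method described in the interview, and the reviewers and products are made up), each reviewer's ratings can be expressed relative to their own rating history, so that a four from a stingy rater and a five from a generous one become comparable:

```python
# Hypothetical sketch: normalize each rating against the reviewer's own habits.
from collections import defaultdict
from statistics import mean, pstdev

# (reviewer, product, stars) -- illustrative data only
ratings = [
    ("amy", "pizza_a", 4), ("amy", "pizza_b", 5),
    ("ben", "pizza_a", 5), ("ben", "pizza_b", 5),
    ("cleo", "pizza_a", 3), ("cleo", "pizza_b", 4),
]

by_reviewer = defaultdict(list)
for reviewer, _, stars in ratings:
    by_reviewer[reviewer].append(stars)

def normalized(reviewer, stars):
    """Express a rating relative to that reviewer's own rating habits."""
    history = by_reviewer[reviewer]
    spread = pstdev(history) or 1.0  # fall back to 1 for reviewers who always give the same score
    return (stars - mean(history)) / spread

for reviewer, product, stars in ratings:
    print(f"{reviewer} -> {product}: raw {stars}, adjusted {normalized(reviewer, stars):+.2f}")
```

On this adjusted scale, a rating only counts as high if it is high for that particular reviewer, which is one way to blunt the miscommunication described above.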

A third learning is related to the second one: It’s not only people who can attribute different meanings to a rating scale, but also companies. And they might be using these scales to make organizational changes or even fire people. One example is Uber, where anything below a five out of five is considered a bad rating. When I was doing my research, I heard that some drivers were really upset with the fact that most people think four is a good rating. What people don’t know is that once a driver’s average gets below 4.6 or even 4.7, they are at risk of being fired. Some drivers have actually posted signs in their cars saying, "If you arrive alive, please give me a five."

Tell us about the differential impact of average ratings and number of reviews.

We found that there is a popularity bias: people give more weight to popularity, which is the total number of reviews. However, there is no direct link between popularity and the quality of a good or a service. A much better signal is when the average rating is high and there are lots of reviews; in general, that combination is a good indication of quality. And if the average rating is low, the impact of the number of reviews on people’s choices becomes even more obvious. Put simply, people tend to prefer a product with more reviews, even though it is statistically predicted to be of lower quality.
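
To see the statistical point in the last sentence, here is a minimal sketch (an illustrative "shrunk" estimate with assumed prior values, not the model used in the study): blending an item's average with a platform-wide prior shows why a low average backed by many reviews is a more confident sign of low quality than the same average from a handful of reviews.

```python
# Hypothetical sketch: a simple shrunk quality estimate.
PRIOR_MEAN = 3.0    # assumed platform-wide average rating
PRIOR_WEIGHT = 20   # assumed strength of the prior, in "pseudo-reviews"

def shrunk_quality(avg_rating, n_reviews):
    """Weighted blend of the prior and the observed average."""
    return (PRIOR_WEIGHT * PRIOR_MEAN + n_reviews * avg_rating) / (PRIOR_WEIGHT + n_reviews)

# Same 3.2-star average, very different evidence behind it:
print(round(shrunk_quality(3.2, 8), 2))    # 3.06 -- barely distinguishable from the prior
print(round(shrunk_quality(3.2, 800), 2))  # 3.2  -- the low average is now well supported
```

In other words, a large review count makes a low average more believable, which is exactly why preferring the more-reviewed option in that case works against the shopper.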

What are the key implications for companies?

There are two. First, we found that businesses focus more on low average ratings and try to get rid of them through different means. But for better or worse, people like to pick a product that has more total ratings. So, one implication is to focus less on the average rating and more on getting more reviews.

Secondly, there is more to consider here than just ratings and number of reviews. Shoppers can also look at distributions. For example, how many of the total reviewers gave the product a five and how many gave it a one? We revealed heterogeneity in the type of rating distributions people prefer. When I showed study subjects one distribution where half of the reviews were five-star and half were one-star, and a second one where all of the ratings were three stars, people started to show strong preferences. Through repeated testing, I found that many shoppers consistently don’t want to settle for mediocrity. They would rather gamble for a truly amazing five-star experience knowing that there is a chance they could also end up with an awful experience. These are risk-takers who prefer a bimodal distribution to a unimodal distribution. And of course, there are also people who are more drawn to a sure outcome, even if that outcome is expected to be mediocre.
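
The arithmetic behind that trade-off is simple (the counts below are illustrative, not the study's actual stimuli): the two distributions share the same expected rating but differ sharply in risk.

```python
# Illustrative comparison of the two distributions described above.
from statistics import mean, pvariance

bimodal  = [5] * 50 + [1] * 50   # half five-star, half one-star
unimodal = [3] * 100             # every review is three stars

for name, dist in [("bimodal", bimodal), ("unimodal", unimodal)]:
    print(f"{name}: mean = {mean(dist):.1f}, variance = {pvariance(dist):.1f}")

# bimodal:  mean = 3.0, variance = 4.0  -> a gamble on a great or awful experience
# unimodal: mean = 3.0, variance = 0.0  -> a sure, middling outcome
```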

Going forward, do you think companies will be more or less beholden to online ratings and reviews?

I would answer this from two perspectives. One is that it’s time for us to reconsider what we mean by online ratings and reviews. I recommend expanding the definition beyond just rating scales. We should think about online ratings and reviews as a synonym for consumer feedback in general. So if we are talking about consumer feedback, then companies rely on this source of information more than ever.

In addition to average ratings and number of reviews, my research shows that similarity is an important dimension that people like to use when making decisions. And there is a hierarchy: similarity in consumption goals trumps, say, similarity in birthplace. That said, the hierarchy is fluid, and similarities in things as simple as shared geographical locations or birth dates can nudge people’s choices. Considering that people are inclined to rely on different cues, it’s important for companies to analyze consumer feedback to understand the relative weighting of cues at different stages of the customer journey.

However, I do want to raise the possibility that, at some point, consumer feedback might hit a plateau, if this information becomes too polluted or the process of integrating feedback into purchase decisions becomes too complex. We’ve already seen a huge surge of fake reviews and people giving a product five stars to boost their own business and one star to turn people away from competitors. Many people might say, OK, if that’s the case, I won’t read reviews anymore, especially on websites like Amazon. You can do a free return within 30 days anyway, so it’s not that important. People would rather sample the products themselves than spend their valuable time reading reviews — some of which they know might be fake. But I don’t think this will happen anytime soon. Based on surveys, around 90 per cent of people still consult online reviews to make their decisions.

The second important idea is, who are the users of ratings and reviews? We usually think it’s just consumers, but product developers also look at them, as do systems developers. Product developers want to know which features people are seeking that their product currently doesn’t offer. It’s also important information for R&D to learn about the next generation of products to produce to satisfy more people’s needs. And systems developers want to know how to improve people’s experience with the ratings system itself.

How can we improve our ability to detect important social cues?

It’s very important to note how the ratings on a particular platform are generated. Once you understand that, you will be better at taking in the information being provided. Platforms should also take some responsibility for educating people. They could post notes like, "Please keep in mind that different people can have very different interpretations of our rating scale." Or when people give a rating that doesn’t align with what came before it, an automatic message could pop up asking, "Did you really mean to give this three stars or would you like to change your rating to a four or a five?" Basically, online platforms could have systems in place to reflect people’s behaviour around ratings. Over time, this would make online ratings more credible, and people would get better at taking in social cues, whether they be positive or negative.
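
As a hypothetical sketch of that kind of nudge (one possible reading of "doesn't align with what came before it," namely a rating far from the item's running average; the threshold and wording are assumptions, not anything a real platform uses), a system could flag an outlier rating and ask the reviewer to confirm it:

```python
# Hypothetical sketch: prompt for confirmation when a new rating is an outlier.
def confirmation_prompt(new_rating, prior_ratings, threshold=1.5):
    """Return a confirmation message if the new rating sits far from the item's average."""
    if not prior_ratings:
        return None
    avg = sum(prior_ratings) / len(prior_ratings)
    if abs(new_rating - avg) >= threshold:
        return (f"Most reviewers rated this around {avg:.1f} stars. "
                f"Did you really mean to give it {new_rating} stars?")
    return None

print(confirmation_prompt(3, [5, 5, 4, 5, 5]))  # prompts: 3 is well below the 4.8 average
print(confirmation_prompt(5, [5, 5, 4, 5, 5]))  # None: nothing unusual to confirm
```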


This article first appeared in the Fall 2022 issue of Rotman Management magazine. Published in January, May and September, each issue features thought-provoking insights and problem-solving tools from leading global researchers and management practitioners.


Jingqi Yu is a post-doctoral research fellow at the Rotman School of Management, where she is working with the Behavioural Economics in Action Research Institute (BEAR) and conducting research on consumer information processing and decision-making. She holds a dual PhD in cognitive science and psychology from Indiana University Bloomington.