Communication Currents

Instructor's Corner #1: Think RateMyProfessors.com Doesn't Matter? Think Again.

December 1, 2013
Instructional Communication

A few years ago, I received a phone call from Sarah,* a friend from graduate school. By that point, we were beginning our careers as university communication professors. As we caught up on our new jobs and teaching experiences, Sarah shared, with despair, that she had recently visited the site RateMyProfessors.com to discover what her students were saying about her. “I’m devastated,” she exclaimed. “I got a really negative review from a student who said I was a terrible teacher. The worst part is, the post accused me of being biased against Hawaiians. I didn’t even know I had any Hawaiian students, but now it’s out there for everyone to read!” Knowing Sarah well, and being confident the accusation was unfair, I offered sympathy, and the two of us laughed at the absurdity of the incident. As our chuckling died down, Sarah asked quietly, “Do you think it even matters? It doesn’t matter, right?”

It was a good question, and one that professionals in many fields—including medicine, law, and education—must be asking themselves in the face of increasingly prevalent online rating systems. It was this general question that my colleagues and I set out to address through a series of controlled experiments.

RateMyProfessors.com (RMP) is the largest of several online instructor-rating systems, containing over 15 million anonymous ratings of more than 1.8 million professors. On RMP, students use 1-to-5 scales to rate their instructors on helpfulness, clarity, and easiness. RMP averages these ratings to give each instructor an overall quality score and corresponding face icon: smiling (good quality), neutral (average quality), or frowning (poor quality). In addition, student raters may indicate the physical attractiveness (“hotness”) of an instructor by putting a “chili pepper” next to the name. Increasingly, students visit such sites to make choices about course selection and to form early impressions of future instructors and classes.

In 2006, when we began our research, there were only a handful of studies examining RMP, and none directly addressed the effects that site content might have on teaching and learning experiences. Yet, in many other contexts, word-of-mouth communication (whether face-to-face or computer-mediated) exerts powerful effects on people’s attitudes, choices, and experiences concerning goods and services. Thus, we reasoned that RMP ratings and comments could influence how students perceived their instructors and courses and how much they learned from them. Drawing from communication theories that seek to explain how people process information and form judgments (the Heuristic-Systematic Processing Model) and how initial expectations shape experience (Expectancy Effects), we designed and conducted several experiments aimed at determining whether, and how much, students were influenced by exposure to RMP content prior to having contact with an instructor.

In each experiment, we presented undergraduate students with ratings and comments about a professor, had them view a 10- to 20-minute lecture by him, and then assessed their perceptions of the educational experience. The participants did not learn until later that the professor was fictitious and the ratings and comments were fabricated. Although some participants were shown positive ratings and others were shown negative ratings or none at all, all of them viewed the same taped lecture, ensuring that any differences in their evaluations of the professor were due solely to the RMP content and not to variations in his performance.

In our first study, we found that students who viewed positive RMP ratings of the professor perceived him as more credible and attractive and reported higher levels of affective learning (i.e., “liking” of the lecture material) and motivation to learn than students who received negative RMP ratings about him or none at all. Students who received negative ratings judged the professor least favorably.

Next, we wondered whether RMP could affect not only how students felt about the instructor and the material but also how much they actually learned from him. To test this possibility, we conducted a second study that measured students’ cognitive and behavioral learning following the lecture. From an educational standpoint, the results were remarkable. Students who read positive RMP ratings before viewing the lecture scored significantly higher on a 20-item quiz over the content (a letter grade higher, on average) and reported a greater likelihood of actually engaging in the behaviors the professor recommended. The study demonstrated that these outcomes occurred, in part, because of the heightened positive expectations and liking produced by reading positive reviews beforehand.

These earlier studies examined the effects of exclusively positive or exclusively negative ratings and reviews. In reality, however, users of online rating systems like RMP regularly encounter contradictory opinions about a target. So, in our latest study we investigated the effects of “mixed reviews” on student perceptions of instructors and classes. Again, students who received uniformly positive ratings evaluated the professor and course more favorably. However, students who received a mixture of positive and negative ratings, all negative ratings, or no ratings at all rated the professor identically. The Heuristic-Systematic Processing Model helps explain why. When faced with unanimously positive reviews, communicators may rely on heuristics, or mental shortcuts (e.g., “consensus among peers is trustworthy”), and base their perceptions on the word-of-mouth information rather than on a critical evaluation of the target. Yet, when the information is mixed, uniformly negative, or missing, communicators are prompted to evaluate the target systematically, by focusing on direct observation and experience.

Sarah, like others today, wondered, “Do RMP ratings matter?” On the basis of these experiments, I would answer, “Yes, but the nature and size of the effects depend on the content.” More specifically, I would offer the following guidance:

For professionals or practitioners who may be rated online 

  • Online word-of-mouth communication significantly influences others’ perceptions of you and the goods or services you offer.
  • Positive word-of-mouth is much more influential in shaping perceptions and experiences than negative word-of-mouth.
  • In the presence of mixed reviews, people are likely to suspend judgment pending direct interaction or observation.

For students and other users of online rating systems 

  • Understand that your perception of the person, good, or service may be altered, regardless of whether the ratings reflect “objective” information about the target.
  • Realize that the ratings you post online create expectancies and may influence the future experiences of other users who read them.
  • Consider the value of posting positive information, when warranted, as it may help create beneficial realities for others.

*Pseudonym

About the author(s)

Autumn Edwards

Western Michigan University

Associate Professor

Chad Edwards

Western Michigan University

Professor