We still haven’t found how the conclusion of 43% is reached.
Note: This is part 3 of a series of posts reviewing some of the origins of scientific claims that compare text vs visuals. Access part 1 here. Access part 2 here.
If you would like to follow along, you can find the original article that I’m reviewing on the University of Minnesota website.
This article is a review of “Persuasion and the Role of Visual Presentation Support: The UM/3M Study” prepared by Douglas R. Vogel, Gary W. Dickson, and John A. Lehman.
We have been talking about various quantitative claims that many people use to validate the power of visual communication. While I don’t disagree about the power and potential of using visuals to convey a message, some of the cited references have hazy origins. Since my work depends on scientific and medical accuracy, I constantly read journal articles and attend scientific talks, so I am very familiar with the general setup of a research paper.
Last time we took a look at the first part of the results of a paper that claims that using visuals makes a presentation 43% more persuasive. Instead of showing us a data table, the paper showed a graph comparing a group of 35 students who didn’t see any visuals and a group of 280 students who did. Data from 35 students were compared against data from 280. That sounds kind of odd.
Usually in a group study, the group is divided into roughly the same number of people. If they were to compare the use of visuals and non-visuals in a presentation, then they should have split the group of people in two. Half of the students don’t see the visuals, half the students do. Compare results. But that’s not what they did.
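For what it’s worth, unequal groups aren’t automatically invalid in statistics, but a control group of 35 gives far noisier estimates than a treatment group of 280. Here’s a quick sketch with entirely made-up numbers (hypothetically assuming 60% of each group said yes; `prop_ci` is my own helper, not anything from the paper) showing how much wider the uncertainty is on the small side:

```python
import math

def prop_ci(successes, n, z=1.96):
    """95% confidence interval for a proportion (normal approximation)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical numbers: suppose 60% of each group answered "yes".
small = prop_ci(21, 35)    # control group of 35 students
large = prop_ci(168, 280)  # visuals group of 280 students

print(f"n=35:  {small[0]:.2f} to {small[1]:.2f}")   # roughly 0.44 to 0.76
print(f"n=280: {large[0]:.2f} to {large[1]:.2f}")   # roughly 0.54 to 0.66
```

The same observed percentage is pinned down to about ±6 points in the big group but swings across ±16 points in the small one, which is exactly why a 35-vs-280 comparison deserves scrutiny.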
This time we’ll take a look at part 2 of the results.
Results, Part 2!
Now we get to the nitty-gritty data, my lovelies. We are going to get a breakdown of how the visual attributes (color vs. b/w, 35mm vs. transparencies, and text-only vs. text + graphics) affected the students’ decisions.
figure 5. Vogel et al, 1986.
These figures SHOW NO NUMBERS at all. The arrows, again, correspond to statistical significance. I assume the statistics are compared against the initial outcome of money/time commitments? The more arrows, the lower the p-value.
Why can’t they just say things like, “25% of the students answered that the presenter was interesting with overheads, but only 5% of the students felt that the presenter was interesting with 35mm slides”? For some reason we’re still talking about how the students felt about the presenter. Can we talk about how the 35mm projector affected the students’ decision to take the course?
Actually, I finally figured out how the 8 groups were split. There were 4 groups that saw projections on 35mm, and 4 groups that saw projections on transparencies. Now I get it… I think? It still makes no sense.
figure 6, Vogel et al. figure 7. Vogel et al.
The authors continue to give us little arrows made of p-values. Why can’t we have a simple table with the number of students saying YES or NO to these questions? How are we supposed to get that magical 43% from these cute arrows? Also, many aspects have missing arrows. All we know is that those results were not statistically significant, but that doesn’t mean they can just omit them.
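And the complaint has teeth: if the paper printed raw YES/NO counts, any reader could recompute the significance themselves, arrows or no arrows. A minimal sketch with invented counts (the paper never shows any) using the standard Pearson chi-square formula for a 2×2 table:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: YES/NO under two conditions, e.g.
# 35mm slides: 40 yes, 60 no; transparencies: 55 yes, 45 no.
stat = chi_square_2x2(40, 60, 55, 45)
print(f"chi-square = {stat:.2f}")  # > 3.84 means p < 0.05 at 1 d.f.
```

With the counts in hand, one formula replaces all the arrow-counting; without them, the reader just has to trust the arrows.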
And now, a word from our sponsor.
3M is a collaborator on this project. As you know, they make Scotch tape, the scouring pads on sponges (that green part), and ear plugs, among other things. I didn’t know this, but back then they also used to make overhead transparencies. You know, the ones you wrote on with pens that left a blue-green tint on teachers’ hands.
With that in mind, look at figures 6 and 7 again, and notice how figure 6 has more “4 arrows” throughout. That’s the one describing overhead transparencies. Then check out the next figure and be amazed.
fig. 8. Vogel et al, 1986.
Oh snap! Effectiveness of color overhead transparencies compared to “others” and “no visuals”. Well, I guess next time I’m in an office supply store, I have to buy more color overhead transparencies because it’s so much better than 35mm projectors, b/w transparencies and no visuals at all.
Then the text flies back to figure 1. How are the visuals used as persuasive devices? Oh good, maybe we’ll get to see how they reached the conclusion of 43%.
Here’s the list of supposed findings from the graphs. They found these 4 things.
- Attention and Yielding are influenced by the perceptions of the presenter
- Comprehension and Retention are improved if color is used rather than black and white, and may be increased by selective use of image enhancement.
- In terms of “action,” color overhead transparencies had the greatest impact.
- The two treatments that stand above all the others (given the problems of perceived legibility of the 35mm slides) are those of color overhead transparencies (both plain text and image enhanced graphics).
????????????????????????????????????????????????????????????????????????????????????????????????????????????????
My head hurts. Now they are somehow mixing the results to come up with new results we have never seen before.
The first bullet has nothing to do with this paper.
“Attention and Yielding are influenced by the perceptions of the presenter.” What does this have to do with visuals?
The second bullet’s result is not even mentioned until now.
When was comprehension with color ever compared? The only place I saw comprehension and retention was in figure 3, and I didn’t see color slides mentioned until figure 6.
Here’s what they did:
Compare visuals and non-visuals → visuals help retention by 10% → talk about the presenter → talk about the presenter and technologies (slides/transparencies) → compare technologies and color.
Then they somehow drew a non-existent connection between color and memory retention.
The third bullet is a random observation that has nothing to do with visuals vs. non-visuals.
The fourth bullet doesn’t even make sense and mostly just repeats the third.
Then the paper continues.
It just starts rambling about how awesome color overheads are. I’m not joking.
fig. 9. Vogel et al. 1986.
The first problem is that this graph is no longer comparing visuals and non-visuals. It’s now mixing the control group with the treatment variables.
I don’t understand what the percentages on the x-axis mean. Someone help me. Percent change from what? The average? Non-visuals? And how come this graph is completely different from figure 3? At minimum, the two graphs need different titles.
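The baseline question matters for the headline number too: the same pair of results can be reported as a modest absolute difference or as a dramatic relative one, depending on what you divide by. A purely hypothetical sketch (these scores are invented for illustration, not taken from the paper):

```python
# Invented scores to show how the choice of baseline changes the headline.
baseline = 0.35   # say, 35% of the no-visuals group signed up
visuals  = 0.50   # and 50% of the visuals group did

absolute_change = visuals - baseline                 # 15 percentage points
relative_change = (visuals - baseline) / baseline    # ~43% relative increase

print(f"+{absolute_change:.0%} absolute vs +{relative_change:.0%} relative")
```

A 15-point gap and a “43% more persuasive” claim can describe the exact same data, which is why an unlabeled “% change” axis tells us almost nothing.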
fig. 3. Vogel et al. 1986
By this point most people would be confused, frustrated, and tired. I certainly am. It’s a long paper! 21 pages! We’re only halfway through.
Could this be called scientific research? This is not independent analysis; there’s an obvious conflict of interest. Granted, they mention that 3M is involved in the research, but the results clearly favor transparencies over 35mm. Is this a glorified commercial?
And the saga continues in Part 4: Discussion. After the results, the paper moves on to start giving advice for great visual presentation. I guess that would be the discussion section? I’m not sure anymore. What do you think? I would love to hear your thoughts in the comments below.
Part 1: Abstract and Introduction
Part 2: Results, part 1
Part 4: Discussion
Part 5: New experiment begins and wrap-up
I love that you have taken the time to examine this research paper so closely! Thank you! I’m preparing to give a talk in which I reference visual processing speed as compared to text processing and was wondering where some of the numbers come from that you see sloshing around on the Internet. I’m going to be plenty cautious about what I quote to my participants, after reading this masterly take-down you’ve so helpfully provided here. My hat is off to you.
Thank you so much! I’m so glad to be of help. When I wrote this back in 2013, I had no idea how the US would make a nose-dive in attitudes regarding scientific research and logic in general. I’m inspired to do more in-depth analyses of other “famous” papers of dubious origin. 🙂
I see that Univ of Minnesota has changed/moved the link to the article…I wonder if we can find the original document again. I’ll relink if I find it.