Visuals vs Non-visuals? Yeah whatever, let’s start a new experiment!
Note: This is part 5 of a series of posts reviewing some of the origins of scientific claims that compare text vs visuals. Access part 1 here. Access part 2 here. Access part 3 here. Access part 4 here.
If you would like to follow along, you can find the original article that I’m reviewing on the University of Minnesota website.
The paper is titled “Persuasion and the Role of Visual Presentation Support: The UM/3M Study” by Douglas R. Vogel, Gary W. Dickson, and John A. Lehman.
We have been talking about various quantitative claims that many use to validate the power of visual communication. While I don’t disagree about the power and potential of using visuals to convey a message, some of the cited references have hazy origins. Since my work depends on scientific and medical accuracy, I constantly read journal articles and attend scientific talks, so I am very familiar with the general setup of a research paper.
Last time we looked at the “discussion” section of a paper claiming that using visuals makes a presentation 43% more persuasive. Instead of a repeatable experiment, we saw a convoluted setup with curious data and new deductions somehow created from said results. Does this sound confusing? Well, that’s what this paper is all about.
This time we’ll take a look at the last 4 pages of the paper. Please note that there’s no reference section in this paper.
Technological caveats and the role of the presenter
This section talks about how 35mm and overheads compare as visual presentation supplements, or so I thought.
The study first has a few things to say about 35mm projectors. (p.17)
- Not robust technology
- Problems with legibility
- Visibility dependent on room conditions (darkness)
- Necessity of high-quality images on light background for visibility
Fair enough arguments. Let’s see what it says about transparencies.
Basically, they badmouthed 35mm slides and moved on to the next point: the use of “enhanced graphics”.
The text finally starts talking about what kinds of visuals were used. Wow! It only took 17 of the paper’s 21 pages to get to some materials-and-methods info.
The paper describes two examples in detail. Since they didn’t provide the figures for these, I’ll draw them out myself.
Below is an example of an “effective graphic” comparing benefits of working smarter instead of harder.
The study explains that the image was successful because of the clock (the smarter worker finishes faster) and a lightbulb; these aspects are marked 1 and 2, respectively.
Here is a “poor” example. They wanted to show that a lot of money is wasted by poor time management by showing stacks of money. Students didn’t get it.
In this case, the authors say to use plain text. I’m still confused that a “visual” consisting of text only is counted as a visual. So, in effect, the authors suggest avoiding images when they confuse the viewers.
That’s it. No comparison between text plus clip art and text only, no word on how these slides fared as 35mm slides versus overheads. Nothing. Why should we be surprised at this point?
The next two pages are dedicated to the quality of the speaker. Why do we keep going back to speakers?
Somehow I am still shocked that the study begins a completely new experiment 3 pages from the end. Oh by the way, let’s check this out too while we’re at it.
Out of nowhere, they “decided” to compare hand-drawn versus computer-generated visual support materials. They added a second, more effective speaker alongside the average speaker from the first experiment, found four more groups of humans, ran them through the same presentation from experiment 1 (I think?), and skipped right into the results.
- A “typical” presenter using visuals can be as effective as a “better” presenter using no visuals
This sounds like a great research topic. I hope someone else conducted this study in a more controlled experiment. What’s with all the quotation marks?
- The better a presenter is, the more one needs to use high-quality visual support
The better the speaker, the more need for higher-quality visuals…to do what? Be more persuasive? Get more attention? Another nice claim, if only it were supported by facts.
The “study” then spits out what I think are the weirdest, most random, and least scientific figures I’ve ever seen, presented as “results”.
The figures are “supposed” to be “comparing” two presentations, but the “results” show only one set of “data”. Are the arrows still signifying statistical significance? They also strike me as misleading; if anything, they seem to contradict the stated “results”.
I’m not sure anymore.
Last but not least, they bring up a graph that’s somewhat reminiscent of the first graph we saw. Unsurprisingly, the numbers have changed and no explanations given.
Here’s figure 2 again for comparison.
It’s really hard to compare data when the data keeps changing and the authors don’t explain why.
Conclusion (thank goodness)
The conclusion is very short and sweet. I’m going to block-quote it with my comments under each bullet.
We have drawn three major conclusions from this study (only 3? I thought the list we just looked at had like 10 bullet points!)
- Perceptions of the presenter as well as audience attention, comprehension, yielding, and retention are enhanced when presentation support is used compared to when it is not. Presentations using visual aids were found to be 43% more persuasive than unaided presentations.
Oh yeah! I completely forgot about the 43%. Since its last mention on page 8 (right after figure 2), the study has been mysteriously silent about this number until the very end (page 21). So where did they get 43?
They never explain where the 43% came from or how they calculated it. The closest they come is adding together percentage changes (such as the change in time commitment), which you simply cannot do if you know anything about math: percentage changes computed against different baselines are not additive, so their sum is meaningless.
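To see why summing percentage changes produces a meaningless number, here’s a minimal sketch. The figures below are invented for illustration only (they are not the paper’s actual data); the point is that changes measured against different baselines don’t add up to anything interpretable.

```python
# Hypothetical numbers for illustration only -- not the paper's data.
# Two separate measures, each with its own baseline:
attention_before, attention_after = 50.0, 60.0   # up 20%
yielding_before, yielding_after = 40.0, 49.2     # up 23%

pct_attention = (attention_after - attention_before) / attention_before * 100
pct_yielding = (yielding_after - yielding_before) / yielding_before * 100

# Naively summing the two changes gives 43 -- but no actual quantity
# grew by 43%, so the number describes nothing.
naive_sum = pct_attention + pct_yielding

# Even if you insist on one combined figure, pooling the raw values
# first gives a very different (and at least well-defined) answer:
total_before = attention_before + yielding_before   # 90.0
total_after = attention_after + yielding_after      # 109.2
combined_pct = (total_after - total_before) / total_before * 100

print(naive_sum, combined_pct)  # 43.0 vs roughly 21.3
```

Whatever numbers you plug in, the naive sum and any legitimately pooled change will disagree, because the sum treats two different denominators as if they were one.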
- The persuasive impact of a presentation depends on characteristics of the support used. Presentation support in color is more persuasive than that in black and white. Image enhanced graphics are effective only when used selectively and carefully. Use of overhead transparencies results in the presenter being perceived as more interesting but less professional compared to use of 35mm slides.
They listed the same unsupported results again. And by the way, overhead transparencies are better than 35mm slides! What does using image-enhanced graphics “selectively and carefully” even mean?
- Presentation support effectiveness varies as a function of speaker quality. A “typical” presenter using presentation support has nothing to lose and can be as effective as a better presenter using no visuals. The better a presenter is, however, the more one needs to use high quality visual support.
This paper was never about the quality of speakers, but it changed its mind on page 18, so naturally these “results” need to show up at the end.
- This baseline study will be used to support subsequent work to further probe the subject of audience persuasion.
No! This study should not support any subsequent work unless as an example of how not to write an experiment. I think this was an essay, not a study.
And…no reference section whatsoever. No appendices, no corresponding-author contact information, nothing. After all, this was never formally published; it’s just an internal article.
I learned an important lesson: seeing the same claim repeated over and over does not make it true. This is exactly what commercials do: they keep telling you how tasty the food is until eventually you go and give them your money. I know I don’t find KFC food appetizing (my personal opinion), but I still went after seeing the commercial over 100 times.
In a sense, this “study” is a commercial. It was sponsored by the maker of overhead transparencies, and sure enough, we read about how great overheads are and how bad 35mm slides are. Then someone picked up this “study”, read a page and a half of text, and cited it as a scientific finding.
I’m horrified that I once considered using “figures” from this “study” on my website. There is even a reference to how the US Department of Labor uses this “statistic”. How did this sad “study” get cited by someone with a recognizable name? This paper is the poster child for “how to look scientific even though we’re faking it”.
Thank you for coming along on my journey! I learned very few facts from this paper, but I learned a lot about how not to conduct experiments and how not to write a paper. So what are your thoughts? Do you ever run across this sort of “bad science”? Please let me know in the comments. And if you see someone using the 43% claim, feel free to direct them to my articles.