Jared Cooney Horvath is a globally recognized Science of Learning expert committed to helping teachers, students and parents achieve better outcomes through applied brain and behavioral science.
Generally speaking, the evolution of subtitles and video captioning has been a net positive.
Just about every video on YouTube, Netflix and other major multimedia platforms now comes automatically equipped with captioning.
And this is great … because how else could we hope to keep up with what they’re saying on Downton Abbey?
But what about the use of captioning in the classroom?
Well, when it comes to education, it’s almost universally accepted that captioning has zero downside … and it’s easy to understand why.
Captions can effectively support students with learning disabilities, aid non-native speakers, and boost the usefulness of videos with poor audio quality.
But what about learning? Do subtitles and captions actually support deep learning and comprehension?
Well, if you ask a student, they’ll almost certainly say ‘yes’. In fact, a vast majority of students use captions at least some of the time (whether they need them or not) … and over 90% of these students believe subtitles help them learn.
But is this actually the case? In this video, I examine a research article that aims to answer this question.
Here are some of the questions I tackle in this installment:
What is the redundancy effect … and how does it influence learning and comprehension?
How does captioning impact surface learning versus deep learning?
What are the key takeaways for teachers regarding the use of captioning in the classroom -- both with videos and during live lectures?
Give it a watch, and let me know what you think in the comments.
And, as always, if you find this video valuable, interesting and/or entertaining, you can support us by liking, sharing and subscribing to our YouTube channel ;)
Regards,
P.S. There’s always a bit of confusion about the difference between subtitles and captions … so here’s a quick summary:
Video Transcript
Do Subtitles Help Learning? (Using Captioning in the Classroom)
Here's where we start to learn that students aren't always the best judge of what's helping them learn. They're really good at knowing what feels good and what they like, but that's not always synonymous with deep learning. So as teachers, although it might not be comfortable all the time, we have to be willing to press those hard buttons and say, "Look, I know you might not like this as much, but trust me, this is going to lead to deeper learning. It's not going to feel as comfortable, but this is where depth comes."
Hello everybody, and welcome to this week's "From Theory to Practice," where I take a look at the research so you don't have to. The article I've selected this week is called "Effect of Onscreen Text on Multimedia Learning with Native and Foreign Accented Narration" by Chan and colleagues.
To understand what they did with this paper, there are two longstanding principles of multimedia learning we have to understand.
The first principle is called the redundancy effect. Put simply, the redundancy effect says that when narration is combined with identical onscreen captioning, learning and understanding drops. The reason for this concerns how the brain processes reading and listening.
When you're listening to a narrator, a very specific auditory linguistic area of the brain is used to process that voice. Interestingly, this exact same network is used to process silent reading. So just like you can't understand two people speaking to you at the same time, neither can you understand a narrator while trying to read simultaneously. There's just not enough room in the brain to handle this.
The second principle we have to understand is called the voice effect. This says that when the narrator has a foreign accent different from the viewer/listener, learning and understanding drops. The reason for this has to do with decoding.
Typically, when listening to a foreign accent, a lot of cognitive energy is expended just trying to make sure you got the right words so you can basically follow the argument. Unfortunately, when all this energy is expended just on decoding the words, deeper understanding and conceptualization suffer.
What did this paper do? These authors said, "What would happen to learning and understanding if we pit the redundancy effect against the voice effect? When these two come together, what happens?"
Here's what they did: They had different groups of people watch a video lesson on the formation of lightning, and at the end of the video, they had to answer two different sets of questions.
One set we'll call surface retention questions. These were just basic facts that they heard from the video—things like "How hot does lightning get?"
The other questions are called deep retention questions. These are questions that don't directly mirror things learned in the video, but if you truly understood the concept, you should be able to answer them. For instance, "Which of these clouds is more likely to produce lightning?"
In the first part of this experiment, they had three different groups of people watch this video when the narrator had the same accent they did. The three different groups were: pure narration (no onscreen text), narration with onscreen keywords, and narration with full captioning.
What happens as far as surface retention is concerned? All three groups performed statistically the same: everyone picked up the same basic facts from the video.
But once you move into deep retention, that pure narration group (the one with no text) was almost two times more likely to answer deep questions than either of the groups that had text. So boom! We just demonstrated the redundancy effect: add text and learning starts to go down.
In part two of this study, these researchers had three different groups watch the same video, except this time the narrator had a foreign accent. In this case, they were using an Asian accent that was not native to the listeners. Same thing: we got three groups—pure narration, keywords, and full captioning.
What happens here? It turns out that with a foreign-accented narrator and no text or only keywords, surface retention drops significantly. But as soon as you add full captioning, surface retention rises to the same level as that of people listening to a native-accented speaker.
So here we have evidence that adding captioning with a foreign-accented narrator can actually boost understanding and comprehension. But unfortunately, once you take a look at deep retention, all three foreign-accent groups performed worse. None of them performed as well as the text-free pure-narration group in the native-accent condition.
Let's bring this back to us. What does this mean for us? If you're going to be doing multimedia lessons, don't automatically embed captions in your videos. Make captioning an option, because our accent won't always match our students'. When closed captioning is a choice, students who share our accent can keep it off and do strong, deep learning, while students who struggle with our accent can get that boost.
The second thing we can draw from this is that it doesn't just matter for video learning. The same principle plays out in real life. So anytime we're teaching, if we've got slides with text behind us or we're giving handouts with text, we have to be very cognizant of how we're presenting information and whether we're asking students to do the impossible: read and listen at the same time.
There's one last bit to this research that I haven't told you about that I think is really interesting. At the end of the study, they asked all the students to rate the different conditions—when did they think they were learning best? Almost to a student, every single one of them said that when captioning was on, they were learning more. They thought that was helping them make sense of the material. They had no clue that it was hurting them.
Here's where we start to learn that students aren't always the best judge of what's helping them learn. They're really good at knowing what feels good and what they like, but that's not always synonymous with deep learning. So as teachers, although it might not be comfortable all the time, we have to be willing to press those hard buttons and say, "Look, I know you might not like this as much, but trust me, this is going to lead to deeper learning. It's not going to feel as comfortable, but this is where depth comes."
The more we can do that and show them the impact over time, the more we can get them on board with recognizing these different principles of learning and embedding them in their own practice. It's about taking agency over their learning.
So that's it. This is a really good paper, and I got a lot of cool ideas from it. Hope you did too. If you liked what you heard, if you can give me a thumbs up and subscribe below, that'll make sure that we can get these videos out to more people on YouTube so we can get these ideas out there. Thank you guys so much for joining me. I hope you're all well, and I'll see you guys soon. Bye, y'all!
Did You Enjoy This Post?
Help spread the idea by sharing it with your peers and colleagues ...
Join the LME insider community! Subscribe below to get exclusive updates on new courses, lectures, articles, books, podcasts, and TV appearances—delivered straight to your inbox before anyone else.
Copyright © 2022 LME Global
6119 N Scottsdale Rd, Scottsdale, AZ, 85250
(702) 970-6557