His explanation about why subtitles don't match dubbing is not convincing. Basically he says that subtitling and dubbing are done by different teams with different goals. The dubbing crew tries to match lip movement. OK, so why not use the script that the dubbing team produced for the subtitles? Why do the translation twice?
His explanation for that: (a) sometimes what's spoken is too long to fit as subtitles on the screen, (b) what's spoken needs to be summarized, like multiple people shouting over one another on a reality show, otherwise the subtitles would be a confusing jumble of words, and (c) jokes/slang/puns can be difficult to translate.
I agree that (a) and (b) are legitimate reasons why subtitles might occasionally not match dubbing, but (c) is irrelevant. Jokes/slang/puns might be difficult or impossible to translate, but however they end up being translated, the result can be spoken and written identically. In fact his goose joke is spoken and written identically in Portuguese as Eu estou gansado deles (roughly "I'm goosed of them", punning on cansado, "tired") (timecode 3:57). I.e., the dubbing matches the subtitles. So the examples he gives do not support what he did in his own video.
Furthermore, his video (like almost all videos and TV shows) is chock full of cases where the subtitles and dubbing are different for no plausible reason related to (a), (b), or (c) above:
All examples below are taken with the video set to Brazilian Portuguese audio and Brazilian Portuguese subtitles.
Time: 0:22
Dubbing: E o motivo disso é que as legendas e a dublagem ... ("And the reason for this is that the subtitles and the dubbing ...")
Subtitles: Isso acontece porque as legendas e a dublagem ... ("This happens because the subtitles and the dubbing ...")
Time: 0:51
D: as legendas e a dublagem são praticamente idênticas ("the subtitles and the dubbing are practically identical")
S: as legendas e a dublagem são quase idênticas ("the subtitles and the dubbing are almost identical")
Time: 1:08
D: é sincronizar o movimento dos lábios o mais próximo possível ("is to synchronize the movement of the lips as closely as possible")
S: é sincronizar os lábios o mais próximo possível ("is to synchronize the lips as closely as possible")
Although I agree that reasons (a) and (b) above might be valid on rare occasions, I think the real reason that subtitles don't match dubbing is because they are done by different teams with no coordination, with different timelines and deadlines, and probably by completely different companies in different countries.
If you cared strongly about this issue, I don't see any reason why the subtitles and dubbing couldn't be 99% or 99.9% identical in any particular target language, the remaining 1% or 0.1% being cases where the dialog is much too long for the screen or where you have to summarize a bunch of people talking simultaneously.
It's been a while since I watched that video, but I'm pretty sure he mentioned the different-teams reason as well, and the rest of the video only explained the intentional differences rather than the inherent differences that come from different people working on the translations.
Where I live, due to different dialects being widely used, it's common for TV and movies to show subtitles in the same language as the spoken audio. Even then, it is not uncommon for the subtitles to deviate from what is spoken.
Also, as an English learner I used to watch TV shows with subtitles for the hearing impaired, and there are times when the English subtitles deviate from what the actors say as well.
Sometimes it's phrases that are commonly used in speech but strange to see written down; sometimes it's tonal things that would get lost if written as-is; sometimes long speech is summarized so it doesn't become a wall of text; and sometimes it's most likely just mistakes.
Unless the difference actually carries significantly different meaning and can lead to misunderstanding, I don't see why that'd be an issue worth spending the effort to eliminate.
Especially when it comes to translation, there is rarely only one possible way of translating a sentence. Who should make the editorial choice about which version is best, and would that even help the goal of disseminating information? If anything, having two versions is probably better for gauging the tone and nuance of the original language.
> Unless the difference actually carries significantly different meaning and can lead to misunderstanding, I don't see why that'd be an issue
The following are some of the reasons why it's very desirable that audio and subtitles should match:
(1) It's great when you're trying to learn a language or to watch in a language in which you're not fluent. It's extremely frustrating when the audio and subtitles don't match. This point is made by many people in the comments to the video[1].
(2) Even if you're fluent in the language, if you're watching with both audio and subtitles enabled, it's jarring when they don't match.
(3) If you didn't understand something in the audio (because of poor pronunciation, poor sound quality, or whatever), and you turn on subtitles to see what was said, you expect to see exactly what was said, not something "similar in meaning".
(4) A reason quoted directly from the YouTube comments[1] (and I think this is a more common problem than people realize):
> As someone with sensory processing issues but technically normal hearing - sometimes understanding what i'm hearing comes with a short delay, and accurate subtitles help bridge that gap so I can still keep pace with what i'm watching! If the subtitles don't match, however, it can COMPLETELY throw me off because of the conflict in information and I end up more confused than if i'd only read the subtitles or only listened to the audio. Accurate subtitles are an accessibility feature!
> phrases that are commonly used in speech but strange to see written down
I'm curious to know if you can give an example of that?
As someone who has also worked on subtitles for modestly popular videos, I believe there should be two subtitle tracks: one for disabled people and one for language learners. They serve overlapping but not entirely identical purposes. Point 1 is well said, though I think Tom Scott aimed his subtitles mostly at the former, in line with television regulations.
Points 2 and 3 are of comparatively less concern in my opinion. I don't watch TV nowadays, but when I did I used to watch programmes with on-screen captions (as opposed to optional subtitles; burned-in captions are pretty common on East Asian channels), which never faithfully reproduced what was said, and I was fine. Maybe I would have been more annoyed if I had been able to turn captions off and only occasionally turned them on, but on the whole those points don't really match my experience.
Point 4 is what I'm most unsure about. I believe this kind of experience can be replicated by someone who can hear some but not all of the foreign words and needs subtitles (of the second kind) to connect them. For example, I can hear and speak Japanese, but very slowly, so I normally have Japanese subtitles turned on. I think I have often experienced a lack of understanding due to my weak knowledge of Japanese, but I have never experienced such a conflict in understanding. Maybe the sensory processing issue has a substantially different mechanism from my model, then?
> I'm curious to know if you can give an example of that?
Filler words, cut-off words, etc. Faithful transcriptions need to reproduce them (and yes, I have also done some transcription work, and it was really annoying), but subtitles needn't and shouldn't in most cases.