Yes, that's an interesting source that at least shows that our eyes can perceive things at those high frequencies, but I'm not sold that it generalizes.
The study actually demonstrates that perception of flicker for regular PWM does in fact trail off at about 65 Hz and is only perceptible when they create the high-frequency edge by alternating left/right instead of alternating the whole image at once.
It looks like the situation they're trying to recreate comes from techniques like frame rate control/temporal dithering [0], and since this article is now 10 years old, it's unclear whether the "modern" displays they're talking about are now obsolete or whether they actually did become the displays we're dealing with today. From what I can find, OLED displays do not tend to use temporal dithering, and neither do nicer LCDs: it looks like a trick employed by cheap LCDs to avoid cleaner methods of representing color.
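For reference, frame rate control boils down to a pixel flickering between two adjacent quantized levels so that the time average lands on a value the panel can't show directly. Here's a minimal sketch of the idea (my own illustration, assuming a hypothetical 6-bit panel fed 8-bit input, not any particular display's implementation):

```python
# Minimal sketch of frame rate control (temporal dithering): a panel that can
# only show 6-bit levels (0-63) approximates an 8-bit input (0-255) by
# alternating a pixel between two adjacent levels, relying on the eye
# integrating the sequence. Hypothetical parameters, not a real panel's logic.
def frc_frames(target_8bit: int, n_frames: int = 4) -> list[int]:
    low = target_8bit >> 2                    # nearest lower 6-bit level
    high = min(low + 1, 63)                   # next level up
    frac = (target_8bit - (low << 2)) / 4     # how far the target sits between them
    n_high = round(frac * n_frames)           # frames that must show the higher level
    return [high] * n_high + [low] * (n_frames - n_high)

frames = frc_frames(130)                                     # -> [33, 33, 32, 32]
average = sum(level << 2 for level in frames) / len(frames)  # 130.0 after integration
```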
It's an interesting study, but I don't think it redeems TFA, which isn't about the risks of temporal dithering but instead claims harms for PWM in the general case, which the study you linked shows is not perceived above 65 Hz without additional display trickery.
What they are trying to do is recreate a situation closer to the actual light sources in computer screens and TVs (flicker rates that vary across different pixels/areas). They are saying that the commonly reported 65 Hz threshold was measured on light sources that flicker uniformly, which is not the case for actual screens. It is not about dithering.
Basically, the claim is that when the flicker frequency varies across the image, the frequency required for the image to appear flicker-free is much higher.
No, they're specifically contrasting two types of displays, and they identify that the traditional way of measuring flicker does work for traditional displays, regardless of image complexity:
> Traditional TVs show a sequence of images, each of which looks almost like the one just before it and each of these images has a spatial distribution of light intensities that resembles the natural world. The existing measurements of a relatively low critical flicker fusion rate are appropriate for these displays.
> In contrast, modern display designs include a sequence of coded fields which are intended to be perceived as one frame. This coded content is not a sequence of natural images that each appears similar to the preceding frame. The coded content contains unnatural sequences such as an image being followed by its inverse.
What's unclear to me 10 years down the road is if the type of display they're worried about is common now or obsolete. "Modern" in 2015 could be the same as what we have today, or the problems the study identified could have been fixed already by displays that we would call "modern" from our reference frame.
I don't know enough about display tech to comment on that, but they're very clear that if your display is showing frames in sequence without any weird trickery, then the research method that gets you a 65 Hz refresh rate is a valid way to test for visible flickering.
EDIT: Here's another quote that makes the contrast that they're setting out even more clear:
> The light output of modern displays may at no point of time actually resemble a natural scene. Instead, the codes rely on the fact that at a high enough frame rate human perception integrates the incoming light, such that an image and its negative in rapid succession are perceived as a grey field. This paper explores these new coded displays, as opposed to the traditional sort which show only a sequence of nearly identical images.
It's possible that this is actually a thing that modern displays have been doing this whole time and I didn't even know it, but it's also possible that this was some combination of cutting-edge tech and cost-saving techniques that you mostly don't need to worry about with a (to us) modern OLED.
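To make the "image and its negative" idea from the quote concrete, here's a toy calculation (my own illustration, not taken from the paper): averaged over successive fields, any image and its inverse cancel out to a flat grey, which is exactly what the eye is supposed to perceive if it integrates fast enough.

```python
# Toy illustration of the coded-display trick described in the quote: an 8-bit
# image followed by its negative averages to a uniform mid grey.
import numpy as np

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # any 8-bit frame
inverse = 255 - image                                           # its negative
perceived = (image.astype(float) + inverse) / 2                 # crude temporal integration
print(perceived)  # uniform 127.5 everywhere, regardless of the original image
```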
That is just the motivation; the experiment itself is much more general and is not tied to any particular display technology:
> The work presented here attempts to clarify “the rate at which human perception cannot distinguish between modulated light and a stable field”
Otherwise, they would have tested the dithering directly on the full image. Here they are testing a simpler model: varying flicker leads to higher flicker-free frequency requirements (due to eye movements). This would apply to dithering, but potentially to other situations as well.
You can't say "that is just the motivation", because the motivation is what dictated the terms of the experiment. I read the whole study: the contrast between the two types of displays permeates the whole thing.
They repeatedly say that the goal is to measure the effect of flickering in these non-traditional displays, and they repeatedly say that for displays that do not use the trickery they're concerned about, the traditional measurement methods are sufficient.
You're correct that the study does demonstrate that the human eye can identify flickering at high frame rates under certain conditions, but it also explicitly shows that under normal conditions of one frame after another, with blank frames in between for PWM dimming, the flickering is unnoticeable above 65 Hz. They go out of their way to prove that before proceeding with the test of the more complicated display, which they say was meant to emulate something like a 3D display or similar.
So... yes. Potentially other situations could trigger the same visibility (I'd be very concerned about VR glasses after reading this), but that's a presumption, not something demonstrated by the study. The study as performed explicitly shows that regular PWM is not perceptible as flicker above the traditionally established range of frame rates, and the authors repeatedly say that the traditional measurement methods are entirely "appropriate" for traditional displays that render plain-image frames in sequence.
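Just to spell out what I mean by "regular PWM" here (a toy model of my own, not the study's apparatus): the whole panel blinks on and off at a single frequency, brightness is set by the duty cycle, and the only question is whether that one frequency sits above the fusion threshold measured for uniform sources.

```python
# Toy model of whole-panel PWM dimming (my own illustration): the eye integrates
# the on/off cycle into an average brightness, and uniform flicker is only
# perceptible below the fusion threshold the study reports for plain,
# spatially uniform modulation (~65 Hz).
def perceived_brightness(duty_cycle: float, max_nits: float = 400.0) -> float:
    return duty_cycle * max_nits          # time-averaged output of the on/off cycle

def flicker_visible(pwm_hz: float, fusion_threshold_hz: float = 65.0) -> bool:
    return pwm_hz < fusion_threshold_hz   # uniform-source threshold, per the study

print(perceived_brightness(0.3))   # 120.0 nits at 30% duty cycle
print(flicker_visible(240.0))      # False: well above the uniform-source threshold
```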
EDIT: Just to put this quote down again, because it makes the authors' point abundantly clear:
> The light output of modern displays may at no point of time actually resemble a natural scene. Instead, the codes rely on the fact that at a high enough frame rate human perception integrates the incoming light, such that an image and its negative in rapid succession are perceived as a grey field. This paper explores these new coded displays, as opposed to the traditional sort which show only a sequence of nearly identical images.
They explicitly call out that the paper does not apply to traditional displays that show a sequence of nearly identical images.
The paper's motivation is to explore the new coded displays, and they are doing that by exploring one aspect that they care about. That aspect is very specifically well defined, and if you want to show whether a given display has the same effect or not, then you need to look into that aspect. But at no point does the experiment itself relate to any particular kind of display tech.
I mean, they are not even using a screen during the study; they are using a projector. How are you even going to claim that this is specific to display technology when it doesn't use a display?!
Did you actually read the study? I assumed you did and so I read every word so I could engage with you on it, but it's really feeling like you skimmed it looking for it to prove what you thought it would prove. It's not even all that long, and it's worth reading in full to understand what they're saying.
I started to write out another comment but it ended up just being a repeat of what I wrote above. Since we're going in circles I think I'm going to leave it here. Read the study, or at least read the extracts that I put above. They don't really leave room for ambiguity.
Edit: I dropped the points on the details, just to focus on the main point. Rest assured that I read the paper and was arguing in good faith, and after a bit more thinking I understand your criticism of my interpretation. I don't think the criticism that the research fails to generalize is warranted, considering the experimental design, but we aren't going to agree on that. The difference in our thinking seems to be the probability of a similar effect showing up in daily life. I know the projector was emulating the coded display, but my point is that it was reasonably easy to do, and the same setup could conceivably show up in other ways. Not to mention that the researchers specifically said all the displays in their office showed the effect, so it is common among displays themselves.
I think if we continue talking, we will keep running in circles. So let’s drop the details on research: it is there, we can both read it. Here is what I was trying to convey since the beginning:
- If you think the (original) article is an ad, with writing not up to scientific standards: sure, I am ambivalent about the article itself
- If you think the gist of the article and its recommendations are wrong, I mildly disagree with you
- If you think LED flicker affecting people is in the same ballpark as concerns about WiFi or GMOs, I violently disagree with you.
LEDs are new, so research on high-frequency flicker is not yet abundant, but the few studies that do exist generally point to a higher perception threshold than previously thought. As for the health effects, I believe that part is more extrapolation than established research (since those studies can only come after the more general research on perception). So the final assessment comes down to: how bad was the article in presenting the information the way it did?
[0] https://en.wikipedia.org/wiki/Frame_rate_control