I think the main issue I have is this: if you're in a situation where you can actually notice the progressive passes painting on an image, you might want to rethink the quality level of the photo being delivered to that device over that connection speed.
The way I'd approach photos is this: no image should take more than 3 seconds to load, and anything taking between 1 and 3 seconds gets a small progress bar (mobile) or just an empty frame with a thin border (mobile thumbnails or any web photos).
I'd take device, connection speed, CDNs, and JPEG compression into account to ensure I meet the time requirement for the full image to load.
If the full image isn't consistently loading within that timeframe, I've already lost and need to rethink either the quality of the images being delivered or whether I'm designing the right app or site, because it's going to be a terrible user experience either way, progressive JPEG or not.
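That time budget translates directly into a byte budget. Here's a minimal back-of-the-envelope sketch; the function name and the half-second connection-overhead figure are my own assumptions, not anything stated in this thread:

```python
def max_image_bytes(throughput_kbps: float, budget_s: float = 3.0,
                    overhead_s: float = 0.5) -> int:
    """Rough upper bound on an image file size that can finish loading
    within budget_s seconds, reserving overhead_s for DNS/TLS/TTFB."""
    usable_s = max(budget_s - overhead_s, 0.0)
    # kbps -> bytes per second, times the usable window
    return int(throughput_kbps * 1000 / 8 * usable_s)

# A 1.6 Mbps (3G-ish) connection with a 3-second budget:
print(max_image_bytes(1600))  # 500000 bytes, i.e. roughly 500 KB
```

In other words, on a slow mobile connection the 3-second rule caps you at a few hundred kilobytes per photo, which is the real constraint regardless of baseline vs. progressive encoding.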
I never said baseline JPEGs display instantly after the file downloads. They render as I've described several times: top to bottom, or "chop chop chop." It's progressive JPEGs that display all at once after the file downloads, IF the browser does not support progressive rendering of progressive JPEGs.
The difference between the rendering of the two file types is not subtle.
Agreed. But would you agree that progressive JPEGs do offer a visual advantage in this case? Imagine a photo with a caption below it: with baseline, the image starts rendering far away from the caption and draws downward to "meet" it, while with progressive we get the caption in the correct place from the start, without a big gap between the caption and where the photo is rendering.
Right?! This is very interesting and I just discovered it by chance. If it were just one browser we could write it off, but it's not. I'll try to find out.
This brings up some important points. Yes, we need numbers. Let's get them.
Progressive JPEGs do not necessarily need to use more RAM. The FAQ I linked to also says, "If the data arrives quickly, a progressive-JPEG decoder can adapt by skipping some display passes." Win!
Also, why do you say "up to 3x" more CPU? Is that an estimate based on how many scans you're guessing a progressive JPEG has? A progressive JPEG can have a variable number of scans; we used to be able to set that number, which is totally cool!
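For the record, libjpeg's jpegtran can still set this via a custom scan script (`jpegtran -scans script.txt in.jpg > out.jpg`). The script below is the standard multi-scan layout described in libjpeg's "wizard" documentation, shown purely as illustration; the exact syntax and defaults depend on your libjpeg version, so check its docs before relying on this:

```text
# One scan per line: component-ids : Ss-Se, Ah, Al ;
0,1,2: 0-0,  0, 1 ;   # DC coefficients first (coarse brightness/color)
0:     1-5,  0, 2 ;   # low-frequency luma AC
2:     1-63, 0, 1 ;   # chroma AC
1:     1-63, 0, 1 ;
0:     6-63, 0, 2 ;   # remaining luma AC, still approximated
0:     1-63, 2, 1 ;   # refinement pass for luma
0,1,2: 0-0,  1, 0 ;   # final DC refinement
2:     1-63, 1, 0 ;   # final AC refinements
1:     1-63, 1, 0 ;
0:     1-63, 1, 0 ;
```

Each line is one scan, which is exactly why "how many scans does a progressive JPEG have" has no single answer: the encoder author chooses.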
As for the compression benefits, you say "progressive very rarely achieves a double-digit win after that." We web performance geeks LOVE single-digit wins, so you can't burst our bubble that way.
Yes, Mobile Safari has trouble with images. Period. But Mobile Safari does not progressively render progressive JPEGs (I wish it did). So we can make them a best practice without worrying about Mobile Safari. When the web is full of progressive JPEGs, Apple will have to deal with them. It's not an evil plan; it's the right thing to do.
When you say it doesn't look that good, are you saying that for yourself personally, or for your users? We need to think about what they see. As I've said, perceived speed is more important than actual speed, and the thing that excites me most about progressive JPEGs is not the file-size savings but the behavior of the file type in browsers that properly support it.
Progressive JPEG needs a minimum of 2 x width x height additional bytes over baseline to decode an image (maybe more; definitely 1.5-3x more than that if you're displaying coarse scans), regardless of how many scans you have or display, as it needs to save the coefficients accumulated over N-1 scans for the entire image, whereas baseline only needs to save the coefficients for a couple of 8x8 blocks at a time. Though if you're clever about it and sacrifice displaying coarse scans, you could reduce this somewhat.
If you don't display coarse scans (and if you're comparing progressive vs. baseline CPU usage, then counting coarse-scan repaints isn't a fair comparison), then approximately the only additional CPU time progressive should take comes from the extra cache misses. It's probably a wash, considering that decoding fewer coefficients and less Huffman data takes less CPU.
Maybe I'll get some numbers; I'm curious now. But unless you're serving multi-megapixel images, the additional CPU and memory don't matter. Probably not even until you're into double-digit megapixels, if then.
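To put that memory claim in perspective, here's a quick sketch using the 2 x width x height lower bound from this thread. The function is mine and the numbers are illustrative only; real decoders also buffer chroma coefficients, so treat this as a floor, not a measurement:

```python
def progressive_extra_bytes(width: int, height: int,
                            bytes_per_coeff: int = 2) -> int:
    """Lower bound on the extra decode memory a progressive JPEG needs
    versus baseline: one buffered DCT coefficient per pixel at 2 bytes
    each (chroma planes and coarse-scan display buffers push the real
    number higher)."""
    return width * height * bytes_per_coeff

# A 12-megapixel photo (4000 x 3000 pixels):
print(progressive_extra_bytes(4000, 3000))  # 24000000 bytes, ~24 MB
```

So a typical web-sized image (say 1024 x 768) costs on the order of 1.5 MB of extra decode memory, which supports the point that this only starts to hurt at multi-megapixel sizes.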
Thanks for the comment. Whatever detail you can add to this conversation is much appreciated. It's a neglected topic, and it's important for us to understand it better.
I have observed that even when you set height and width attributes, the area is not always staked out. Of course we'd assume that it is. I'll need to get you some browser and version details about this.
Are you saying that if you specify <img width="xx" height="xx">, the browser doesn't reserve exactly the right amount of space for that image? Because that's exactly what it's supposed to do; if it doesn't, then it's a browser bug.
Update: this must have been my imagination. I confirmed that all of the most common browsers will stake out the image area if height and width are set in the CSS or on the img tag.
What I'm digging about HireArt is that companies using it seem to be more organized. They think more about the position they're hiring for at the start, submit interview questions to HireArt, write better job descriptions… If companies have a clearer idea of what they want, and can communicate that to applicants, less time will be wasted all around. Here's an article from someone who has used HireArt and makes this point better than I can: http://bit.ly/Abh6rj
When you say
it would be better to have a placeholder there that is the same frame as the final image size, then have the final image appear upon completion
isn't that precisely the behavior of progressive JPEGs?