Simply adjusting light levels and sharpness wouldn't produce this good/clear of a photo in the conditions presented - AI/ML image post-processing is a hard requirement for sensors like these.
The "hard requirement" is just multi-frame noise reduction, which has been around for a lot longer than the AI/ML hype wave, and has always had a risk of producing similar artifacts to this one.
Multi-frame noise reduction only gets you so far. Rotating objects, for example, present huge issues and produce their own artifacts. In the end there's no free lunch; any system that improves the resulting image is making trade-offs.
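To make the trade-off concrete, here's a toy sketch of multi-frame noise reduction (this is an illustration of the general technique being discussed, not any vendor's actual pipeline): averaging N aligned frames cuts random noise by roughly sqrt(N), but only if nothing in the scene moved between frames.

```python
# Toy multi-frame noise reduction: average several noisy captures of
# the same static "scene". All names and values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)  # true scene luminance
frames = [truth + rng.normal(0, 10, truth.shape) for _ in range(8)]

single = frames[0]
stacked = np.mean(frames, axis=0)  # averaging 8 frames -> noise drops
                                   # by roughly sqrt(8) ~ 2.8x
print(np.std(single - truth))   # ~10
print(np.std(stacked - truth))  # ~3.5
```

If an object rotates or moves between frames, the same averaging smears it into ghosting artifacts, which is exactly the kind of trade-off mentioned above.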
I don't know about this - RAW files are recordings of sensor values. Those sensor values are accurate of what light the sensor measured.
Then the sensor values are converted to a JPEG. So it's still an accurate rendition of the light - even though yes, it is a rendition of the light.
But to completely replace some sensor values with some computer-generated values is a different ballgame, IMO. It's more akin to Photoshop editing, as opposed to Lightroom.
This is really splitting hairs, but even our eyes are an interpretation of reality.
The camera sensor does not record the wavelengths of the light it is capturing, only RGB values. We can only reproduce the picture in a way that is convincing to our eyes, not reproduce the light that was originally captured.
The least that modern phone cameras do is to blend multiple RAW files into a single picture, to improve various metrics. That brings the risk of producing results like the one seen here.
I agree with your point, but just to be clear for people:
When you bring that RAW photo into something like Lightroom or Capture One they’re automatically applying a base curve to the photo before you do anything.
In Capture One you can set that to “flat” which I believe is fairly unprocessed, and it takes a lot of work to get it to a usable state from there. They also have other options, and they recently changed their default setting and it’s pretty incredible how different it is from their old default.
There is definitely some magic sauce, though, in de-Bayering [1] the RAW data and then playing games with color spaces and color profiles to end up with that final JPEG.
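For anyone curious what that pipeline looks like in the simplest possible terms, here's a minimal sketch of de-Bayering plus a base curve. This is a deliberately naive assumption-laden toy (real converters do edge-aware demosaicing, white balance, color profiles, and noise reduction on top of this):

```python
# Naive RAW-to-viewable sketch: demosaic an RGGB Bayer mosaic, then
# apply a gamma "base curve". Purely illustrative, not a real converter.
import numpy as np

def debayer_rggb(mosaic):
    """Crudest possible demosaic: take each channel from 2x2 blocks."""
    r = mosaic[0::2, 0::2]
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]) / 2.0
    b = mosaic[1::2, 1::2]
    return np.stack([r, g, b], axis=-1)

def base_curve(linear, gamma=2.2):
    """Lift linear sensor values toward display-ready tonality."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

mosaic = np.random.default_rng(1).random((4, 4))  # fake RGGB sensor data
rgb = base_curve(debayer_rggb(mosaic))
print(rgb.shape)  # (2, 2, 3)
```

Even this toy version shows why "unprocessed" is slippery: the moment you demosaic and apply any curve at all, you're interpreting sensor values rather than displaying them.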
I agree with your point though. I dislike "computational photography".
They surely are. But not all interpretations are alike. I was recently looking at (scans of) analog pictures I took years ago using an entry-level analog camera, and apart from the white balance being off, the general skin tone, texture, and shadows at least look very realistic and not like some cardboard version of skin.
I was thinking in terms of creatures from Nausicaa and Mononoke but ... that works too I guess ... The difference is whether those accepted the beings live or nothing can be done to those who were touched by it, but that could be just a perspective difference.
Yeah, even your eyes don't reproduce reality perfectly but at that point it's just semantics. He means he wants to see his daughter the same way through the camera that he sees her through his eyes, in real life, otherwise known to him as "reality".
I don't think it's unreasonable to let the context of the statement allow us to disregard "reality" as it pertains to quantum wave functions, in favor of something more human. There's a large difference between something whose goal is to capture what the eye sees and something whose goal isn't. It feels like Apple thinks they know what's better for us than we do, which I admit they're perfectly capable of getting right in certain scenarios. But when Apple's choices don't align with, or go directly against, our wishes, it's uncomfortable. It feels like your "reality" is being ripped from your hands in favor of what some giant corporation thinks your reality should be, for any number of opaque or intentionally obscured reasons.
The amount of post-processing your brain does to make you believe you see far more than you actually do in far higher resolution than you do, whilst combining two 2D views into a pseudo-3D view, is incredible.
There are straight-up blank spots in our raw vision, we see ~no color at the edges, etc., and that's just the _start_ of what our eyes/brains elide. Really crazy stuff.
As far as I understand the goal of using ML augmentation in camera phones is to capture what the eye sees. It's to compensate for the limitations of the hardware which on its own is not able to produce a true-to-eye result. You seem to be implying that the goal is to improve the photo to be better than reality but I don't think that's the case.
Right, but it can only guess at what the eye sees when hardware limitations don't allow it to capture enough information. Maybe most of the time it guesses right, but it's still a guess, and it appears sometimes it guesses wrong. Really wrong.
If it's combining information from multiple shots or multiple camera sensors, isn't that more like sensor fusion than "guessing"? I think calling it guessing is an uncharitable interpretation of what's happening.
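One common reading of "sensor fusion" (an illustrative sketch, not a claim about what any particular phone does): combine two aligned captures with per-pixel weights based on their estimated noise, so each output value is grounded in measured data rather than invented.

```python
# Inverse-variance fusion of two aligned captures of the same scene.
# Noise levels and weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
truth = np.linspace(0.1, 0.9, 100)
a = truth + rng.normal(0, 0.05, 100)  # noisier capture (e.g. short exposure)
b = truth + rng.normal(0, 0.02, 100)  # cleaner capture (e.g. long exposure)

# Weight each capture by 1/variance: the cleaner one counts for more.
wa, wb = 1 / 0.05**2, 1 / 0.02**2
fused = (wa * a + wb * b) / (wa + wb)

print(np.std(a - truth), np.std(fused - truth))
```

Under this reading the result is at least as faithful as the best input, which supports the "fusion, not guessing" framing; generative in-painting of missing detail would be a genuinely different operation.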
That's not even remotely close to AI/ML processing, and also is something you have been able to accomplish-- manually or with presets-- in Lightroom for ages.
Yes but the knowledge of how it was created affects how I see it. If it’s just denoising etc it feels different than if I know it’s painted in some other data.
That's not necessarily true. I don't know the specifics of how it's implemented, but it could just be selecting pixels from different frames in the shot.
The photographer replied later that it's due to a leaf hanging down from a branch in the foreground. There was a leaf between the woman's head and the camera lens. Nothing weird happened.
I think this particular bug demonstrates a massive gap between what people think photo processing is and what it has very recently become.