"Cameras" making changes to the image like this make the discussion about the image processing pipeline during the Rittenhouse trial seem a little less bizarre.
It was not. Popular coverage described the court as debating "iPhone pinch and zoom," but in reality the discussion mostly focused on the actual forensic application that was used, and on the effect its algorithm had on the output photo. The judge displayed a good understanding of how an algorithm takes original pixels as input and produces new pixels from them.
https://www.theverge.com/platform/amp/2021/11/12/22778801/ky...
Any technically savvy person should be able to differentiate different types of upscaling algorithms.
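For instance, here's a rough sketch (Python with Pillow and NumPy; "frame.png" is a stand-in for whatever low-resolution frame you have) showing that common resampling filters produce measurably different outputs from identical source pixels:

    # Sketch: the same frame upscaled 4x with different standard filters.
    # "frame.png" is a hypothetical placeholder filename.
    import numpy as np
    from PIL import Image

    src = Image.open("frame.png").convert("RGB")
    target = (src.width * 4, src.height * 4)

    results = {
        "nearest": src.resize(target, Image.Resampling.NEAREST),
        "bilinear": src.resize(target, Image.Resampling.BILINEAR),
        "bicubic": src.resize(target, Image.Resampling.BICUBIC),
        "lanczos": src.resize(target, Image.Resampling.LANCZOS),
    }

    # Pairwise mean absolute pixel difference: the filters are clearly
    # distinguishable even though they all start from the same data.
    names = list(results)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            diff = np.abs(np.asarray(results[a], dtype=np.int16)
                          - np.asarray(results[b], dtype=np.int16)).mean()
            print(f"{a} vs {b}: mean |diff| = {diff:.2f}")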
The judge was arguing about enlarging an already recorded video, not about the merits of the original recording. This post is about image processing during the capture process.
Conflating them is disingenuous, unless we're taking very large leaps of logic.
One is the playback of data, the other is the capture of real-world signals into data. They may use technologies in the same domains, but the implementations vary dramatically, as do the possibilities.
If you have the recorded data, you can send it to any trusted playback device or software to get back a trusted scaling. You can work around or bypass any distrust in a given player's algorithms, and it's very easy to discover whether something is applying processing or not. There's still the risk of intentionally faked videos, but the discussion here is about real-time processing introducing artifacts.
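To make the "easily discoverable" part concrete, here's a minimal sketch (Python with Pillow/NumPy; the two PNG filenames are hypothetical exports of the same frame from two different players or tools): hashing the decoded pixels tells you immediately whether one pipeline altered anything.

    import hashlib
    import numpy as np
    from PIL import Image

    def pixel_digest(path: str) -> str:
        # Hash the raw RGB pixel values rather than the file bytes, so
        # container/compression differences don't matter.
        pixels = np.asarray(Image.open(path).convert("RGB"))
        return hashlib.sha256(pixels.tobytes()).hexdigest()

    # Hypothetical exports of the same frame from two playback pipelines.
    a = pixel_digest("frame_player_a.png")
    b = pixel_digest("frame_player_b.png")
    print("identical decode" if a == b else "one pipeline altered the pixels")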
With image capture, though, there's no such thing as "truth". Even RAW data isn't truth; it's just less processing, and you can't escape processing altogether. Even professional full-frame cameras do significant signal processing between the photosites and the recorded RAW file. The same goes for film.
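As a rough illustration using the rawpy library (a LibRaw wrapper; "photo.dng" is a stand-in filename): even going from a RAW file to a viewable RGB image already involves demosaicing, white balance, and gamma choices.

    import rawpy

    with rawpy.imread("photo.dng") as raw:  # hypothetical RAW file
        # The sensor data is a single-channel Bayer mosaic: one color
        # sample per photosite, no RGB pixels yet.
        mosaic = raw.raw_image_visible
        print("sensor mosaic shape:", mosaic.shape)

        # Demosaic + white balance + gamma: already a stack of
        # processing choices, not "the truth".
        rgb = raw.postprocess(use_camera_wb=True)
        print("rendered image shape:", rgb.shape)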
The only thing a court can do is put strong guidelines in place for proving the honesty of the content. You can't disallow processed imagery, because all images are processed the second they're recorded.
>Any technically savvy person should be able to differentiate different types of upscaling algorithms.
Even the company that makes the software couldn't explain what was done to the picture when their "expert" was asked during the trial. There is a wide range of methods for getting more detail out of a blurry picture, including ML/AI-based algorithms that indirectly pull extra detail from other pictures.
The device in question was an iPad; the company that made the software (Apple) was not involved, and the "expert" was a third party explaining the standard enlargement methods.
If the judge mistrusted the enlargement method, he should have ordered them to display it on another device or software.
Real-time video upscaling is very standard filtering that doesn't introduce extra hallucinated details. At most, some TVs use ML to tune their sharpening and color rendition, but that can always be disabled. The iPad has never been shown to use anything like that for video playback, and even if it did, the courts should have a standard video player to present such evidence with standard filtering.
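For anyone doubting that, here's a hand-rolled sketch of bilinear upscaling (NumPy, simplified, not production code): every output pixel is just a weighted average of its four nearest input pixels, so nothing from outside the recorded frame can show up in the result.

    import numpy as np

    def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
        # img: array of shape (H, W, C); returns a float array of shape
        # (H*factor, W*factor, C).
        h, w = img.shape[:2]
        out_h, out_w = h * factor, w * factor
        # Map each output coordinate back onto the source grid.
        ys = np.linspace(0, h - 1, out_h)
        xs = np.linspace(0, w - 1, out_w)
        y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
        x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
        wy = (ys - y0)[:, None, None]   # vertical blend weights in [0, 1]
        wx = (xs - x0)[None, :, None]   # horizontal blend weights in [0, 1]
        # The four neighbouring source pixels for every output pixel.
        a = img[y0][:, x0]          # top-left
        b = img[y0][:, x0 + 1]      # top-right
        c = img[y0 + 1][:, x0]      # bottom-left
        d = img[y0 + 1][:, x0 + 1]  # bottom-right
        # A weighted average only -- no data from outside the frame.
        return ((1 - wy) * (1 - wx) * a + (1 - wy) * wx * b
                + wy * (1 - wx) * c + wy * wx * d)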
The judge's non-technical stance isn't grounded in reality, and again, capture-time post-processing should be viewed completely independently of playback-time processing.
Yes, yes really. When real resolution is being substituted with the best guess of a completely closed-source image processor, the court should be made aware of it.
This sounds like a weird rationalization for an absurd case of technical ignorance. Like when people defended nuking hurricanes or using UV lights as a Covid therapy.
Fact is, you have no idea what kind of unsolicited postprocessing the camera in the Rittenhouse trial might or might not have performed, and neither did the court.
It's a huge potential problem, and getting worse by the day.
Agreed, as we've been living in the era of deepfakes for a while. I shudder to think how far computational photography could advance toward blurring the context for any unsuspecting user, whether accidentally or intentionally.
Except that, specifically in the Rittenhouse trial, the dispute was about playback processing, NOT capture-time processing.
Capture-time processing is also verifiable, in terms of what stack a particular device uses, through metadata, and as such it adds few extra problems beyond other potentially doctored evidence, which has been possible for years without smartphones.
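E.g., a minimal sketch with Pillow ("evidence.jpg" is a hypothetical filename) of reading the EXIF tags that record which device and software stack produced a file; metadata can of course be stripped or forged, so it supports provenance rather than proving it.

    from PIL import Image
    from PIL.ExifTags import TAGS

    exif = Image.open("evidence.jpg").getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if name in ("Make", "Model", "Software", "DateTime"):
            print(f"{name}: {value}")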
Do we question how color film would portray an image, for example? Does a particular lens choice affect the truth of an image? A specific crop? There's no such thing as a perfectly true photo or video.
If a camera can replace a head with a leaf, nothing taken with that camera can be trusted, especially in court, ever. Any changes to photos or videos should be avoided; this is the norm in court cases. You should read a proper article about it instead of the clickbaity ones.
No, it didn't. The point still stands that AI-enhanced images have a credibility and admissibility problem. This one example turning out not to have been altered in the way we thought by the enhancer doesn't invalidate the broader questions raised by the discussion.