
Computational Photography explained: How software processing overtakes hardware

But there’s more to it. Smartphones now ship with multiple cameras: first dual cameras, then triple & quad setups (the Samsung Galaxy A9 was the first with four), and now even a penta-camera phone, the Nokia 9 PureView.

The software has also improved a lot over the last few years. AI-based processing detects the scene and automatically optimizes the color, contrast & sharpness of the image. Huawei introduced high-resolution lossless digital zoom in the Mate 20 Pro, which made it a pocket camera beast, and Xiaomi introduced a Moon mode in its flagships.
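To give a feel for what this automatic optimization involves, here is a simplified sketch — emphatically not Huawei's or Xiaomi's actual AI pipeline, just classical stand-ins: CLAHE for local contrast and an unsharp mask for sharpness. The filename is hypothetical.

```python
import cv2

def auto_enhance(image_bgr):
    """Simplified stand-in for scene-aware auto enhancement:
    boosts local contrast with CLAHE and sharpens with an unsharp mask."""
    # Work on the luminance channel only so colors stay natural.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)

    # Contrast: Contrast Limited Adaptive Histogram Equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)

    enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Sharpness: unsharp mask (original plus scaled high-frequency detail).
    blurred = cv2.GaussianBlur(enhanced, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(enhanced, 1.5, blurred, -0.5, 0)

cv2.imwrite("photo_enhanced.jpg", auto_enhance(cv2.imread("photo.jpg")))
```

A real phone pipeline would additionally classify the scene (food, sunset, portrait, and so on) and pick tuning parameters per scene, rather than using fixed constants like these.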

All these phones have more than one camera, which is why the real breakthrough was the Google Pixel series. Pixel smartphones have just a single camera, yet they outperform most multi-sensor camera phones. How does this happen?

The answer is simple: Google’s expertise in Artificial Intelligence & Machine Learning helped it design a camera app – Google Camera, or simply GCam.

GCam essentially captures the image in a flat color profile & then color grades it using Google’s image processing engine. The advantage of shooting flat is that it avoids blown-out highlights & crushed shadow details. It is relatively easy to recover detail from shadows, but highlight recovery is difficult – once a highlight clips at the sensor’s limit, that data is gone.
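To see why clipped highlights are unrecoverable while shadows aren’t, here is a toy NumPy illustration — not Google’s actual pipeline, just the arithmetic of clipping, with made-up luminance values:

```python
import numpy as np

# A scene with true luminance values, two of them beyond the sensor's range.
scene = np.array([0.02, 0.10, 0.50, 1.40, 2.00])

# Overexposed capture: highlights clip at 1.0 and the data is gone forever.
overexposed = np.clip(scene, 0.0, 1.0)   # [0.02, 0.10, 0.50, 1.00, 1.00]

# "Flat" underexposed capture: everything fits below 1.0, nothing clips.
flat = np.clip(scene * 0.45, 0.0, 1.0)   # [0.009, 0.045, 0.225, 0.63, 0.90]

# Shadows recover by scaling back up (at the cost of some noise)...
recovered = flat / 0.45
print(np.allclose(recovered, scene))     # True

# ...but nothing can separate the two clipped highlights again.
print(overexposed[3] == overexposed[4])  # True: information lost
```

The trade-off is noise: lifting shadows amplifies sensor noise, which is why GCam leans so heavily on processing to clean the result up afterwards.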

Other manufacturers are replicating these techniques, along with some proprietary algorithms of their own, for better image quality. In addition, many third-party developers are porting (rewriting) GCam for other devices as well.

The first DSLR-like feature is the shallow depth-of-field effect – simply known as bokeh blur. There are different ways to achieve it.

Method 1 uses software algorithms to detect the objects in focus (or the objects that should be in focus) & blurs out the rest of the image. This is what Google does on its Pixel phones, since they have only one camera sensor. Method 2 uses a secondary sensor to measure depth of field and distinguish subject from background; the data from this sensor determines which parts stay in focus. Technically, the second method should give better results.

Turns out it’s not! Yes, Google’s method actually does a better job at edge detection than the dedicated depth camera on most devices. This clearly shows the strength of AI algorithms over hardware-assisted methods.
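Here is Method 1 in miniature — a hedged sketch, not Google’s actual implementation. It assumes a subject mask is already available (in a real pipeline an ML segmentation model produces it; under Method 2 a depth sensor would supply it instead):

```python
import cv2
import numpy as np

def fake_bokeh(image_bgr, subject_mask):
    """Mask-based portrait blur: blur everything, then composite the
    sharp subject back in. subject_mask is a 2-D float array in [0, 1],
    1.0 where the subject is (assumed precomputed by a segmentation model)."""
    background = cv2.GaussianBlur(image_bgr, (0, 0), sigmaX=15)

    # Feather the mask edge so the subject doesn't look cut out.
    mask = cv2.GaussianBlur(subject_mask.astype(np.float32), (0, 0), sigmaX=5)
    mask = mask[..., np.newaxis]  # broadcast over the 3 color channels

    composite = image_bgr * mask + background * (1.0 - mask)
    return composite.astype(np.uint8)
```

The whole quality battle is in that mask: stray hairs, glasses and gaps between fingers are exactly the edges where Google’s segmentation beats noisy depth data.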

HDR photos are also the new trend. An HDR photo looks closer to what we see with our own eyes – the dynamic range is far better than a single exposure. HDR is made possible by stacking: multiple photos are taken at different exposures & merged together, so highlight and shadow detail from different frames combine into that awesome-looking landscape shot.
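A minimal sketch of exposure stacking using OpenCV’s standard tools (Mertens exposure fusion) — note this is the textbook approach, not GCam’s HDR+, which instead merges a burst of identical short exposures. Filenames are hypothetical:

```python
import cv2
import numpy as np

# Three bracketed shots of the same scene (hypothetical filenames).
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Align first: handheld frames shift slightly between shots.
cv2.createAlignMTB().process(exposures, exposures)

# Mertens exposure fusion picks the best-exposed pixels from each frame,
# keeping highlight detail from the dark shot and shadow detail from
# the bright one -- no exposure metadata or tonemapping needed.
fusion = cv2.createMergeMertens().process(exposures)

cv2.imwrite("hdr.jpg", np.clip(fusion * 255, 0, 255).astype(np.uint8))
```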

In terms of shooting video, real-time software processing can only be done in a limited way. Video processing is mostly limited to EIS (Electronic Image Stabilisation), which reduces jitter & motion blur during recording. EIS uses the chipset’s processing power to detect shaky movement & then crops the frames at the edges to cancel out the motion. Although EIS is inferior to OIS (Optical Image Stabilisation), it comes in handy.
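A toy version of that shift-and-crop idea, under stated assumptions: motion is estimated from the image itself via phase correlation, whereas real phone EIS mostly reads the gyroscope and also corrects rolling shutter. It takes a list of BGR frames:

```python
import cv2
import numpy as np

def stabilize(frames, margin=40):
    """Toy EIS: estimate each frame's shift against the previous one,
    shift it back, then crop the edges to hide the correction."""
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    offset = np.zeros(2)
    stabilized = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(prev_gray, gray)
        offset += (dx, dy)
        prev_gray = gray

        # Shift the frame to counteract the accumulated camera motion...
        m = np.float32([[1, 0, -offset[0]], [0, 1, -offset[1]]])
        h, w = frame.shape[:2]
        shifted = cv2.warpAffine(frame, m, (w, h))

        # ...then crop the borders: this is why EIS costs field of view.
        stabilized.append(shifted[margin:h - margin, margin:w - margin])
    return stabilized
```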

Samsung has recently introduced live focus tracking in videos, a feature that could previously only be done with professional DSLRs: keeping focus locked on the subject while it is in motion. This does require a lot of processing, though.
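Purely as an illustration of the tracking half of that feature — not Samsung’s algorithm — here is the simplest possible subject tracker, using template matching. A phone would feed each tracked position to the autofocus system; real trackers (KCF, CSRT, ML-based) are far more robust to scale and appearance changes. The video path and initial box are hypothetical:

```python
import cv2

def track_subject(video_path, roi):
    """Follow a subject across frames; roi = (x, y, w, h) of the subject
    in the first frame. Prints the point autofocus would be driven to."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = roi
    template = frame[y:y + h, x:x + w]  # crop the subject as a template

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Find where the subject template best matches this frame.
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (bx, by) = cv2.minMaxLoc(scores)
        print(f"focus target at ({bx + w // 2}, {by + h // 2})")
    cap.release()
```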

Read more on our Google Pixel 3A Review – The budget Pixel trolled by its own price tag

With the Google Pixel 4 & Samsung Galaxy Note 10 around the corner, it will be interesting to see which camera technologies they incorporate. The Pixel 4 is said to finally get a second camera sensor (an ultrawide), while the Galaxy Note 10 is touted to house a triple camera setup with a ToF (time of flight) sensor for real-time background blur effects.

Computational photography is only going to get better in the days to come, and we surely look forward to more AI-based enhancements in camera apps. What are your thoughts on this? Do share in the comments below.


