Google reveals the secrets of the AI that makes its photos so realistic


Google is revealing how the new Pixel 4 takes more realistic photos than competitors equipped with three or four lenses: AI-enriched technology paired with the phone's second camera lens.

Google was slow to add a second camera lens to its devices compared with its competitors, who do not hesitate to offer two, three or even four. The digital giant finally gave in to the trend with its new Pixel 4, after settling for a single rear lens over the three previous generations.


This second rear camera is not just there to look pretty: the Google AI team in charge of the Pixel's photo software explains on its blog how it uses the second lens together with the integrated software to better estimate depth and generate background blur (bokeh) that rivals smartphones with triple or quadruple lenses.

Previous-generation Pixels relied on a so-called "dual pixel" autofocus system to estimate the depth needed to create the blur effect in Portrait Mode, a system equivalent to "two virtual cameras placed at each end of the main lens". Each pixel is split in half, so that each "half pixel" sees the scene through a different half of the lens. The result is two slightly offset views of the same scene.
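To make the idea concrete, here is a minimal sketch (not Google's implementation, which relies on a learned model) of how two slightly offset views yield a depth cue: for each small patch, search for the shift that best aligns the two views; the larger that shift (the disparity), the closer the point.

```python
import numpy as np

def disparity_map(view_a, view_b, patch=8, max_shift=16):
    """Naive block matching between two slightly offset grayscale views.

    Returns one disparity value per patch; disparity is roughly
    inversely proportional to scene depth.
    """
    h, w = view_a.shape
    disp = np.zeros((h // patch, w // patch))
    for by in range(h // patch):
        for bx in range(w // patch):
            y, x = by * patch, bx * patch
            ref = view_a[y:y + patch, x:x + patch].astype(float)
            best_shift, best_err = 0, np.inf
            # Try every candidate shift and keep the one with the lowest error.
            for s in range(min(max_shift, x) + 1):
                cand = view_b[y:y + patch, x - s:x - s + patch].astype(float)
                err = np.sum((ref - cand) ** 2)
                if err < best_err:
                    best_shift, best_err = s, err
            disp[by, bx] = best_shift
    return disp
```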

The Pixel 4 improves and extends the Pixel 3's blur

On the Pixel 4 released this year, Google pairs a wide-angle lens with a telephoto lens placed 13 mm apart. By combining the two images, the smartphone can determine depth in all directions, whereas the Pixel 3 could only do so along one. The Pixel 4 also continues to use the dual pixels to capture data that enriches its Portrait Mode, and this reuse of the older system helps it estimate depth regardless of the orientation of the phone.
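Google does not publish its fusion code, but the principle can be sketched as follows: the dual cameras and the dual pixels each produce a depth estimate with different strengths (the 13 mm baseline is much larger; the dual-pixel baseline lies along a different axis), and the two can be blended according to how confident each cue is at every pixel. The confidence-weighted average below is an illustrative assumption, not the learned network the Pixel actually uses.

```python
import numpy as np

def fuse_depth(depth_cam, conf_cam, depth_dp, conf_dp):
    """Blend the dual-camera and dual-pixel depth maps per pixel.

    depth_cam / depth_dp: depth estimates from each cue (same shape).
    conf_cam / conf_dp: non-negative confidence maps for each cue.
    Where one cue is weak (for example, scene edges parallel to that
    cue's baseline), the other cue dominates the result.
    """
    weight_sum = np.clip(conf_cam + conf_dp, 1e-6, None)
    return (conf_cam * depth_cam + conf_dp * depth_dp) / weight_sum
```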

Google AI also explains how it improved the blur rendering on the Pixel 3 and 4 to bring it up to the level of "a professional SLR camera". The work happens in two stages: the raw image produced by HDR+ is blurred first, then tone mapping is applied. The result is brighter, crisper background blur with accurate, well-matched saturation.
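As a rough illustration of why the order matters, here is a hedged sketch with assumed helper names (not Google's pipeline code): blurring the linear HDR+ data before tone mapping lets bright out-of-focus highlights keep their energy and saturation, whereas blurring an already tone-mapped image averages compressed values and dulls the bokeh.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tone_map(linear):
    """Simple global tone curve standing in for HDR+ tone mapping."""
    return linear / (1.0 + linear)

def render_bokeh(linear_rgb, background_mask, sigma=12.0):
    """Blur the background in linear light, then tone map the composite."""
    blurred = gaussian_filter(linear_rgb, sigma=(sigma, sigma, 0))
    composite = np.where(background_mask[..., None], blurred, linear_rgb)
    return tone_map(composite)
```

In the real pipeline the blur radius varies per pixel with the estimated depth; the fixed Gaussian and binary background mask here are simplifications for illustration.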

Source: Google AI Blog



The editorial team
