Adobe’s goal to make ‘creative photography’ accessible

Adobe’s vice president has flagged his goal to ‘democratise creative photography’ by designing an AI-based ‘universal camera app’ that uses computational photography processes to serve the needs of all photographers.

Marc Levoy.

As the director of Adobe’s Emerging Products Group, Marc Levoy [right] provided interesting insight into his vision for the future of photo editing. In an interview on the Adobe Blog, he also made some slightly controversial statements, dismissing the existence and merit of ‘straight photography’.

Who is Marc Levoy?

Levoy may not be a household name among photographers, but as a computer graphics researcher he’s played a major role in smart phone camera development. He spent most of the last decade working at Google as a ‘tastemaker’, AKA lead engineer, for the Pixel smart phone camera. During this time, 2014 – 2020, he led the team that designed HDR+, Portrait Mode, and Night Sight in the Google Pixel.

This work, he claims, contributed to democratising ‘good photography’. As smart phone cameras utilise computational processes, everyday users can now achieve quality image results that would previously have required some knowledge of camera settings. The Pixel’s Night Sight, for example, allows users to adequately capture handheld astrophotography, which would ordinarily require a DSLR, a tripod, and manually programmed settings.

Earlier in Levoy’s career he launched a Google-funded Stanford research project called CityBlock, which would later be commercialised to become Google Street View. And before that – in the ’90s – Levoy specialised in cartoon animation, volume rendering, and 3D scanning. Read more about his career here.

He was recently elected to the National Academy of Engineering, a private, peer-elected non-profit institution, in recognition of his work in computer graphics and digital photography.

Adobe ‘democratising creative photography’

Levoy joined Adobe in mid-2020, and brought his expertise – and corporate philosophy – to the company. He partly credits the smart phone camera’s rapid development to the ‘culture of publication’ among manufacturers, including Google, ‘which allowed other companies to become “fast-followers”’. At Adobe he’s encouraging his team to do the same and publish research for peer review.

‘At Google and now here at Adobe, I’ve tried to hire mostly PhD superstars. They’re smart, they’re creative, and they think of things that others haven’t,’ Levoy said. ‘But these folks want to be recognised for what they’ve invented, and they want to talk about them at conferences, and get feedback from their peers. In short, they want to be part of a research community. To attract this caliber of people, I need to let them publish. Industrial Light and Magic and Pixar under Ed Catmull used the same strategy. In fact, I learned it from him.

‘Does this strategy let competitors catch up faster? Sometimes yes, and this is arguably why Apple’s smartphone photography got good so quickly over the last 3 years. How can a team that publishes respond to this threat? Perhaps delay publication a bit. Otherwise, run faster and breathe deeper. Invent more cool stuff.’

Sharing secrets with competitors is an interesting strategy for Adobe, given its photo editing software monopoly wasn’t achieved by playing nice with others. But this approach is seemingly essential for Levoy to accelerate his goal of ‘democratising creative photography’, which means making it accessible to the masses.

‘Adobe is an attractive place to do this, because it caters to people who are trying to take their photography to the next level, and are therefore willing to spend a bit longer composing and capturing a picture.’

Levoy doesn’t divulge precisely what his team is working towards. But he points out that Adobe is primed to develop tools that ‘marry pro controls to computational photography image processing pipelines’. And, obviously, artificial intelligence (AI) plays a central role.

Good photo editing still requires photographers to possess a refined skill set. Many even label editing an ‘art’. While one-click presets and automated processes are becoming more powerful and intuitive, most professional photo editing remains a manual endeavour. There is an unquenchable thirst among enthusiast photographers to learn photo editing, and there is no shortage of workshops and masterclasses to attend. Quality photo editing is one of the few remaining barriers to entry for certain styles of professional photography, and it appears that Levoy’s vision aims to lift those barriers.

AI in photography seems relatively new, but most mirrorless full-frame cameras now come with various AI-enabled face/eye/animal/object autofocus features. Reviews were critical of early attempts, dismissing them as clunky gimmicks, but these features have since come to be praised as serious tools. Levoy highlights automated white balancing as another AI achievement in photography.

‘Deciding how a scene was illuminated, and partly correcting for strongly colored illumination, is what mathematicians call an ill-posed problem. Is that park bench yellow because it was painted yellow, or because it was painted white, but is being illuminated by a yellow sodium vapor streetlamp? Until 5 years ago white balancing was solved mainly by seat-of-the-pants heuristics. As part of our paper about Night Sight at Google we described an AI-based white balancing algorithm. It worked well. There are undoubtedly other cameras that use AI-based white balancing. It’s a big success story for AI in photography.’
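As a rough illustration of the gap Levoy describes, here is a minimal sketch – emphatically not Google’s or Adobe’s algorithm – of a classic ‘seat-of-the-pants’ heuristic: grey-world white balancing, which simply assumes the average colour of a scene should be neutral. It fails exactly in cases like the yellow park bench, which is why learned approaches that infer the illuminant from image content tend to do better.

```python
import numpy as np

def gray_world_white_balance(image):
    """Scale each channel so the image's average colour becomes neutral grey.

    image: float RGB array of shape (H, W, 3) with values in [0, 1].
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)   # average R, G, B of the frame
    gains = channel_means.mean() / channel_means        # per-channel correction gains
    return np.clip(image * gains, 0.0, 1.0)

# Example: a scene tinted by warm, yellowish illumination.
tinted = np.random.rand(480, 640, 3) * np.array([1.0, 0.9, 0.6])
corrected = gray_world_white_balance(tinted)
```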

Camera AI is now being used to detect skies and apply ‘special processing’, he says. And smart phone cameras use ‘AI to classify scene type’ – food photos, for instance, are processed to make them ‘look appetising’.

‘AI is also used to estimate depth maps in many phones, which helps them defocus backgrounds for portraits. Several companies are working on AI-based relighting of portraits, although so far with mixed results. Adobe is pushing the boundary in this area with its Sensei-powered neural filters in Photoshop, but relighting is still a hard problem.’
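To unpack how an estimated depth map ‘helps defocus backgrounds’, here is a simplified, hypothetical sketch: blur the whole frame, then blend the sharp and blurred copies using each pixel’s distance from the focus plane as the weight. Real portrait modes are far more sophisticated, but the underlying idea is similar.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth, depth_tolerance=0.1, blur_sigma=8.0):
    """Blend a sharp frame with a blurred copy, weighted by distance from the focus plane.

    image: (H, W, 3) float RGB in [0, 1]
    depth: (H, W) float depth map normalised to [0, 1] (as a phone's AI might estimate)
    """
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma=blur_sigma) for c in range(3)], axis=-1
    )
    # Pixels near the focus depth stay sharp; pixels far from it take the blurred copy.
    weight = np.clip(np.abs(depth - focus_depth) / depth_tolerance, 0.0, 1.0)[..., None]
    return (1.0 - weight) * image + weight * blurred

# Example with synthetic data: subject at depth ~0.3, background at ~0.9.
h, w = 480, 640
image = np.random.rand(h, w, 3)
depth = np.full((h, w), 0.9)
depth[:, w // 3 : 2 * w // 3] = 0.3   # crude "subject" region
portrait = synthetic_bokeh(image, depth, focus_depth=0.3)
```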

AI creating images

On the topic of AI and photography, but unrelated to Levoy and Adobe, another hot area of technological development is generating artificial images. Below are recent images released by stock agency Smarterpix, and the results are amusing – they just seem a bit off.

Source: Smarterpix.

These synthetic humans are made from bits and pieces of real humans – an ear or two from Bob, a chin from Maria, someone else’s nose, etc – as ‘the real-life models used to create the datasets signed biometric release forms. The result is legally clean datasets, also available for licensing, and cost-efficient, litigation-free synthetic content.’

Part of the pitch is that stock photography in five to 10 years will almost entirely use AI-generated models instead of humans. While not having to pay models or secure model releases may make life easier for the financially-strapped stock photographer, there is no ‘window to the soul’ with these synthetic people.

Straight photography ‘a myth’

Levoy’s background has undoubtedly left him with a unique perspective on photography. But some may take issue with his dismissal of the ‘straight photograph’. Here’s the full excerpt by Levoy when asked about ‘balancing technology that improves photography with an artist’s creativity and individual expression’:

‘There’s a myth in photography of the “straight photograph”. Maybe the myth grew out of Ansel Adams and the f/64 club he founded in 1932. Similarly, cameras often have a processing option called “Natural”. But there’s no such thing as a straight photograph, or “natural” processing. The world has higher dynamic range (the brightness difference between darks and lights) than a photograph can reproduce. And our eyes are adaptive sensing engines. What we think we see depends on what’s around us in the scene — that’s why optical illusions work.

‘As a result, any digital processing system adjusts the colours and tones it records, and these adjustments are inevitably partly subjective. I was the primary “tastemaker” for Pixel phones for several years. I liked the paintings of Caravaggio, so Pixel 2 through 4 had a dark, contrasty look. Apple certainly has tastemakers — I know some of them.

‘The key to artistic creativity lies in having control over the image. Traditionally this happens after the picture is captured. Adobe built a company on this premise. If you capture RAW, you typically have more control, so Adobe Lightroom specialises in reading RAW files (including its own DNG format).

‘What’s exciting about computational photography is that, far from taking control away from the artist, it can give artists *more* control, and at the point of capture. Pixel’s Dual Exposure Controls are one example of this — separate controls for highlights and shadows, rather than a single control for exposure compensation. Apple’s Photographic Styles, which are live in the viewfinder, are another example. This is just the tip of the iceberg. We’ll start seeing more controls, and more opportunity for artistic expression, in cameras. I can’t wait!’
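For readers unfamiliar with those Dual Exposure Controls, the sketch below – illustrative only, not the Pixel’s actual processing – shows the difference between a single exposure-compensation gain, which scales every pixel equally, and separate shadow/highlight controls that can lift dark tones while holding bright ones back.

```python
import numpy as np

def single_exposure_compensation(image, gain):
    """One global gain: brightening shadows also brightens (and may clip) highlights."""
    return np.clip(image * gain, 0.0, 1.0)

def dual_tone_adjust(image, shadow_gain=1.5, highlight_gain=0.8, pivot=0.5):
    """Blend separate shadow and highlight gains by per-pixel brightness,
    a rough stand-in for independent highlight/shadow controls."""
    brightness = image.mean(axis=-1, keepdims=True)   # simple per-pixel luma proxy
    weight = np.clip(brightness / pivot, 0.0, 1.0)    # 0 in deep shadows, 1 at/above the pivot
    gain = (1.0 - weight) * shadow_gain + weight * highlight_gain
    return np.clip(image * gain, 0.0, 1.0)

# Example: lift shadows while pulling highlights back slightly.
frame = np.random.rand(480, 640, 3)
adjusted = dual_tone_adjust(frame, shadow_gain=1.5, highlight_gain=0.85)
```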

Do you agree? Let us know in the comments below.

Read the full interview here.
