What is an AI camera and how does AI photo editing work?

Artificial intelligence (AI) is everywhere, and if you don’t have an AI smartphone yet, you probably will soon. Your phone’s software already uses AI to make decisions on your behalf, and Adobe’s just-launched Photoshop Camera app uses AI to identify objects and scenes in your images and suggest “lenses” (digital effects) for comic and creative results.

Is this all just marketing hype, or is AI in a smartphone – and especially in its camera – something we should all aspire to? As the term “AI” is increasingly used not just in camera phones but in all types of cameras, it’s worth knowing what AI actually does for your photos.

AI has blurred the lines between image capture, image enhancement, and image manipulation. In photo processing it’s being used to blend, enhance and “augment” reality, to make smarter object selections, to adjust processing parameters to match the subject, and to help you find images automatically based on their content rather than manual keywords and descriptions. It already checks what you are photographing and makes its own decisions about how to handle it.

What is AI?

AI is a branch of computer science that studies whether we can teach a computer to think, or at least to learn. It is generally divided into subsets of technologies that attempt to emulate what humans do, such as speech recognition, speech-to-text dictation, image recognition, face scanning, computer vision, and machine learning.

There are quite a few buzzwords on this topic. “AI,” “deep learning,” “machine learning,” and “neural networks” are all intertwined in this new branch of technology.

What does this have to do with cameras? Computational photography and time-saving photo processing – that’s what. And voice activation.

Voice-activated cameras

The ability of a computer to understand human speech is a form of AI, and it has crept into cameras in recent years.

Smartphones have offered Google Now and Siri for several years, while Alexa arrived in homes via Amazon Echo speakers. Action cameras have jumped on the bandwagon too: GoPro cameras and even dash cams can perform actions when you say simple phrases like “start video” or “take a photo.”

It all makes sense, especially for action cameras, where hands-free operation makes them much easier to use. But is it really AI? Technically it is, but until recently voice-activated devices were simply referred to as “smart.” In some cases, you can now say very specific things, such as “record slow-motion video” or “take a photo in low light,” but an AI camera needs to do rather more than that to earn the name.
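
As a rough illustration of how such a system hangs together, here is a toy voice-command dispatcher in Python. It uses the third-party speech_recognition package; the camera actions are just placeholder functions, not any real camera’s API:

```python
import speech_recognition as sr

# Placeholder actions: a real camera would expose these through its
# firmware or SDK, not as Python functions.
def start_video():
    print("recording started")

def take_photo():
    print("photo captured")

COMMANDS = {
    "start video": start_video,
    "take a photo": take_photo,
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    audio = recognizer.listen(source)

# Transcribe the phrase and dispatch the matching action.
phrase = recognizer.recognize_google(audio).lower()
COMMANDS.get(phrase, lambda: print(f"unrecognized: {phrase}"))()
```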

AI software

AI also means new types of software, developed first to make up for smartphones’ lack of zoom lenses. “Software is becoming more and more important for smartphones because they lack optics. That’s why we’ve seen the rise of computational photography trying to replicate optical zoom,” says Arun Gill, senior market analyst at Futuresource Consulting. “Top-end smartphones increasingly have dual-lens cameras. However, the Google Pixel 3 uses a single camera lens with computational photography to replicate an optical zoom and add different effects.”

Since the Pixel 3, multi-camera arrays and computational imaging have combined to create a hybrid technology that replicates many of the depth-of-field and lens effects you get from larger cameras. A camera phone is no longer just a camera. It’s a computing, analyzing, “thinking” device that captures the scene not only as it is, but as it thinks you want it, or as it thinks you should want it …

AI can be like an omniscient assistant. After a while, you may wonder who is actually in charge.

The world isn’t necessarily ready for the full impact of AI cameras. Google’s wearable Google Clips camera used AI to capture and keep only particularly memorable moments. Its algorithm understood the basics of photography, so no time was wasted processing images that would never make the final cut of a highlight reel: photos with a finger in the frame or out-of-focus shots were automatically deleted, in favor of those that fit the general rule-of-thirds concept for framing a photo.
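
One of those basic rules is easy to approximate in code. Below is a toy sharpness check in the spirit of the Clips cull, using the variance of the Laplacian, a common focus heuristic; the threshold is an invented value that would need tuning for any real camera:

```python
import cv2

def is_sharp(path: str, threshold: float = 100.0) -> bool:
    """Flag out-of-focus frames: blurry images have little high-frequency
    detail, so the Laplacian response varies less across the frame."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() > threshold

# Keep only the frames worth processing further (placeholder file names).
keepers = [f for f in ("shot1.jpg", "shot2.jpg") if is_sharp(f)]
```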

Creepy and controlling? Some thought so. In any case, Google pulled the camera in 2019. The question is not whether AI is powerful enough to do the things we want, but whether we are still willing to hand over that much power to a machine… or to the company that owns and operates the AI algorithms behind it.

What is computational photography?

Computational photography is a digital imaging technique that uses algorithms to replace optical processes and seeks to improve image quality by using image processing to identify the content of an image.

“It’s about taking studio effects that you achieve in Lightroom and Photoshop and making them available to others at the touch of a button,” says Simon Fitzpatrick, senior director of product management at FotoNation, which provides much of this computational imaging technology to camera brands.
“So you can smooth skin and get rid of blemishes, but not just by blurring them – you get texture.” In the past, the technology behind “smooth skin” and “beauty” modes essentially blurred the image to hide imperfections. “Now it’s about creating believable looks, and AI plays a key role in that,” Fitzpatrick says. “For example, we use AI to train algorithms on the features of people’s faces.”
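
The difference Fitzpatrick describes is easy to demonstrate. A plain Gaussian blur hides blemishes by destroying texture, while an edge-preserving filter smooths tone but keeps detail. This sketch uses OpenCV’s bilateral filter as a simple stand-in for FotoNation’s proprietary, AI-trained approach; the file name is a placeholder:

```python
import cv2

face = cv2.imread("portrait.jpg")

# The old "beauty mode" look: blur everything, texture included.
naive = cv2.GaussianBlur(face, (15, 15), 0)

# Edge-preserving smoothing: flattens skin tone while keeping detail.
textured = cv2.bilateralFilter(face, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("naive_smooth.jpg", naive)
cv2.imwrite("textured_smooth.jpg", textured)
```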

LG has already used AI for imaging on the LG V30S ThinQ. Users can select a professional image in its Graphy app and apply the same white balance, shutter speed, aperture, and ISO to their own shots. LG also introduced Vision AI, an image recognition engine that uses a neural network trained on 100 million images to recommend camera settings. It even detects reflections in the image, the shooting angle, and the amount of available light.
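
In outline, a recommender like this is a classifier feeding a lookup table. The sketch below is purely illustrative: classify_scene() stands in for a trained neural network, and the settings table is invented rather than LG’s:

```python
# Invented scene-to-settings table; a real system learns these mappings.
SETTINGS_BY_SCENE = {
    "sunset":    {"white_balance": "cloudy", "shutter": "1/125",  "iso": 100},
    "low_light": {"white_balance": "auto",   "shutter": "1/30",   "iso": 1600},
    "action":    {"white_balance": "auto",   "shutter": "1/1000", "iso": 400},
}

def classify_scene(image_path: str) -> str:
    """Stand-in for a trained scene-recognition network."""
    return "sunset"

def recommend_settings(image_path: str) -> dict:
    scene = classify_scene(image_path)
    return SETTINGS_BY_SCENE.get(
        scene, {"white_balance": "auto", "shutter": "1/125", "iso": 200})

print(recommend_settings("preview_frame.jpg"))
```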

Depth sensors and blurred backgrounds

In recent years, many phones have used two or more camera lenses to create aesthetically pleasing images with a blurred background around the main subject. People (and therefore Instagram) love blurry backgrounds, but instead of relying on dual-lens hardware or shooting with a DSLR and manually controlling depth of field, AI can now do it for you.

Commonly referred to as the “bokeh” effect (from the Japanese for blur), this is achieved by machine learning that identifies the subject and blurs the rest of the image. “We can now simulate bokeh using AI-based algorithms that separate people from the foreground and background, so we can achieve an effect that looks very similar to a portrait in a studio,” Fitzpatrick says. With the latest smartphones, you can do this for photos taken with either the rear or front (selfie) camera.
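
A minimal version of that pipeline can be assembled from off-the-shelf parts: segment the person, blur everything else, and composite the two. The sketch below uses torchvision’s pretrained DeepLabV3 model as a stand-in for the proprietary segmentation networks phone makers train; the file name is a placeholder:

```python
import numpy as np
import torch
import torchvision
from PIL import Image, ImageFilter
from torchvision import transforms

# Pretrained semantic segmentation model (PASCAL VOC label set).
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT")
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_bokeh(path: str, blur_radius: int = 15) -> Image.Image:
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))["out"][0]
    # Class 15 is "person" in the VOC label set this model was trained on.
    person = (out.argmax(0) == 15).numpy().astype(np.uint8) * 255
    mask = Image.fromarray(person, mode="L")
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    # Keep the sharp pixels where the mask says "person"; blur the rest.
    return Image.composite(img, blurred, mask)

fake_bokeh("portrait.jpg").save("portrait_bokeh.jpg")
```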

“People call it bokeh, but you don’t get the true blur you get with a DSLR, where you can change the depth. With a phone, you can only blur the background,” Gill says. “But a small and growing number of photographers are really impressed with it and use an iPhone X for everyday shooting. It’s only when they’re working that they get out their DSLR.”

AI cameras can automatically blend HDR images in bright light, switch to a multi-frame shooting mode in low light, and use the magic of computational imaging to create a smooth zoom effect with two or more camera modules.
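
The HDR-blending step, at least, can be reproduced with open-source tools. This sketch uses OpenCV’s Mertens exposure fusion, which merges bracketed shots without needing to recover the camera’s response curve; the file names are placeholders:

```python
import cv2
import numpy as np

# Three frames of the same scene at different exposures (hypothetical files).
frames = [cv2.imread(name) for name in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation and exposedness,
# then blends the frames into a float image roughly in the [0, 1] range.
fused = cv2.createMergeMertens().process(frames)

cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```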

What about DSLRs and other “real” cameras?

Automatic red-eye removal has been in professional DSLR cameras for years, as has face detection and, more recently, even smile detection, which fires the shutter automatically when the subject flashes a grin. All of this is AI. Will Nikon and Canon ever adopt more advanced AI for their flagship DSLRs? After all, it took many years for Wi-Fi and Bluetooth to show up on DSLRs.

While we wait, a Kickstarter-funded “smart camera assistant” accessory called Arsenal aims to fill the gap. “Arsenal is an accessory that allows an interchangeable-lens camera (such as a DSLR) to be controlled wirelessly from a mobile device. It uses machine learning algorithms to get the perfect shot,” Gill says. “It compares the current scene with thousands of past images, uses image recognition to identify a specific subject, and applies the right settings, such as a fast shutter speed when wildlife is detected.”

Canon, meanwhile, has relied heavily on AI technology for the EOS-1D X Mark III’s state-of-the-art autofocus system. More specifically, “deep learning”: the system was trained on a library of professional photos before it shipped, so its learning has a fixed end point, whereas true artificial intelligence describes a machine that can continue learning on its own.

However, it can be difficult to separate true AI from sophisticated automation. For years, compact camera manufacturers have offered various subject-oriented scene modes that the camera can select automatically. Is this “intelligence,” or simply a slightly more advanced reading of metering, subject movement, and focus distance? Multi-pattern metering systems typically base their complex measurements of light distribution on thousands of real-world photos, in effect using a “deep learning” process before the term was invented.

Who is AI photography for?

Everyone. For starters, it’s about democratizing photography. “In the past, photography has been the domain of those with the expertise to create different types of images with a DSLR. AI has started to bring the effects and capabilities of advanced photography to more people,” says Fitzpatrick.

Does this mean Adobe Photoshop and Lightroom will soon be obsolete? Absolutely not; AI is a complementary technology that already makes photo editing much more automated. One of FotoNation’s partners is Athentech, whose AI-based “Perfectly Clear” technology performs automatic batch corrections that mimic the human eye. As a plugin for Lightroom, it specifically aims to reduce the amount of time photographers spend sitting in front of computers manually editing images. “Professional photographers make money when they’re on the go, not when they’re processing images,” Fitzpatrick says. “AI makes professional-looking creative effects more accessible to smartphone users and helps professional photographers maximize their viability.”

AI is quickly becoming an overused term in the photography world. For now, it applies mostly to smartphone cameras, but the algorithms and the level of software automation the technology allows will soon prove irresistible to most of us. It may not be time to throw out the DSLR just yet, but AI looks set to change the way we take photos.

What’s more, it may soon take over the editing and curating of our existing photo libraries. That process has already begun: Lightroom CC uses Adobe’s server-based Sensei object recognition system to identify images by subject, so you don’t have to spend hours manually adding keywords. AI may be an overused term, often shorthand for nothing more than the latest, most advanced software, but it promises something genuinely valuable for photographers: more free time to take more and better photos.
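
To make the idea concrete, here is a rough sketch of automatic keywording built on an off-the-shelf classifier. Adobe’s Sensei models are proprietary, so this uses torchvision’s pretrained ResNet-50 and its ImageNet labels as a stand-in; the file name is a placeholder:

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained ImageNet classifier plus its matching preprocessing pipeline.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def suggest_keywords(path: str, top_k: int = 5) -> list[str]:
    """Return the model's top-k class labels as candidate keywords."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))[0]
    top = logits.topk(top_k).indices
    return [weights.meta["categories"][i] for i in top]

print(suggest_keywords("holiday_photo.jpg"))
```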

Skylum Software is one of the leading providers of AI-assisted photo editing software. It introduced AI Sky Replacement in Luminar to eliminate the manual masking the job previously required, AI Augmented Skies to add clouds, planets, lightning, and more to your images, AI Portrait Enhancement tools that automatically identify human features, and AI Structure to add definition only to the areas of an image where it’s appropriate.

The use of augmented reality in photography may yet prove controversial. It has been possible to distort, twist, and “invent” reality ever since image manipulation programs first appeared, but AI promises to make this so easy and convincing that no special skills (or conscience) are required.