The recently released iPhone SE (2nd generation) is equipped with the same single camera as the iPhone 8, a model from two and a half years ago, yet it supports portrait mode, the feature that blurs the background behind a subject.
An iPhone that supports portrait mode with a single camera brings to mind the iPhone XR. The iPhone XR uses a technology called “Focus Pixels,” which shifts the focus little by little while shooting and estimates depth from how much the contrast changes at each point, but the iPhone SE appears to use a different approach.
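The contrast-based idea behind the XR's approach can be illustrated with a toy depth-from-focus sketch. To be clear, this is not Apple's actual pipeline; the gradient-based sharpness measure and all values below are illustrative assumptions:

```python
import numpy as np

def depth_from_focus(stack, focus_depths):
    """Toy depth-from-focus: for each pixel, pick the focus distance
    at which local contrast (squared gradient magnitude) is highest.
    stack: frames of the same scene shot at different focus settings.
    focus_depths: the focus distance each frame was shot at."""
    sharpness = []
    for frame in stack:
        gy, gx = np.gradient(np.asarray(frame, dtype=float))
        sharpness.append(gx ** 2 + gy ** 2)   # high where the frame is sharp
    best = np.argmax(np.stack(sharpness), axis=0)  # sharpest frame per pixel
    return np.asarray(focus_depths)[best]
```

A pixel that looks sharpest in the near-focus frame is assigned the near depth, and so on, which is why this method needs several exposures and struggles on low-contrast regions.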
This time, I’ll explain how the iPhone SE realizes portrait mode.
The iPhone SE’s portrait mode is realized by software processing using machine learning.
As an example, let’s look at the depth estimated for this photo of a dog.
Both the iPhone XR and the iPhone SE process the image on the assumption that there is a subject in the center foreground and that everything behind it is background.
On the left is the iPhone XR’s depth map from Focus Pixels; on the right is the iPhone SE’s depth map from machine learning. Around the dog’s jaw, the iPhone XR does not cope well. The iPhone SE, despite some slight misalignment, produces a generally good result: the background shows a proper depth gradient rather than a single uniform depth value.
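That gradient matters because the blur is applied as a function of the depth map: the farther a pixel is from the focal plane, the stronger the blur. A minimal sketch of this idea, using a naive box blur (the blending scheme here is an illustrative assumption, not Apple's renderer):

```python
import numpy as np

def box_blur(img, radius):
    """Naive box blur: mean over a (2r+1)x(2r+1) window, edge-padded."""
    if radius == 0:
        return img.astype(float)
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * radius + 1) ** 2

def portrait_blur(img, depth, focus_depth, max_radius=3):
    """Blend progressively stronger blur into regions far from the
    focal plane, giving a gradient of blur rather than a hard cutoff."""
    sharp = img.astype(float)
    out = sharp.copy()
    span = np.abs(depth - focus_depth).max() or 1.0  # avoid divide-by-zero
    dist = np.abs(depth - focus_depth) / span        # 0 = in focus, 1 = farthest
    for r in range(1, max_radius + 1):
        blurred = box_blur(sharp, r)
        out[dist >= r / max_radius] = blurred[dist >= r / max_radius]
    return out
```

With a uniform background depth the whole background gets one blur strength; with a graded depth map like the SE's, the blur ramps up smoothly with distance, which is what makes the result look natural.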
This improvement comes not only from the evolution of the software but also from full use of the image processing engine and AI capabilities of the A13 Bionic chip.
However, the iPhone SE’s depth information still falls short in some cases.
For example, let’s look at a case where the subject is not centered.
The triple-camera iPhone 11 Pro still recognizes the shape of the dog clearly, but the iPhone SE can only distinguish the floor from the wall.
It also struggles with this messy cluster of succulents.
If you compare the depth maps, the difference is clear: the iPhone 11 Pro produces noticeably more accurate depth information.
The resulting photos differ accordingly. Neither looks particularly unnatural as a photograph, but the iPhone 11 Pro’s finish is the more natural of the two.
For non-human subjects, boundaries are reportedly not recognized well depending on the background; in the photo below, for example, the part of the tree above the dog is not blurred properly.
As the examples above show, the iPhone SE’s portrait mode is still imperfect. Presumably for this reason, Apple has restricted the built-in portrait mode so that it only works on relatively easy-to-recognize subjects: people.
As these photos show, however, the iPhone SE can obtain depth information for non-human subjects as well, so some third-party apps offer portrait mode for subjects other than people.
How far can software evolve?
In general, the background blur function is achieved by first measuring depth with one of the following hardware-based methods, then applying the blur in software:
- Parallax between dual cameras (requires two or more cameras; e.g., the iPhone 11 series)
- Dual-pixel parallax (works with a single camera but requires a special sensor; e.g., the Pixel 3 series)
- Direct measurement with a ToF camera (requires a separate dedicated camera; e.g., the Xperia 1)
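The dual-camera approach in the first bullet reduces to simple triangulation: a feature seen by both cameras shifts horizontally by a disparity d, and its depth is Z = f·B/d. A minimal sketch (the focal length and baseline figures in the usage below are illustrative, not real device values):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: Z = f * B / d.
    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in meters
    disparity_px -- horizontal pixel shift of a feature between views
    Closer objects shift more, so depth shrinks as disparity grows."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length and a 12 mm baseline, a feature that shifts 12 px between the two views sits about 1 m away. The short baseline of a phone is why parallax-based depth gets coarse at a distance, and why the other two methods exist at all.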
However, with the evolution of chips and the maturing of high-quality software, a device like the iPhone SE can now achieve a reasonably complete portrait mode with nothing but a simple single camera and software alone.
It is still true that hardware-assisted background blur is more beautiful. But if software keeps evolving at its current pace, we may eventually see a phone camera that is not inferior to an SLR.