
Article 018.
An image through the eyes of LiDAR technology

Next Generation AR & How the Arrival of LiDAR Presents New Creative Opportunities

Every year we wait with anticipation for what the new iPhone will bring and, with it, what new technologies will be available to its users. AR has been at the heart of many briefs and projects over recent years, and with the integration of the LiDAR Scanner not only in the iPad Pro but also in the iPhone 12 Pro, we wanted to take a look at what that means for creativity within AR.

So what is LiDAR?

LiDAR scanner technology is not new: it has been used in car sensors for years, and we regularly use it as part of our technical set-up for real-time interactive physical installations. However, its capabilities are new when thinking about AR and mobile.

The LiDAR Scanner almost instantaneously measures the distance to surrounding objects and works both indoors and outdoors. The latest iOS offers new depth frameworks that combine depth points measured by the LiDAR Scanner with data from both cameras and the motion sensors, enhanced by computer vision algorithms for a more detailed understanding of a scene.

This basically means that the iPhone 12 Pro's understanding of its surroundings just got a major knowledge boost, opening the door to a whole new class of AR experiences. For example, Apple has already stated that every existing app built with ARKit (the software development kit that gives access to AR features on iOS) automatically gets instant AR placement, improved motion capture and people occlusion (though some elements can only be implemented within an app). Overall this is great for ramping up the creativity, craft and fidelity of an experience.
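To make this concrete, here is a minimal sketch of how a developer might opt an ARKit session into the LiDAR-backed features, using ARKit's public configuration API. The capability checks mean the same code degrades gracefully on devices without a LiDAR Scanner:

```swift
import ARKit

// Sketch: configuring an ARKit session to use LiDAR-driven scene
// reconstruction where available. On non-LiDAR devices the checks fail
// and the session falls back to standard world tracking.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // Only LiDAR-equipped devices (iPhone 12 Pro, 2020 iPad Pro) support this.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }

    // People occlusion also benefits from the improved depth data.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}
```

This is only a sketch of the configuration step; a real app would pass the result to an `ARSession` (for example via `ARView` in RealityKit) to start tracking.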

AR pre-LiDAR?

To make augmented reality more immersive and more realistic, we're always looking to improve how we anchor virtual elements to reality. ARKit in 2017 was the first step forward, adding plane detection. It then became possible to anchor elements in reality and move around them, but these elements always sat in the foreground, which breaks the immersion. It also needed some calibration time to detect the environment properly and keep everything stable and not too slippery.
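For reference, this is roughly what the pre-LiDAR approach looked like in code: a sketch of camera-based plane detection as introduced with the original ARKit, which is what required that calibration scan before content could be anchored.

```swift
import ARKit

// Sketch: the ARKit 1.0-era approach. Planes are detected purely from the
// camera feed, so the user had to sweep the device around before anchoring.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]  // .vertical arrived with ARKit 1.5

// Detected surfaces arrive as ARPlaneAnchor instances through the session
// delegate; content could only be anchored once enough of the scene had
// been scanned for a plane to be recognised.
```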

So what does next gen AR look like?

LiDAR changes everything, making a giant leap forward on both of these problems.

An image through the eyes of LiDAR technology

The LiDAR Scanner, as shown in the demonstration below, enables incredibly quick plane detection, allowing for the instant placement of AR objects in the real world without scanning.

LiDAR scanner plane detection

The Scene API, a set of functions that gives access to ARKit's scene data, creates a topological map of your space with labels identifying floors, walls, ceilings, windows, doors and seats. The advanced scene-understanding capabilities built into the LiDAR Scanner also allow the Depth API to provide per-pixel depth information about the surrounding environment. Combined with the 3D mesh data pictured below, this depth information makes virtual-object occlusion far more realistic, letting virtual objects be placed instantly and blended seamlessly with their physical surroundings.

Mesh data occlusion
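In ARKit terms, the two data sources described above surface as per-pixel depth and mesh anchors on each frame. A minimal sketch, assuming a session already running on a LiDAR device with `.sceneDepth` frame semantics and `.mesh` scene reconstruction enabled:

```swift
import ARKit

// Sketch: reading the Depth API and scene-mesh data from a single ARFrame.
// Assumes the session was configured with frameSemantics containing
// .sceneDepth and sceneReconstruction set to .mesh.
func inspect(frame: ARFrame) {
    // Per-pixel depth of the surrounding environment (the Depth API).
    if let sceneDepth = frame.sceneDepth {
        let depthMap: CVPixelBuffer = sceneDepth.depthMap
        print("Depth map:",
              CVPixelBufferGetWidth(depthMap), "x",
              CVPixelBufferGetHeight(depthMap))
    }

    // Mesh anchors from scene reconstruction, whose faces can be classified
    // as wall, floor, ceiling, table, seat, window or door.
    for meshAnchor in frame.anchors.compactMap({ $0 as? ARMeshAnchor }) {
        print("Mesh anchor with \(meshAnchor.geometry.faces.count) faces")
    }
}
```

Renderers such as RealityKit use exactly this combination, depth map plus mesh, to decide per pixel whether a virtual object sits in front of or behind the real world.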

Already available in Snapchat

Many applications will leverage this technology leap directly. One big example is Snapchat, which immediately announced support for LiDAR in Lens Studio (its software for creating Lenses for the platform). Lenses can use LiDAR data to enhance the experience, whether that's reconstructing a world mesh to enable occlusion by physical objects, tracking more precisely, or understanding certain types of surfaces and their orientation at a given point.

Snapchat lenses using LiDAR

"The addition of the LiDAR Scanner to iPhone 12 Pro models enables a new level of creativity for augmented reality."

Eitan Pilipski, Snap's SVP of Camera Platform

So not everyone has an iPhone. What about Android?

Google is trying to catch up (at least on the occlusion part), but with a different approach. Since it doesn't control the hardware of Android devices, it has to bring the magic with software, as always. In September it announced that a Depth API was included in ARCore. The Depth API uses a supported device's RGB camera to create depth maps (also called depth images). The information in a depth map can be used to make virtual objects accurately appear in front of or behind real-world objects, enabling immersive and realistic user experiences. It's another way of levelling up creativity and fidelity, but via a software-side rather than a hardware-side solution.

Depth images

AR is an ever-growing field of opportunity, capable of putting both services and experiences into the hands of the consumer. Historical challenges around fidelity, placement and depth of field will now disappear, enhancing accuracy, craft and realism, and resulting in a big step up in the ambition of what we can achieve with Augmented Reality at every scale.