GLEAM

An Illumination Estimation Framework for Real-time Photorealistic Augmented Reality on Mobile Devices

[Teaser figure, three panels: Real Lighting (left) | No Lighting (middle) | GLEAM (right)]

Augmented reality (AR) places virtual objects in real environments. However, without an estimation of environmental illumination, virtual objects (middle) do not appear as realistic as real objects (left). We design GLEAM, a system that estimates environmental illumination in real-time to improve the realism of AR objects (right). GLEAM observes metal light probes to estimate illumination.

ABSTRACT

Mixed reality mobile platforms attempt to co-locate virtual scenes with physical environments, towards creating immersive user experiences. However, to create visual harmony between virtual and physical spaces, the virtual scene must be accurately illuminated with realistic lighting that matches the physical environment. To this end, we design GLEAM, a framework that provides robust illumination estimation in real-time by integrating physical light-probe estimation with current mobile AR systems. GLEAM visually observes reflective objects to compose a realistic estimation of physical lighting. Optionally, GLEAM can network multiple devices to sense illumination from different viewpoints and compose a richer estimation to enhance realism and fidelity.

Using GLEAM, AR developers gain the freedom to use a wide range of materials that demand accurate illumination, such as liquids, glass, and smooth metals, whose use is currently limited by their unrealistic appearance. Our controlled-environment user studies across 30 participants reveal the effectiveness of GLEAM in providing robust and adaptive illumination estimation over commercial status-quo solutions, such as pre-baked directional lighting and ARKit 2.0 illumination estimation. Our benchmarks reveal the need for situation-driven tradeoffs: some situations call for estimation freshness over quality, and others for the reverse. By optimizing for different quality factors in different situations, GLEAM can update scene illumination as fast as 30 ms by sacrificing richness and fidelity in highly dynamic scenes, or prioritize quality by allowing an update interval as long as 400 ms in scenes that require high-fidelity estimation.

METHOD OVERVIEW

GLEAM observes reflective objects captured in the camera of a device to gather illumination samples from the scene using three modules: (a) Radiance Sampling, (b) Optional Network Transfer, and (c) Cubemap Composition.

(a) Radiance Sampling

AR systems understand the positioning of the camera with respect to a scene. By attaching fiducial markers, reflective objects can also be tracked. We design the GLEAM system to combine the camera's pose, the reflective object's pose and surface geometry, and the pixels in the camera frame to sample the radiance of the environment.

To do so, GLEAM iterates over each camera frame pixel, projecting a virtual ray from the AR camera into the scene. GLEAM bounces the ray off the 3D mesh of the target specular object to determine the direction of the incoming light, and uses the pixel value to estimate its radiance. The direction and radiance value together constitute a radiance sample.
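As a concrete illustration, the following sketch performs this sampling step for a spherical probe, assuming the probe's center and radius and the per-pixel ray directions are already available from AR tracking and camera intrinsics. The function names and the sphere-only geometry are simplifications for illustration, not GLEAM's released implementation.

```python
import numpy as np

def reflect(direction, normal):
    """Mirror a ray direction about a surface normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

def sample_radiance(frame, cam_pos, pixel_rays, probe_center, probe_radius):
    """Turn probe pixels into (incoming-light direction, radiance) samples.

    frame       -- HxWx3 camera image (linear RGB)
    cam_pos     -- 3-vector camera position in world space
    pixel_rays  -- HxWx3 unit ray directions per pixel (from pose + intrinsics)
    """
    samples = []
    h, w, _ = frame.shape
    for v in range(h):
        for u in range(w):
            d = pixel_rays[v, u]
            # Ray-sphere intersection with the tracked reflective probe.
            oc = cam_pos - probe_center
            b = np.dot(d, oc)
            disc = b * b - (np.dot(oc, oc) - probe_radius ** 2)
            if disc < 0:
                continue  # this pixel does not see the probe
            t = -b - np.sqrt(disc)
            if t <= 0:
                continue
            hit = cam_pos + t * d
            normal = (hit - probe_center) / probe_radius
            # Bounce the view ray off the mirror surface: the reflected
            # direction points toward the light that the probe is mirroring.
            light_dir = reflect(d, normal)
            samples.append((light_dir, frame[v, u].astype(np.float32)))
    return samples
```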

(b) Optional Network Transfer

GLEAM works with a single viewpoint for a single user. For richer estimation, however, multiple devices can capture and share radiance samples from multiple perspectives. Our system thus presents an opportunity to improve the estimation through collaborative sensing.
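To give a sense of what sharing radiance samples between devices could look like, here is a minimal sketch that serializes samples as JSON and exchanges them over a plain TCP socket. The wire format, port, and helper names are assumptions made for illustration and do not describe GLEAM's actual networking layer.

```python
import json
import socket

def encode_samples(samples):
    """Pack (direction, rgb) radiance samples into one JSON message."""
    payload = [{"dir": list(map(float, d)), "rgb": list(map(float, c))}
               for d, c in samples]
    return (json.dumps(payload) + "\n").encode("utf-8")

def send_samples(samples, host, port=5005):
    """Push locally captured samples to a peer device (illustrative only)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(encode_samples(samples))

def receive_samples(port=5005):
    """Accept one batch of samples captured from a remote viewpoint."""
    with socket.create_server(("", port)) as server:
        conn, _ = server.accept()
        with conn, conn.makefile("r", encoding="utf-8") as stream:
            payload = json.loads(stream.readline())
    return [(entry["dir"], entry["rgb"]) for entry in payload]
```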

(c) Cubemap Composition

GLEAM interpolates the radiance samples into a cubemap, representing the physical lighting from all directions. Any gaps in the cubemap are filled through nearest-neighbor and inverse-distance-weighting algorithms. This cubemap is used to virtually illuminate the scene, imparting realism to virtual objects.
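As a sketch of how this composition could work, the code below converts each cubemap texel into a direction and fills it by inverse-distance weighting of the nearest radiance samples, measured by angular distance. The cubemap axis convention, face size, neighbor count, and power exponent are our own illustrative choices rather than GLEAM's exact parameters.

```python
import numpy as np

# Outward axes used to convert a cubemap texel (face, u, v) into a 3D direction.
# The (forward, right, up) convention per face is a common one, assumed here.
FACES = {
    "+x": (np.array([1, 0, 0]), np.array([0, 0, -1]), np.array([0, -1, 0])),
    "-x": (np.array([-1, 0, 0]), np.array([0, 0, 1]), np.array([0, -1, 0])),
    "+y": (np.array([0, 1, 0]), np.array([1, 0, 0]), np.array([0, 0, 1])),
    "-y": (np.array([0, -1, 0]), np.array([1, 0, 0]), np.array([0, 0, -1])),
    "+z": (np.array([0, 0, 1]), np.array([1, 0, 0]), np.array([0, -1, 0])),
    "-z": (np.array([0, 0, -1]), np.array([-1, 0, 0]), np.array([0, -1, 0])),
}

def texel_direction(face, u, v, size):
    """Unit direction through the center of texel (u, v) on a cubemap face."""
    forward, right, up = FACES[face]
    s = (2.0 * (u + 0.5) / size) - 1.0   # [-1, 1] across the face
    t = (2.0 * (v + 0.5) / size) - 1.0
    d = forward + s * right + t * up
    return d / np.linalg.norm(d)

def compose_cubemap(samples, size=32, power=2.0, k=8):
    """Fill every texel by inverse-distance weighting of the k nearest samples."""
    dirs = np.array([d / np.linalg.norm(d) for d, _ in samples])
    colors = np.array([c for _, c in samples], dtype=np.float32)
    cubemap = {f: np.zeros((size, size, 3), np.float32) for f in FACES}
    for face in FACES:
        for v in range(size):
            for u in range(size):
                d = texel_direction(face, u, v, size)
                # Angular distance between this texel and every sample direction.
                ang = np.arccos(np.clip(dirs @ d, -1.0, 1.0))
                nearest = np.argsort(ang)[:k]
                w = 1.0 / np.maximum(ang[nearest], 1e-6) ** power
                cubemap[face][v, u] = (w[:, None] * colors[nearest]).sum(0) / w.sum()
    return cubemap
```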

SYSTEM CHARACTERIZATION

Characterizing the runtime performance of the GLEAM single-viewpoint prototype reveals the limitations of generating a high number of samples and higher-resolution cubemaps at high update rates. These limitations present an opportunity for GLEAM to optimize between quality (resolution and fidelity) and speed (adaptiveness).
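As one way a developer might act on this tradeoff, the hypothetical presets below trade cubemap resolution and sample count against update latency. The parameter values are illustrative, not GLEAM's benchmarked configurations, though the 30 ms and 400 ms targets mirror the update intervals reported in the abstract.

```python
# Hypothetical quality presets trading estimation quality for freshness.
# The face sizes and sample counts are illustrative placeholders.
PRESETS = {
    # Highly dynamic scenes: small cubemap, sparse sampling, fast updates.
    "fast":    {"face_size": 16, "samples_per_frame": 2_000,  "target_ms": 30},
    # Mostly static scenes: richer cubemap, dense sampling, slower updates.
    "quality": {"face_size": 64, "samples_per_frame": 20_000, "target_ms": 400},
}

def pick_preset(scene_is_dynamic: bool) -> dict:
    """Choose a preset based on how quickly the lighting is expected to change."""
    return PRESETS["fast"] if scene_is_dynamic else PRESETS["quality"]
```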

SYSTEM & CODE

Siddhant Prakash, Alireza Bahremand, Linda D. Nguyen, and Robert LiKamWa. 2019.
GLEAM: An Illumination Estimation Framework for Real-time Photorealistic Augmented Reality on Mobile Devices.
In Proceedings of the 17th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '19).
[ACM DL] [BibTex] [Code]

We provide additional results on this supplementary page [COMING SOON, ETA June 28]