
Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions

People


Chuanjun Zheng¹, Yicheng Zhan¹, Liang Shi², Ozan Cakmakci³, Kaan Akşit¹

¹University College London, ²Massachusetts Institute of Technology, ³Google

SIGGRAPH Asia 2024 Technical Communications

Resources

Manuscript · Supplementary · Code

BibTeX
@inproceedings{zheng2024focalholography,
  title     = {Focal Surface Holographic Light Transport using Learned Spatially Adaptive Convolutions},
  author    = {Zheng, Chuanjun and Zhan, Yicheng and Shi, Liang and Cakmakci, Ozan and Ak{\c{s}}it, Kaan},
  booktitle = {SIGGRAPH Asia 2024 Technical Communications (SA Technical Communications '24)},
  keywords  = {Computer-Generated Holography, Light Transport, Optimization},
  location  = {Tokyo, Japan},
  series    = {SA '24},
  month     = {December},
  year      = {2024},
  doi       = {10.1145/3681758.3697989}
}

Abstract

Computer-Generated Holography (CGH) is a set of algorithmic methods for identifying holograms that reconstruct Three-Dimensional (3D) scenes in holographic displays. CGH algorithms decompose 3D scenes into multiple planes at different depth levels and rely on simulations of light propagating from a source plane to a target plane. Thus, for \(n\) planes, CGH typically optimizes holograms using \(n\) plane-to-plane light transport simulations, leading to major time and computational demands. Our work replaces multiple planes with a focal surface and introduces a learned light transport model that can propagate a light field from a source plane to the focal surface in a single inference. Our model leverages spatially adaptive convolutions to achieve the depth-varying propagation demanded by targeted focal surfaces. The proposed model accelerates the hologram optimization process by up to \(1.5\times\), which contributes to hologram dataset generation and the training of future learned CGH models.
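To make the idea of depth-varying filtering concrete, below is a minimal sketch of a spatially adaptive convolution in PyTorch, where a different kernel is applied at every pixel; in a focal-surface setting, the per-pixel kernels would be predicted by a network conditioned on the focal surface. All names, shapes, and values here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def spatially_adaptive_conv(x, kernels, k=3):
    """Apply a different k x k kernel at every pixel.

    x:       (B, C, H, W) input field features
    kernels: (B, k*k, H, W) per-pixel kernels, e.g. predicted
             by a network conditioned on the focal surface
    """
    b, c, h, w = x.shape
    # Extract the k x k neighborhood around every pixel: (B, C*k*k, H*W).
    patches = F.unfold(x, kernel_size=k, padding=k // 2)
    patches = patches.view(b, c, k * k, h, w)
    weights = kernels.view(b, 1, k * k, h, w)  # broadcast over channels
    return (patches * weights).sum(dim=2)      # (B, C, H, W)

# Illustrative usage: a 2-channel (real/imaginary) field, 3x3 kernels per pixel.
x = torch.randn(1, 2, 64, 64)
kernels = torch.softmax(torch.randn(1, 9, 64, 64), dim=1)
y = spatially_adaptive_conv(x, kernels)
```

Because the kernel varies with the focal-surface depth at each location, a single such layer can mimic propagation over a surface that is near at some pixels and far at others, which a single shift-invariant convolution cannot.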

Focal Surface Holographic Light Transport

Simulating light propagation among multiple planes in a 3D volume is computationally demanding: the volume is represented with multiple planes, and each plane requires a separate light propagation calculation to reconstruct the target image. Thus, for \(n\) planes, conventional light transport simulation methods require \(n\) plane-to-plane simulations, leading to major time and computational demands. Our work replaces multiple planes with a focal surface and introduces a learned light transport model that can propagate a light field from a source plane to the focal surface in a single inference, reducing simulation time by \(10\times\).
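For reference, the plane-to-plane simulation that our model replaces can be written compactly with the Angular Spectrum Method. The sketch below is a minimal, textbook-style ASM propagator (band-limiting and edge handling omitted); the wavelength, pixel pitch, and depth values are illustrative, and `model` is a hypothetical stand-in for the learned focal-surface network.

```python
import torch

def asm_propagate(field, wavelength, distance, dx):
    """Angular Spectrum Method: propagate a complex field by `distance`.

    field: (H, W) complex tensor; wavelength, distance, dx in meters.
    """
    ny, nx = field.shape
    fx = torch.fft.fftfreq(nx, d=dx)
    fy = torch.fft.fftfreq(ny, d=dx)
    FY, FX = torch.meshgrid(fy, fx, indexing="ij")
    # Transfer function H = exp(i 2 pi d sqrt(1/lambda^2 - fx^2 - fy^2)),
    # with evanescent frequencies clamped to zero.
    kz = torch.sqrt(torch.clamp((1.0 / wavelength) ** 2 - FX**2 - FY**2, min=0.0))
    H = torch.exp(2j * torch.pi * distance * kz)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Conventional multiplane simulation: one ASM pass per depth plane.
field = torch.exp(1j * 2 * torch.pi * torch.rand(1080, 1920))  # a phase-only field
depths = [0.0, 2e-3, 4e-3, 6e-3, 8e-3, 10e-3]                  # six illustrative planes
planes = [asm_propagate(field, 515e-9, z, 8e-6).abs() for z in depths]

# Focal-surface model (conceptual): a single forward pass for all depths.
# reconstruction = model(field, focal_surface)
```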


Results

When simulating a full-color, all-in-focus 3D image across a focal surface, the conventional Angular Spectrum Method (ASM) requires eighteen forward passes: six depth planes times three color primaries. In contrast, our model simulates the three color-primary images simultaneously onto a focal surface with a single forward pass. At the same time, our model preserves more high-frequency content than U-Net, providing finer details and sharper edges that are closer to the ground truth. A pass-count sketch follows below.
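Counting passes makes the gap concrete. Under the same illustrative setup as above (reusing the `asm_propagate` sketch), the conventional loop looks like this; the wavelengths are example primaries, and `model` remains a hypothetical stand-in:

```python
import torch

wavelengths = (639e-9, 515e-9, 473e-9)         # example red/green/blue primaries
depths = [0.0, 2e-3, 4e-3, 6e-3, 8e-3, 10e-3]  # six illustrative planes
fields = {w: torch.exp(1j * 2 * torch.pi * torch.rand(1080, 1920))
          for w in wavelengths}                # one phase-only field per color

# Conventional ASM: 3 colors x 6 planes = 18 forward passes.
results = [asm_propagate(fields[w], w, z, 8e-6).abs()
           for w in wavelengths for z in depths]

# Ours (conceptual): all three primaries onto the focal surface in one pass.
# rgb = model(fields, focal_surface)
```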


We utilize our model in a 3D phase-only hologram optimization application with a \(0\,mm\) propagation distance. Optimizing holograms against six target planes using ASM is denoted as ASM 6, while Ours 6 denotes optimizing holograms using our model with six focal surfaces. When comparing simulation results, all holograms are reconstructed using ASM for performance assessment. Ours 6 achieves comparable results in about \(70\%\) of the optimization time required by ASM 6.
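The ASM 6 baseline can be pictured as the following gradient-descent loop, reusing the `asm_propagate` sketch above; our model would replace the inner six-plane loop with a single focal-surface inference per step. The resolution, loss, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

# Illustrative setup (stand-in targets; see asm_propagate above).
wavelength, dx = 515e-9, 8e-6
depths = [0.0, 2e-3, 4e-3, 6e-3, 8e-3, 10e-3]
targets = [torch.rand(1080, 1920) for _ in depths]

phase = torch.zeros(1080, 1920, requires_grad=True)  # phase-only hologram
optimizer = torch.optim.Adam([phase], lr=2e-2)

for step in range(200):
    field = torch.exp(1j * phase)            # unit-amplitude SLM field
    loss = 0.0
    for z, target in zip(depths, targets):   # "ASM 6": six passes per step
        recon = asm_propagate(field, wavelength, z, dx).abs()
        loss = loss + F.mse_loss(recon, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "Ours 6" (conceptual): one model inference per step instead of six passes.
# recon = model(torch.exp(1j * phase), focal_surface)
```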


We also apply our model to a 3D phase-only hologram optimization application with a \(10\,mm\) propagation distance.


Relevant research works

Here are relevant research works from the authors:

Outreach

We host a Slack group with more than 250 members. This Slack group focuses on the topics of rendering, perception, displays, and cameras. The group is open to the public, and you can become a member by following this link.

Contact Us


Please reach out to us via email to provide your feedback and comments.