Optimizing Vision and Visuals: Lectures on Cameras, Displays and Perception

People

Koray Kavaklı1

David Walton2

Nick Antipa3

Rafał Mantiuk4

Douglas Lanman5,6

Kaan Akşit2

1Koç University, 2University College London, 3University of California San Diego, 4University of Cambridge, 5Meta Reality Labs, 6University of Washington

SIGGRAPH 2022

Resources

Lecture recording
Code
Foreword

Bibtex
@inproceedings{10.1145/3532720.3535650,
 author = {Kavakli, Koray and Walton, David Robert and Antipa, Nick and Mantiuk, Rafa\l{} and Lanman, Douglas and Ak\c{s}it, Kaan},
 title = {Optimizing Vision and Visuals: Lectures on Cameras, Displays and Perception},
 year = {2022},
 isbn = {9781450393621},
 publisher = {Association for Computing Machinery},
 address = {New York, NY, USA},
 url = {https://doi.org/10.1145/3532720.3535650},
 doi = {10.1145/3532720.3535650},
 booktitle = {ACM SIGGRAPH 2022 Courses},
 articleno = {17},
 numpages = {66},
 location = {Vancouver, British Columbia, Canada},
 series = {SIGGRAPH '22}
}

Presentation

Abstract

The evolution of the internet is underway: immersive virtual 3D environments (commonly known as the metaverse or telelife) are set to replace flat 2D interfaces. Crucial ingredients in this transformation are next-generation displays and cameras that represent genuinely 3D visuals while meeting the perceptual requirements of the human visual system.

This course provides a fast-paced introduction to optimization methods for next-generation interfaces geared towards immersive virtual 3D environments. First, we introduce lensless cameras for high-dimensional compressive sensing (e.g., capturing a video or a 3D scene from a single exposure); our audience will learn to reconstruct images captured with a lensless camera. Second, we introduce holographic displays as a potential candidate for next-generation displays; by the end of this course, you will be able to create your own 3D images that can be viewed using a standard holographic display. Lastly, we introduce perceptual guidance that can become an integral part of the optimization routines of displays and cameras; our audience will gain experience in integrating perception into display and camera optimizations.

This course targets a wide range of audiences, from domain experts to newcomers. To serve both, the examples in this course are based on our in-house toolkit so that they remain replicable for future use. The course material provides example code and a broad survey with crucial information on cameras, displays and perception.

Relevant research works

Here are relevant research works from the authors:

Outreach

We host a Slack group with more than 250 members. This Slack group focuses on the topics of rendering, perception, displays and cameras. The group is open to the public, and you can become a member by following this link.

Contact Us

Please reach out to us via email or through GitHub issues to ask your questions or to provide your feedback and comments.

Acknowledgements

The authors would like to thank the reviewers for their valuable feedback.

Kaan Akşit is supported by the Royal Society (RGS\R2\212229, Research Grants 2021 Round 2) in building the hardware prototype used to generate the course material. Kaan Akşit is also supported by the Meta Reality Labs Inclusive Rendering Initiative 2022.