
Learned Single-Pass Multitasking Perceptual Graphics for Immersive Displays

People


Doğa Yılmaz¹

He Wang¹

Towaki Takikawa²

Duygu Ceylan³

Kaan Akşit¹

¹University College London, ²University of Toronto, ³Adobe Research

ACM Multimedia 2025

Resources

Manuscript · Supplementary · Code

Bibtex
@inproceedings{yilmaz2025perceptual,
  author = {Y{\i}lmaz, Do{\u{g}}a and Wang, He and Takikawa, Towaki and Ceylan, Duygu and Ak{\c{s}}it, Kaan},
  title = {Learned Single-Pass Multitasking Perceptual Graphics for Immersive Displays},
  booktitle = {Proceedings of the 33rd ACM International Conference on Multimedia},
  year = {2025},
  location = {Dublin, Ireland},
  publisher = {ACM},
  address = {New York, NY, USA},
  pages = {9},
  doi = {10.1145/3746027.3754801},
  url = {https://doi.org/10.1145/3746027.3754801},
  month = {October 27--31}
}

Video

Abstract

Emerging immersive display technologies efficiently utilize resources with perceptual graphics methods such as foveated rendering and denoising. Running multiple perceptual graphics methods challenges devices with limited power and computational resources. We propose a computationally lightweight, learned multitasking perceptual graphics model. Given RGB images and text prompts, our model performs text-described perceptual tasks in a single inference step. Simply daisy-chaining multiple models or training dedicated models can lead to model-management issues and exhaust computational resources. In contrast, our flexible method delivers consistently high-quality perceptual effects with reasonable compute, supporting various task permutations at varied intensities using adjectives in text prompts (e.g., mildly, lightly). Text guidance provides ease of use for dynamic requirements such as creative processes. To train our model, we propose a dataset containing source and perceptually enhanced images with corresponding text prompts. We evaluate our model on desktop and embedded platforms and validate its perceptual quality through a user study.
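To make the single-pass, text-conditioned idea concrete, here is a minimal sketch (not the authors' released code) of how a prompt embedding could modulate image features via per-channel scale and shift so that one forward pass realizes different perceptual tasks and intensities. The toy vocabulary, embedding table, and random weights are all hypothetical stand-ins for a trained text encoder and network.

```python
import numpy as np

# Hypothetical prompt vocabulary: task words plus intensity adjectives.
PROMPT_VOCAB = {"foveate": 0, "denoise": 1, "mildly": 2, "lightly": 3, "strongly": 4}

def embed_prompt(prompt, dim=8, vocab=PROMPT_VOCAB, seed=0):
    """Toy bag-of-words embedding; stands in for a learned text encoder."""
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((len(vocab), dim))
    ids = [vocab[w] for w in prompt.lower().split() if w in vocab]
    return table[ids].mean(axis=0) if ids else np.zeros(dim)

def single_pass_model(rgb, prompt_emb, channels=3):
    """One inference step: map the prompt embedding to per-channel
    scale and shift, then modulate the image features (FiLM-style)."""
    dim = prompt_emb.shape[0]
    rng = np.random.default_rng(1)            # stands in for trained weights
    w = rng.standard_normal((dim, 2 * channels)) * 0.1
    params = prompt_emb @ w                   # (2 * channels,)
    scale = 1.0 + params[:channels]           # per-channel gain
    shift = params[channels:]                 # per-channel bias
    out = rgb * scale[None, None, :] + shift[None, None, :]
    return np.clip(out, 0.0, 1.0)

img = np.full((4, 4, 3), 0.5)                 # dummy RGB image in [0, 1]
out = single_pass_model(img, embed_prompt("mildly denoise"))
print(out.shape)  # (4, 4, 3)
```

Because the prompt enters only through the conditioning vector, swapping "mildly denoise" for "strongly foveate" changes the effect without re-running or chaining additional models, which mirrors the single-inference-step property described above.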


Proposed Method

Conclusions

Relevant research works

Here are relevant research works from the authors:

Outreach

We host a Slack group with more than 250 members. This Slack group focuses on the topics of rendering, perception, displays, and cameras. The group is open to the public, and you can become a member by following this link.

Contact Us

Please reach us via email to provide your feedback and comments.