Real-time Neural Six-way Lightmaps

Wei Li, Hanxiao Sun, Tao Huang, Haoxiang Wang, Tongtong Wang, Zherong Pan, Kui Wu

Abstract

Participating media are pervasive and intriguing visual effects in virtual environments. Unfortunately, rendering such phenomena in real time is notoriously difficult due to the computational expense of estimating the volume rendering equation. The six-way lightmaps technique has been widely used in video games to render smoke with a camera-oriented billboard, approximating lighting effects using six precomputed lightmaps and thereby balancing realism and efficiency; however, it is limited to pre-simulated animation sequences and is agnostic to camera movement. In this work, we propose a neural six-way lightmaps method to strike a long-sought balance between dynamics and visual realism. Our approach first generates a guiding map from the camera view using ray marching with a large sampling distance to approximate smoke scattering and the silhouette. Then, given a guiding map, we train a neural network to predict the corresponding six-way lightmaps. The resulting lightmaps can be used seamlessly in existing game engine pipelines. This approach supports visually appealing rendering effects while enabling real-time user interactivity, including smoke-obstacle interaction, camera movement, and lighting changes. Through a series of comprehensive benchmarks, we demonstrate that our method is well suited for real-time applications such as games and VR/AR.
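The guiding-map pass described above can be illustrated with a minimal sketch: march each view ray through the density field with a deliberately large step, accumulating transparency, single-scattered radiance, and a first-hit depth for the silhouette. The function below is a hypothetical illustration, not the paper's implementation; `sigma_t`, `light`, `step`, and `albedo` are assumed inputs standing in for the simulator's density samples and per-sample incident radiance.

```python
import numpy as np

def march_ray(sigma_t, light, step=1.0, albedo=0.9):
    """Coarse front-to-back ray march along one view ray (illustrative).

    sigma_t: extinction coefficients sampled at each (large) step.
    light:   incident radiance at each sample (e.g. from a shadow map).
    Returns (L_scatter, T, depth) -- the three guiding-map channels:
    in-scattered radiance, transparency, and silhouette depth.
    """
    T = 1.0            # accumulated transparency along the ray
    L = 0.0            # accumulated in-scattered radiance
    depth = None       # depth of the first sample with noticeable density
    for i, (s, li) in enumerate(zip(sigma_t, light)):
        a = 1.0 - np.exp(-s * step)       # opacity of this coarse segment
        L += T * a * albedo * li          # front-to-back single scattering
        if depth is None and a > 1e-3:
            depth = i * step              # record silhouette depth
        T *= 1.0 - a                      # attenuate transparency
        if T < 1e-3:                      # early exit once nearly opaque
            break
    return L, T, 0.0 if depth is None else depth
```

Because the step is large, this pass is cheap; the network is then responsible for recovering the high-frequency lighting detail from these three coarse channels.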

Paper Structure

This paper contains 28 sections, 10 equations, 15 figures, 1 table, 1 algorithm.

Figures (15)

  • Figure 1: An example set of lightmaps packed in two RGBA textures. Of the 8 available channels, 6 store the six-way scattering lightmaps along axis-aligned directions. Additionally, the alpha channel of the first texture contains the transparency $T(\mathbf{x} \leftrightarrow \mathbf{z})$, while the alpha channel of the second texture holds an optional emissive component.
  • Figure 2: Our pipeline: the physically based fluid simulator takes the obstacle as input (a) to produce the density field (b). (c) Ray marching with a large sample step extracts the guiding map with three channels: in-scattered radiance $\tilde{L}_{\text{scattering}}$, transparency $T$, and depth $D$. (d) Our neural lightmaps generator contains a modified UNet that first extracts channel-shared features from the input, which are then processed by four dedicated channel adapters with outputs for front and back, left and right, up and down, and transparency and emissive, respectively. The shadow map of the obstacle (e) is generated from the light and combined with the predicted lightmaps to produce the final rendering result (f).
  • Figure 3: Our approach can handle dynamic smoke under a moving camera and be integrated seamlessly into Unreal Engine.
  • Figure 4: Comparison on a denser smoke field shows that our method continues to outperform prior techniques, maintaining higher visual fidelity even under significantly increased density.
  • Figure 5: Comparison on different illumination configurations for guiding map generation with Avg./max/min PSNR and MSE. LR and TB denote left + right and top + bottom, respectively.
  • ...and 10 more figures
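The two-texture packing shown in Figure 1 can be sketched as follows. This is an illustrative layout under assumed conventions: six per-pixel scattering maps fill the RGB channels of two RGBA textures, the first texture's alpha stores the transparency $T$, and the second texture's alpha stores the optional emissive term. The channel ordering chosen here is an assumption for demonstration, not the paper's specification.

```python
import numpy as np

def pack_lightmaps(six_maps, transparency, emissive=None):
    """Pack six scattering lightmaps plus transparency/emissive into
    two RGBA textures (illustrative layout, as in Figure 1).

    six_maps:     array of shape (6, H, W), one map per axis direction.
    transparency: array of shape (H, W), stored in texture 1's alpha.
    emissive:     optional (H, W) array, stored in texture 2's alpha.
    Returns two (H, W, 4) float32 textures.
    """
    h, w = transparency.shape
    tex1 = np.empty((h, w, 4), dtype=np.float32)
    tex2 = np.empty((h, w, 4), dtype=np.float32)
    tex1[..., :3] = np.moveaxis(six_maps[:3], 0, -1)  # e.g. right/left/top
    tex2[..., :3] = np.moveaxis(six_maps[3:], 0, -1)  # e.g. bottom/front/back
    tex1[..., 3] = transparency                       # alpha 1: transparency T
    tex2[..., 3] = emissive if emissive is not None else 0.0
    return tex1, tex2
```

Packing into standard RGBA textures is what lets the predicted lightmaps drop into existing six-way billboard shaders without pipeline changes.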