GFFE: G-buffer Free Frame Extrapolation for Low-latency Real-time Rendering

Songyin Wu, Deepak Vembar, Anton Sochenov, Selvakumar Panneer, Sungye Kim, Anton Kaplanyan, Ling-Qi Yan

TL;DR

GFFE tackles low-latency real-time rendering by extrapolating future frames without requiring G-buffers for the extrapolated frame. It combines history-based motion estimation, hierarchical background collection, and an adaptive rendering window to fill disocclusions, followed by a lightweight shading correction network that handles non-geometric changes. The approach matches the quality of G-buffer-dependent extrapolation and interpolation baselines while delivering real-time performance and easier engine integration. Extensive Unreal Engine experiments demonstrate robustness and generalization across scenes, and ablations confirm the contribution of each module. The method suits modern game engines and streaming contexts, and can be combined with super-resolution or anti-aliasing techniques.

Abstract

Real-time rendering has been embracing ever-demanding effects, such as ray tracing. However, rendering such effects at high resolution and high frame rate remains challenging. Frame extrapolation methods, which do not introduce additional latency as opposed to frame interpolation methods such as DLSS 3 and FSR 3, boost the frame rate by generating future frames based on previous frames. However, extrapolation is a more challenging task because of the lack of information in disocclusion regions, and recent methods also carry a high engine-integration cost because they require G-buffers as input. We propose a *G-buffer free* frame extrapolation method, GFFE, with a novel heuristic framework and an efficient neural network, to plausibly generate new frames in real time without introducing additional latency. We analyze the motion of dynamic fragments and the different types of disocclusions, and design corresponding modules of the extrapolation block to handle them. After filling disocclusions, a lightweight shading correction network corrects shading and improves overall quality. GFFE achieves results comparable to or better than previous interpolation as well as G-buffer-dependent extrapolation methods, with higher efficiency and easier game integration.
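The motion analysis described above can be illustrated with a small sketch: predicting a fragment's next world position from its tracked history trajectory. This is a hedged sketch assuming locally smooth motion fit by a quadratic through the last three samples; the function name and the fitting scheme are illustrative assumptions, not the paper's exact formulation.

```python
def extrapolate_position(history, alpha=1.0):
    """Predict a fragment's world position at time t + alpha from up to
    three equally spaced past samples (t-2, t-1, t).

    NOTE: illustrative sketch, not GFFE's actual motion estimator."""
    if len(history) == 1:
        return list(history[-1])  # no motion observed: assume static
    p_prev, p_curr = history[-2], history[-1]
    # First difference: constant-velocity term between the last two samples.
    velocity = [c - p for c, p in zip(p_curr, p_prev)]
    if len(history) >= 3:
        # Second difference: constant-acceleration term from three samples.
        p_pp = history[-3]
        accel = [c - 2 * p + q for c, p, q in zip(p_curr, p_prev, p_pp)]
    else:
        accel = [0.0] * len(p_curr)
    # Quadratic through samples at times -2, -1, 0, evaluated at alpha.
    return [p + alpha * v + 0.5 * alpha * (alpha + 1) * a
            for p, v, a in zip(p_curr, velocity, accel)]
```

With a constant-velocity trajectory the prediction simply continues the motion; with a curved trajectory the second-difference term bends the prediction accordingly.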

Paper Structure

This paper contains 55 sections, 8 equations, 15 figures, 7 tables, 1 algorithm.

Figures (15)

  • Figure 1: An extrapolated frame produced by directly projecting fragments from the previous rendered frame. The right column shows three types of disocclusions, from top to bottom: out-of-screen disocclusion, static disocclusion, and dynamic disocclusion. The thin black lines splatted across the image are artifacts of forward warping.
  • Figure 2: Our method generates an extrapolated frame $\bar{I}_{t+\alpha}$ from the rendered frame $I_t$ and history frames. The left part shows the processing of rendered frames, including the adaptive rendering window, history tracking, and background collection, which prepare data for extrapolated frames. The right part shows the process of extrapolating a frame, including geometry-aware extrapolation (GAE) and the shading correction network (SCN). The depth and motion vectors of extrapolated frames are generated in our framework instead of by the rendering engine, and can be used for additional post-processing.
  • Figure 3: Our motion estimation module tracks history trajectories and estimates the next world positions based on them.
  • Figure 4: The process of hierarchical background collection. The top row shows the current rendered frame and the updated background buffers; the bottom row shows the previous background buffers. Arrows of different colors indicate the different conditions under which the background buffers are updated. Each deeper layer (Layer 1) is only $1/4$ the size of the previous layer (Layer 0).
  • Figure 5: Rendered images under different settings. The yellow rectangle marks the displayed area of the frame. All frames are rendered at the same resolution. Our adaptive strategy not only covers the area needed for the next view, but also contains less redundant information.
  • ...and 10 more figures
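The hierarchical background collection summarized in Figure 4 can be sketched as a small pyramid of background buffers, where each deeper layer holds 1/4 as many pixels as the layer above. This is a minimal sketch assuming a simple keep-the-farther-fragment update rule; the paper's actual update conditions are richer (the colored arrows in the figure), and all names below are illustrative.

```python
def make_layers(width, height, levels):
    """Allocate `levels` background buffers, halving resolution per axis
    (so each deeper layer is 1/4 the size of the one above)."""
    layers = []
    for _ in range(levels):
        layers.append({
            "depth": [[0.0] * width for _ in range(height)],   # 0.0 = empty
            "color": [[(0, 0, 0)] * width for _ in range(height)],
        })
        width, height = max(1, width // 2), max(1, height // 2)
    return layers

def update_background(layers, frame_depth, frame_color):
    """Fold the current frame into each layer, keeping the farther
    (more background-like) fragment at every buffer texel.

    NOTE: a simplified stand-in for GFFE's actual update conditions."""
    for level, layer in enumerate(layers):
        stride = 2 ** level  # layer L samples every 2^L-th pixel
        for y, depth_row in enumerate(layer["depth"]):
            for x in range(len(depth_row)):
                d = frame_depth[y * stride][x * stride]
                if d >= depth_row[x]:  # farther than stored: treat as background
                    depth_row[x] = d
                    layer["color"][y][x] = frame_color[y * stride][x * stride]
```

Keeping only the farther fragment means background colors survive while foreground objects pass in front of them, so later disocclusions can be filled from these buffers.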