360 in the Wild: Dataset for Depth Prediction and View Synthesis
Kibaek Park, Francois Rameau, Jaesik Park, In So Kweon
TL;DR
This work tackles the paucity of real-world 360° datasets with ground-truth pose and depth by introducing 360° in the Wild, a large-scale collection of 25K real omnidirectional images, sourced from internet videos and annotated with camera poses and depth maps. It benchmarks depth estimation and novel view synthesis on this dataset, adapting MiDaS for omnidirectional depth and extending NeRF++ to spherical panoramas for 360° view synthesis. The dataset spans Indoor, Outdoor, and Mannequin scenes and includes moving-object masks, enabling robust learning in diverse real-world conditions. Although the ground-truth depth is not metric-scaled, owing to the scale ambiguity inherent to SfM/MVS, the release provides video links, per-frame annotations, and sequence segmentation to support broad research on omnidirectional perception and rendering in the wild.
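To give a concrete sense of how the per-frame annotations could be consumed, here is a minimal loading sketch. The file names (`rgb.png`, `pose.txt`, `depth.npy`, `mask.png`) and the camera-to-world pose convention are hypothetical placeholders for illustration, not the actual release layout.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def load_frame(frame_dir: Path):
    """Load one frame's image, camera pose, depth map, and moving-object mask.

    The file names and layout here are hypothetical; the actual release
    defines its own per-frame annotation format.
    """
    image = np.asarray(Image.open(frame_dir / "rgb.png"))      # H x W x 3 equirectangular image
    pose = np.loadtxt(frame_dir / "pose.txt").reshape(4, 4)    # 4x4 camera-to-world matrix (assumed)
    depth = np.load(frame_dir / "depth.npy")                   # H x W depth, up to an unknown scale (SfM/MVS)
    mask = np.asarray(Image.open(frame_dir / "mask.png")) > 0  # True where pixels belong to moving objects
    return image, pose, depth, mask
```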
Abstract
The large abundance of perspective camera datasets has facilitated the emergence of novel learning-based strategies for various tasks, such as camera localization, single-image depth estimation, and view synthesis. However, panoramic or omnidirectional image datasets that include essential information, such as pose and depth, are mostly made with synthetic scenes. In this work, we introduce a large-scale dataset of 360$^{\circ}$ videos in the wild. This dataset has been carefully scraped from the Internet and captured at various locations worldwide. Hence, it exhibits highly diverse environments (e.g., indoor and outdoor) and contexts (e.g., with and without moving objects). Each of the 25K images constituting our dataset is provided with its respective camera pose and depth map. We illustrate the relevance of our dataset for two main tasks, namely single-image depth estimation and view synthesis.
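For view synthesis on spherical panoramas, each pixel of an equirectangular image maps to a ray direction on the unit sphere rather than through a pinhole projection. The sketch below shows this standard parameterization; the exact axis and angle conventions used in the paper's NeRF++ extension are not stated here, so the y-up convention is an assumption.

```python
import numpy as np

def equirect_rays(height: int, width: int) -> np.ndarray:
    """Unit ray directions for every pixel of an equirectangular panorama.

    Longitude spans [-pi, pi) across the width and latitude spans
    (pi/2, -pi/2) down the height; the y-up axis convention is an
    assumption, not something fixed by the paper.
    """
    u = (np.arange(width) + 0.5) / width          # horizontal pixel centers in (0, 1)
    v = (np.arange(height) + 0.5) / height        # vertical pixel centers in (0, 1)
    lon = (u - 0.5) * 2.0 * np.pi                 # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                       # latitude in (-pi/2, pi/2)
    lon, lat = np.meshgrid(lon, lat)              # per-pixel angles, each of shape H x W
    dirs = np.stack([np.cos(lat) * np.sin(lon),   # x: right
                     np.sin(lat),                 # y: up
                     np.cos(lat) * np.cos(lon)],  # z: forward
                    axis=-1)
    return dirs                                   # H x W x 3 unit vectors
```

Composing these directions with the per-frame camera-to-world pose yields world-space rays for every panorama pixel, which is what a NeRF-style renderer consumes.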
