SRViT: Vision Transformers for Estimating Radar Reflectivity from Satellite Observations at Scale
Jason Stock, Kyle Hilburn, Imme Ebert-Uphoff, Charles Anderson
TL;DR
This work addresses gaps in radar coverage by translating GOES-R satellite observations into high-resolution synthetic radar reflectivity fields at scale. It presents SRViT, a Vision Transformer-based image-to-image translator that ingests GOES-R ABI and GLM inputs and outputs MRMS-like reflectivity on a 3 km grid, trained with a weighted loss $L_e$ that emphasizes high-reflectivity values. SRViT achieves lower RMSE (3.09 dBZ) and higher $R^2$ (0.572) than a fully convolutional UNet baseline, produces sharper outputs as measured by gradient-based sharpness metrics, and introduces a gradient-based Token (Re)Distribution attribution method to aid domain interpretation. The results support enhanced data assimilation in numerical weather prediction over the CONUS and point to future work on uncertainty-aware diffusion models and extensions to 3D radar; code is available on GitHub.
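As a concrete illustration of the weighting idea behind $L_e$, the sketch below implements an exponentially weighted MSE in PyTorch. The exact functional form and constants of $L_e$ are not given in this summary, so the exponential shape, the values of `b` and `c`, and the assumption that reflectivity is normalized to $[0, 1]$ are all illustrative.

```python
import torch

def weighted_mse(pred: torch.Tensor, target: torch.Tensor,
                 b: float = 5.0, c: float = 4.0) -> torch.Tensor:
    """Illustrative exponentially weighted MSE (not the paper's exact L_e).

    Assumes `target` holds reflectivity normalized to [0, 1]. The weight
    exp(b * target**c) stays near 1 for low reflectivity and grows rapidly
    toward the convective end of the scale, so rare high-dBZ pixels
    dominate the loss instead of the vast clear-air majority.
    """
    weight = torch.exp(b * target.clamp(0.0, 1.0) ** c)
    return torch.mean(weight * (pred - target) ** 2)
```

With these illustrative constants, a pixel at the top of the scale is weighted $e^{5} \approx 148$ times more than a zero-reflectivity pixel, counteracting the heavy class imbalance toward echo-free scenes.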
Abstract
We introduce a transformer-based neural network to generate high-resolution (3 km) synthetic radar reflectivity fields at scale from geostationary satellite imagery. This work aims to enhance short-term convective-scale forecasts of high-impact weather events and aid in data assimilation for numerical weather prediction over the United States. Compared to convolutional approaches, which have limited receptive fields, our method yields sharper outputs and higher accuracy across various composite reflectivity thresholds. Additional case studies over specific atmospheric phenomena support our quantitative findings, while a novel attribution method is introduced to guide domain experts in understanding model outputs.
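To make the image-to-image transformer design concrete, here is a minimal, hypothetical sketch of the general pattern: patchify the multi-channel satellite input, run a transformer encoder over the patch tokens, and project each token back to a patch of the single-channel reflectivity field. All names and hyperparameters (`TinyViT2Img`, `in_ch`, `patch`, `dim`, `depth`) are placeholders, not SRViT's actual configuration.

```python
import torch
import torch.nn as nn

class TinyViT2Img(nn.Module):
    """Minimal ViT-style image-to-image translator (illustrative only)."""

    def __init__(self, in_ch=16, patch=8, dim=256, depth=4, heads=8,
                 img_size=256):
        super().__init__()
        self.patch = patch
        self.n_side = img_size // patch          # patches per spatial side
        # Patch embedding: one conv with kernel = stride = patch size.
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.n_side ** 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Project each token back to a patch of the 1-channel output.
        self.head = nn.Linear(dim, patch * patch)

    def forward(self, x):
        b = x.size(0)
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)            # (B, N, dim)
        patches = self.head(tokens)              # (B, N, patch*patch)
        patches = patches.view(b, self.n_side, self.n_side,
                               self.patch, self.patch)
        # Reassemble patches into a full-resolution reflectivity field.
        return patches.permute(0, 1, 3, 2, 4).reshape(
            b, 1, self.n_side * self.patch, self.n_side * self.patch)

# Example: 16-channel 256x256 input -> 1-channel 256x256 output.
# y = TinyViT2Img()(torch.randn(2, 16, 256, 256))  # -> (2, 1, 256, 256)
```

Because every token attends to every other token, the effective receptive field spans the full domain from the first layer, which is the contrast with convolutional baselines drawn in the abstract above.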

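The sharpness comparison can be made quantitative with a gradient-based statistic such as the mean spatial gradient magnitude, sketched below. The paper's exact sharpness metrics may differ; treat this as one standard formulation of the idea.

```python
import numpy as np

def gradient_sharpness(field: np.ndarray) -> float:
    """Mean gradient magnitude of a 2-D reflectivity field.

    Blurry predictions have weaker spatial gradients than observed
    MRMS fields, so a lower score than the observations indicates
    over-smoothing. A common gradient-based sharpness proxy; not
    necessarily the paper's exact metric.
    """
    gy, gx = np.gradient(field.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))
```

Comparing `gradient_sharpness(prediction)` against `gradient_sharpness(observation)` on the same scenes yields a ratio near 1 when the model preserves the observed spatial detail.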