
SRViT: Vision Transformers for Estimating Radar Reflectivity from Satellite Observations at Scale

Jason Stock, Kyle Hilburn, Imme Ebert-Uphoff, Charles Anderson

TL;DR

This work addresses gaps in radar coverage by translating GOES-R satellite observations into high-resolution synthetic radar reflectivity fields at scale. It presents SRViT, a Vision Transformer-based image-to-image translator that ingests GOES-R ABI and GLM inputs and outputs MRMS-like reflectivity on a $3$ km grid, trained with a weighted loss $L_e$ that emphasizes high-reflectivity values. SRViT achieves an improved RMSE of $3.09$ dBZ and $R^2$ of $0.572$ over a fully convolutional UNet baseline, produces sharper outputs as evidenced by gradient-based sharpness metrics, and provides a gradient-based Token (Re)Distribution attribution method to aid domain interpretation. The results support enhanced data assimilation in numerical weather prediction over the CONUS, while pointing to avenues for future uncertainty-aware diffusion models and 3D radar extensions; code is available on GitHub.

Abstract

We introduce a transformer-based neural network to generate high-resolution (3 km) synthetic radar reflectivity fields at scale from geostationary satellite imagery. This work aims to enhance short-term convective-scale forecasts of high-impact weather events and aid in data assimilation for numerical weather prediction over the United States. Compared to convolutional approaches, which have limited receptive fields, our model achieves improved sharpness and higher accuracy across various composite reflectivity thresholds. Additional case studies over specific atmospheric phenomena support our quantitative findings, while a novel attribution method is introduced to guide domain experts in understanding model outputs.
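
The TL;DR mentions a weighted loss $L_e$ that emphasizes high-reflectivity values. The paper's exact formulation is not reproduced on this page; the sketch below shows the general idea of an exponentially weighted MSE, where the `scale` parameter and the weighting function are illustrative assumptions, not values from the paper.

```python
import numpy as np

def weighted_mse(pred, target, scale=0.02):
    """Reflectivity-weighted MSE: errors at high-dBZ pixels count more.

    The weight grows exponentially with the target reflectivity so that
    rare, high-impact convective cores dominate the loss. `scale` is a
    hypothetical tuning knob, not a value from the paper.
    """
    w = np.exp(scale * target)               # heavier weight at high dBZ
    return float(np.mean(w * (pred - target) ** 2))

# toy 2x2 composite-reflectivity fields (dBZ)
target = np.array([[0.0, 10.0], [40.0, 60.0]])
pred   = np.array([[1.0,  9.0], [35.0, 50.0]])
loss = weighted_mse(pred, target)
```

With a weighting of this kind, an error of a given magnitude at a 60 dBZ pixel contributes more to the loss than the same error at a 0 dBZ pixel, which is the stated purpose of $L_e$.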

Paper Structure

This paper contains 22 sections, 8 equations, 9 figures, and 2 tables.

Figures (9)

  • Figure 1: Slices of an input (col. 1-4) and output prediction with ground truth (col. 5 and 6, respectively). The real-time observations enable forecasters to assess and forecast storm patterns at scale.
  • Figure 2: Cropped and enlarged model output, showing a Northern Plains Derecho in panels (a-c) and Midwest Squall Lines in panels (d-f). Sample RMSE and $\text{R}^2$ values are shown for each case between the ground truth MRMS (col. 1) and model output (col. 2 and 3).
  • Figure 3: Categorical metrics at varying composite reflectivity thresholds for SRViT and the baseline UNet.
  • Figure 4: Kernel density estimation (KDE) of the mean gradient magnitude of composite reflectivity over all test samples. The dashed line represents the mean with standard deviation.
  • Figure 5: Sample min-max normalized gradient magnitude of Token (Re)Distribution for the token of interest (red; shown separately in panels (a) and (b)), overlaid on GOES-16 ABI Channel 7.
  • ...and 4 more figures
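
Figure 4 summarizes sharpness via the mean gradient magnitude of the predicted composite reflectivity. A minimal sketch of that style of metric is below; the finite-difference scheme and field sizes are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def mean_gradient_magnitude(field):
    """Sharpness proxy: mean magnitude of the spatial gradient of a
    composite-reflectivity field. A blurred (over-smoothed) prediction
    has weaker gradients, so larger values indicate sharper output.
    """
    gy, gx = np.gradient(field.astype(float))  # d/dy (rows), d/dx (cols)
    return float(np.mean(np.hypot(gx, gy)))

# a sharp step edge vs. a fully smoothed field with the same mean
sharp = np.zeros((8, 8)); sharp[:, 4:] = 50.0
blurred = np.full((8, 8), 25.0)
```

Evaluated over all test samples, a distribution (e.g., the KDE in Figure 4) of this per-sample statistic makes the sharpness comparison between SRViT and the UNet baseline quantitative.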