A Multimodal Vision Transformer-based Modeling Framework for Prediction of Fluid Flows in Energy Systems

Kiran Yalamanchi, Shivam Barwey, Ibrahim Jarrah, Pinaki Pal

Abstract

Computational fluid dynamics (CFD) simulations of complex fluid flows in energy systems are prohibitively expensive due to strong nonlinearities and multiscale-multiphysics interactions. In this work, we present a transformer-based modeling framework for prediction of fluid flows, and demonstrate it for high-pressure gas injection phenomena relevant to reciprocating engines. The approach employs a hierarchical Vision Transformer (SwinV2-UNet) architecture that processes multimodal flow datasets from multi-fidelity simulations. The model architecture is conditioned on auxiliary tokens explicitly encoding the data modality and time increment. Model performance is assessed on two different tasks: (1) spatiotemporal rollouts, where the model autoregressively predicts the flow state at future times; and (2) feature transformation, where the model infers unobserved fields/views from observed fields/views. We train separate models on multimodal datasets generated from in-house CFD simulations of argon jet injection into a nitrogen environment, encompassing multiple grid resolutions, turbulence models, and equations of state. The resulting data-driven models learn to generalize across resolutions and modalities, accurately forecasting the flow evolution and reconstructing missing flow-field information from limited views. This work demonstrates how large vision transformer-based models can be adapted to advance predictive modeling of complex fluid flow systems.
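The spatiotemporal rollout task described above can be sketched in a few lines: the model takes the current flow state plus conditioning tokens (data modality and time increment) and its prediction is fed back as the next input. The following is a minimal illustrative sketch only, with a placeholder linear operator standing in for the actual SwinV2-UNet forward pass; all function and variable names here are hypothetical, not from the paper's implementation.

```python
import numpy as np

def predict_step(state, modality_id, dt, weights):
    # Placeholder for the learned model's forward pass: in the paper this
    # would be a SwinV2-UNet conditioned on auxiliary tokens encoding the
    # data modality and the time increment. Here, a toy linear map.
    cond = 0.01 * (modality_id + dt)
    return state @ weights + cond

def rollout(initial_state, modality_id, dt, weights, n_steps):
    """Autoregressive rollout: each predicted state becomes the next input."""
    states = [initial_state]
    for _ in range(n_steps):
        states.append(predict_step(states[-1], modality_id, dt, weights))
    return states

rng = np.random.default_rng(0)
state0 = rng.standard_normal((4, 4))   # toy 2D flow field (e.g., density on a 4x4 grid)
W = 0.9 * np.eye(4)                    # stand-in for the trained operator
trajectory = rollout(state0, modality_id=0, dt=0.01, weights=W, n_steps=5)
```

The feature-transformation task would use the same conditioned forward pass, but mapping an observed field/view to an unobserved one within a single step rather than advancing in time.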

Paper Structure

This paper contains 13 sections, 1 equation, 14 figures, and 1 table.

Figures (14)

  • Figure 1: Schematic of the SwinV2-UNet architecture used for spatiotemporal prediction.
  • Figure 2: Spatiotemporal prediction results for longitudinal projected flow variables. Each row corresponds to a physical variable ($\rho$, $u$, $v$, $w$). Columns show: input at $t=1.00$ s, ground-truth target at $t=1.01$ s, baseline model (trained with single-step rollout) prediction at $t=1.01$ s, and local prediction error at $t=1.01$ s.
  • Figure 3: Spatiotemporal prediction results for longitudinal slice variables. Layout as in Figure 2, with rows for density and velocity components ($u$, $v$, $w$) on planar slices.
  • Figure 4: Spatiotemporal prediction results for density on transverse planes at $z=2$ mm and $z=10$ mm. Columns show input, target, baseline model prediction, and local prediction error.
  • Figure 5: Feature transformation results for Case 1: longitudinal projected density to velocity components ($u$, $v$, $w$) using Euclidean loss.
  • ...and 9 more figures