MuViS: Multimodal Virtual Sensing Benchmark

Jens U. Brandt, Noah C. Puetz, Jobel Jose George, Niharika Vinay Kumar, Elena Raponi, Marc Hilbert, Thomas Bäck, Thomas Bartz-Beielstein

Abstract

Virtual sensing aims to infer hard-to-measure quantities from accessible measurements and is central to perception and control in physical systems. Despite rapid progress, from first-principles and hybrid models to modern data-driven methods, research remains siloed, leaving no established default approach that transfers across processes, modalities, and sensing configurations. We introduce MuViS, a domain-agnostic benchmarking suite for multimodal virtual sensing that consolidates diverse datasets into a unified interface for standardized preprocessing and evaluation. Using this framework, we benchmark established approaches spanning gradient-boosted decision trees and deep neural network (NN) architectures, and show that none of them provides a universal advantage, underscoring the need for generalizable virtual sensing architectures. MuViS is released as an open-source, extensible platform for reproducible comparison and future integration of new datasets and model classes.

Paper Structure

This paper contains 11 sections, 4 equations, 3 figures, 1 table.

Figures (3)

  • Figure 1: We evaluate standard ML architectures across diverse sensing domains, where models must map multimodal time-series inputs ($x_1, x_2$) to scalar virtual measurements ($y$).
  • Figure 2: Overview of the six benchmark datasets. Each sub-panel displays a distinct virtual sensing task, showcasing the diversity in target distributions (left) and temporal feature characteristics (right). The collection spans varied feature dimensions ($D$), sequence lengths ($T$), and sampling intervals ($\Delta t$), reflecting the heterogeneous nature of real-world sensing applications.
  • Figure 3: Critical difference diagram. The non-significant Friedman test indicates that no architecture is statistically superior.