Discrete Prototypical Memories for Federated Time Series Foundation Models

Liwei Deng, Qingxiang Liu, Xinhe Niu, Shengchao Chen, Sheng Sun, Yuankai Wu, Guodong Long, Yuxuan Liang

Abstract

Leveraging Large Language Models (LLMs) as federated learning (FL)-based time series foundation models offers a promising way to transfer the generalization capabilities of LLMs to time series data while keeping private data local. However, the semantic misalignment between time series data and the text-centric latent space of existing LLMs often degrades performance. Meanwhile, the parameter-sharing mechanism in existing FL methods models heterogeneous cross-domain time series data in a unified continuous latent space, which contradicts the fact that time series semantics frequently manifest as discrete and recurring regimes. To address these limitations, we propose FeDPM, a federated framework for time series foundation models based on discrete prototypical memories. Specifically, we learn local prototypical memory priors for intra-domain time series data. We then align cross-domain memories to promote a unified discrete latent space and introduce a domain-specific memory update mechanism to balance shared and personalized prototypical knowledge. Extensive experiments demonstrate the efficiency and effectiveness of FeDPM. The code is publicly available at https://anonymous.4open.science/r/FedUnit-64D1.
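
A minimal sketch may help make the two core ideas in the abstract concrete: quantizing patch embeddings against a discrete prototypical memory, and balancing shared versus personalized memories during federated updates. The PyTorch snippet below is our own illustration, not the paper's implementation; `quantize_patches`, the straight-through estimator, `update_local_memory`, and the mixing weight `alpha` are all assumptions, written in the style of standard vector-quantization memory banks.

```python
import torch

def quantize_patches(patch_emb: torch.Tensor, memory: torch.Tensor):
    """Map continuous patch embeddings to their nearest discrete prototypes.

    patch_emb: (B, N, D) patch embeddings from a client's local encoder.
    memory:    (K, D) learnable prototypical memory bank (K prototypes).
    Returns quantized embeddings (B, N, D) and discrete codes (B, N).
    """
    # Pairwise L2 distances between every patch and every prototype: (B, N, K)
    dists = torch.cdist(patch_emb, memory.unsqueeze(0))
    codes = dists.argmin(dim=-1)        # discrete regime index per patch
    quantized = memory[codes]           # nearest prototypes, (B, N, D)
    # Straight-through estimator: forward pass uses prototypes,
    # gradients flow back to the encoder as if quantization were identity.
    return patch_emb + (quantized - patch_emb).detach(), codes

def update_local_memory(local_mem: torch.Tensor,
                        global_mem: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Blend the server-aggregated (shared) memory with the client's
    (personalized) memory; alpha trades off shared vs. domain-specific
    knowledge. A hypothetical stand-in for the paper's update mechanism.
    """
    return alpha * global_mem + (1.0 - alpha) * local_mem
```

Under this sketch, each client quantizes its patches against its local memory during training, the server aggregates client memories into a shared bank, and each client then blends rather than overwrites its own bank, which is one simple way to realize the balance between shared and personalized prototypical knowledge that the abstract describes.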

Paper Structure

This paper contains 38 sections, 5 equations, 5 figures, 10 tables, and 1 algorithm.

Figures (5)

  • Figure 1: Ablation study of Time-FFM by replacing the frozen LLM backbone with trainable Transformer layers or FC layers, on (a) forecasting MSE and (b) number of parameters (detailed settings and results in Appendix \ref{Ablation_Time-FFM}). (c) Performance comparison between our proposed FeDPM and FFTS.
  • Figure 2: The overall architecture of FeDPM.
  • Figure 3: Model efficiency comparison on ETTh1 ($F_i = 96$) in terms of forecasting MSE, training time, and number of parameters.
  • Figure 4: Visualization of prototypes on the Weather dataset. (a) Input patches ($P_n = 4$) and their corresponding representative prototypes in the time domain, where thick lines denote prototypes and thin lines denote input patches. (b) Patch representations and prototype embeddings projected into a shared latent space using t-SNE.
  • Figure 5: Hyperparameter Sensitivity Analysis. We evaluate the effects of five key hyperparameters across four datasets under two forecasting horizons, $F_i \in \{96, 192\}$.
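
Relatedly, the joint projection in Figure 4(b) can be reproduced with off-the-shelf t-SNE by embedding patch representations and prototype embeddings together so they share one 2-D space. The sketch below is illustrative only; the array names, shapes, and random placeholder data are our assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical inputs: patch representations (N, D) and prototype embeddings (K, D)
patches = np.random.randn(500, 64)
prototypes = np.random.randn(32, 64)

# Embed both sets jointly so they land in one shared 2-D space, as in Figure 4(b)
joint = np.vstack([patches, prototypes])
coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(joint)
patch_2d, proto_2d = coords[: len(patches)], coords[len(patches):]
```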