
Federated Inference for Heterogeneous LLM Communication and Collaboration

Zihan Chen, Zeshen Li, Howard H. Yang, Tony Q. S. Quek, Jihong Park

Abstract

Given the limited performance and efficiency of on-device Large Language Models (LLMs), collaboration among multiple LLMs enables desirable performance enhancements, in which data, tokens, and model weights can be shared across LLMs. This process is constrained by task-oriented QoS demands, privacy requirements, and inherent system heterogeneity. In view of these challenges, and to fully exploit on-device inference capabilities, this position paper presents a novel federated inference framework, termed federated refinement (\texttt{FedRefine}). The framework introduces a new paradigm in which heterogeneous LLMs collaboratively perform inference by communicating KV caches in a privacy-preserving manner. Numerical results are provided to highlight the advantages of \texttt{FedRefine}, and several promising topics are outlined for future research. By exploring LLM-native communication, we aim to offer a new paradigm for this broad area.


Paper Structure

This paper contains 10 sections, 4 equations, and 3 figures.

Figures (3)

  • Figure 1: Illustration of unidirectional and bidirectional cache communication.
  • Figure 2: A depiction of the federated refinement framework in a heterogeneous multi-LLM system.
  • Figure 3: Performance evaluation of the proposed collaborative inference framework. "KV" and "Token" denote collaborative protocols via C2C and T2T transmission, respectively. "Original" refers to transmitting raw queries without privacy protection, while "Rephrased" indicates the use of privacy-preserving semantically rewritten queries.