Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity

Yuji Byun, Jaeho Lee

TL;DR

This work tackles federated fine-tuning of large language models with low-rank adaptation (LoRA) by addressing instability caused by rank heterogeneity across clients. It identifies zero-padding as the culprit that dilutes high-quality information during aggregation and introduces a replication-based padding strategy that preserves valuable updates from high-rank clients. The proposed approach, combined with a loss-based mechanism to allocate high ranks, achieves faster convergence and competitive predictive performance while incurring no extra communication cost. The results on DistilBERT and ALBERT with AG's News and DBpedia demonstrate improved efficiency and robustness in federated LoRA settings, highlighting a practical path toward resource-aware, scalable federated fine-tuning.
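
A minimal sketch of what such a loss-based rank allocation could look like (an illustrative assumption, not the authors' exact rule: the function name allocate_ranks, the rank values, and the choice to give the high rank to the lowest-loss clients are ours):

    # Hypothetical sketch of loss-based rank allocation in federated LoRA.
    # Assumption: clients with the lowest recent loss receive the high rank;
    # the paper's precise criterion may differ.
    def allocate_ranks(client_losses, r_high=16, r_low=4, num_high=2):
        """client_losses: dict mapping client id -> recent training loss.
        Returns a dict mapping client id -> LoRA rank for the next round."""
        by_loss = sorted(client_losses, key=client_losses.get)  # ascending loss
        high_rank_clients = set(by_loss[:num_high])
        return {cid: (r_high if cid in high_rank_clients else r_low)
                for cid in client_losses}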

Abstract

Low-rank adaptation (LoRA) offers an efficient alternative to full-weight adaptation in federated fine-tuning of language models, significantly reducing computational costs. By adjusting ranks for each client, federated LoRA enables flexible resource allocation. However, we observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy, which dilutes information from high-rank clients during model aggregation. To address this issue, we propose a replication-based padding strategy that better retains valuable information from clients with high-quality data. Empirically, this approach accelerates convergence and enhances the global model's predictive performance.
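
A minimal sketch of the two aggregation strategies, assuming each client holds LoRA factors B (d x r) and A (r x k) and the server averages them after padding to the maximum rank r_max. The replication variant below rescales the replicated components so the padded product still equals B @ A; this rescaling and the function names are our assumptions, not the paper's exact construction:

    import numpy as np

    def pad_zero(B, A, r_max):
        # Conventional zero-padding: extra rank slots are filled with zeros,
        # which dilutes high-rank clients' components under later averaging.
        d, r = B.shape
        k = A.shape[1]
        B_pad = np.zeros((d, r_max)); B_pad[:, :r] = B
        A_pad = np.zeros((r_max, k)); A_pad[:r, :] = A
        return B_pad, A_pad

    def pad_replicate(B, A, r_max):
        # Replication-based padding (sketch): reuse existing rank-1 components
        # to fill the extra slots, rescaled so that B_pad @ A_pad == B @ A.
        r = B.shape[1]
        idx = np.resize(np.arange(r), r_max)        # cycle through components
        counts = np.bincount(idx, minlength=r)      # how often each is reused
        B_pad = B[:, idx]
        A_pad = A[idx, :] / counts[idx][:, None]    # rescale to preserve B @ A
        return B_pad, A_pad

    def aggregate(client_factors, r_max, pad):
        # FedAvg-style uniform averaging of the padded factors on the server.
        Bs, As = zip(*(pad(B, A, r_max) for B, A in client_factors))
        return np.mean(Bs, axis=0), np.mean(As, axis=0)

Under this sketch, a server round would call aggregate(client_factors, r_max, pad_zero) or aggregate(client_factors, r_max, pad_replicate) to form the global LoRA factors under the respective padding rule.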

Paper Structure

This paper contains 25 sections, 5 equations, 4 figures, and 2 tables.

Figures (4)

  • Figure 1: A visual comparison of two strategies for aggregating rank-heterogeneous LoRA updates. Top: Zero-padding. Bottom: Replication (proposed).
  • Figure 2: Test accuracy of DistilBERT (left two panels) and ALBERT (right two panels) on the AG's News (first and third) and DBpedia (second and fourth) datasets.
  • Figure 3: Test accuracy as a function of the proportion of high-rank clients, with results shown for 10%, 20%, and 50% high-rank clients from left to right.
  • Figure 4: Comparison of model performance based on rank allocation.