FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model
Feijie Wu, Zitao Li, Yaliang Li, Bolin Ding, Jing Gao
TL;DR
FedBiOT tackles privacy-preserving, resource-efficient LLM fine-tuning in federated settings by decomposing a compressed server-side model into an emulator and an adapter, connected through a bi-level optimization that coordinates server-side distillation with client-side adaptation. Clients fine-tune only a LoRA-based adapter on their local data, while the server aligns the emulator with the full model on a public dataset, preserving performance at substantially reduced compute and communication cost. The approach is validated on LLaMA-2-7B across math, code-generation, and QA tasks under both i.i.d. and non-i.i.d. data splits, outperforming Offsite-tuning and FedOT in many scenarios and remaining stable as the dropout rate varies. The work offers a practical path to privacy-preserving, scalable federated fine-tuning of closed-source LLMs, delivering meaningful reductions in resource usage while maintaining competitive downstream performance.
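To make the emulator/adapter split concrete, below is a minimal PyTorch sketch, not the authors' implementation: the lower transformer blocks are compressed into an emulator by simple layer dropping, and the top blocks become the adapter with LoRA wrapped around their attention projections. The block layout (`block.attn.q_proj`, `block.attn.v_proj`), the `keep_every` compression scheme, and the helper names are illustrative assumptions; the paper's actual emulator construction and LoRA placement may differ.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # only the LoRA factors train
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

def split_model(blocks: nn.ModuleList, n_adapter: int = 2, keep_every: int = 2):
    """Split a stack of transformer blocks into a compressed emulator
    (every `keep_every`-th lower block) and a LoRA-wrapped adapter (top blocks)."""
    lower, top = blocks[:-n_adapter], blocks[-n_adapter:]
    emulator = nn.Sequential(*lower[::keep_every])       # layer-drop compression
    for block in top:                                    # assumed layout: block.attn.q_proj / v_proj
        block.attn.q_proj = LoRALinear(block.attn.q_proj)
        block.attn.v_proj = LoRALinear(block.attn.v_proj)
    adapter = nn.Sequential(*top)
    return emulator, adapter
```

Only the LoRA factors in the adapter require gradients, so a client's trainable (and communicated) parameter count stays a small fraction of the full model.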
Abstract
Large language models (LLMs) achieve impressive performance on many domain-specific tasks after fine-tuning on appropriate data. However, much of this domain-specific data is privately distributed across multiple owners, which raises the question of how to fine-tune LLMs under federated learning (FL). At the same time, FL clients with limited computation and communication capacity struggle to fine-tune an LLM effectively. To this end, we introduce FedBiOT, a resource-efficient approach to LLM fine-tuning in FL. In our method, the server generates a compressed LLM and aligns its performance with the full model, while the clients fine-tune a lightweight yet important part of the compressed model, referred to as an adapter. Note that because the server has no access to the clients' private data, the data it uses for alignment follows a different distribution from the data the clients use for fine-tuning. We formulate this as a bi-level optimization problem to minimize the negative effect of this data discrepancy and derive the update rules for the server and the clients. Extensive experiments on LLaMA-2 show empirically that the adapter performs exceptionally well when reintegrated into the global LLM. The results also indicate that FedBiOT significantly reduces resource consumption compared to existing baselines while achieving comparable performance.
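The bi-level structure can be sketched as two alternating updates: the server distills the emulator so that emulator-plus-adapter mimics the full model on public data, and each client fine-tunes only the LoRA parameters of the adapter on private data before the server averages them. The sketch below, under stated assumptions, illustrates this loop; it does not reproduce the paper's exact update rules. The KL-based distillation loss, the FedAvg-style aggregation, the batch keys `input_ids`/`labels`, and the assumption that `emulator`, `adapter`, and `full_model` are callable modules returning logits are all illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def server_align_emulator(emulator, adapter, full_model, public_loader, steps, lr=1e-5):
    """Server-side step: distill the emulator so emulator+adapter mimics
    the full model's next-token distribution on public data."""
    opt = torch.optim.AdamW(emulator.parameters(), lr=lr)
    for _, batch in zip(range(steps), public_loader):
        with torch.no_grad():
            teacher = F.softmax(full_model(batch["input_ids"]), dim=-1)
        student = F.log_softmax(adapter(emulator(batch["input_ids"])), dim=-1)
        loss = F.kl_div(student, teacher, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()

def client_finetune_adapter(emulator, adapter, local_loader, steps, lr=1e-4):
    """Client-side step: fine-tune only the LoRA parameters on private data."""
    local = copy.deepcopy(adapter)
    trainable = [p for p in local.parameters() if p.requires_grad]   # LoRA factors only
    opt = torch.optim.AdamW(trainable, lr=lr)
    for _, batch in zip(range(steps), local_loader):
        with torch.no_grad():                       # the emulator stays frozen on clients
            hidden = emulator(batch["input_ids"])
        logits = local(hidden)
        loss = F.cross_entropy(logits.transpose(1, 2), batch["labels"])
        opt.zero_grad(); loss.backward(); opt.step()
    return {k: v.detach() for k, v in local.state_dict().items() if "lora_" in k}

def aggregate_lora(updates):
    """FedAvg-style averaging of the clients' LoRA weights."""
    return {k: torch.stack([u[k] for u in updates]).mean(dim=0) for k in updates[0]}

# One federated round (orchestration sketch):
#   server_align_emulator(emulator, adapter, full_model, public_loader, steps=100)
#   updates = [client_finetune_adapter(emulator, adapter, dl, steps=50) for dl in client_loaders]
#   adapter.load_state_dict(aggregate_lora(updates), strict=False)
```

Because only the averaged LoRA weights travel between server and clients, and because the clients never hold the full model, both communication volume and client-side memory stay small relative to full-model fine-tuning.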
