
Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study

Zhihao Chen, Ying Zhang, Yi Liu, Gelei Deng, Yuekang Li, Yanjun Zhang, Jianting Ning, Leo Yu Zhang, Lei Ma, Zhiqiang Li

Abstract

Third-party skills extend LLM agents with powerful capabilities but often handle sensitive credentials in privileged environments, making leakage risks poorly understood. We present the first large-scale empirical study of this problem, analyzing 17,022 skills (sampled from 170,226 on SkillsMP) using static analysis, sandbox testing, and manual inspection. We identify 520 vulnerable skills with 1,708 issues and derive a taxonomy of 10 leakage patterns (4 accidental and 6 adversarial). We find that (1) leakage is fundamentally cross-modal: 76.3% require joint analysis of code and natural language, while 3.1% arise purely from prompt injection; (2) debug logging is the primary vector, with print and console.log causing 73.5% of leaks due to stdout exposure to LLMs; and (3) leaked credentials are both exploitable (89.6% without privileges) and persistent, as forks retain secrets even after upstream fixes. After disclosure, all malicious skills were removed and 91.6% of hardcoded credentials were fixed. We release our dataset, taxonomy, and detection pipeline to support future research.
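The static-filtering phase described in the abstract combines keyword matching with pattern-based credential detection. A minimal sketch of what such a scanner might look like is below; the pattern names, regexes, and function names are illustrative assumptions, not the authors' actual pipeline.

```python
import re

# Hypothetical credential patterns a static scanner might match against a
# skill's source. Real pipelines use far larger pattern sets; these two are
# illustrative only.
CREDENTIAL_PATTERNS = {
    # AWS access key IDs follow a well-known fixed prefix and length.
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    # Generic "key = '...'" style assignments with a long opaque value.
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"]([A-Za-z0-9+/_\-]{16,})['\"]"
    ),
}

def scan_source(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a skill's source."""
    hits = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for m in pattern.finditer(source):
            hits.append((name, m.group(0)))
    return hits
```

Matches from a pass like this would only be candidates; per the abstract, candidates still go through sandbox testing and manual review before being confirmed as leaks.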


Paper Structure

This paper contains 24 sections, 4 figures, 3 tables.

Figures (4)

  • Figure 1: A real-world credential leakage case discovered in our study: the developer embeds a Base64-encoded client secret directly in the skill's source code, exposing the credential to anyone who installs or inspects the skill.
  • Figure 2: Overview of the methodology. The study proceeds through four phases: (1) dataset collection of 17,022 skills from SkillsMP; (2) static filtering via keyword matching, NL semantic analysis, and AST-based sink detection (3,156 candidates retained); (3) dynamic validation in instrumented sandboxes under benign and adversarial conditions (1,427 flagged); and (4) manual classification by three reviewers into Benign, Vulnerable, and Malicious categories (520 confirmed cases).
  • Figure 3: Distribution of hardcoded credential types.
  • Figure 4: Distribution of exploitation types.
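The Figure 2 caption mentions AST-based sink detection, and the abstract identifies `print` and `console.log` debug logging as the dominant leak vector. A hedged sketch of how a sink detector might flag such calls is shown below; the sink set, the sensitive-name heuristic, and the function name are assumptions for illustration, not the paper's implementation.

```python
import ast

# Logging sinks whose output reaches stdout (and hence the LLM context).
# Only Python's print() is modeled here; a real tool would also cover
# logging calls and JavaScript's console.log.
SINKS = {"print"}
SENSITIVE = ("key", "secret", "token", "password")

def find_leaky_sinks(source: str) -> list[int]:
    """Return line numbers of sink calls whose arguments reference
    credential-like variable names."""
    tree = ast.parse(source)
    flagged = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in SINKS):
            # Collect every variable name appearing anywhere in the call's
            # arguments, including inside f-strings and expressions.
            arg_names = {
                n.id.lower()
                for arg in node.args
                for n in ast.walk(arg)
                if isinstance(n, ast.Name)
            }
            if any(s in name for name in arg_names for s in SENSITIVE):
                flagged.append(node.lineno)
    return flagged
```

This name-based heuristic is deliberately crude: it over-approximates (any variable containing "key" is flagged) and misses aliased values, which is one reason a pipeline like the paper's follows static filtering with dynamic sandbox validation.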