
Measuring LLM Trust Allocation Across Conflicting Software Artifacts

Noshin Ulfat, Ahsanul Ameen Sabit, Soneya Binta Hossain

Abstract

LLM-based software engineering assistants fail not only by producing incorrect outputs, but also by allocating trust to the wrong artifact when code, documentation, and tests disagree. Existing evaluations focus mainly on downstream outcomes and therefore cannot reveal whether a model recognized degraded evidence, identified the unreliable source, or calibrated its trust across artifacts. We present TRACE (Trust Reasoning over Artifacts for Calibrated Evaluation), a framework that elicits structured artifact-level trust traces over Javadoc, method signatures, implementations, and test prefixes under blind perturbations. Using 22,339 valid traces from seven models on 456 curated Java method bundles, we evaluate per-artifact quality assessment, inconsistency detection, affected artifact attribution, and source prioritization. Across all models, quality penalties are largely localized to the perturbed artifact and increase with severity, but sensitivity is asymmetric across artifact types: documentation bugs induce a substantially larger heavy-to-subtle gap than implementation faults (0.152-0.253 vs. 0.049-0.123). Models detect explicit documentation bugs well (67-94%) and contradictions between Javadoc and implementation at 50-91%, yet show a systematic blind spot when only the implementation drifts while the documentation remains plausible, with detection dropping by 7-42 percentage points. Confidence is poorly calibrated for six of seven models. These findings suggest that current LLMs are better at auditing natural-language specifications than at detecting subtle code-level drift, motivating explicit artifact-level trust reasoning before correctness-critical downstream use.
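
The abstract refers to structured artifact-level trust traces but does not define their shape here. A minimal sketch of what one such trace could look like, assuming a record with per-artifact quality scores plus detection, attribution, prioritization, and confidence fields (all names below are hypothetical illustrations, not TRACE's actual schema):

```python
from dataclasses import dataclass

# Hypothetical record for one trust trace; field names are illustrative
# assumptions, not the paper's actual output format.
@dataclass
class ArtifactAssessment:
    artifact: str          # one of: "javadoc", "signature", "mut", "test_prefix"
    quality_score: float   # per-artifact quality score, assumed in [0, 1]

@dataclass
class TrustTrace:
    bundle_id: str                        # identifier of a curated Java method bundle
    assessments: list[ArtifactAssessment] # per-artifact quality assessment
    inconsistency_detected: bool          # did the model flag a cross-artifact conflict?
    affected_artifacts: list[str]         # attribution: which artifacts the conflict touches
    preferred_source: str                 # source prioritization: the artifact trusted most
    confidence: float                     # self-reported confidence, assumed in [0, 1]

# Example trace for a bundle with a subtle implementation (MUT) drift:
trace = TrustTrace(
    bundle_id="bundle-001",
    assessments=[
        ArtifactAssessment("javadoc", 0.90),
        ArtifactAssessment("signature", 0.95),
        ArtifactAssessment("mut", 0.40),
        ArtifactAssessment("test_prefix", 0.85),
    ],
    inconsistency_detected=True,
    affected_artifacts=["mut"],
    preferred_source="javadoc",
    confidence=0.7,
)
```

Structuring the elicited output this way is what makes the four evaluation axes named in the abstract (quality assessment, inconsistency detection, attribution, prioritization) scorable per trace.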

Figures (10)

  • Figure 1: Overview of TRACE Pipeline.
  • Figure 2: Mean input-quality scores by dataset variant and model across five artifact dimensions (Javadoc, Signature, MUT, Test Prefix, Overall), revealing baseline calibration differences and the largest perturbation-induced drops per artifact.
  • Figure 3: Score changes from base to perturbed datasets, reported as delta from base ($\Delta = \text{perturbed} - \text{base}$) by metric, perturbation, and model (see the sketch after this list). Larger negative deltas are concentrated in the perturbed artifact, indicating localized sensitivity with limited spillover.
  • Figure 4: Severity breakdown for documentation bugs, MUT bugs, and MUT+Doc contradictions (grouped bars: mean overall quality score per model and severity tier; error bars: $\pm$1 std). All models preserve severity monotonicity, with larger gaps for documentation than code perturbations.
  • Figure 5: Inconsistency detection rates for Javadoc-MUT conflicts using five-metric comparisons. Panels by perturbation type, with bars per model and three severity tiers per bar group; the false-positive baseline is shown as a dashed line. Net detection gain above this baseline is the key quantity (see the sketch after this list).
  • ...and 5 more figures
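
The two derived quantities highlighted in the Figure 3 and Figure 5 captions are simple differences against a reference value. A minimal sketch in Python, assuming per-metric score dictionaries and scalar detection rates (function names and the example numbers are illustrative, not results or APIs from the paper):

```python
def delta_from_base(perturbed: dict[str, float], base: dict[str, float]) -> dict[str, float]:
    """Per-metric delta from base (perturbed - base), the quantity plotted in Figure 3."""
    return {metric: perturbed[metric] - base[metric] for metric in base}

def net_detection_gain(detection_rate: float, false_positive_baseline: float) -> float:
    """Detection rate minus the false-positive baseline, the key quantity in Figure 5."""
    return detection_rate - false_positive_baseline

# Illustrative numbers only (not values from the paper):
print(delta_from_base({"overall": 0.71}, {"overall": 0.83}))  # ~{'overall': -0.12}, up to float rounding
print(net_detection_gain(0.78, 0.22))                         # ~0.56
```

Because both quantities subtract a reference, a model is credited in Figure 5 only for detections beyond what it would already flag on unperturbed bundles.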