K$α$LOS finds Consensus: A Meta-Algorithm for Evaluating Inter-Annotator Agreement in Complex Vision Tasks

David Tschirschwitz, Volker Rodehorst

Abstract

Progress in object detection benchmarks is stagnating, limited not by architectures but by the inability to distinguish model improvements from label noise. To restore trust in benchmarking, the field requires rigorous quantification of annotation consistency to ensure the reliability of evaluation data. However, standard statistical metrics fail to handle the instance correspondence problem inherent to vision tasks. Furthermore, validating new agreement metrics remains circular because no objective ground truth for agreement exists, forcing reliance on unverifiable heuristics. We propose K$α$LOS (KALOS), a unified meta-algorithm that generalizes the "Localization First" principle to standardize dataset quality evaluation. By resolving spatial correspondence before assessing agreement, our framework transforms complex spatio-categorical problems into nominal reliability matrices. Unlike prior heuristic implementations, K$α$LOS employs a principled, data-driven configuration: by statistically calibrating the localization parameters to the inherent agreement distribution, it generalizes to diverse tasks ranging from bounding boxes to volumetric segmentation and pose estimation. This standardization enables granular diagnostics beyond a single score, including annotator vitality, collaboration clustering, and localization sensitivity. To validate this approach, we introduce a novel, empirically derived noise generator. Where prior validations relied on uniform error assumptions, our controllable testbed models complex, non-isotropic human variability. This provides evidence of the metric's properties and establishes K$α$LOS as a robust standard for distinguishing signal from noise in modern computer vision benchmarks.
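A minimal sketch of the "Localization First" principle for two raters may help fix ideas. It assumes greedy IoU matching as the correspondence solver (the best performer in Figure 4) and hypothetical helper names; it is an illustration, not the authors' reference implementation:

```python
from itertools import product

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def greedy_match(boxes_a, boxes_b, tau):
    """Pair boxes across two annotators greedily, best overlap first."""
    cand = []
    for (i, a), (j, b) in product(enumerate(boxes_a), enumerate(boxes_b)):
        s = iou(a, b)
        if s >= tau:                      # localization threshold tau
            cand.append((s, i, j))
    cand.sort(reverse=True)
    used_a, used_b, pairs = set(), set(), []
    for _, i, j in cand:
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs, used_a, used_b

def reliability_matrix(boxes_a, labels_a, boxes_b, labels_b, tau=0.5):
    """Nominal two-rater matrix; unmatched instances become 'no_object'."""
    pairs, used_a, used_b = greedy_match(boxes_a, boxes_b, tau)
    rows = [(labels_a[i], labels_b[j]) for i, j in pairs]
    rows += [(labels_a[i], "no_object")
             for i in range(len(boxes_a)) if i not in used_a]
    rows += [("no_object", labels_b[j])
             for j in range(len(boxes_b)) if j not in used_b]
    return rows
```

Once correspondence is resolved, the rows form an ordinary nominal reliability matrix: unmatched instances enter as existence disagreement via no_object rather than being dropped as missing data, and the matrix can be scored with any chance-corrected coefficient such as Krippendorff's α.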

Paper Structure

This paper contains 37 sections, 24 equations, 30 figures, and 6 tables.

Figures (30)

  • Figure 1: Visual Abstract.
  • Figure 2: The KαLOS Framework. (1) Input: The pipeline processes active disagreement between raters. (2) Principled Configuration: Data-driven calibration selects the optimal distance function ($d_{\text{loc}}$) and threshold ($\tau^*$) by maximizing the statistical separation between observed signal ($D_o$) and chance noise ($D_e$). (3) Execution: A solver transforms continuous spatial data into a discrete reliability matrix. Crucially, the completeness assumption encodes missed instances as no_object, treating them as existence disagreement (false positives/negatives) rather than missing data. (4) Downstream Diagnostics: This standardization enables rigorous quality checks, visualizing Localization Sensitivity, Class Difficulty, and Annotator Vitality ($V_r$). (A calibration sketch follows this figure list.)
  • Figure 3: Performance of different correspondence solvers averaged across LVIS, TexBiG, and VinDr-CXR. A simple filtered Rand Index variation was used due to the constrained clustering setup, where limited and partially overlapping clusters, together with non-exhaustive ground-truth coverage, rendered more advanced evaluation metrics infeasible or unreliable for assessing correspondence accuracy. (A sketch of this filtered index follows the list.)
  • Figure 4: F1-score comparison; the greedy solver performs best.
  • Figure 5: Failure-type comparison across correspondence solvers and noise magnitudes using raincloud plots. TP shows correctly matched pairs, Missed Opportunities are correct pairs the solver failed to match, and FP captures spurious matches where no true pair exists. Cuckoo Eggs indicate cases where a correct match was displaced by an incorrect one, effectively combining a FN with a FP and revealing when class disagreement overrides the $\psi_{\text{soft}}$ matching function (Eq. \ref{eq:category_lenient}). (A sketch of this taxonomy follows the list.)
  • ...and 25 more figures
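The data-driven calibration in step (2) of Figure 2 can be read as a one-dimensional search over the localization threshold. The sketch below reuses `reliability_matrix` from the earlier snippet and applies Krippendorff-style definitions of $D_o$ and $D_e$; the threshold grid and the separation criterion $D_e - D_o$ are illustrative assumptions, not the paper's exact procedure:

```python
from collections import Counter

def disagreements(rows):
    """Observed (D_o) and chance (D_e) disagreement of a nominal two-rater matrix."""
    d_o = sum(a != b for a, b in rows) / len(rows)
    values = [v for row in rows for v in row]
    n, counts = len(values), Counter(values)
    d_e = sum(c1 * c2 for v1, c1 in counts.items()
              for v2, c2 in counts.items() if v1 != v2) / (n * (n - 1))
    return d_o, d_e

def calibrate_tau(boxes_a, labels_a, boxes_b, labels_b, grid=None):
    """Pick tau* maximizing the gap between chance and observed disagreement."""
    grid = grid or [t / 20 for t in range(1, 20)]      # tau in (0, 1)
    def separation(tau):
        rows = reliability_matrix(boxes_a, labels_a, boxes_b, labels_b, tau)
        d_o, d_e = disagreements(rows)
        return d_e - d_o   # alpha = 1 - D_o / D_e is a natural alternative criterion
    return max(grid, key=separation)
```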
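The "filtered Rand Index variation" of Figure 3 plausibly reduces to the standard Rand index evaluated only on item pairs covered by the non-exhaustive ground truth. A hedged sketch, with dict-based cluster assignments as an assumed interface; the paper's exact filtering rule may differ:

```python
from itertools import combinations

def filtered_rand_index(pred_clusters, true_clusters):
    """Rand index restricted to items the ground truth actually covers.

    Both arguments map item id -> cluster id; items absent from
    true_clusters are skipped because the ground truth is non-exhaustive.
    """
    covered = [x for x in pred_clusters if x in true_clusters]
    agree = total = 0
    for a, b in combinations(covered, 2):
        same_pred = pred_clusters[a] == pred_clusters[b]
        same_true = true_clusters[a] == true_clusters[b]
        agree += same_pred == same_true    # pair treated consistently?
        total += 1
    return agree / total if total else 1.0
```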
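Finally, the failure taxonomy of Figure 5 amounts to a set comparison between the solver's matches and the true pairs. One plausible reading, with hypothetical names: a Cuckoo Egg is a missed true pair whose endpoint was consumed by a spurious match (the fused FN + FP the caption describes), and the remaining false negatives are Missed Opportunities:

```python
def classify_matches(pred_pairs, true_pairs):
    """Bucket solver output into TP / FP / Missed Opportunity / Cuckoo Egg."""
    pred, true = set(pred_pairs), set(true_pairs)
    tp, fp, fn = pred & true, pred - true, true - pred
    fp_a = {i for i, _ in fp}            # endpoints consumed by wrong matches
    fp_b = {j for _, j in fp}
    cuckoo = {(i, j) for i, j in fn if i in fp_a or j in fp_b}
    return {"TP": tp, "FP": fp,
            "Missed Opportunity": fn - cuckoo, "Cuckoo Egg": cuckoo}
```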