A tree interpretation of arc-standard dependency derivation

Zihao Huang, Ai Ka Lee, Jungyeul Park

Abstract

We show that arc-standard derivations for projective dependency trees determine a unique ordered tree representation with surface-contiguous yields and stable lexical anchoring. Each SHIFT, LEFT-ARC, and RIGHT-ARC transition corresponds to a deterministic tree update, and the resulting hierarchical object uniquely determines the original dependency arcs. We further show that this representation characterizes projectivity: a single-headed dependency tree admits such a contiguous ordered representation if and only if it is projective. The proposal is derivational rather than convertive. It interprets arc-standard transition sequences directly as ordered tree construction, rather than transforming a completed dependency graph into a phrase-structure output. For non-projective inputs, the same interpretation can be used in practice via pseudo-projective lifting before derivation and inverse decoding after recovery. A proof-of-concept implementation in a standard neural transition-based parser shows that the mapped derivations are executable and support stable dependency recovery.
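The correspondence described in the abstract can be illustrated with a minimal sketch (not the authors' implementation; the `Node` class and `parse` function are hypothetical names chosen for this example). Stack items are ordered trees anchored at their head word; SHIFT pushes a single-leaf tree, LEFT-ARC nests the second-topmost tree as a left dependent of the top tree, and RIGHT-ARC nests the top tree as a right dependent of the second-topmost tree, so every subtree's yield stays a contiguous substring of the sentence.

```python
class Node:
    """An ordered tree anchored at a head word, with left/right dependent subtrees."""

    def __init__(self, word):
        self.word = word
        self.left = []   # left dependents, in surface order
        self.right = []  # right dependents, in surface order

    def yield_(self):
        # In-order traversal: left dependents, head word, right dependents.
        out = []
        for c in self.left:
            out += c.yield_()
        out.append(self.word)
        for c in self.right:
            out += c.yield_()
        return out


def parse(words, transitions):
    """Run an arc-standard transition sequence, interpreting each transition
    as a deterministic ordered-tree update. Returns the final tree and the
    dependency arcs (head, dependent) it encodes."""
    stack, buffer, arcs = [], list(words), []
    for t in transitions:
        if t == "SHIFT":
            stack.append(Node(buffer.pop(0)))
        elif t == "LEFT-ARC":   # second-top becomes a left dependent of top
            dep, head = stack[-2], stack[-1]
            arcs.append((head.word, dep.word))
            head.left.insert(0, dep)  # closer dependents were attached first
            del stack[-2]
        elif t == "RIGHT-ARC":  # top becomes a right dependent of second-top
            head, dep = stack[-2], stack[-1]
            arcs.append((head.word, dep.word))
            head.right.append(dep)
            stack.pop()
    return stack[0], arcs
```

For example, `parse(["the", "cat", "sleeps"], ["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "LEFT-ARC"])` yields a tree whose leaf yield is the original sentence in surface order and whose arcs are exactly the dependency arcs of the derivation, matching the recoverability claim.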

Paper Structure

This paper contains 13 sections, 3 theorems, 5 equations, 1 figure, 1 table, and 1 algorithm.

Key Result

Theorem 1

A single-headed dependency tree admits a rooted ordered representation of the kind defined above, with leaves in surface order and every subtree spanning a contiguous substring, if and only if the dependency tree is projective.
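The contiguity condition in Theorem 1 can be checked directly: a single-headed dependency tree is projective if and only if every node's yield (the node plus all its descendants) forms a contiguous interval of token positions. The following sketch (function name and head-array encoding are this example's assumptions, not the paper's notation) implements that check.

```python
def is_projective(heads):
    """heads[i] is the head index of token i; the root has head -1.
    Returns True iff every node's yield is a contiguous index interval,
    which by Theorem 1 is equivalent to projectivity."""
    n = len(heads)
    children = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h >= 0:
            children[h].append(i)

    def yield_of(i):
        # Node i together with all of its descendants.
        out = [i]
        for c in children[i]:
            out += yield_of(c)
        return out

    for i in range(n):
        y = sorted(yield_of(i))
        if y != list(range(y[0], y[-1] + 1)):
            return False  # a gap in some subtree's yield: non-projective
    return True
```

For instance, the chain `heads = [1, 2, -1]` ("the" ← "cat" ← "sleeps") is projective, while `heads = [2, 3, -1, 2]` contains crossing arcs and fails the contiguity test.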

Figures (1)

  • Figure 1: A dependency tree and its ordered dependency tree representation

Theorems & Definitions (6)

  • Theorem 1: Projective characterization
  • Proof sketch (Theorem 1)
  • Theorem 2: Transition correspondence
  • Proof sketch (Theorem 2)
  • Theorem 3: Recoverability
  • Proof sketch (Theorem 3)