Vaporetto: Efficient Japanese Tokenization Based on Improved Pointwise Linear Classification

Koichi Akabe, Shunsuke Kanda, Yusuke Oda, Shinsuke Mori

Abstract

This paper proposes an approach to improve the runtime efficiency of Japanese tokenization based on the pointwise linear classification (PLC) framework, which formulates the whole tokenization process as a sequence of linear classification problems. Our approach optimizes tokenization by leveraging the characteristics of the PLC framework and the task definition. Our approach involves (1) composing multiple classifications into array-based operations, (2) efficient feature lookup with memory-optimized automata, and (3) three orthogonal pre-processing methods for reducing actual score calculation. Thus, our approach makes the tokenization speed 5.7 times faster than the current approach based on the same model without decreasing tokenization accuracy. Our implementation is available at https://github.com/daac-tools/vaporetto under the MIT or Apache-2.0 license.
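To make the PLC formulation concrete, below is a minimal sketch (not the Vaporetto implementation) of pointwise linear classification for word-boundary detection: each character boundary is scored independently by summing the weights of the character $n$-grams around it within a window of $W$ characters, and a positive score predicts a token boundary. The feature keys, window size, and weight values here are illustrative assumptions; a trained linear model would supply the actual weights.

```rust
use std::collections::HashMap;

/// Score one boundary i (between chars[i-1] and chars[i]) by summing the
/// weights of all character n-grams that start within the window of W
/// characters around the boundary. Features are keyed by (n-gram, start
/// position relative to the boundary), mirroring a pointwise linear classifier.
fn boundary_score(
    chars: &[char],
    i: usize,
    weights: &HashMap<(String, isize), f64>,
    w: usize,
    n_max: usize,
) -> f64 {
    let start = i.saturating_sub(w);
    let end = (i + w).min(chars.len());
    let mut score = 0.0;
    for n in 1..=n_max {
        if end < n {
            continue;
        }
        for s in start..=(end - n) {
            let ngram: String = chars[s..s + n].iter().collect();
            let rel = s as isize - i as isize; // start position relative to the boundary
            if let Some(wt) = weights.get(&(ngram, rel)) {
                score += *wt;
            }
        }
    }
    score
}

fn main() {
    // Toy weights (illustrative only); a real model learns these from data.
    let mut weights = HashMap::new();
    weights.insert(("界".to_string(), -1), 1.2); // unigram "界" ending right at the boundary
    weights.insert(("世界".to_string(), -2), -0.8); // bigram "世界" ending right at the boundary

    let chars: Vec<char> = "全世界が".chars().collect();
    // A boundary is predicted between chars[i-1] and chars[i] when the score is positive.
    for i in 1..chars.len() {
        let s = boundary_score(&chars, i, &weights, 3, 3);
        println!("boundary {}: score {:.2} -> {}", i, s, if s > 0.0 { "split" } else { "keep" });
    }
}
```

Scoring every boundary this way requires enumerating and looking up many overlapping $n$-grams per boundary; the optimizations described in the paper avoid exactly this repeated work.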

Figures (13)

  • Figure 1: Example of Japanese tokenization with the pointwise method. The bottom box contains the character $n$-gram features described in Section \ref{sec:context-features}.
  • Figure 2: Examples of dictionary features of two words "の" (no) and "世界" (sekai) at different positions. Highlighted rectangles indicate the position where the dictionary word was found. Bold lines indicate the position that the corresponding feature affects. Solid-line rectangles containing 6 characters indicate the window. The I feature is repeated for all intermediate character boundaries.
  • Figure 3: Integrating character $n$-gram scores into the result array $\bm{y}$ with $W=3$. $\bm{w}_{\textrm{pattern}}($"界"$)$ (kai) has 6 weights, $\bm{w}_{\textrm{pattern}}($"世界"$)$ (sekai) has 5 weights, and $\bm{w}_{\textrm{pattern}}($"全世界"$)$ (zensekai) has 4 weights, as formulated in Equation (\ref{eq:w-pattern}). Each score array is integrated at position $k-W$ on $\bm{y}$, where $k$ is the rightmost position of the pattern (a code sketch of this integration appears after the figure list).
  • Figure 4: PMA built from the three patterns "界" (kai), "世界" (sekai), and "全世界" (zensekai). Balloons indicate the patterns reported at the corresponding states. Dotted lines indicate failure edges to non-root states.
  • Figure 5: Difference between the adding positions of character $n$-gram scores and dictionary word scores, with $W=3$. L, I, and R indicate the types of dictionary features. 0 indicates padding.
  • ...and 8 more figures
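
The following is a minimal sketch (not the Vaporetto code) of the array-based score integration illustrated in Figure 3: each character $n$-gram pattern carries a weight array covering every boundary it can affect within the window (for $W=3$: 6, 5, and 4 weights for 1-, 2-, and 3-character patterns), and when a match ends at rightmost character position $k$, its weight array is added to the result array $\bm{y}$ starting at position $k-W$. Vaporetto locates matches with a memory-optimized pattern matching automaton (Figure 4); a naive scan stands in for it here, and the struct and function names are illustrative assumptions.

```rust
/// A character n-gram pattern and its weight array. With window size W, a
/// pattern of n characters has 2W - n + 1 weights, one per boundary it can
/// affect inside the window.
struct Pattern {
    chars: Vec<char>,
    weights: Vec<f64>,
}

/// Add each matched pattern's weight array into the result array `y`
/// (one cell per character boundary), starting at position k - W, where
/// k is the rightmost character position of the match.
fn integrate_scores(text: &[char], patterns: &[Pattern], w: usize) -> Vec<f64> {
    let mut y = vec![0.0; text.len().saturating_sub(1)];
    for p in patterns {
        let n = p.chars.len();
        if n == 0 || n > text.len() {
            continue;
        }
        // Naive scan over the text; Vaporetto uses a pattern matching automaton instead.
        for start in 0..=(text.len() - n) {
            if text[start..start + n] == p.chars[..] {
                let k = start + n - 1; // rightmost character position of the match
                for (j, &wt) in p.weights.iter().enumerate() {
                    // Out-of-range cells are clipped here; the paper pads y instead (Figure 5).
                    let pos = k as isize - w as isize + j as isize;
                    if pos >= 0 && (pos as usize) < y.len() {
                        y[pos as usize] += wt;
                    }
                }
            }
        }
    }
    y
}
```

Calling `integrate_scores` once per sentence replaces repeated per-boundary feature lookups with a handful of array additions; boundaries are then predicted wherever `y[i] > 0`.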