Table of Contents

Implementing Basic Arithmetic in $\mathbb{F}_p$ via $\mathbb{F}_2$, and Its Application for Computing the Hamming Distance of Linear Codes

Fernando Hernando, Gregorio Quintana-Ortí

Abstract

We present a new general method for performing basic arithmetic in the finite field~$\mathbb{F}_p$ for any prime $p>2$ by using traditional binary operations over~$\mathbb{F}_2$. Our new approach is efficient and competitive with current state-of-the-art methods. We apply our new arithmetic method to the computation of the minimum Hamming distance of random linear codes over the fields $\mathbb{F}_3$ and $\mathbb{F}_7$. Our new arithmetic method also makes it possible to apply new techniques, such as the isometric addition, that accelerate the computation of the Hamming distance. We have developed implementations in the C programming language for computing the Hamming distance that clearly outperform both state-of-the-art licensed software and open-source software, such as \textsc{Magma} and \textsc{GAP}/\textsc{Guava}, on single-core processors, multicore processors, and shared-memory multiprocessors.
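The paper's own construction is not reproduced in this summary. As an illustration of the general idea of emulating $\mathbb{F}_p$ arithmetic with binary ($\mathbb{F}_2$) word operations, the following C sketch implements bit-sliced addition and negation in $\mathbb{F}_3$. The two-bit encoding, the type `f3_slice`, and the function names are assumptions chosen for illustration; they are not claimed to match the authors' representation.

```c
#include <stdint.h>

/* Bit-sliced representation: 64 elements of F_3 stored across two
 * 64-bit words (one bit-plane per bit of the element).
 * Element x in lane i is encoded by bit i of (hi, lo):
 *   0 -> (0,0), 1 -> (0,1), 2 -> (1,0); the pattern (1,1) is unused.
 * This encoding is an illustrative assumption, not the paper's layout. */
typedef struct { uint64_t hi, lo; } f3_slice;

/* Lane-wise addition in F_3 using only binary (F_2) word operations:
 * AND, OR, NOT. Each 64-bit operation adds 64 field elements at once. */
static f3_slice f3_add(f3_slice a, f3_slice b) {
    uint64_t za = ~(a.hi | a.lo);   /* lanes where a == 0 */
    uint64_t zb = ~(b.hi | b.lo);   /* lanes where b == 0 */
    f3_slice s;
    s.lo = (a.lo & zb) | (b.lo & za) | (a.hi & b.hi); /* sum == 1: 1+0, 0+1, 2+2 */
    s.hi = (a.hi & zb) | (b.hi & za) | (a.lo & b.lo); /* sum == 2: 2+0, 0+2, 1+1 */
    return s;
}

/* Lane-wise negation in F_3: -1 = 2 and -2 = 1, so it suffices
 * to swap the two bit-planes. */
static f3_slice f3_neg(f3_slice a) {
    f3_slice n = { a.lo, a.hi };
    return n;
}
```

With this layout, one addition of 64 field elements costs a handful of bitwise word operations, which is the kind of data-level parallelism that makes binary emulation of $\mathbb{F}_p$ arithmetic competitive.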

Paper Structure

This paper contains 28 sections, 14 equations, 7 figures, 5 tables, and 6 algorithms.

Figures (7)

  • Figure 1: Speedups of our new implementations with respect to Magma for all matrices in the dataset 3 of $\mathbb{F}_7$. Each subplot contains the results of one subdataset.
  • Figure 2: Times in seconds (left) and speedups (right) for a random sample of matrices in subdataset 3_a of $\mathbb{F}_7$. In these plots, the Magma time lies in the range $[1, 10)$ seconds. The horizontal axis shows the matrices assessed with their dimensions ($k \times n$) and their distance ($d$).
  • Figure 3: Times in seconds (left) and speedups (right) for a random sample of matrices in subdataset 3_b of $\mathbb{F}_7$. In these plots, the Magma time lies in the range $[10, 100)$ seconds. The horizontal axis shows the matrices assessed with their dimensions ($k \times n$) and their distance ($d$).
  • Figure 4: Times in seconds (left) and speedups (right) for a random sample of matrices in subdataset 3_c of $\mathbb{F}_7$. In these plots, the Magma time lies in the range $[100, 1000)$ seconds. The horizontal axis shows the matrices assessed with their dimensions ($k \times n$) and their distance ($d$).
  • Figure 5: Times in seconds (left) and speedups (right) for a random sample of matrices in subdataset 3_d of $\mathbb{F}_7$. In these plots, the Magma time lies in the range $[1000, 10000)$ seconds. The horizontal axis shows the matrices assessed with their dimensions ($k \times n$) and their distance ($d$).
  • ...and 2 more figures

Theorems & Definitions (3)

  • Example 1
  • Example 2
  • Example 3