A Note on the Eigenvalues of the Google Matrix

Lars Eldén

TL;DR

A theorem concerning the eigenvalues of the Google matrix was recently proved by Langville and Meyer; this note gives another proof.

Abstract

The Google matrix is a positive, column-stochastic matrix that is used to compute the pagerank of all the web pages on the Internet: the eigenvector corresponding to the eigenvalue 1 is the pagerank vector. Due to its huge dimension, of the order of billions, the (presently) only viable method to compute the eigenvector is the power method. For the convergence of the iteration, it is essential to know the eigenvalue distribution of the matrix. A theorem concerning the eigenvalues was recently proved by Langville and Meyer. In this note another proof is given.
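The abstract notes that, because of the matrix's size, the power method is the only practical way to compute the pagerank vector. As a minimal sketch (the 4-page link matrix and damping factor below are illustrative assumptions, not from the note), the iteration exploits the rank-one structure of $A = \alpha P + (1-\alpha)ve^T$ so that $A$ is never formed explicitly:

```python
import numpy as np

# Hypothetical 4-page web: column j lists the out-links of page j,
# normalized so every column sums to 1 (column-stochastic).
P = np.array([
    [0.0, 0.5, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
])

alpha = 0.85                 # commonly cited damping factor (assumed here)
n = P.shape[0]
v = np.full(n, 1.0 / n)      # non-negative personalization vector, e^T v = 1

x = np.full(n, 1.0 / n)      # start from the uniform distribution
for _ in range(200):
    # A x = alpha*P*x + (1-alpha)*v*(e^T x), and e^T x = 1 since x is a
    # probability vector, so the rank-one term reduces to (1-alpha)*v.
    x = alpha * (P @ x) + (1 - alpha) * v
    x /= x.sum()             # guard against rounding drift

print(x)                     # pagerank vector: eigenvector of A for eigenvalue 1
```

Since the remaining eigenvalues of $A$ have modulus at most $\alpha$, the error is damped by roughly a factor $\alpha$ per step, which is why the eigenvalue distribution matters for convergence.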

Key Result

Theorem 1

Let $P$ be a column-stochastic matrix with eigenvalues $\{1, \lambda_2, \lambda_3, \ldots, \lambda_n\}$. Then the eigenvalues of $A = \alpha P + (1 - \alpha) v e^T$, where $0 < \alpha < 1$ and $v$ is a vector with non-negative elements satisfying $e^T v = 1$, are $\{1, \alpha \lambda_2, \alpha \lambda_3, \ldots, \alpha \lambda_n\}$.
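The theorem can be checked numerically on a small random instance (the matrix size, seed, and damping factor below are arbitrary choices for illustration): after removing one eigenvalue equal to 1 from each spectrum, the remaining eigenvalues of $A$ should match those of $P$ scaled by $\alpha$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random column-stochastic P (illustrative, not from the note).
P = rng.random((n, n))
P /= P.sum(axis=0)

alpha = 0.85
v = rng.random(n)
v /= v.sum()                     # non-negative elements, e^T v = 1
e = np.ones(n)

A = alpha * P + (1 - alpha) * np.outer(v, e)

eig_P = np.linalg.eigvals(P)
eig_A = np.linalg.eigvals(A)

def drop_one(vals):
    """Remove the single eigenvalue closest to 1."""
    i = np.argmin(np.abs(vals - 1.0))
    return np.delete(vals, i)

lhs = drop_one(eig_A)            # eigenvalues of A other than 1
rhs = alpha * drop_one(eig_P)    # alpha * (eigenvalues of P other than 1)

# Match the two (unordered) spectra by nearest-neighbor distance.
gap = np.abs(lhs[:, None] - rhs[None, :]).min(axis=1).max()
print(gap < 1e-8)                # the spectra agree up to rounding
```

In particular the subdominant eigenvalue of $A$ has modulus at most $\alpha$, which bounds the asymptotic convergence rate of the power method regardless of the spectrum of $P$.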
