
Using ARPACK to Solve an Eigenvalue Problem, but Getting Inconsistent Results with MATLAB

I'm new to ARPACK. I downloaded a script like the following: import time, import numpy as np, from scipy.linalg import eigh, from scipy.sparse.linalg import eigs, np.set_printoptions(su…

Solution 1:

A few things first about the MATLAB functions:

  • Eigenvalues returned by eig are NOT sorted. In [V,D] = eig(A) we are only guaranteed that the i-th column of V is a right eigenvector corresponding to the eigenvalue D(i,i). On the other hand, svd returns singular values sorted in decreasing order.

  • d = eigs(A,k) returns the k largest-magnitude eigenvalues. However, it is intended for large sparse matrices, and in general it is not a substitute for:

    d = eig(full(A));
    d = sort(d, 'descend');
    d = d(1:k);
    

    (eigs is based on ARPACK, while eig uses LAPACK routines).

  • There is no natural ordering of complex numbers. The convention is that the sort function sorts complex elements first by magnitude (i.e. abs(x)), then by phase angle on the [-pi,pi] interval (i.e. angle(x)) if the magnitudes are equal.
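As a quick illustration of that convention, the same ordering can be reproduced in NumPy with lexsort, using abs as the primary key and angle as the tie-breaker (a sketch; matlab_sort is just a throwaway helper name, not a library function):

```python
import numpy as np

def matlab_sort(z):
    """Sort complex values the way MATLAB's sort does:
    by magnitude first, then by phase angle in [-pi, pi]."""
    z = np.asarray(z)
    # np.lexsort treats the LAST key as the primary sort key
    order = np.lexsort((np.angle(z), np.abs(z)))
    return z[order]

vals = np.array([1 - 1j, 2 + 0j, 1 + 1j, -1 + 0j])
print(matlab_sort(vals))
# the two values with |z| = sqrt(2) are tie-broken by angle
```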


MATLAB

With that in mind, consider the following MATLAB code:

% create the same banded matrix you're using
n = 30;
A = spdiags(ones(n,1)*[-1,2,-1], [-1 0 1], n, n);
A(1,9) = 30;
%A = full(A);

% k eigenvalues closest to sigma
k = 10; sigma = 3.6766133;
D = eigs(A, k, sigma);

% let's check that they are indeed sorted by distance to sigma
dist = abs(D-sigma);
issorted(dist)

I get:

>> D
D =
  3.684024113506185 + 0.000000000000000i
  3.820058967057386 + 0.000000000000000i
  3.511202932803535 + 0.000000000000000i
  3.918541441830945 + 0.000000000000000i
  3.979221766266508 + 0.000000000000000i
  3.307439963195125 + 0.000000000000000i
  4.144524409923134 + 0.000000000000000i
  3.642801014184618 + 0.497479798520640i
  3.642801014184618 - 0.497479798520640i
  3.080265978640096 + 0.000000000000000i

>> dist
dist =
   0.007410813506185
   0.143445667057386
   0.165410367196465
   0.241928141830945
   0.302608466266508
   0.369173336804875
   0.467911109923134
   0.498627536953383
   0.498627536953383
   0.596347321359904

You can try to get similar results using dense eig:

% closest k eigenvalues to sigma
ev = eig(full(A));
[~,idx] = sort(abs(ev - sigma));   % sort by distance to sigma
ev = ev(idx(1:k))

% compare against eigs
norm(D - ev)

The difference is acceptably small (close to machine epsilon):

>> norm(ev-D)
ans =
     1.257079405021441e-14

Python

Similarly in Python:

import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# create banded matrix
n = 30
A = spdiags((np.ones((n,1))*[-1,2,-1]).T, [-1,0,1], n, n).toarray()
A[0,8] = 30

# EIGS: k closest eigenvalues to sigma
k = 10
sigma = 3.6766133
D = eigs(A, k, sigma=sigma, which='LM', return_eigenvectors=False)
D = D[::-1]
for x in D:
    print('{:.16f}'.format(x))

# EIG
ev,_ = np.linalg.eig(A)
idx = np.argsort(np.abs(ev - sigma))
ev = ev[idx[:k]]
for x in ev:
    print('{:.16f}'.format(x))

with similar results:

# EIGS
3.6840241135061853+0.0000000000000000j
3.8200589670573866+0.0000000000000000j
3.5112029328035343+0.0000000000000000j
3.9185414418309441+0.0000000000000000j
3.9792217662665070+0.0000000000000000j
3.3074399631951246+0.0000000000000000j
4.1445244099231351+0.0000000000000000j
3.6428010141846170+0.4974797985206380j
3.6428010141846170-0.4974797985206380j
3.0802659786400950+0.0000000000000000j

# EIG
3.6840241135061880+0.0000000000000000j
3.8200589670573906+0.0000000000000000j
3.5112029328035339+0.0000000000000000j
3.9185414418309468+0.0000000000000000j
3.9792217662665008+0.0000000000000000j
3.3074399631951201+0.0000000000000000j
4.1445244099231271+0.0000000000000000j
3.6428010141846201+0.4974797985206384j
3.6428010141846201-0.4974797985206384j
3.0802659786400906+0.0000000000000000j
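The norm check from the MATLAB section can be replicated in Python as well. This is a sketch that sorts both sets with np.sort_complex before differencing, since the +/- ordering of the conjugate pair is not guaranteed to match between the two routines:

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# same banded matrix as above
n = 30
A = spdiags((np.ones((n, 1)) * [-1, 2, -1]).T, [-1, 0, 1], n, n).toarray()
A[0, 8] = 30

# k eigenvalues closest to sigma, via ARPACK shift-invert and via dense eig
k, sigma = 10, 3.6766133
D = eigs(A, k, sigma=sigma, which='LM', return_eigenvectors=False)[::-1]

ev = np.linalg.eig(A)[0]
ev = ev[np.argsort(np.abs(ev - sigma))[:k]]

# pair the eigenvalues deterministically (by real part, then imaginary part)
# before taking the norm of the difference
err = np.linalg.norm(np.sort_complex(D) - np.sort_complex(ev))
print(err)
```

As in MATLAB, the difference should come out near machine epsilon.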

Solution 2:

The results are consistent between SciPy and MATLAB if you use the eigs function:

>> format long
>> n = 30;
>> A = diag(-ones(n-1,1),-1) + diag(2*ones(n,1)) + diag(-ones(n-1,1),+1); A(1,9) = 30;
>> eigs(A,3,3.6766133)'
ans =
   3.684024113506185   3.820058967057386   3.511202932803534

As for why the true closest eigenvalue isn't selected, I think it has to do with convergence to complex eigenvalues of real matrices when a real shift is chosen. I don't know exactly how ARPACK computes its iterates, but I remember being told that a real A with a real σ cannot by default converge to a complex conjugate pair, since the two members of the pair are equidistant from the shift and so have a ratio of 1 in absolute value (for inverse power iteration). Since ARPACK does produce the complex pair as the 8th and 9th eigenvalues (the +/- ordering is random), I'm guessing there is some fix for this that I've forgotten about or never knew:

>> ev = eigs(A,9,3.6766133);ev(8:9)
ans =
  3.642801014184617 - 0.497479798520639i
  3.642801014184617 + 0.497479798520639i

I'm not sure if there is a general workaround for this other than guessing a complex part for the shift, or simply requesting extra eigenvalues until the conjugate pair falls into the ball of convergence of the ARPACK method.
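A sketch of the first workaround in SciPy, under one assumption: casting A to a complex dtype makes eigs do plain complex shift-invert (with a real matrix and a complex sigma, SciPy would otherwise go through its real-OP modes), so a shift placed near one member of the suspected pair targets it directly:

```python
import numpy as np
from scipy.sparse import spdiags
from scipy.sparse.linalg import eigs

# same banded matrix as above (lil format allows item assignment)
n = 30
A = spdiags((np.ones((n, 1)) * [-1, 2, -1]).T, [-1, 0, 1], n, n).tolil()
A[0, 8] = 30

# give the shift a complex part near the suspected pair 3.64 +/- 0.50i;
# casting A to complex selects ordinary complex shift-invert in eigs
sigma = 3.6766133 + 0.5j
D = eigs(A.tocsc().astype(complex), k=2, sigma=sigma,
         return_eigenvectors=False)
print(D)
```

With this shift, the pair member 3.6428 + 0.4975i is the closest eigenvalue and is found without having to request extra eigenvalues.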

