# GAMS Support Wiki


# Calculating Eigenvalues

I assume that you are interested in the eigenvalues (and possibly the eigenvectors) of a symmetric matrix. You could write down all the equations that define the eigenvectors, but most likely the NLP solver will not be able to solve the model, because it is highly nonlinear and non-convex. However, you can compute the eigenvalues one by one by maximizing the quadratic form x'Cx over the unit sphere, each time requiring orthogonality to the eigenvectors already found. The following little GAMS program shows the principle:

```
set i dimension of the symmetric matrix / i1 * i6 /,
io(i) subset with already defined eigenvectors;
alias (i,ip,ipp);
parameter c(i,i) matrix for which eigenvalues are computed
ev(i,i) eigenvectors
la(i) eigenvalues;
* Generate random positive semidefinite symmetric matrix, c.
* The rank will be at most min( card(i), card(o) ):
*
set o / o1 * o6 /;
parameter d(i,o) data observations;
d(i,o) = normal(0,1);
c(i,ip) = sum(o, d(i,o) * d(ip,o) );
display c;
*
* Make sure io and ev are defined before we enter the loop
*
io(i)    = no;
ev(i,io) = 0;
variable x(i) eigenvector
lambda eigenvalue;
equation edef     definition of max eigenvalue
norm     eigenvector has norm 1
ortho(i) eigenvector is orthogonal to already defined vectors;
edef ..      sum( (i,ip), x(i) * c(i,ip) * x(ip) ) =e= lambda;
norm ..      sum( i, sqr( x(i) ) ) =e= 1;
ortho(io)..  sum( i, ev(i,io) * x(i) ) =e= 0;
model eigen / all /;
option limrow = 0, limcol = 0, solprint = off;
* Now generate the eigenvectors one by one, starting from the largest
* eigenvalue. Each new eigenvector must be orthogonal to the previous
* ones and maximize the weighted norm.
* If maximizing is replaced by minimizing we get them in the reverse
* order.
loop( ipp,
* Initialize with a random vector and normalize it so the only nonlinear
* constraint is satisfied:
x.l(i) = normal(0,1);
x.l(i) = x.l(i) / sqrt( sum( ip, sqr( x.l(ip) ) ) );
solve eigen using nlp maximizing lambda;
* Store the current eigenvector and eigenvalue and increment the
* set of vectors that future vectors must be orthogonal to:
ev(i,ipp) = x.l(i);
io(ipp) = yes;
la(ipp) = lambda.l;
);
display ev, la;
*
* Test the accuracy of the solution:
* If dtest below is too large you may reduce the optimality tolerance.
* In CONOPT this is done with the line "set rtredg 1.d-10" in a conopt
* options file, conopt.opt.
*
parameter test(i,i) computed ev * diag(la) * tr(ev) -- should be equal to c
dtest(i,i) error in test;
test(i,ip)  = sum( ipp, ev(i,ipp) * la(ipp) * ev(ip,ipp) );
dtest(i,ip) = test(i,ip) - c(i,ip);
display test, dtest;
```
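For readers who want to check the idea outside GAMS, here is a small pure-Python sketch of the same scheme. The NLP maximization of x'Cx over the unit sphere is replaced by power iteration (which finds the same maximizer when the matrix is positive semidefinite, as the covariance-style matrix above is), and orthogonality to the earlier eigenvectors is enforced by projection. All function and variable names are my own, not part of the GAMS model.

```python
import random

def mat_vec(c, x):
    """Multiply the square matrix c (a list of rows) by the vector x."""
    return [sum(cij * xj for cij, xj in zip(row, x)) for row in c]

def eig_sym_psd(c, iters=500, seed=42):
    """Eigenvalues and eigenvectors of a symmetric positive
    semidefinite matrix, found one by one from the largest,
    mirroring the loop in the GAMS program above."""
    rng = random.Random(seed)
    n = len(c)
    vecs, vals = [], []
    for _ in range(n):
        x = [rng.uniform(-1, 1) for _ in range(n)]
        for _ in range(iters):
            # Orthogonality to already-found eigenvectors (the ortho
            # equations): subtract the parallel components.
            for v in vecs:
                dot = sum(vi * xi for vi, xi in zip(v, x))
                x = [xi - dot * vi for xi, vi in zip(x, v)]
            # Normalize (the norm equation).
            nx = sum(xi * xi for xi in x) ** 0.5
            x = [xi / nx for xi in x]
            # Power step: stands in for the NLP maximization of x'Cx.
            y = mat_vec(c, x)
            ny = sum(yi * yi for yi in y) ** 0.5
            if ny < 1e-12:   # x is in the null space: eigenvalue 0
                break
            x = [yi / ny for yi in y]
        # Rayleigh quotient x'Cx gives the eigenvalue (x has norm 1).
        lam = sum(xi * yi for xi, yi in zip(x, mat_vec(c, x)))
        vecs.append(x)
        vals.append(lam)
    return vals, vecs
```

The same accuracy check as in the GAMS program applies: summing `vals[k] * vecs[k][i] * vecs[k][j]` over `k` should reproduce the original matrix.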

Another approach, submitted by Arne Drud: the eigenvalues are again found one at a time, from the largest, by maximization. Each time an eigenvector has been found, we require that all subsequent eigenvectors are orthogonal to it. A little trick in the generation of the initial values removes the components parallel to the already found eigenvectors and ensures that the initial point for each solve is feasible. All constraints except one are linear and the objective is quadratic, so the model is easy to solve.

```
$Title First Covariance Matrix and its Eigen-values and -vectors.
* Contributed by Arne Drud (adrud@arki.dk)
SETS
j          variables                            /j1*j25/
n          observations                         /n1*n40/

ALIAS (j,jj,jjj);

PARAMETER
jn1(j,n)    observations of the variables
meanj(j)    mean value of the observations
covj(j,jj)  First covariance matrix;

* forming a covariance matrix
jn1(j,n)   = NORMAL(0,1);
meanj(j)   = SUM(n, jn1(j,n)) /CARD(n);
covj(j,jj) = SUM(n, (jn1(j,n) - meanj(j))*(jn1(jj,n) - meanj(jj)))/CARD(n);
DISPLAY covj;

scalar OK;

Parameter evecj(j,j)  Eigenvectors found so far;
Parameter evalj(j)    Eigenvalues found so far;

$batinclude eigen.inc j covj evecj evalj ok call1

display evecj, evalj;

Parameter testj(j,j) New covariance matrix
difj(j,j)  Difference between original and new covariance;

testj(j,jj) = sum( jjj, evecj(j,jjj) * evalj(jjj) * evecj(jj,jjj) );
difj(j,jj)  = covj(j,jj) - testj(j,jj);
display difj;

$Title Second Covariance Matrix and its Eigen-values and -vectors.
SETS
i          variables                            /i1*i15/

ALIAS (i,ii,iii);

PARAMETER
in1(i,n)    observations of variables
meani(i)    mean value of the observations
covi(i,ii)  Second covariance matrix;

* forming a covariance matrix
in1(i,n)   = NORMAL(0,1);
meani(i)   = SUM(n, in1(i,n)) /CARD(n);
covi(i,ii) = SUM(n, (in1(i,n) - meani(i))*(in1(ii,n) - meani(ii)))/CARD(n);
DISPLAY covi;

Parameter eveci(i,i)  Eigenvectors found so far;
Parameter evali(i)    Eigenvalues found so far;

$batinclude eigen.inc i covi eveci evali ok call2

display eveci, evali;

Parameter testi(i,i) New covariance matrix
difi(i,i)  Difference between original and new covariance;

testi(i,ii) = sum( iii, eveci(i,iii) * evali(iii) * eveci(ii,iii) );
difi(i,ii)  = covi(i,ii) - testi(i,ii);
display difi;
```

```
$offlisting
$ontext
Batinclude file eigen.inc

The program computes the eigenvalues and eigenvectors of a square
symmetric matrix.

Parameters (all required):
1: The set over which the matrix is defined (input)
2: The square matrix (input)
3: The eigenvectors (output) declared as a square matrix over the set
in argument 1. The second set indicates the different eigenvalues.
4: The eigenvalues (output) declared as a one dimensional parameter
over the set in argument 1.
5: Return Code. A scalar into which a return code will be written.
A value of 1 indicates that all eigenvectors and eigenvalues were
found; 0 indicates that a solve failed before all were found.
6: A name that can be used to make all identifiers unique for multiple
calls
$offtext

alias (%1, %6__1, %6__2 );
%5 = 1;
*
*  Define a model that finds the largest eigenvalue and eigenvector:
*
set       %6__3(%1)  Subset already having an associated eigenvector-eigenvalue pair;
Scalar    %6__4      Norm of initial random vector;

%6__3(%1) = no;
%3(%1,%1) = 0;

variable %6__5(%1)   Eigenvector
%6__6       Eigenvalue;

equation %6__7       Definition of Objective;
Equation %6__8(%1)   The new Eigenvector must be orthogonal to earlier Eigenvectors;
Equation %6__9       The norm of the eigenvector must be 1;

%6__7        .. %6__6 =e= sum((%1,%6__1), %6__5(%1)*%2(%1,%6__1)*%6__5(%6__1) );

%6__8(%6__3) .. sum( %1, %3(%1,%6__3) * %6__5(%1) ) =e= 0;

%6__9        .. sum(%1, sqr(%6__5(%1)) ) =E= 1;

Model %6__10 / %6__7, %6__8, %6__9 /;
%6__10.limrow=0;
%6__10.limcol=0;
%6__10.solprint=2;

parameter %6__11(%1);

loop( %6__2$%5,
*
*  We initialize the eigenvector with random numbers. To prevent using
*  a lot of time in phase 1 we make sure the orthogonality constraints
*  are satisfied by subtracting a multiple of the already found
*  eigenvectors. We do not check that the remaining vector is still
*  nonzero.
*
%6__5.l(%1)     = uniform(-1,1);
%6__11(%6__3)   = sum(%1, %6__5.l(%1)*%3(%1,%6__3));
%6__5.l(%1)     = %6__5.l(%1) - sum(%6__3,%6__11(%6__3)*%3(%1,%6__3));
%6__4           = sqrt( sum(%1, sqr(%6__5.l(%1)) ) );
%6__5.l(%1)     = %6__5.l(%1) / %6__4;
solve %6__10 using nlp maximizing %6__6;
%5$(%6__10.solvestat ne 1) = 0;
%5$(%6__10.modelstat ne 1 and %6__10.modelstat ne 2) = 0;
%6__3(%6__2)    = Yes;
%3(%1,%6__2)    = %6__5.l(%1);
%4(%6__2)       = %6__6.l;
);
*
*  Clear the content of all local variables to save memory
*
option kill=%6__3, kill=%6__4, kill=%6__5, kill=%6__6, kill=%6__11;
*
*  Clear as much of the symbol table as possible.
*  It seems that aliases cannot be cleared.
*
$kill %6__3
$kill %6__4
$kill %6__5
$kill %6__6
$kill %6__7
$kill %6__8
$kill %6__9
$kill %6__11
```
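The initialization trick used inside the loop of `eigen.inc` (the assignments involving `%6__5.l`, `%6__11`, and `%6__4`) can be illustrated in plain Python. This is a sketch of the idea only; the function name and arguments are illustrative and do not appear in the GAMS file:

```python
import random

def feasible_start(found, n, rng):
    """Return a random unit vector orthogonal to all vectors in
    `found` (assumed orthonormal), i.e. a point that satisfies every
    constraint of the next eigenvector solve."""
    x = [rng.uniform(-1, 1) for _ in range(n)]
    for v in found:
        # Inner product with an earlier eigenvector (%6__11) ...
        dot = sum(vi * xi for vi, xi in zip(v, x))
        # ... and subtraction of the parallel component.
        x = [xi - dot * vi for xi, vi in zip(x, v)]
    # Scale to norm 1 (%6__4); as in the GAMS code, we do not check
    # that the projected vector is still nonzero.
    norm = sum(xi * xi for xi in x) ** 0.5
    return [xi / norm for xi in x]
```

Because the orthogonality constraints are linear and the norm constraint is the only nonlinear one, this start point is exactly feasible, so the solver spends no time in phase 1.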