====== Calculating Eigenvalues ======
  
Calculating all eigenvalues and eigenvectors of a symmetric matrix in one swoop results in a highly nonlinear and non-convex model, and NLP solvers will most likely have problems solving it. However, one may compute the eigenvalues one by one by maximizing a norm and iteratively introducing a constraint that makes each new eigenvector orthogonal to the ones already found. A little trick in the generation of initial values removes components parallel to the already found eigenvectors and ensures that the initial point of each solve is feasible. All constraints except one are linear and the objective is quadratic, so the model is easy to solve.

Alternatively, the eigenvector tool that is distributed with GAMS can be called. The model below (contributed by Arne Drud) illustrates both approaches for a covariance matrix.
<code>
$Title Covariance Matrix and its Eigenvalues and -vectors.
Sets
    j       'variables'         /j1*j25/
    n       'observations'      /n1*n40/;

Alias (j,jj,jjj);

Parameters
    jn1(j,n)    'observations of the variables'
    meanj(j)    'mean value of the observations'
    covj(j,jj)  'First covariance matrix';

* forming a covariance matrix
jn1(j,n)   = normal(0,1);
meanj(j)   = sum(n, jn1(j,n))/card(n);
covj(j,jj) = sum(n, (jn1(j,n) - meanj(j))*(jn1(jj,n) - meanj(jj)))/card(n);

Parameter
    evecj(j,j)  'Eigenvectors found so far'
    evalj(j)    'Eigenvalues found so far';

*
*  Define a model that finds the largest eigenvalue and eigenvector:
*
Set       jev(j)    'Subset already having an associated eigenvector-eigenvalue pair';
Scalar    normrv    'Norm of initial random vector';

jev(j)     = no;
evecj(j,j) = 0;

Variable
    evec(j)    'Eigenvector'
    eval       'Eigenvalue';

Equation
    eq_obj     'Definition of Objective'
    eq_orth(j) 'The new Eigenvector must be orthogonal to earlier Eigenvectors'
    eq_norm    'The norm of the eigenvector must be 1';

eq_obj      .. eval =e= sum((j,jj), evec(j)*covj(j,jj)*evec(jj));

eq_orth(jev).. sum(j, evecj(j,jev)*evec(j)) =e= 0;

eq_norm     .. sum(j, sqr(evec(j))) =e= 1;

Model mod_ev / all /;

parameter helper(j);

option limRow=0, limCol=0, solPrint=silent, solveLink=%solveLink.loadLibrary%;
loop(jjj,
*
*  We initialize the eigenvector with random numbers. To prevent using
*  nonzero.
*
   evec.l(j)    = uniform(-1,1);
   helper(jev)  = sum(j, evec.l(j)*evecj(j,jev));
   evec.l(j)    = evec.l(j) - sum(jev, helper(jev)*evecj(j,jev));
   normrv       = sqrt(sum(j, sqr(evec.l(j))));
   evec.l(j)    = evec.l(j) / normrv;
   solve mod_ev using nlp maximizing eval;
   abort$(mod_ev.solvestat ne 1) 'solvestat not 1', mod_ev.solvestat;
   abort$(mod_ev.modelstat ne 1 and mod_ev.modelstat ne 2) 'modelstat not 1 or 2';
   jev(jjj)     = yes;
   evecj(j,jjj) = evec.l(j);
   evalj(jjj)   = eval.l;
);

Parameter
    testj(j,j) 'New covariance matrix'
    difj(j,j)  'Difference between original and new covariance';

testj(j,jj) = sum(jjj, evecj(j,jjj)*evalj(jjj)*evecj(jj,jjj));
difj(j,jj)  = covj(j,jj) - testj(j,jj);
abort$(smax((j,jj), abs(difj(j,jj))) > 1e-6) difj;

* Use eigenvector tool to calculate Eigenvectors and Eigenvalues
execute_unload 'a.gdx', j, covj;
execute 'eigenvector.exe a.gdx j covj b.gdx evalj evecj';
execute_load 'b.gdx', evalj, evecj;
testj(j,jj) = sum(jjj, evecj(j,jjj)*evalj(jjj)*evecj(jj,jjj));
difj(j,jj)  = covj(j,jj) - testj(j,jj);
abort$(smax((j,jj), abs(difj(j,jj))) > 1e-6) difj;
</code>
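The same largest-first deflation scheme can be checked outside GAMS. The NumPy sketch below is an illustration and not part of the original page: instead of an NLP solve it uses power iteration (which also maximizes the quadratic form over unit vectors), projects out the eigenvectors already found in the spirit of the orthogonality constraint and the initial-value trick, and finishes with the same reconstruction test as the difj check. All names here (eigen_by_deflation, etc.) are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)

# Random symmetric positive semidefinite matrix, as in the GAMS model above
d = rng.normal(size=(6, 10))
c = d @ d.T

def eigen_by_deflation(c, tol=1e-12, max_iter=10000):
    """Eigenpairs of a symmetric PSD matrix, largest eigenvalue first.

    Power iteration maximizes x'Cx over unit vectors; projecting out the
    eigenvectors already found keeps each new iterate orthogonal to them,
    mirroring the orthogonality constraint in the GAMS model."""
    n = c.shape[0]
    vecs, vals = [], []
    for _ in range(n):
        x = rng.normal(size=n)          # random start, as in the loop above
        x /= np.linalg.norm(x)
        for _ in range(max_iter):
            y = c @ x
            for v in vecs:              # remove components parallel to
                y -= (v @ y) * v        # eigenvectors already found
            y /= np.linalg.norm(y)
            if np.linalg.norm(y - x) < tol:
                break
            x = y
        vecs.append(y)
        vals.append(y @ c @ y)          # Rayleigh quotient = eigenvalue
    return np.array(vals), np.column_stack(vecs)

la, ev = eigen_by_deflation(c)
# Same accuracy check as testj/difj: ev * diag(la) * ev' should reproduce c,
# so this maximum absolute difference should be near zero
print(np.max(np.abs(ev @ np.diag(la) @ ev.T - c)))
```

Power iteration is only a cheap stand-in here; the GAMS model delegates each maximization to an NLP solver, which additionally works for indefinite symmetric matrices where the largest eigenvalue may be negative.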
gams/calculating_eigenvalues.txt · Last modified: 2020/05/28 07:57 by Michael Bussieck