# Hessian, Eigenvalues, Eigenvectors, and Stability in the Surface Evolver

Our motto: The purpose of calculus is to turn every problem into a linear algebra problem.

Contents:
• How to think of surfaces discretized in the Evolver.
• What is the Hessian?
• Hessian iteration.
• Eigenvalues and eigenvectors.
• Physical interpretation of eigenvalues and eigenvectors.
• Basic Evolver eigenstuff commands.
• Normal motion mode.
• Hessian with metric.
• Visualizing eigenvectors.
• Accuracy of eigenvalues.
• Evolving unstable surfaces.
• Caveats and warnings.

For background, see any decent linear algebra text, such as Strang [SG], especially chapter 6. For more on stability and bifurcations, see Arnol'd [AV] or Hale and Kocak [HK].

## 1. How to think of surfaces discretized in the Evolver.

The Surface Evolver parameterizes a surface in terms of vertex coordinates. Restrict attention to one fixed triangulation of a surface. Let X be the vector of all vertex coordinates, containing N entries, where N could be thousands. Call X the "configuration vector", and call the N-dimensional vector space it inhabits "configuration space". The total energy E(X) is a scalar function on configuration space. Visualize the graph of the energy function as a rolling landscape with mountains, valleys, and mountain passes. The gradient of the energy is the steepest uphill direction. The ordinary 'g' iteration step of Evolver minimizes energy by moving in the downhill direction, the negative of the gradient direction, until it reaches a local minimum.

A constraint is a scalar function on configuration space for which X is restricted to lie on a particular level set. A global constraint, such as a volume constraint, determines a hypersurface, that is, it removes one degree of freedom from the configuration space. Level-set constraints on vertices, such as x^2 + y^2 = 4, are actually a separate constraint for each vertex they apply to. Each removes one degree of freedom for each vertex it applies to. The intersection of all the constraint hypersurfaces is called the "feasible set", and is the set of all possible configurations satisfying all the constraints. If there are any nonlinear constraints, such as curved walls or volume constraints, then the feasible set will be curved. When a surface is moved, by 'g' for example, the vector for the direction of motion is always first projected to be tangent to the feasible set. However, if the feasible set is curved, straight-line motion will deviate slightly from the feasible set. Therefore, after every motion X is projected back to the feasible set using Newton's Method.

## 2. What is the Hessian?

Consider a particular surface in a particular configuration X, and consider a small perturbation Y added to X. Then the energy may be expanded in a Taylor series:
```   E(X+Y) = E_0 + G^T Y + 1/2 Y^T H Y + ...
```
Here the constant E_0 is the original energy, E_0 = E(X). G is the vector of first derivatives, the gradient, which is a vector the same dimension as Y or X (G is a 1-form, if you want to get technical). H is a square matrix of second derivatives, the Hessian. Here ^T denotes transpose. The Hessian is automatically a symmetric matrix. The gradient G determines the best linear approximation to the energy, and the Hessian H determines the best quadratic approximation.
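To make the expansion concrete, here is a small numerical sketch (plain Python with NumPy, not Evolver code; the toy energy function is invented for illustration) that builds G and H by finite differences and checks that the quadratic approximation beats the linear one:

```python
import numpy as np

def energy(X):
    # toy "energy": a smooth scalar function on a small configuration space
    return np.sum(X**4) + np.sum((X[:-1] - X[1:])**2)

def gradient(X, h=1e-5):
    # central-difference gradient G
    G = np.zeros_like(X)
    for i in range(len(X)):
        e = np.zeros_like(X)
        e[i] = h
        G[i] = (energy(X + e) - energy(X - e)) / (2 * h)
    return G

def hessian(X, h=1e-4):
    # central-difference Hessian, symmetrized (the true Hessian is symmetric)
    n = len(X)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (gradient(X + e) - gradient(X - e)) / (2 * h)
    return (H + H.T) / 2

X = np.array([0.5, -0.3, 0.8])
Y = 1e-3 * np.array([1.0, 2.0, -1.0])          # small perturbation
E0, G, H = energy(X), gradient(X), hessian(X)
quadratic = E0 + G @ Y + 0.5 * Y @ H @ Y       # second-order Taylor value
taylor_error = abs(energy(X + Y) - quadratic)  # error is O(|Y|^3)
```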

If there are constraints on the surface, then the perturbation vector Y is required to be tangent to the feasible set in configuration space. The curvature of the feasible set can have an effect on H, but this is taken care of automatically by the Evolver. This effect arises from the slight change in energy due to the projection back to the feasible set after motion. This does not affect the gradient, but it does affect the Hessian. In particular, if Q is the Hessian of a global constraint and q is the Lagrange multiplier for the constraint, then the adjusted energy Hessian is H - qQ. Therefore it is necessary to do a 'g' step to calculate the Lagrange multipliers before doing any Hessians when there are global constraints.

## 3. Hessian iteration.

Suppose the quadratic approximation to the energy is a good one. Then we can find an equilibrium point by solving for the motion Y that makes the energy gradient zero. Recalling that G and H depend only on X, the energy gradient as a function of Y is
```	  grad E = G^T + Y^T H.
```
So we want G^T + Y^T H = 0, or transposing,
```	   G + H Y = 0.
```
Solving for Y gives
```	    Y = - H^{-1} G.
```
This is actually Newton's Method applied to the gradient. The Evolver's "hessian" command does this calculation and motion. It works best when the surface is near an equilibrium point. It doesn't matter if the equilibrium is a minimum, a saddle, or a maximum. However, nearness does matter. Remember we are dealing with thousands of variables, and you don't have to be very far away from an equilibrium for the equilibrium to not be within the scope of the quadratic approximation. When it does work, "hessian" will converge very quickly, 4 or 5 iterations at most.
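The iteration can be sketched on a toy two-variable energy with a known saddle at the origin (a hypothetical stand-in for configuration space, not Evolver code). Note that Newton's method converges to the saddle even though it is not a minimum:

```python
import numpy as np

# gradient and Hessian of the toy energy E(x,y) = x^2 - y^2 + x^4 + y^4,
# which has a saddle point at the origin
def grad(X):
    x, y = X
    return np.array([2*x + 4*x**3, -2*y + 4*y**3])

def hess(X):
    x, y = X
    return np.array([[2 + 12*x**2, 0.0],
                     [0.0, -2 + 12*y**2]])

X = np.array([0.2, -0.15])           # start near the saddle
for _ in range(6):                   # Y = -H^{-1} G, applied repeatedly
    X = X + np.linalg.solve(hess(X), -grad(X))

residual = np.linalg.norm(grad(X))   # ~0: converged to the saddle
```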

Example: Start with the catenoid datafile catman.fe, which has radius 1 and half-height 0.55. Do "r; u; g 33". This gets very close to an equilibrium. Doing "hessian" a couple of times gets to the equilibrium:

```Enter command: hessian
WARNING: Hessian not positive definite. Index: 15
1. area: 6.458481560186685 energy: 6.458481560186685
Enter command: hessian
WARNING: Hessian not positive definite. Index: 15
1. area: 6.458481554354727 energy: 6.458481554354727
Enter command: hessian
WARNING: Hessian not positive definite. Index: 15
1. area: 6.458481554354730 energy: 6.458481554354730
```
The warnings about the Hessian not being positive definite tell us this is not a minimum, but rather a saddle point. See below for an explanation of the index.

If you find yourself at a saddle point, the gradient is zero, so gradient descent or conjugate gradient won't move the surface. There is a command "saddle" for this situation. "Saddle" uses the Hessian to find a downhill direction and moves in that direction. Then you can use 'g' to go downhill until you are near enough to another equilibrium (a minimum, one hopes) to use "hessian" again.

Catenoid example continued:

```Enter command: saddle
Lowest eigenvalue -0.166231802086995
stepsize 0.386015
1. area: 6.448452267001213 energy: 6.448452267001213
Enter command: U
Enter command: g 40
(39 lines of output suppressed)
1. area:  6.43239214344577 energy:  6.43239214344577  scale: 0.501238
Enter command: hessian
1. area: 6.432392110363842 energy: 6.432392110363842
Enter command: hessian
1. area: 6.432392110363805 energy: 6.432392110363805
```
This time there are no warnings about definiteness, so this is a minimum.

## 4. Eigenvalues and eigenvectors.

The Hessian H is a real symmetric matrix. Therefore it can be diagonalized by an orthogonal change of basis of configuration space. The new basis vectors are called eigenvectors, and the entries on the diagonal are called eigenvalues. In this eigenvector basis, the shape of the quadratic becomes obvious. Directions along eigenvectors with negative eigenvalues have curvature downward. Directions with positive eigenvalues have upward curvature. Remember that eigenvectors are in configuration space, so each eigenvector represents a particular perturbation of the surface.

Another characterization of an eigenvector Q with an eigenvalue q is

```		       H Q = q Q.
```
The number of negative eigenvalues of H is called the "index" of H. The triple of counts of negative, zero, and positive eigenvalues is called the "inertia" of H. One can also speak of the inertia relative to a value c, meaning the inertia of H-cI, i.e. the numbers of eigenvalues less than c, equal to c, and greater than c. Sylvester's Law of Inertia says that if P is a nonsingular matrix of the same dimension as H, then the matrix
```			K = P H P^T
```
has the same inertia as H, although not the same eigenvalues. The Evolver can factor a Hessian into a form
```			H = L D L^T
```
where L is a lower triangular matrix and D is diagonal. Hence the inertia of H can be found by counting signs on the diagonal of D. Inertia relative to c can be found by factoring H-cI.
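Here is a minimal sketch of counting inertia from the pivots of an (unpivoted) LDL^T factorization, together with a check of Sylvester's Law on a random congruence; the 4x4 matrix is an invented example, and a production factorization would need pivoting:

```python
import numpy as np

def ldl_inertia(H):
    # symmetric Gaussian elimination: the pivots are the diagonal of D in
    # H = L D L^T (no pivoting, so nonzero leading pivots are assumed)
    A = np.array(H, dtype=float)
    neg = zero = pos = 0
    for k in range(len(A)):
        d = A[k, k]
        if d > 0:
            pos += 1
        elif d < 0:
            neg += 1
        else:
            zero += 1
            continue           # a zero pivot would need pivoting in general
        v = A[k+1:, k]
        A[k+1:, k+1:] -= np.outer(v, v) / d   # Schur complement update
    return neg, zero, pos

# hand-checkable symmetric matrix with inertia (2 negative, 0 zero, 2 positive)
H = np.array([[2.,  1., 0.,  0.],
              [1., -1., 1.,  0.],
              [0.,  1., 3.,  1.],
              [0.,  0., 1., -2.]])

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 4))    # generically nonsingular
K = P @ H @ P.T                    # congruent to H: same inertia
```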

The eigenvectors associated to a particular eigenvalue form a subspace, whose dimension is called the "multiplicity" of the eigenvalue.

## 5. Physical interpretation of eigenvalues and eigenvectors.

Consider a surface at equilibrium. Recall that the gradient after a small perturbation from equilibrium is
```     grad E = H Y.
```
So if we move along an eigenvector direction,
```    grad E = H Q = q Q.
```
Note the force is aligned exactly with the perturbation. Since force is the negative gradient, if q is positive the force will restore the surface to equilibrium (stable perturbation), and if q is negative the force will cause the perturbation to grow (unstable perturbation). Each eigenvector thus represents a mode of perturbation. Furthermore, different modes grow or decay independently, since in an eigenvector basis the Hessian is diagonal and there are no terms linking the different modes.
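The decoupling can be demonstrated numerically: under the gradient flow dY/dt = -H Y, a perturbation that starts along an eigenvector stays along it and decays at the rate set by its eigenvalue (a toy 3x3 stand-in for the Hessian, not Evolver code):

```python
import numpy as np

H = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])          # stand-in Hessian (eigenvalues 1, 2, 4)
w, V = np.linalg.eigh(H)
Q, q = V[:, 0], w[0]                  # lowest mode, eigenvalue q = 1

Y = 0.1 * Q                           # perturb along the eigenvector
dt = 0.01
for _ in range(1000):
    Y = Y - dt * (H @ Y)              # explicit gradient-descent steps

# Y stays parallel to Q and shrinks by the factor (1 - dt*q)^1000
alignment = abs(Y @ Q) / np.linalg.norm(Y)
decay = np.linalg.norm(Y) / 0.1
```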

## 6. Basic Evolver eigenstuff commands.

Several Evolver commands are available to analyze eigenvalues. The inertia of the Hessian with respect to some value c may be found with "eigenprobe c". For example, "eigenprobe 1" reports the number of eigenvalues less than 1, equal to 1, and greater than 1. Suppose we use cube.fe and do "r; r; g 5; eigenprobe 1":
```Enter command: r
Vertices: 50   Edges: 144   Facets: 96   Facetedges: 288   Memory: 35200
Enter command: r
Vertices: 194   Edges: 576   Facets: 384   Facetedges: 1152   Memory: 144688
Enter command: g 5
5. area:  5.35149888327957 energy:  5.35149888327957  scale: 0.234708
4. area:  5.14935198146208 energy:  5.14935198146208  scale: 0.211314
3. area:  5.04082414175121 energy:  5.04082414175121  scale: 0.198453
2. area:  4.97603104222589 energy:  4.97603104222589  scale: 0.212489
1. area:  4.93583787909035 energy:  4.93583787909035  scale: 0.199088
Enter command: eigenprobe 1
Eigencounts:   406 <,  0 ==,  175 >
```
This means the Hessian has 406 eigenvalues less than 1, none equal to 1, and 175 greater than 1. Note the total number of eigenvalues is equal to the total number of degrees of freedom of the surface, in this case
```   (# vertices)*(# coordinates)-(# constraints) = 194*3 - 1 = 581 = 406+175
```
where the constraint is the volume constraint.

As another example, consider the catenoid done earlier. At the saddle point, we want an idea of the size of the negative eigenvalues, so we try a couple of probes:

```Enter command: eigenprobe 0
Eigencounts:   15 <,  0 ==,  93 >
Enter command: eigenprobe -.1
Eigencounts:   5 <,  0 ==,  103 >
Enter command: eigenprobe -.2
Eigencounts:   0 <,  0 ==,  108 >
```
Eigenvalues near a particular value c may be found with the "lanczos c" command, which will print out 15 approximate eigenvalues near the given value. These are found using the Lanczos Algorithm, which uses the factored form of H-cI in an iteration applied to a random starting vector. The eigenvalue nearest c is very accurate, but others may not be. Also, the multiplicities are often spurious. Since lanczos starts with a random vector, it can be run multiple times to get an idea of the error. To report a different number of eigenvalues, there is a form of the command "lanczos(c,n)" where n is the desired number of eigenvalues.
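The core shift-invert idea can be sketched with plain inverse power iteration, a simplified single-vector cousin of the Lanczos algorithm (the matrix and its spectrum here are invented for illustration): repeatedly applying (H - cI)^{-1} amplifies the eigenvector whose eigenvalue is nearest c.

```python
import numpy as np
rng = np.random.default_rng(0)

# symmetric matrix with a known spectrum, a hypothetical stand-in for H
U = np.linalg.qr(rng.standard_normal((5, 5)))[0]
H = U @ np.diag([-0.17, -0.15, 0.3, 1.2, 2.5]) @ U.T

c = -0.2                                        # shift near the values we want
v = rng.standard_normal(5)                      # random starting vector
for _ in range(50):
    v = np.linalg.solve(H - c * np.eye(5), v)   # apply (H - cI)^{-1}
    v /= np.linalg.norm(v)

q = v @ H @ v        # Rayleigh quotient: the eigenvalue nearest c (-0.17)
```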

Example: Consider the catenoid saddle point again. We have found a lower bound of -.2 with eigenprobe, so we use c = -.2 to be sure to get the negative eigenvalues.

```Enter command: lanczos -.2
Eigencounts:   0 <,  0 ==,  108 >
1    -0.166231802086995
2    -0.166231680552490
3    -0.152458089614635
4    -0.106236212563050
5    -0.106234366247255
6    -0.090082092161033
7    -0.071091363808304
8    -0.071090320077033
9    -0.027892770620759
10    -0.023170983264959
11    -0.023166120823995
12    -0.019190965317897
13    -0.011028578039142
14    -0.001435993210057
15     0.033348448639838
```
We were expecting 15 negative eigenvalues, but only got 14. This is due to the problems the Lanczos Algorithm has with multiplicities. We try again:
```Enter command:  lanczos -.2
Eigencounts:   0 <,  0 ==,  108 >
1    -0.166231802086055
2    -0.152458089617565
3    -0.152457519122379
4    -0.106236212563050
5    -0.106236212247118
6    -0.090082092161584
7    -0.071091363808304
8    -0.071091158817350
9    -0.027892770620759
10    -0.023170983264959
11    -0.023166120823996
12    -0.018776028676628
13    -0.011028578039142
14    -0.001435993210057
15     0.033348448639866
```
Note the set of values is pretty much the same, but some multiplicities are different. Never believe the multiplicities lanczos reports.

At the catenoid minimum, we know eigenvalues are nonnegative, so we just probe with c = 0:

```Enter command: lanczos 0
Eigencounts:   0 <,  0 ==,  108 >
1     0.026052238371145
2     0.031856777934626
3     0.047699681340687
4     0.056733874533003
5     0.056734987631288
6     0.066437323288507
7     0.069306703122241
8     0.069309080060938
9     0.069360683473588
10     0.082395337547387
11     0.082980194084240
12     0.088125783422076
13     0.090879863697854
14     0.094458784621844
15     0.103320847691398
```
Since there are no negative or zero eigenvalues, the Hessian is positive definite, and this is a stable minimum.

For more accurate eigenvalues and multiplicities, there is the "ritz(c,n)" command (named after the Rayleigh-Ritz algorithm), which takes a random n-dimensional subspace and applies shifted inverse iteration to it. It reports eigenvalues as they converge to machine precision. When all desired eigenvalues have converged (as measured by the total change in eigenvalues in an iteration being less than 1e-10), the iteration ends and the values are printed. Values shown with fewer digits have not yet converged to machine precision. You can interrupt ritz with the interrupt key (usually CTRL-C), and it will report the current eigenvalue approximations. The advantage of ritz over lanczos is that ritz reports multiplicities correctly, and there are no spurious values. The disadvantage is that ritz takes longer. I find myself always using ritz rather than lanczos, because of its accuracy on multiplicities.
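A sketch of the underlying idea, assuming ritz amounts to shifted inverse iteration on a whole subspace followed by a Rayleigh-Ritz projection (a toy matrix with a deliberately double eigenvalue; note the multiplicity is resolved correctly):

```python
import numpy as np
rng = np.random.default_rng(1)

# known spectrum with a double eigenvalue at -0.152
U = np.linalg.qr(rng.standard_normal((6, 6)))[0]
H = U @ np.diag([-0.166, -0.152, -0.152, 0.3, 0.9, 1.7]) @ U.T

c, n = -0.2, 3
V = rng.standard_normal((6, n))                     # random n-dim subspace
for _ in range(40):
    V = np.linalg.solve(H - c * np.eye(6), V)       # shifted inverse iteration
    V, _ = np.linalg.qr(V)                          # keep the basis orthonormal

# Rayleigh-Ritz: eigenvalues of H projected onto the converged subspace
ritz_values = np.sort(np.linalg.eigvalsh(V.T @ H @ V))
```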

```Enter command: ritz(-.2,16)
Eigencounts:   0 <,  0 ==,  108 >
1.   -0.166231802086995
2.   -0.152458089617565
3.   -0.152458089617565
4.   -0.106236212563050
5.   -0.106236212563050
6.   -0.090082092161592
7.   -0.071091363808304
8.   -0.071091363808304
9.   -0.027892770620759
10.   -0.023170983264959
11.   -0.023166120823996
12.   -0.011028578039142
13.   -0.0110285780391
14.   -0.0014359932101
15.   -0.0014359932101
16.    0.0333484486692
Total eigenvalue changes in last iteration: 9.7077013e-11
```
Example: At catenoid minimum:
```Enter command: ritz(0,10)
Eigencounts:   0 <,  0 ==,  108 >
1.    0.026052238371145
2.    0.0318567779345
3.    0.0318567779345
4.    0.0476996813407
5.    0.0476996813407
6.    0.0567349876313
7.    0.0664373232885
8.    0.0664373232885
9.    0.0693067031224
10.    0.0693090800612
```
Consult the hessian_menu command for a smorgasbord of more things to do with the Hessian.

## 7. Normal motion mode.

Anyone who uses "hessian" on a surface with very many facets quickly finds that it is very difficult to get near enough to a minimum for "hessian" to work. The main difficulty is that being at a minimum means not only that the surface has the right overall shape, but that the details of the triangulation are also optimal. This is what happens at the catenoid saddle point: the surface is the right shape, but the triangulation is not the best possible. So, in general, most of the effort in reaching a minimum is devoted to waiting for the triangulation to slither around. A further problem is that all these sideways slithering modes have very small eigenvalues at equilibrium, obscuring the eigenvalues we are really interested in, those that reveal the stability of the ideal smooth surface. In fact, eigenvalues of smooth surfaces are defined only for perturbations perpendicular to the surface, precisely to avoid an infinity of sideways slithering modes with zero eigenvalues.

The Evolver can be made to restrict perturbations this way with the "hessian_normal" command. This reduces the perturbation allowed at any vertex to one degree of freedom perpendicular to the surface (the perpendicular actually being defined as the gradient of the volume). The advantages of hessian_normal are that "hessian" converges much better and that the dimension of the configuration space is cut by a factor of 3 (or whatever the ambient space dimension is). The disadvantage is that you do not find the absolute best triangulation, but this is very minor since the triangulated surface is only an approximation to the smooth surface anyway. My advice is to always use hessian_normal, unless you know you have a good reason not to.
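The reduction itself is simple linear algebra: if column j of a matrix N holds the unit normal of vertex j in that vertex's coordinate slots, the restricted Hessian is N^T H N, with one row and column per vertex. A toy sketch (random stand-ins for the Hessian and the normals, not Evolver's actual data):

```python
import numpy as np
rng = np.random.default_rng(2)

nv = 4                                   # number of vertices (toy size)
A = rng.standard_normal((3 * nv, 3 * nv))
H = (A + A.T) / 2                        # stand-in for the full 3V x 3V Hessian

# column j of N holds vertex j's unit normal in that vertex's 3 slots
N = np.zeros((3 * nv, nv))
for j in range(nv):
    nrm = rng.standard_normal(3)
    N[3*j:3*j+3, j] = nrm / np.linalg.norm(nrm)

Hn = N.T @ H @ N     # reduced "normal" Hessian: one row/column per vertex
```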

Example:

```Enter command: hessian_normal
hessian_normal  ON.
Enter command: ritz(0,15)
Eigencounts:   0 <,  0 ==,  36 >
1.    0.503204121488077
2.    0.690389595824364
3.    0.690389595824365
4.    1.215401251192666
5.    1.215401251192666
6.    1.977446010036804
7.    1.9774955770794
8.    2.5338045074780
9.    2.6512425181985
10.    2.6512425181985
11.    2.8358903297445
12.    2.8358903297445
13.    2.9720891301587
14.    2.9720891301587
15.    3.4103761189660
Total eigenvalue changes in last iteration: 9.1304186e-11
```
Catenoid minimum:
```Enter command: hessian_normal
hessian_normal  ON.
Enter command: ritz(0,10)
Eigencounts:   0 <,  0 ==,  36 >
1.    0.514445712629864
2.    0.714488917523277
3.    0.714488917523277
4.    1.261017108906935
5.    1.261017108906936
6.    2.007586721451991
7.    2.007590244697900
8.    2.579554224174655
9.    2.7541598365110
10.    2.7541598536864
Total eigenvalue changes in last iteration: 9.8221709e-11
```
Note the two sets of eigenvalues are much more similar, since they depend more on the shape of the surface than on its exact triangulation.

There is one effect of hessian_normal that may be a little puzzling at first. Many times a surface is known to have modes with zero eigenvalue; translational or rotational modes, for example. Yet no zero eigenvalues are reported. For example, with an unrefined but converged cube.fe,

```Enter command: hessian_normal
hessian_normal  ON.
Enter command: ritz(0,10)
Eigencounts:   0 <,  0 ==,  13 >
1.    0.2439343205786
2.    0.2439343205786
3.    0.2439343205786
4.    2.2111052161890
5.    2.2111052161890
6.    2.2111052161890
7.    2.6515643837331
8.    2.6515643837331
9.    2.8907398309279
10.    3.9391006653705
Total eigenvalue changes in last iteration: 6.8667516e-11
```
One might expect 6 zero eigenvalues from three translational modes and three rotational modes. But the rotational modes are eliminated by the hessian_normal restriction. The three translational modes have eigenvalue 0.2439343205786, not 0, since some vertices cannot translate tangent to the surface. The effective translation results from vertices moving in on one hemisphere and out on the other. This distorts the triangulation, raising the energy, hence the positive eigenvalue. This effect decreases with refinement. It may happen that the distortion improves the triangulation, which would result in the lowering of the eigenvalue.

## 8. Hessian with metric.

One would expect that refining the surface would lead the eigenvalues to converge to the eigenvalues for the smooth surface. Suppose one refines the catenoid minimum and converges to a minimum (in hessian_normal mode, of course). Then
```Enter command: ritz(0,10)
Eigencounts:   0 <,  0 ==,  168 >
1.    0.143138804205824
2.    0.188546678630880
3.    0.188546678630880
4.    0.321674036722288
5.    0.321674036722288
6.    0.533436451654412
7.    0.5334436082271
8.    0.8053367591906
9.    0.8093825151690
10.    0.8093825153531
Total eigenvalue changes in last iteration: 8.2309937e-11
```
The eigenvalues have all shrunk by about a factor of 4! This is no way to converge.

The explanation is that so far we have been looking at eigenvectors in slightly the wrong way. An eigenvector is supposed to represent a mode of perturbation that is proportional to the force. However, the response of a surface to a force need not be numerically equal to the force. After all, forces and displacements are different kinds of things. They have different units: displacement has units of distance, and force has units of energy per distance. In other words, displacement is a vector and force is a covector. Note that matrix multiplication by the Hessian H turns a vector into a covector. In general, to turn a vector into an equivalent covector, one needs an inner product, or metric. So far we have been using the Euclidean inner product on configuration space, but that is not the proper one to use if you want to approximate smooth surfaces. The proper inner product of perturbations f and g of a smooth surface is the integral over the surface of the pointwise inner product:
```   <f,g> = ∫ <f(x),g(x)> dA
```
In discrete terms, the inner product takes the form of a square matrix M such that <Y,Z> = Y^T M Z for vectors Y,Z. We want this inner product to approximate integration with respect to area. Having such an M, the eigenvalue equation becomes
```               H Q = q M Q.
```
Officially, Q is now called a "generalized eigenvector" and q is a "generalized eigenvalue". But we will drop the "generalized". An intuitive interpretation of this equation is as Newton's Law of Motion,
`              Force = Mass * Acceleration `
where HQ is the force, M is the mass, and qQ is the acceleration. In other words, in an eigenmode, the acceleration is proportional to the perturbation.

The Evolver command "linear_metric" includes M in the eigenvector calculations. Two methods are available. In the simplest, the "star metric", M is a diagonal matrix, and the "mass" associated with each vertex is 1/3 of the area of the adjacent facets (1/3 since each facet gets shared among 3 vertices). The other, the "linear metric", extends functions from vertices to facets by linear interpolation and integrates with respect to area. By default, "linear_metric" uses a 50/50 mix of the two, which seems to work best. See the more detailed discussion below. The fraction of linear metric can be set by assigning the fraction to the internal variable linear_metric_mix. By default, linear_metric_mix := 0.5. In quadratic mode, however, only the quadratic interpolation metric is used, since the star metric restricts convergence to order h^2 while the quadratic interpolation metric permits h^4 convergence.
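One standard way to solve the generalized problem H Q = q M Q, sketched here with random stand-in matrices (whether Evolver uses exactly this reduction is not claimed), is to reduce it to an ordinary symmetric eigenproblem via a Cholesky factorization M = L L^T:

```python
import numpy as np
rng = np.random.default_rng(3)

A = rng.standard_normal((5, 5))
H = (A + A.T) / 2                      # symmetric stand-in "Hessian"
B = rng.standard_normal((5, 5))
M = B @ B.T + 5 * np.eye(5)            # symmetric positive definite "metric"

L = np.linalg.cholesky(M)              # M = L L^T
C = np.linalg.solve(L, np.linalg.solve(L, H).T).T   # C = L^{-1} H L^{-T}
q, Z = np.linalg.eigh((C + C.T) / 2)   # ordinary symmetric eigenproblem
Q = np.linalg.solve(L.T, Z)            # back-transform: Q = L^{-T} Z

residual = np.linalg.norm(H @ Q - (M @ Q) * q)   # checks H Q_i = q_i M Q_i
```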

Example: catenoid minimum

```Enter command: linear_metric
Using linear interpolation metric with Hessian.
Enter command: ritz(0,10)
Eigencounts:   0 <,  0 ==,  36 >
1.    4.4905965290294
2.    6.3459225129726
3.    6.3459225129726
4.   11.7601643904582
5.   11.7601643904582
6.   20.0942195806072
7.   20.0944203641549
8.   23.6988361223006
9.   25.8390145007050
10.   25.8390145514502
Total eigenvalue changes in last iteration: 6.1311199e-11
```
After refining and reconverging:
```Enter command: ritz(0,10)
Using alternate minimal degree with linear interpolation metric.
Eigencounts:   0 <,  0 ==,  168 >
1.    4.811894121996312
2.    6.371343313698237
3.    6.371343313698248
4.   11.038034522163555
5.   11.038034522163555
6.   18.7648997082141
7.   18.7649552291661
8.   25.7146187558491
9.   27.3047483851875
10.   27.3047484846167
Total eigenvalue changes in last iteration: 8.5373403e-11
```

## 9. Visualizing eigenvectors.

Naturally, you want to see what the various modes look like. At present, Evolver can do this through a subsidiary menu invoked by the command "hessian_menu". This is a menu I use to test out various Hessian-related features. Once you have an approximate eigenvalue for the mode you want to look at, do "hessian_menu". The sequence of menu options you want is:
``` Option 1. Initialize Hessian.
 Option V. Calculate eigenvector.  This prompts for an approximate
           eigenvalue.  Don't give it the exact eigenvalue, just
           some value much closer to your desired value than to
           any other eigenvalue.  This also prompts for a number
           of iterations to do.  Say 40 or 50.  Eigenvalue
           approximations are printed every 10 iterations, so
           you can watch convergence.
 Option 4. Move.  This prompts you for a scale factor.  1 is a
           reasonable first guess.  If it is too big or too
           small, option 7 restores the original surface so
           you can try option 4 again.
 Option 7. Restore original surface.  Do this unless you want
           to keep the perturbed surface.
 Can repeat the cycle starting with V again.
 Option q. Quit.  Back to main prompt.
```
If you want to look at a bunch of modes, you can do a ritz calculation at the second step:
``` Option 1. Initialize Hessian.
 Option Z. Do ritz calculation of multiple eigenpairs.  This
           prompts for a shift value.  Pick a value near the
           eigenvalues you want, preferably below them so the
           shifted Hessian is positive definite.  This also
           prompts for a number of eigenpairs to do.  Eigenvalue
           approximations are printed as they converge.  You
           can stop the iterations with a keyboard interrupt.
 Option X. Pick which eigenvector you want to use for moving.
 Option 4. Move.  This prompts you for a scale factor.  1 is a
           reasonable first guess.  If it is too big or too
           small, option 7 restores the original surface so
           you can try option 4 again.
 Option 7. Restore original surface.  Do this unless you want
           to keep the perturbed surface.
 Can repeat the cycle by doing option X again.
 Option q. Quit.  Back to main prompt.
```

## 10. Accuracy of eigenvalues.

The question here is how well the eigenvalues of the discrete surface approximate the eigenvalues of the smooth surface. This is significant when deciding the stability of a surface. I present some comparisons in cases where an analytic result is possible. All are done in hessian_normal mode with linear_metric.

Any general theorem on accuracy is difficult, since any of these situations may arise:

• The discrete surface exists, but the smooth surface does not.
• The smooth surface exists, but the discrete surface does not.
• The smooth and discrete surfaces exist, but their eigenvalues are not close due to proximity to a bifurcation point.
• The discrete surface is a good approximation to the smooth surface, and their eigenvalues are close. In general, linear-model eigenvalues will have error on the order of h^2, where h is a typical edge length, and quadratic-model eigenvalues will have error on the order of h^4.

Example: Flat square membrane with fixed sides of length pi.
Smooth eigenvalues: n^2 + m^2, or 2,5,5,8,10,10,13,13,17,17,18,...

```64 facets:
linear_metric_mix:=0  linear_metric_mix:=.5   linear_metric_mix:=1
1.    1.977340485013809      2.036549240354332      2.098934776481970
2.    4.496631115530167      4.959720306550873      5.522143605551107
3.    4.496631115530168      4.959720306550873      5.522143605551109
4.    6.484555753109618      7.764634143334637      9.618809107463317
5.    8.383838160213188     10.098798069316681     12.451997407229225
6.    8.958685299442784     10.499717069102577     12.677342602918362
7.    9.459695221455723     12.274525789880890     17.295660791788656
8.    9.459695221455725     12.274525789880887     17.295660791788663
9.   11.5556960351898       15.7679635512509       23.585015056138165
10.   12.9691115062193       16.7214142904454       23.5850152363232

256 facets:
linear_metric_mix:=0   linear_metric_mix:=.5  linear_metric_mix:=1
1.    1.994813709598124      2.009974216370146      2.025331529636075
2.    4.869774462491779      4.999499042685448      5.135641457408189
3.    4.869774462491777      4.999499042685451      5.135641457408191
4.    7.597129628414264      7.985384943789075      8.411811839672165
5.    9.571559289947587     10.090156419079083     10.639318311067063
6.    9.759745949169531     10.186524915471141     10.665025703718413
7.   12.026114091326091     13.008227978344539     14.149533399632906
8.   12.026114091326104     13.008227978344539     14.149533399632912
9.   15.899097189772919     17.242998229925817     18.8011199268796
10.   15.8990975402264       17.2429983900303       18.8011200311777

1024 facets:
linear_metric_mix:=0   linear_metric_mix:=.5  linear_metric_mix:=1
1.    1.998755557957786      2.002568891165917      2.006394465437547
2.    4.967163232244397      5.0005039961225        5.0342462883517
3.    4.967163232244401      5.0005039961225        5.0342462883517
4.    7.897718646133286      7.9990889953386        8.1028624365882
5.    9.891301374223204     10.0268080202750       10.1637119963890
6.    9.941758861492074     10.0519390576227       10.1658353297015
7.   12.750608847206168     13.0141591049184       13.2878054981609
8.   12.750608847206173     13.0141591049184       13.2878054981609
9.   16.718575011911319     17.0839068189583       17.4625185925160
10.   16.7185751321348       17.0839069326949       17.4625186893608
```
A curious fact here is that eigenvalues 5 and 6 are identical in the smooth limit, but split in the discrete approximation. This is because the 2-dimensional eigenspace of the eigenvalue 10 has a basis of two perturbations that each have the symmetry of the square but are affected differently by the discretization.
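The membrane example can be mimicked with an ordinary finite-difference Laplacian (not Evolver): the lowest discrete eigenvalue approaches n^2 + m^2 = 2, with the error shrinking roughly in proportion to h^2 as the grid is refined:

```python
import numpy as np

def membrane_eigs(k):
    # k interior grid points per side on a square of side pi; h = pi/(k+1)
    h = np.pi / (k + 1)
    # 1D Dirichlet Laplacian, second-order central differences
    T = (2 * np.eye(k) - np.eye(k, k=1) - np.eye(k, k=-1)) / h**2
    w = np.linalg.eigvalsh(T)
    # 2D eigenvalues are sums of pairs of 1D eigenvalues
    return np.sort((w[:, None] + w[None, :]).ravel())

e_coarse = membrane_eigs(8)[0]         # approximations to the lowest
e_fine = membrane_eigs(16)[0]          # eigenvalue, whose exact value is 2
ratio = (2 - e_coarse) / (2 - e_fine)  # ~ (h_coarse/h_fine)^2 for O(h^2) error
```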

Example: Sphere of radius 1 with fixed volume.
Analytic eigenvalues: (n+2)(n-1), n=1,2,3,...
with multiplicities: 0,0,0,4,4,4,4,4,10,10,10,10,10,10,10

```Linear mode, linear_metric_mix:=.5.
24 facets          96 facets     384 facets       1536 facets
1.    0.3064952760319  0.1013854786956  0.0288140848394  0.0078006105505
2.    0.3064952760319  0.1013854786956  0.0288140848394  0.0078006105505
3.    0.3064952760319  0.1013854786956  0.0288140848394  0.0078006105505
4.    2.7132437122162  3.9199431944791  4.0054074145696  4.0036000699357
5.    2.7132437122162  3.9199431944791  4.0054074145696  4.0036000699357
6.    2.7132437122162  3.9199431944791  4.0054074145696  4.0036000699357
7.    3.6863290283837  4.3377418342267  4.1193008989205  4.0347069118257
8.    4.7902142880145  4.3377418342267  4.1193008989205  4.0347069118257
9.    4.7902142880145  9.0031399793203  9.9026247196089  9.9870390309740
10.    6.7380098215325  9.7223042306475 10.0981057119891 10.0387404054572
11.    6.7380098215325  9.7223042306545 10.0981057120367 10.0387404054607
12.    6.7380098215325  9.7223042307334 10.0981057121432 10.0387404055516
13.    8.6898276285107  9.9763109502799 10.1298354477703 10.0466252566958

In quadratic mode (net 4 times as many vertices per facet)
24 facets          96 facets     384 facets       1536 facets

1.    0.0311952242811   0.0025690857791  0.0002802874477  0.0000358409262
2.    0.0311952242812   0.0025690857791  0.0002802874477  0.0000358409262
3.    0.0311952242812   0.0025690857791  0.0002802874477  0.0000358409262
4.    4.2142037235384   4.0160946848738  4.0009978658738  4.0000515986034
5.    4.2142037235384   4.0160946848738  4.0009978658738  4.0000515986034
6.    4.2564592100813   4.0222815310390  4.0014892600742  4.0000883321792
7.    4.2564592100813   4.0222815310390  4.0014892600742  4.0000883321792
8.    4.2564592100813   4.0222815310390  4.0014892600742  4.0000883321792
9.    9.9900660130172  10.1007993043211 10.0076507048408 10.0004402861468
10.    9.9900660130172  10.1007993043271 10.0076507049338 10.0004402861545
11.    9.9900660130173  10.1007993047758 10.0076507050506 10.0004402861829
12.   12.1019587923328  10.1262981041797 10.0089922470136 10.0005516796159
13.   12.1019587923350  10.1262981042009 10.0089922470510 10.0005516796196
14.   12.1019587927495  10.1262981044110 10.0089922470904 10.0005516797052
15.   12.6934178610912  10.1478397915001 10.0106695599620 10.0007050467653
```

## 11. Evolving unstable surfaces.

The Hessian features can be used to evolve unstable surfaces. Typically these are surfaces with just a few unstable modes. The idea is to get close to the desired equilibrium and use "hessian" to reach it. "hessian_normal" mode should be used. Regular 'g' gradient descent iteration should not be used. To change the surface, i.e. to follow an equilibrium curve through a phase diagram, make small changes to the control parameter and use a couple of hessian iterations to reconverge each time. Particular care is needed near bifurcation points, where several equilibrium curves meet. When approaching a bifurcation point, try to jump over it so there is no ambiguity about which curve you are following. The approach to a bifurcation point can be detected by watching eigenvalues: an eigenvalue crossing 0 signals a new mode of instability.
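The eigenvalue-watching strategy can be illustrated on a one-variable toy energy E = x^4/4 - mu*x^2/2 (not an Evolver surface), whose flat equilibrium x = 0 loses stability at a pitchfork bifurcation at mu = 0. The parameter steps skip over mu = 0, just as the text advises jumping over the bifurcation:

```python
import numpy as np

# gradient x^3 - mu*x, Hessian 3x^2 - mu; equilibrium x = 0 is stable for
# mu < 0 and unstable for mu > 0
def reconverge(x, mu, steps=20):
    for _ in range(steps):
        # Newton step, the 1-variable analogue of the "hessian" command
        x = x - (x**3 - mu * x) / (3 * x**2 - mu)
    return x

x = 0.3
eigenvalues = []
for mu in [-0.5, -0.25, 0.25, 0.5]:    # step the control parameter past 0
    x = reconverge(x, mu)              # reconverge to the equilibrium
    eigenvalues.append(3 * x**2 - mu)  # the Hessian at the equilibrium
```

The recorded "eigenvalues" start positive and end negative: the sign change flags the new unstable mode, even though Newton's method happily converges to the now-unstable equilibrium.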

Example: Catenoid with increasing volume.
Datafile: catbody.fe
Initial setup:

```
r;u;set body[1] target 16;g 12;hessian_normal;linear_metric;
hessian; hessian; r; hessian; hessian; hessian;
hh:={set body[1] target body[1].volume+1;hessian;hessian}
```
Now use hh to gradually increase volume while staying on equilibrium curve, watching eigenvalues.
```
Enter command: ritz(0,3)
Using alternate minimal degree with linear interpolation metric.
Eigencounts:   0 <,  0 ==,  167 >
1.    2.2493865628383
2.    2.2493865628586
3.    3.4195071377674
Total eigenvalue changes in last iteration: 2.0156488e-11
Enter command: hh 5
1. area: 21.141667206066167 energy: 21.141667206066167
1. area: 21.141615782397832 energy: 21.141615782397832
1. area: 22.169902101400254 energy: 22.169902101400254
1. area: 22.169854825108697 energy: 22.169854825108697
1. area: 23.282397023726674 energy: 23.282397023726674
1. area: 23.282353101413886 energy: 23.282353101413886
1. area: 24.464341274946872 energy: 24.464341274946872
1. area: 24.464300513349833 energy: 24.464300513349833
1. area: 25.702179724171355 energy: 25.702179724171355
1. area: 25.702142204234143 energy: 25.702142204234143
Enter command: hessian
1. area: 25.702142204230316 energy: 25.702142204230316
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    1.2028324831242
2.    1.2028324831248
3.    2.0390079885519
Total eigenvalue changes in last iteration: 2.2813085e-11
Enter command: hh 3
1. area: 26.984000745573610 energy: 26.984000745573610
1. area: 26.983966587384145 energy: 26.983966587384145
1. area: 28.299631687326780 energy: 28.299631687326780
1. area: 28.299600930309420 energy: 28.299600930309420
1. area: 29.640558578947580 energy: 29.640558578947580
1. area: 29.640531147914665 energy: 29.640531147914665
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.685404735474365
2.    0.6854047354744
3.    1.4101587913280
Total eigenvalue changes in last iteration: 1.1820211e-11
Enter command: hh 2
1. area: 30.999756941441436 energy: 30.999756941441436
1. area: 30.999732661337180 energy: 30.999732661337180
1. area: 32.371490124371235 energy: 32.371490124371235
1. area: 32.371468752092902 energy: 32.371468752092902
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.4318734576755
2.    0.4318734576755
3.    1.1017886271341
Total eigenvalue changes in last iteration: 3.5383807e-11
Enter command: hh
1. area: 33.751107838484586 energy: 33.751107838484586
1. area: 33.751089095385431 energy: 33.751089095385431
Enter command: hessian
1. area: 33.751089095385161 energy: 33.751089095385161
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.3287582084304
2.    0.3287582084304
3.    0.9753637312583
Total eigenvalue changes in last iteration: 2.6318059e-11
Enter command: hh 2
1. area: 35.134861457652093 energy: 35.134861457652093
1. area: 35.134845055479197 energy: 35.134845055479197
1. area: 36.519742808618901 energy: 36.519742808618901
1. area: 36.519728467348564 energy: 36.519728467348564
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.1613606341053
2.    0.1613606341053
3.    0.7676766304843
Total eigenvalue changes in last iteration: 1.9774848e-11
Enter command: hh
1. area: 37.903347646791616 energy: 37.903347646791616
1. area: 37.903335105418506 energy: 37.903335105418506
Enter command: v
Body       target volume         actual volume        pressure
1      29.9999999999999      29.9999999999999     1.3823204062517
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.0939110493639
2.    0.0939110493639
3.    0.6826851644133
Total eigenvalue changes in last iteration: 5.9152683e-12
Enter command: hh
1. area: 39.283762236092727 energy: 39.283762236092727
1. area: 39.283751258070531 energy: 39.283751258070531
Enter command: ritz(0,3)
Eigencounts:   0 <,  0 ==,  167 >
1.    0.0354258781074
2.    0.0354258781074
3.    0.6080971191775
Total eigenvalue changes in last iteration: 1.1863177e-11
Enter command: hh
1. area: 40.659470270905231 energy: 40.659470270905231
WARNING: Hessian not positive definite. Index: 2

1. area: 40.659460645995907 energy: 40.659460645995907
```
So we have jumped over the bifurcation point. "Index 2" means there is now a 2-dimensional space of unstable modes, which is what we would expect from rotational symmetry and modes breaking that symmetry.
```
Enter command: ritz(0,3)
Eigencounts:   2 <,  0 ==,  165 >
1.   -0.0152968669252
2.   -0.0152968669252
3.    0.5425334950865
Total eigenvalue changes in last iteration: 7.9594109e-12

Enter command: hh 5
WARNING: Hessian not positive definite. Index: 2

1. area: 42.029277078218904 energy: 42.029277078218904
WARNING: Hessian not positive definite. Index: 2

1. area: 42.029268622090562 energy: 42.029268622090562
WARNING: Hessian not positive definite. Index: 2

1. area: 43.392248175480987 energy: 43.392248175480987
WARNING: Hessian not positive definite. Index: 2

1. area: 43.392240728029670 energy: 43.392240728029670
WARNING: Hessian not positive definite. Index: 2

1. area: 44.747659587835507 energy: 44.747659587835507
WARNING: Hessian not positive definite. Index: 2

1. area: 44.747653010889152 energy: 44.747653010889152
WARNING: Hessian not positive definite. Index: 2

1. area: 46.094957713094828 energy: 46.094957713094828
WARNING: Hessian not positive definite. Index: 2

1. area: 46.094951887907854 energy: 46.094951887907854
WARNING: Hessian not positive definite. Index: 2

1. area: 47.433726897337927 energy: 47.433726897337927
WARNING: Hessian not positive definite. Index: 2

1. area: 47.433721722119216 energy: 47.433721722119216
Enter command: ritz(0,3)
Eigencounts:   2 <,  0 ==,  165 >
1.   -0.1845951925697
2.   -0.1845951925696
3.    0.3134563697442
Total eigenvalue changes in last iteration: 1.2700507e-11
```
Now we are well into the unstable symmetric branch. We could use "saddle" to break the symmetry and move toward a minimum, or use "hessian_menu" to look at the unstable modes. Actually, we could have looked at the modes well before reaching the bifurcation point, since the shape of the modes does not change very much in the transition from stability to instability.
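A sketch of how one might break the symmetry from here (the command names are standard Evolver commands, but this particular sequence is illustrative and not taken from the transcript above):

```
saddle;             // move off the saddle along the most negative eigenvector
hessian; hessian;   // reconverge to the nearby asymmetric equilibrium
ritz(0,3);          // check that the lowest eigenvalues are now positive
```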

If I were trying to be really accurate, I would use the quadratic model and as many facets as my machine and patience could handle.

Other methods for evolving unstable surfaces:

• Using symmetry. If the unstable surface is more symmetric than the stable surfaces, then enforcing symmetry can remove the unstable modes. For example, a surface of revolution could be restricted to just a 90 degree wedge between two perpendicular mirror planes, with 90 degree contact angle on the planes.
• Using volume constraints. Recall that in general every constraint removes one degree of freedom in the configuration space. Hence a volume constraint has the potential to remove one unstable mode. For example, unstable catenoids can be made stable by adding a volume constraint and adjusting the volume until the pressure is 0.
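For the catenoid example, the volume-constraint trick might look like the following sketch (the target value is illustrative; one would adjust it by hand until the pressure reported by 'v' is near 0):

```
set body[1] target 16;  // impose a volume constraint (value illustrative)
hessian; hessian;       // reconverge on the constrained equilibrium
v;                      // inspect the pressure; adjust target until it is ~0
```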

## 12. Caveats and warnings.

When not near enough to an equilibrium, incautious use of "hessian" can wreck a surface. If you don't know what's going to happen, save the surface first. Or use hessian_menu, where you can restore the surface with option 7. Or set "check_increase", which aborts any hessian iteration that would increase energy. But remember that sometimes hessian should increase energy, for example to satisfy a volume constraint or to reach an unstable equilibrium.
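A cautious session might look like this sketch (the filename is illustrative; "dump" and the "check_increase" toggle are standard Evolver commands):

```
dump "before.dmp";   // save the surface first, in case hessian wrecks it
check_increase on;   // abort any hessian step that would raise energy
hessian;
check_increase off;  // turn it back off when an increase is legitimate
```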

It is best to use probe values below the least eigenvalue, so that H-cI is positive definite; this makes the factoring more numerically stable. If you suspect the factoring is having problems, try the "bunch_kaufman" toggle.
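For example, if ritz(0,3) reports a least eigenvalue around 0.09 (as in the transcript above), a probe value just below it keeps H-cI positive definite; the exact values here are illustrative:

```
ritz(0.05,3);      // probe value 0.05 is safely below the least eigenvalue
bunch_kaufman on;  // fall back to Bunch-Kaufman factoring if trouble persists
```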

Not all energies have built-in Hessians. If Evolver complains about lacking a Hessian, use the "convert_to_quantities" command, or restart Evolver with the -q option. This converts energies and volumes internally to the named quantity format, for which many more Hessians exist. Using any symmetry group (other than the built-in torus model) also requires converting to named quantities.
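Either form works; the datafile name here is the one from the example above:

```
convert_to_quantities   // at the Evolver command prompt
```

or, from the shell, start with `evolver -q catbody.fe`.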
