# Matrix Functions¶

Consider an \(n \times n\) matrix \(\mathbf{A}\) and a scalar function \(f \, : \, \mathbb{C} \rightarrow \mathbb{C}\). We can then define the matrix function \(f(\mathbf{A})\) as

\[f(\mathbf{A}) = \frac{1}{2 \pi i} \oint_{\Gamma} f(z) \, (z \mathbf{I} - \mathbf{A})^{-1} \, dz,\]

where the contour \(\Gamma\) encloses all eigenvalues of \(\mathbf{A}\). This is called the Cauchy integral representation of the matrix function \(f(\mathbf{A})\). Equivalently, assuming that the matrix \(\mathbf{A}\) is diagonalizable such that \(\mathbf{A} = \mathbf{VDV}^{-1}\), where \(\mathbf{D}\) is a diagonal matrix holding the eigenvalues \(\lambda_i\) of \(\mathbf{A}\) and the columns of \(\mathbf{V}\) store the corresponding eigenvectors, the matrix function can be calculated as

\[f(\mathbf{A}) = \mathbf{V} f(\mathbf{D}) \mathbf{V}^{-1}, \qquad f(\mathbf{D}) = \operatorname{diag}\bigl(f(\lambda_1), \ldots, f(\lambda_n)\bigr).\]

It is worth mentioning that \(f(\mathbf{A})\) will often produce a dense matrix even if input matrix \(\mathbf{A}\) is sparse. Moreover, computing the full matrix function is generally a very expensive operation.

Just as the matrix inverse is not needed to solve a linear system, most practical applications require the evaluation of \(f(\mathbf{A}) \, \vec{b}\) (i.e., the action of the matrix function on a vector \(\vec{b}\)) rather than the full dense matrix function \(f(\mathbf{A})\). To this end, many efficient methods exist, usually based on Krylov subspaces. These methods consist of generating a Krylov basis from the input matrix and then evaluating the function on the projected matrix with a direct method. Popular matrix functions include the inverse \(\mathbf{A}^{-1}\), the exponential \(e^\mathbf{A}\), and the square root \(\sqrt{\mathbf{A}}\).
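The Krylov projection just described can be summarized by the standard Arnoldi-based approximation (a generic formula, not HighFM-specific):

\[f(\mathbf{A}) \, \vec{b} \;\approx\; \lVert \vec{b} \rVert_2 \; \mathbf{V}_m \, f(\mathbf{H}_m) \, \vec{e}_1,\]

where \(\mathbf{V}_m\) is an orthonormal basis of the Krylov subspace \(\mathcal{K}_m(\mathbf{A}, \vec{b}) = \operatorname{span}\{\vec{b}, \mathbf{A}\vec{b}, \ldots, \mathbf{A}^{m-1}\vec{b}\}\), \(\mathbf{H}_m = \mathbf{V}_m^{*} \mathbf{A} \mathbf{V}_m\) is the small projected matrix, and \(f(\mathbf{H}_m)\) is evaluated with a direct dense method. Since \(m \ll n\), this avoids ever forming the dense \(f(\mathbf{A})\).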

## Full Matrix Function¶

HighFM supports the following functions over **dense** matrices:

- `expm(beta, A)` = \(\exp{(\beta \, \mathbf{A})}\)
- `sqrtm(beta, A)` = \(\sqrt{\beta \, \mathbf{A}}\)
- `cosm(beta, A)` = \(\cos{(\beta \, \mathbf{A})}\)
- `sinm(beta, A)` = \(\sin{(\beta \, \mathbf{A})}\)
- `sincm(beta, A)` = \((\beta \, \mathbf{A})^{-1} \sin{(\beta \, \mathbf{A})}\)
- `cosm_sqrt(beta, A)` = \(\cos{(\beta \, \sqrt{\mathbf{A}})}\)
- `sincm_sqrt(beta, A)` = \((\beta \, \sqrt{\mathbf{A}})^{-1} \sin{(\beta \, \sqrt{\mathbf{A}})}\)

Note that neither `cosm_sqrt(beta, A)` nor `sincm_sqrt(beta, A)` calculates the square root explicitly.

Note

You can use `sincosm()`, `sinccosm()`, and `sinccosm_sqrt()` to calculate the matrix sine/`sinc` and cosine simultaneously. In this case, you must use `tie(S, C)` to assign the result to the matrices `S` and `C`. This is more efficient than calculating each function separately.

Warning

The square root of a matrix is only unique if its eigenvalues are positive; otherwise, multiple (complex) square roots may exist. If the matrix is singular, this routine will fail. Moreover, `sqrtm(beta, A)` preserves the type of \(\mathbf{A}\), i.e., the algorithm will only produce the real part of the square root if `A` uses `float` or `double`.

Warning

All of these routines are very expensive in terms of arithmetic operations, scaling as \(O(n^3)\), where `n` is the size of the matrix, with a large constant.

## The Action of the Matrix Function¶

Similarly, to compute \(f(\mathbf{A}) \; \vec{b}\), you can call the following functions:

- `expmv(beta, A, b, method)` = \(\exp{(\beta \, \mathbf{A})} \, \vec{b}\)
- `sqrtmv(beta, A, b, method)` = \(\sqrt{\beta \, \mathbf{A}} \, \vec{b}\)
- `cosmv(beta, A, b, method)` = \(\cos{(\beta \, \mathbf{A})} \, \vec{b}\)
- `sinmv(beta, A, b, method)` = \(\sin{(\beta \, \mathbf{A})} \, \vec{b}\)
- `sincmv(beta, A, b, method)` = \((\beta \, \mathbf{A})^{-1} \sin{(\beta \, \mathbf{A})} \, \vec{b}\)
- `cosmv_sqrt(beta, A, b, method)` = \(\cos{(\beta \, \sqrt{\mathbf{A}})} \, \vec{b}\)
- `sincmv_sqrt(beta, A, b, method)` = \((\beta \, \sqrt{\mathbf{A}})^{-1} \sin{(\beta \, \sqrt{\mathbf{A}})} \, \vec{b}\)

where `method` is one of the following:

Note

`ExpvTaylor` can only compute the matrix exponential.

You can use `sincosmv()`, `sinccosmv()`, and `sinccosmv_sqrt()` to calculate the action of the matrix sine/`sinc` and cosine simultaneously. In this case, you must use `tie(vs, vc)` to assign the result to the vectors `vs` and `vc`. This is more efficient than calculating each function separately.
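For instance, the simultaneous sine/cosine action might look as follows. This is a sketch only: the argument order mirrors the listing above, and the exact `sincosmv()` signature should be checked against the API reference.

```
CSRMatrix<double> A;
Vector<double> b, vs, vc;
double t;
// ... Initialize A, b, and t ... //
tie(vs, vc) = sincosmv(t, A, b, RestartedKrylov());
```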

For example, \(\vec{x} = e^{\mathbf{A}t} \; \vec{b}\) can be calculated as

```
CSRMatrix<double> A;
Vector<double> b;
Vector<double> x;
// ... Initialize A and b ... //
x = expmv(A, t, b, RestartedKrylov());
```

You can also create a solver instance, and then use it for several function calls:

```
RestartedKrylov krylov({.atol = 1E-6,
                        .rtol = 1E-8,
                        .restart_length = 20,
                        .max_restarts = 100});
x1 = expmv(A1, t1, b1, krylov);
x2 = expmv(A1, t2, b2, krylov);
x3 = expmv(A2, t3, b1, krylov);
```

Here, we create a new Restarted Krylov instance with the absolute tolerance set to `atol = 1E-6`; the relative tolerance, `rtol = 1E-8`; the restart cycle set to `20` iterations; and the maximum number of restarts set to `100`. We then pass this instance to each function call, reusing the solver and all of its allocated memory.