Sensitivity Analysis in the Degenerate Case for Perron Eigenvalues

We use a paper by N. Singhal and V. Pande as a reference and start from there, proceeding in analogy to its appendix B and extending the sensitivity analysis to the case where an eigenvalue is degenerate. We are interested in computing the stationary distribution of a system that has been sampled with short trajectories. This can lead to transition matrices that do not have a single unique Perron eigenvector but several, corresponding to completely separated subspaces. In particular, this can happen when a trajectory is started in states that are known to exist but have not been visited by previous trajectories. We concentrate only on the case where the eigenvector subspace for the eigenvalue one is degenerate; in this particular case a few steps even simplify.

To compute the stationary distribution we need the left eigenvectors of ${}\mathbf{P}$ for the eigenvalue one, or equivalently of ${}\mathbf{A}$ for the eigenvalue zero, which amounts to finding a basis of the kernel, i.e. the nullspace. To compute left eigenvectors with an algorithm written for right eigenvectors, one transposes the matrix in question and proceeds as usual.
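As a minimal numerical sketch of this transposition trick: the 4-state matrix P below is a made-up example with two completely separated subspaces, not taken from the reference.

\begin{verbatim}
import numpy as np

# Hypothetical transition matrix with two completely separated
# subspaces {0, 1} and {2, 3}: the Perron eigenvalue one of P
# (eigenvalue zero of A = P - I) is twofold degenerate.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.2, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 0.4, 0.6]])

# Left eigenvectors of P are right eigenvectors of the transpose P^T.
vals, vecs = np.linalg.eig(P.T)
kernel = np.real(vecs[:, np.isclose(vals, 1.0)]).T
print(kernel)   # rows span the nullspace of A^T
\end{verbatim}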

We start from the eigenvector equations below, where ${}\mathbf{R}\equiv\mathbf{R}(1)$ is the matrix of right eigenvectors (in columns) for the eigenvalue one, ${}\mathbf{L}\equiv\mathbf{L}(1)$ is analogously the matrix of left eigenvectors (in rows), and the ${}\mathbf{0}$'s are zero matrices of the matching sizes.
\[ \mathbf{A}\mathbf{R}=\mathbf{0}_{R}\]
\[ \mathbf{L}\mathbf{A}=\mathbf{0}_{L}\]

Since we are looking at stochastic matrices, we know that the right Perron eigenvectors are constant on each separated subspace. If we choose the right eigenvectors to be orthogonal, each of them is zero everywhere except on the indices referring to the states of one particular subspace; on those indices it is constant, and we choose this constant to be equal to one. Because the entries of the ${}r_{i}$ are zero or one and their supports are disjoint, we have (with $n$ the number of states and $b$ the number of subspaces)
\[ \sum_{k=1}^{n}r_{ki}r_{kj}=\left(\sum_{k=1}^{n}r_{ki}\right)\cdot\delta_{ij},\]
\[ \mathbf{R}^{T}\mathbf{R}=\mathrm{diag}(c_{1},\ldots,c_{b})\]
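A short sketch of this construction, assuming the subspace membership of each state is known; the labels array encodes the hypothetical example above.

\begin{verbatim}
# Hypothetical membership: states 0, 1 in subspace 0; states 2, 3 in subspace 1.
labels = np.array([0, 0, 1, 1])
n = labels.size          # number of states
b = labels.max() + 1     # number of separated subspaces

# Characteristic right eigenvectors: r_{ki} = 1 iff state k lies in subspace i.
R = np.zeros((n, b))
R[np.arange(n), labels] = 1.0

print(R.T @ R)           # diagonal: diag(c_1, ..., c_b) = diag(2, 2) here
\end{verbatim}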

We have now split the matrix into several subspaces, indicated by the characteristic vectors ${}r_{i}$. The left eigenvectors then describe the stationary distribution inside each of these subspaces. To proceed, we need to choose a basis of left eigenvectors that is also confined to the subspaces. We express this by
\[ \sum_{k=1}^{n}l_{ik}r_{kj}=f_{i}\cdot\delta_{ij},\]
\[ \mathbf{L}\mathbf{R}=\mathrm{diag}(f_{1},\ldots,f_{b})\]

Finally, we choose ${}f_{i}=1$, so that the stationary distribution in subspace $i$ is given by ${}l_{i}$. This is equivalent to
\[ ||l_{i}||_{1}=1\]
\[ \mathbf{L}\mathbf{R}=\mathbf{I}\]

and differs from (B4), where the Euclidean norm is chosen. For the sensitivity analysis we assume explicitly that the subspaces are preserved, i.e. that the degeneracy is not destroyed, which means that the ${}r_{i}$ are constant. Differentiating $\mathbf{L}\mathbf{R}=\mathbf{I}$ with $\partial\mathbf{R}/\partial p_{ij}=\mathbf{0}$ then gives
\[ \frac{\partial\mathbf{L}}{\partial p_{ij}}\mathbf{R}=0\]
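As a rough numerical sketch of the normalisation $||l_{i}||_{1}=1$ chosen above, continuing the example and reusing P, kernel, labels, and R from the sketches before; the row reordering is only needed because a generic eigensolver returns the degenerate eigenvectors in arbitrary order.

\begin{verbatim}
# Rescale each left eigenvector to ||l_i||_1 = 1; for the nonnegative,
# block-confined Perron eigenvectors this also enforces L R = I
# (block confinement holds here because P is exactly block diagonal).
L = kernel / kernel.sum(axis=1, keepdims=True)

# Reorder the rows so that l_i is the vector supported on subspace i.
L = L[np.argsort([labels[np.argmax(np.abs(l))] for l in L])]

print(L @ R)   # identity matrix up to rounding: L R = I
\end{verbatim}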

With these relations, equation (B6) takes the form
\[ \frac{\partial\mathbf{L}}{\partial p_{ij}}\left[\begin{array}{cc} \bar{\mathbf{A}} & \mathbf{R}\end{array}\right]=\mathbf{L}\left[\begin{array}{cc} -\left(\frac{\partial\mathbf{A}}{\partial p_{ij}}-\sum_{k=1}^{b}\frac{\partial\lambda_{k}}{\partial p_{ij}}\mathbf{I}\right) & \mathbf{0}\end{array}\right]\]
\[ \frac{\partial\mathbf{L}}{\partial p_{ij}}\left[\begin{array}{cc} \bar{\mathbf{A}} & \mathbf{R}\end{array}\right]=\mathbf{L}\left[\begin{array}{cc} -\left(e_{i}e_{j}^{T}-\left(\sum_{k=1}^{b}l_{jk}\right)\mathbf{I}\right) & \mathbf{0}\end{array}\right]\]

For computational reasons, we only need to compute the variation of ${}L_{mn}$ with respect to ${}p_{ij}$ when ${}i,j,m,n$ all lie in the same subspace; otherwise the value is zero. (This still has to be checked; it should hold at least as long as the degeneracy is maintained.) Written for a single left eigenvector ${}l_{k}$:
\[ \frac{\partial l_{k}}{\partial p_{ij}}\left[\begin{array}{cc} \bar{\mathbf{A}} & \mathbf{R}\end{array}\right]=l_{k}\left[\begin{array}{cc} -\left(e_{i}e_{j}^{T}-\left(\sum_{n=1}^{b}l_{jn}\right)\mathbf{I}\right) & \mathbf{0}\end{array}\right]\]
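Finally, a crude finite-difference check of this locality claim, continuing the numerical sketch above. This is not the scheme of the reference: it simply re-solves the perturbed eigenproblem, and the perturbed row is renormalised so that P stays stochastic and the degeneracy is maintained, rather than varying the bare entry $p_{ij}$ alone.

\begin{verbatim}
def stationary(P):
    """Left Perron eigenvectors of P as rows, each normalised to
    ||l||_1 = 1 and ordered by the subspace they are supported on."""
    vals, vecs = np.linalg.eig(P.T)
    L = np.real(vecs[:, np.isclose(vals, 1.0)]).T
    L = L / L.sum(axis=1, keepdims=True)
    return L[np.argsort([np.argmax(np.abs(l)) for l in L])]

# Perturb p_01 inside subspace {0, 1}, renormalising the row.
h = 1e-6
Ph = P.copy()
Ph[0, 1] += h
Ph[0] /= Ph[0].sum()

dL = (stationary(Ph) - stationary(P)) / h
print(np.round(dL, 3))  # nonzero only in the row/columns of subspace {0, 1}
\end{verbatim}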