This is a discussion of whether Detailed Balance is necessary in Markov models.
The probability of being in a state o at some time t is denoted by $p_o(t)$; in a Markov model it evolves in discrete timesteps according to the transition probabilities between the states.
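The original formula is not reproduced in the text. Writing $W_{o' \to o}$ for the transition probability from state $o'$ to state $o$ (a symbol assumed here, not taken from the source), a plausible form of the evolution is

$$
p_o(t+1) = \sum_{o'} W_{o' \to o}\, p_{o'}(t), \qquad \sum_{o'} W_{o \to o'} = 1 .
$$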
The change in the probability during one unit timestep can then be expressed as the difference between incoming and outgoing flux.
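Separating the flux into state $o$ from the flux out of it (same assumed notation as above), this reads

$$
p_o(t+1) - p_o(t) = \sum_{o' \neq o} \Big[ W_{o' \to o}\, p_{o'}(t) - W_{o \to o'}\, p_o(t) \Big] .
$$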
For the stationary distribution, the probability of every state must stay constant in time, which implies that the sum above vanishes identically. This means that each state receives as much probability as it loses. In which way this happens does not matter.
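Dropping the time argument for the stationary distribution $p_o$, the condition reads

$$
\sum_{o' \neq o} \Big[ W_{o' \to o}\, p_{o'} - W_{o \to o'}\, p_o \Big] = 0 \quad \text{for every state } o .
$$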
One simple way to ensure this is to balance the flux between two states in both directions, separately for every pair of states. This assumption is stronger than what we actually wanted, but it fulfills its purpose: it is sufficient, but, and this is important, not necessary. It is referred to as Detailed Balance.
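In the assumed notation, Detailed Balance requires each term of the sum to vanish on its own:

$$
W_{o' \to o}\, p_{o'} = W_{o \to o'}\, p_o \quad \text{for every pair of states } o, o' .
$$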
With Monte Carlo simulation the stress is on sampling the stationary distribution and no longer on the dynamics. The idea is to explore phase space in a somewhat random fashion and to reject states during this exploration with the right probability, so that the remaining time series represents the stationary distribution.
This means that in an MC algorithm a probability flux matrix is constructed in such a way that it reproduces the right stationary distribution. To do this, the flux from a state o to a state o' is split into a proposal step and an acceptance step.
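Writing $g(o \to o')$ for the proposal probability and $A(o \to o')$ for the acceptance probability (symbols assumed here, not taken from the source), this finally gives

$$
W_{o \to o'} = g(o \to o')\, A(o \to o') .
$$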
The proposal probability can be chosen almost arbitrarily, although one usually uses a symmetric one.
Finally, the detailed balance condition is used to determine the acceptance probability.
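Inserting the factorization into the detailed balance condition fixes the ratio of the forward and backward acceptance probabilities (same assumed symbols as above):

$$
\frac{A(o \to o')}{A(o' \to o)} = \frac{g(o' \to o)\, p_{o'}}{g(o \to o')\, p_o} .
$$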
If the proposal is symmetric, the proposal probabilities cancel and only the ratio of the stationary probabilities determines the acceptance.
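The resulting formula is not reproduced in the text; the standard Metropolis choice consistent with the condition above is

$$
A(o \to o') = \min\!\left( 1,\ \frac{p_{o'}}{p_o} \right) .
$$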
This idea was first presented by N. Metropolis et al., J. Chem. Phys. 21, 1087 (1953).
In general, all Monte Carlo simulations use this trick; they differ only in the proposal step, which is adapted to the problem at hand. The better or more intelligent the proposal step, the higher the acceptance rate and the faster the time series converges to the stationary distribution.
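As an illustration only, here is a minimal sketch of the scheme described above: a random-walk Metropolis sampler with a symmetric Gaussian proposal and the min(1, p'/p) acceptance rule. The function names, parameters, and the toy target distribution are assumptions made for this example, not taken from the source.

```python
import math
import random


def metropolis_sample(log_p, x0, n_steps, step_size=0.5, rng=None):
    """Sample from an unnormalized density exp(log_p(x)) using a symmetric
    random-walk proposal and the Metropolis acceptance rule."""
    rng = rng or random.Random()
    x = x0
    log_px = log_p(x)
    samples = []
    for _ in range(n_steps):
        # Symmetric proposal: g(x -> x') = g(x' -> x), so g cancels in the
        # acceptance ratio and only the target probabilities remain.
        x_new = x + rng.gauss(0.0, step_size)
        log_px_new = log_p(x_new)
        # Metropolis acceptance: accept with probability min(1, p'/p),
        # evaluated in log space for numerical stability.
        delta = log_px_new - log_px
        if delta >= 0 or rng.random() < math.exp(delta):
            x, log_px = x_new, log_px_new
        samples.append(x)  # a rejected move repeats the current state
    return samples


# Toy usage: sample a standard normal, whose log density is known up to a constant.
chain = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=10_000)
mean = sum(chain) / len(chain)
var = sum((x - mean) ** 2 for x in chain) / len(chain)
print(f"sample mean ~ {mean:.3f}, sample variance ~ {var:.3f}")
```

Working with log probabilities keeps the acceptance test numerically stable, and the proposal width step_size controls the acceptance rate mentioned above.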
Unfortunately, one cannot in general prove that the simulation is ergodic. For this we need