Taken from Random Walks on Finite Quantum Groups by Franz & Gohm.

To motivate the definition of quantum Markov chains let us start with a reformulation of the classical Markov chain. Let M,\,G be finite sets. Any map b:M\times G\rightarrow M may be called an action of G on M. Let F(M) and F(G) be the *-algebras of complex functions on M and G. For all g\in G, we have unital *-homomorphisms (\checkmark) \alpha_g:F(M)\rightarrow F(M) given by (\alpha_g f)(x):=f(b(x,g)). They can be put together into a single unital *-homomorphism

\displaystyle\beta:F(M)\rightarrow F(M)\otimes F(G),\qquad f\mapsto \sum_{g\in G}\alpha_g f\otimes\mathbf{1}_{\{g\}},

where \mathbf{1}_{\{g\}} is the indicator function. We get a natural non-commutative generalisation just by allowing the algebras to become non-commutative (by replacing the C*-algebras F(M) and F(G) by more general (!), not necessarily commutative C*-algebras).
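As a concrete sketch of this encoding (the names M, G, b, alpha and beta below are mine, chosen for illustration), take M=G=\mathbb{Z}_3 with the action b(x,g)=x+g, represent F(M) by vectors of length |M| and F(M)\otimes F(G) by |M|\times|G| arrays, and check numerically that \beta is unital and multiplicative:

```python
import numpy as np

# Sketch: encode the classical action b : M x G -> M and the coaction
# beta(f) = sum_g (alpha_g f) (x) 1_{g}.  F(M) is represented by vectors of
# length |M|; F(M) (x) F(G) by |M| x |G| arrays, column g holding alpha_g f.
M = 3                              # M = {0, 1, 2}
G = 3                              # G = Z_3 acting by addition mod 3
b = lambda x, g: (x + g) % M       # the action b(x, g)

def alpha(g, f):
    # (alpha_g f)(x) = f(b(x, g))
    return np.array([f[b(x, g)] for x in range(M)])

def beta(f):
    # beta(f) = sum_g alpha_g(f) (x) 1_{g}
    return np.column_stack([alpha(g, f) for g in range(G)])

f1 = np.array([1.0, 2.0, 3.0])
f2 = np.array([0.0, 1.0, 5.0])

# beta is a *-homomorphism: it respects pointwise products, and it is
# unital: the constant function 1 goes to 1 (x) 1.
hom = np.allclose(beta(f1 * f2), beta(f1) * beta(f2))
unital = np.allclose(beta(np.ones(M)), np.ones((M, G)))
```

This is only a finite-dimensional illustration of the commutative case; the general definition below drops commutativity.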

Let B and A be unital C*-algebras and \beta:B\rightarrow B\otimes A a unital *-homomorphism. Here B\otimes A is the spatial tensor product. Then we can build up the following iterative scheme for n\geq 0:

j_0:B\rightarrow B,\qquad b\mapsto b

j_1:B\rightarrow B\otimes A,\qquad b\mapsto\beta(b)=b_{(0)}\otimes b_{(1)}.

(Sweedler’s notation b_{(0)}\otimes b_{(1)} stands for \sum_i b_{0i}\otimes b_{1i} and is very convenient for writing formulas).

\displaystyle j_n:B\rightarrow B\otimes\bigotimes_{1}^nA,\qquad j_n=(j_{n-1}\otimes I_A)\circ\beta,

\displaystyle b\mapsto j_{n-1}(b_{(0)})\otimes b_{(1)}\in \left(B\otimes\bigotimes_1^{n-1}A\right)\otimes A.

Clearly all the j_n are unital *-homomorphisms (\checkmark). If we want to have an algebra \hat{B} which includes all their ranges we can form the infinite tensor product \hat{A}:=\bigotimes_{1}^\infty A (the closure of the union of all of \bigotimes_1^{n} A with the natural inclusions x\mapsto x\otimes 1) and then \hat{B}=B\otimes\hat{A}.

Denote by \sigma the right shift on \hat{A}, i.e. \sigma(a_1\otimes a_2\otimes\cdots)=1_A\otimes a_1\otimes a_2\otimes\cdots. Using this we can also write

j_n:B\rightarrow \hat{B},\qquad b\mapsto \hat{\beta}^n(b\otimes 1_{\hat{A}}),

where \hat{\beta} is a unital *-homomorphism given by

\hat{\beta}:\hat{B}\rightarrow\hat{B},\qquad b\otimes a\mapsto\beta\circ(I_B\otimes\sigma)(b\otimes a)=\beta(b)\otimes a,

i.e. by applying the shift we first obtain b\otimes 1_A\otimes a\in\hat{B}, and then interpret ‘\beta\circ’ as the operation which replaces b\otimes 1_A by \beta(b). We may interpret \hat{\beta} as a kind of time evolution producing j_1,j_2,\dots

To do probability theory, consider states \psi,\,\phi on B,\,A and form product states
\displaystyle\psi\otimes\bigotimes_1^n\phi


for B\otimes \bigotimes_1^n A (in particular, for n=\infty, the infinite product state on \hat{B}, which we call \Psi). Now we can think of the j_n as non-commutative random variables with distributions \Psi\circ j_n, and \{j_n\}_{n\geq 0} is a non-commutative stochastic process. We call \psi the initial state and \phi the transition state.

In order to analyse this process, we define for n\geq 1 the linear maps:

\displaystyle Q_{[0,n-1]}:B\otimes\bigotimes_1^nA\rightarrow B\otimes\bigotimes_1^{n-1}A,

b\otimes a_1\otimes\cdots\otimes a_{n-1}\otimes a_n\mapsto b\otimes a_1\otimes\cdots \otimes a_{n-1}\phi(a_n).

In particular, Q:=Q_{[0,0]}=I_B\otimes \phi:B\otimes A\rightarrow B,\qquad b\otimes a\mapsto b\,\phi(a).

Such maps are often called slice maps. From a probabilistic point of view, it is common to refer to idempotent, norm-one, (completely) positive maps onto a C*-subalgebra as (non-commutative) conditional expectations. Clearly the slice map Q_{[0,n-1]} is a conditional expectation (with its range embedded by x\mapsto x\otimes 1) and it has the additional property of preserving the state, i.e. \Psi\circ Q_{[0,n-1]}=\Psi.
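A small numerical sketch of a slice map (the names d, phi and Q are mine), with tensors in B\otimes A stored as Kronecker-product vectors: on elementary tensors Q(b\otimes a)=b\,\phi(a), and composing with the embedding x\mapsto x\otimes 1 gives an idempotent map, as a conditional expectation should be:

```python
import numpy as np

# Sketch of the slice map Q = I_B (x) phi for B = A = F(X), |X| = 3.
d = 3
phi = np.array([0.5, 0.3, 0.2])            # a state on A: a probability vector

def Q(t):
    # Reshape t in B (x) A to a d x d array of coefficients of 1_{i} (x) 1_{j}
    # and slice out the A-leg with phi.
    return t.reshape(d, d) @ phi

b = np.array([1.0, 2.0, 3.0])
a = np.array([4.0, 0.0, 1.0])

# On elementary tensors, Q(b (x) a) = b * phi(a).
lhs = Q(np.kron(b, a))
rhs = b * (phi @ a)

# Embedding the range back via x -> x (x) 1 makes Q idempotent
# (phi is a state, so phi(1) = 1).
unit_A = np.ones(d)
idempotent = np.allclose(Q(np.kron(Q(np.kron(b, a)), unit_A)), Q(np.kron(b, a)))
```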

Proposition 2.1: The Markov Property

Q_{[0,n-1]}\circ j_n=j_{n-1}\circ T_\phi

where T_\phi:B\rightarrow B

b\mapsto Q\circ\beta(b)=(I_B\otimes\phi)\circ\beta(b)=b_{(0)}\phi(b_{(1)}).


Q_{[0,n-1]}j_n(b)=Q_{[0,n-1]}(j_{n-1}(b_{(0)})\otimes b_{(1)})

=j_{n-1}(b_{(0)})\phi(b_{(1)})=j_{n-1}T_\phi(b) \bullet

We interpret this as a Markov property of the process \{j_n\}_{n\geq0}. Note that if there are state-preserving conditional expectations P_{n-1} onto j_{n-1}(B) and P_{[0,n-1]} onto the algebraic span of j_0(B),\dots,j_{n-1}(B), then because P_{n-1} is dominated by P_{[0,n-1]} and P_{[0,n-1]} is dominated by Q_{[0,n-1]}, we get

P_{[0,n-1]}\circ j_n=j_{n-1}\circ T_\phi

(Markov Property)

The reader should check that for commutative algebras, this is the usual Markov property of classical probability.
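One way to build confidence in the proposition is to verify it numerically for a small commutative example. The sketch below (all names are mine) takes B=A=F(\mathbb{Z}_3) with the group-action coaction \beta(\mathbf{1}_{\{x\}})=\sum_z\mathbf{1}_{\{x+z\}}\otimes\mathbf{1}_{\{z\}} and an arbitrary state \phi, builds the matrices of the j_n, and checks Q_{[0,n-1]}\circ j_n=j_{n-1}\circ T_\phi:

```python
import numpy as np

# Numerical sanity check of Proposition 2.1 for B = A = F(Z_3).
d = 3
B = np.zeros((d * d, d))                    # matrix of beta : C^d -> C^d (x) C^d
for x in range(d):
    for z in range(d):
        B[((x + z) % d) * d + z, x] = 1.0   # 1_{x} -> sum_z 1_{x+z} (x) 1_{z}

phi = np.array([0.2, 0.5, 0.3])             # transition state on A

# T_phi = (I_B (x) phi) o beta as a d x d matrix.
T = np.kron(np.eye(d), phi) @ B

def j(n):
    # j_0 = id and j_n = (j_{n-1} (x) I_A) o beta, as matrices.
    J = np.eye(d)
    for _ in range(n):
        J = np.kron(J, np.eye(d)) @ B
    return J

# Q_{[0,n-1]} slices out the last A-leg with phi; check the Markov property.
n = 2
lhs = np.kron(np.eye(d ** n), phi) @ j(n)
rhs = j(n - 1) @ T
markov_holds = np.allclose(lhs, rhs)
```

The identity holds exactly (not just numerically) for any \beta and \phi, by the mixed-product rule for Kronecker products, which is essentially the one-line proof above.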

Suppose that X=\{1,\dots,d\} is a finite set and consider the C*-algebra F(X) of complex-valued functions on X. This algebra has basis \{\mathbf{1}_{\{i\}}:i\in X\}. We now construct the \{j_n\}_{n\geq 0}, the stochastic operator T_\phi and a conditional expectation E_{[0,n-1]}.

Now let A=B=F(X). We need a unital *-homomorphism \beta. Let us try

\beta:F(X)\rightarrow F(X)\otimes F(X),\qquad \mathbf{1}_{\{i\}}\mapsto \mathbf{1}_{\{i\}}\otimes \mathbf{1}_{X}.

It is not difficult to show that this is a unital *-homomorphism.

In this framework the \{j_n\} are quite simple:

\displaystyle j_n(\mathbf{1}_{\{i\}})=\mathbf{1}_{\{i\}}\otimes\bigotimes_{\ell=1}^{n}\mathbf{1}_X.

Now let \phi be a probability measure on X. We have

T_\phi(\mathbf{1}_{\{i\}})=(I_{F(X)}\otimes \phi)\circ\beta(\mathbf{1}_{\{i\}})=\mathbf{1}_{\{i\}}.

i.e. T_\phi=I_{F(X)}. Now we need to construct a conditional expectation E_{[0,n-1]}. The following seems reasonable:

\displaystyle E_{[0,n-1]}\left(\mathbf{1}_{\{i\}}\otimes\bigotimes_{\ell=1}^{n}\mathbf{1}_X\right)=\mathbf{1}_{\{i\}}\otimes\bigotimes_{\ell=1}^{n-1}\mathbf{1}_X.

I’m hoping to show that for commutative algebras the Markov property above is the correct one. Here I’ve started with a commutative algebra and showed that the Markov property holds. However, I don’t think I have the correct \beta. Even if I do find the correct \beta, I also need to show that classical Markov chains satisfy this quantum Markov property. The next paragraph contains a hint of how we might do this. Let \{\xi_n\}_{n\geq0} be a classical Markov chain with stochastic operator P=(p(i,j)). In the classical situation the stochastic operator can tell us a lot about the random variables \{\xi_n\}_{n\geq0}; hopefully in the quantum case also!

So in hoping to find the correct \beta for classical Markov chains, define

\displaystyle T_\phi(\mathbf{1}_{\{j\}})=\sum_{i=1}^dp(i,j)\mathbf{1}_{\{i\}}.
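This prescription just says that, in the basis of indicator functions, T_\phi acts as the transition matrix P: extending \mathbf{1}_{\{j\}}\mapsto\sum_i p(i,j)\mathbf{1}_{\{i\}} linearly gives (T_\phi f)(i)=\sum_j p(i,j)f(j). A quick numerical sketch (names mine):

```python
import numpy as np

# In the basis {1_{i}}, T_phi(1_{j}) = sum_i p(i,j) 1_{i} extends linearly
# to f = sum_j f(j) 1_{j} as ordinary matrix-vector multiplication by P.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])                 # a 2x2 transition matrix
d = P.shape[0]

def T_phi(f):
    # linear extension of 1_{j} -> sum_i p(i,j) 1_{i}
    return sum(f[j] * P[:, j] for j in range(d))

f = np.array([2.0, -1.0])
same = np.allclose(T_phi(f), P @ f)
```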

We should be able to get something out of this by looking at Q=I\otimes\phi=Q_{[0,0]}=E_{[0,0]}, which is connected to \phi via T_\phi=(I\otimes\phi)\circ \beta. This suggests that we can look at the Markov property for n=1 (recalling j_1=\beta and j_0=I_{F(X)}):

E_{[0,0]}\circ \beta=(I_{F(X)}\otimes\phi)\circ \beta=T_\phi.

With the prescription T_\phi(\mathbf{1}_{\{j\}})=\sum_i p(i,j)\mathbf{1}_{\{i\}} we should be able to write down a \beta that satisfies the Markov property for some \phi. Following this procedure I have derived

\displaystyle\beta(\mathbf{1}_{\{j\}})=\left\{\begin{array}{cc}\sum_{i=1}^dp(i,j)\mathbf{1}_{\{i\}}\otimes\frac{\mathbf{1}_{\{j\}}}{\phi(\mathbf{1}_{\{j\}})}&\text{ if }\phi(\mathbf{1}_{\{j\}})>0\\[2ex] 0&\text{ if }\phi(\mathbf{1}_{\{j\}})=0\end{array}\right.

… this isn’t linear. Does \phi determine p(i,j) and vice versa? If \phi is allowed to be chosen independently of p(i,j) then things don’t seem to work. We must establish a link between p(i,j) and \phi. To try and get linearity we have for A\subset X

\displaystyle\beta(\mathbf{1}_{A})=\sum_{i=1}^d\mathbf{1}_{\{i\}}\otimes \left(\sum_{\ell\in A}p(i,\ell)\mathbf{1}_{\{ \ell\}}\right).

As unlikely as it seems, this \beta is additive, and I’m going to extend it to a linear map by imposing homogeneity. In fact this is easier written as

\beta(\mathbf{1}_{\{j\}})=\sum_{i=1}^d\mathbf{1}_{\{i\}}\otimes p(i,j)\mathbf{1}_{\{j\}}.

However this implies that \phi(\mathbf{1}_{\{j\}})=1 for every j, which doesn’t seem correct… Well, thankfully I’m doing random walks on groups, as that case is perfectly straightforward; here I can’t seem to find a \beta to do the job at all. If I had thought for a moment then this shouldn’t have been a problem: it’s when we have a group action that we can write down the classical version in this way. Suppose again that X=\{1,\dots,d\} and we have an action of a finite group G on F(X):

\alpha_g: F(X)\rightarrow F(X),\qquad g\in G.

It is better that this is induced by a group action b:X\times G\rightarrow X via \alpha_g(f)(x)=f(b(x,g)). As before, this can be encoded as a coaction

\beta:F(X)\rightarrow F(X)\otimes F(G), \mathbf{1}_{\{x\}}\mapsto\sum_{g\in G}\alpha_g(\mathbf{1}_{\{x\}})\otimes\mathbf{1}_{\{g\}}.

My ‘interpretation’ of this is as follows. If the action is going to be by g, then \alpha_{g}\left(\mathbf{1}_{\{x\}}\right) is like the set of points which will be sent to x in that case. So it should be very difficult to do what I wanted to do above without a group action somewhere. Can we still do it? Perhaps we can construct a Markov chain which can’t be realised in this framework? Consider a two-state Markov chain with transition probabilities \lambda and \gamma. After quite a lot of calculations, I can show that this can only be done if \lambda=1/2=\gamma. This suggests that we can only do Markov chains which are induced by group actions, and this does not cover all Markov chains. Therefore I must disagree with the author and call this (classical) construction a non-invariant random walk on a finite group (NIRWG).

We must show therefore that the above Markov Property is the same as that of the classical chain when we have a NIRWG. Let X be a set and b:X\times G\rightarrow X be a group action. Let \nu\in M_p(G) be a probability measure on G and define a stochastic operator by:

\displaystyle p(x,y)=\nu\left(\{g\in G:b(x,g)=y\}\right)=\sum_{g\,:\,b(x,g)=y}\nu(g),\qquad\text{ for all }x,\,y\in X.
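A short sketch (names mine) of this stochastic operator for \mathbb{Z}_4 acting on itself by addition: accumulating \nu(g) into p(x,b(x,g)) produces a matrix whose rows are probability vectors, each a shift of \nu:

```python
import numpy as np

# Build p(x, y) = nu({g : b(x, g) = y}) for X = G = Z_4, b(x, g) = x + g.
n = 4
b = lambda x, g: (x + g) % n               # the action b : X x G -> X
nu = np.array([0.1, 0.6, 0.2, 0.1])        # a probability measure on G = Z_4

P = np.zeros((n, n))
for x in range(n):
    for g in range(n):
        P[x, b(x, g)] += nu[g]             # accumulate nu(g) into p(x, b(x,g))

rows_sum_to_one = np.allclose(P.sum(axis=1), 1.0)
```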

Now the unital *-homomorphisms \alpha_g are given by
\alpha_g(f)(x)=f(b(x,g)),\qquad f\in F(X),\ x\in X.


We also have a transition operator

T_\phi\left(\mathbf{1}_{\{x\}}\right)=\sum_{y\in X}p(y,x)\mathbf{1}_{\{y\}}.

Now let us work out what \phi is given by. It’s not easy (and possibly not unique) but we have


This seems to suggest that an action of G on X and a state on F(G) can define a classical non-invariant random walk, but that in the other direction the choice may not be unique. Hmmm, can this be inverted? At this point we have to ask questions about the comparative sizes of X and G, or rather about \Sigma:=\text{supp}(\nu). From now on I will assume that |\Sigma|=|X|.

We examine the Markov property for n=2.

Thus in the general case, we say that \{j_n\}_{n\geq0} is a quantum Markov chain on B. The map T_\phi is called the transition operator of the Markov chain. In the classical case, it can be identified with the transition matrix by choosing indicator functions of single points as a basis, i.e. T_\phi(\mathbf{1}_{\{j\}})=\sum_{i=1}^dp_{ij}\mathbf{1}_{\{i\}}. It is an instructive exercise to start with a given transition matrix P and to realise the classical Markov Chain with the construction above. As indicated I don’t think this is always possible.

Let \xi be the random walk on \mathbb{Z}_p with stochastic operator induced by the driving probability \nu(1)=\nu(2)=1/2. The action b we will be concerned with shall be left-multiplication (well, \mathbb{Z}_p is an additive abelian group, so this is really right-addition; I am considering the action to be ‘on’ x):

b:\mathbb{Z}_p\times \mathbb{Z}_p\rightarrow \mathbb{Z}_p,\qquad b(x,y)=x+y.

This yields the unital *-homomorphisms on F(G) given by:
\alpha_z(\mathbf{1}_{\{x\}})=\mathbf{1}_{\{x+z\}},\qquad z\in\mathbb{Z}_p.


They can be put together (encoded?) into a single unital *-homomorphism

\beta:F(G)\rightarrow F(G)\otimes F(G),\qquad f\mapsto \sum_{z=0}^{p-1}\alpha_z(f)\otimes\mathbf{1}_{\{z\}}.

Now \beta is linear and the indicator functions are a basis for F(G), hence we examine \beta(\mathbf{1}_{\{x\}}):
\displaystyle\beta(\mathbf{1}_{\{x\}})=\sum_{z=0}^{p-1}\alpha_z(\mathbf{1}_{\{x\}})\otimes\mathbf{1}_{\{z\}}=\sum_{z=0}^{p-1}\mathbf{1}_{\{x+z\}}\otimes\mathbf{1}_{\{z\}}.



My reading of this is that if the ‘action’ was z (\mathbf{1}_{\{z\}}), then the random walk will be (considering it started ‘at’ \mathbf{1}_{\{x\}}) ‘at’ \mathbf{1}_{\{x+z\}}.

Now we can examine the stochastic process \{j_n\}_{n\in\mathbb{N}}. We have that j_0(\mathbf{1}_{\{x\}})=\mathbf{1}_{\{x\}} and j_1(\mathbf{1}_{\{x\}})=\beta(\mathbf{1}_{\{x\}}). Now we calculate:

j_2(\mathbf{1}_{\{x\}})=(j_1\otimes I_{F(G)})\circ\beta(\mathbf{1}_{\{x\}}),

=(j_1\otimes I_{F(G)})\left(\sum_{z=0}^{p-1}\left(\mathbf{1}_{\{x+z\}}\otimes\mathbf{1}_{\{z\}}\right)\right),
\displaystyle=\sum_{z,w=0}^{p-1}\mathbf{1}_{\{x+z+w\}}\otimes\mathbf{1}_{\{w\}}\otimes\mathbf{1}_{\{z\}}.


My interpretation here is that the random walk is in a superposition of states; its true state can only be determined when we know which combination of z and w acted. Inductively, we have that
\displaystyle j_n(\mathbf{1}_{\{x\}})=\sum_{z_1,\dots,z_n=0}^{p-1}\mathbf{1}_{\{x+z_1+\cdots+z_n\}}\otimes\mathbf{1}_{\{z_n\}}\otimes\cdots\otimes\mathbf{1}_{\{z_1\}}.


Again the construction is clear: the \{j_n\} are indeed random variables on the function space, and further we can see that the random walk is the random variable given by x+\sum_{k=1}^nz_k.
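This identification can be checked numerically: the distribution of x+\sum_{k=1}^n z_k is the n-fold convolution \nu^{*n} shifted by x, which should agree with row x of the n-th power of the transition matrix. A sketch (names mine), using the driving probability \nu(1)=\nu(2)=1/2 from above:

```python
import numpy as np

# The walk on Z_p started at x is x + z_1 + ... + z_n with z_k iid ~ nu.
p = 5
x = 2
nu = np.array([0.0, 0.5, 0.5, 0.0, 0.0])   # driving measure nu(1) = nu(2) = 1/2

P = np.zeros((p, p))
for a in range(p):
    for z in range(p):
        P[a, (a + z) % p] += nu[z]         # p(a, a+z) = nu(z)

def convolve(mu, la):
    # convolution on Z_p: (mu * la)(s) = sum_z mu(z) la(s - z)
    return np.array([sum(mu[z] * la[(s - z) % p] for z in range(p))
                     for s in range(p)])

n = 3
dist = np.zeros(p); dist[x] = 1.0          # delta_x, the initial distribution
for _ in range(n):
    dist = convolve(dist, nu)

# Distribution after n steps: nu^{*n} shifted by x = row x of P^n.
same = np.allclose(dist, np.linalg.matrix_power(P, n)[x])
```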

Whoops: Most of the next paragraph doesn’t make sense. I would define the states on F(G) linearly by:
\mu(f)=\sum_{g\in G}f(g)\mu(\{g\}),\qquad f\in F(G),\ \mu\in M_p(G).


Eh, actually this fixes the next paragraph!   

For the probability theory, we might look at the probability measures M_p(\mathbb{Z}_p). For these to be states they need to be norm-one, positive elements in the dual of F(G). When F(G) is normed by the supremum norm, the elements of M_p(G) are in the dual space with the \ell^1 norm; thus we have that the probability measures are indeed norm one. Also, as we know that the positive elements of F(G) are those that are real-valued with f(x)\geq0, for such f we clearly have \mu(f)\in\mathbb{R}^+ for \mu\in M_p(G), so the probability measures are indeed our states.

I’d like to take \psi to be \delta^0, for a walk starting deterministically at 0, and \phi=\nu, but this doesn’t make a whole pile of sense at the moment, so we’ll just stick with \psi. So the distribution of j_n is given by \Psi\circ j_n, where \Psi=\psi\otimes\bigotimes_1^{\infty}\nu.

Now for the slice maps. This is where I am confused. My interpretation so far is that j_n\in F(G)\otimes\bigotimes_1^n F(G), where the first F(G) describes the current state, and the kth element of the tensor product of the n F(G)s accounts for the action at step k. The definition of the slice maps is very much at odds with this (probably false) interpretation. It’s pretty clear that the slice maps act on the j_n(F(G)). Considering Q_{[0,n-1]}(j_n(\mathbf{1}_{\{x\}})):
\displaystyle Q_{[0,n-1]}(j_n(\mathbf{1}_{\{x\}}))=\sum_{z_1,\dots,z_n=0}^{p-1}\phi(\mathbf{1}_{\{z_1\}})\,\mathbf{1}_{\{x+z_1+\cdots+z_n\}}\otimes\mathbf{1}_{\{z_n\}}\otimes\cdots\otimes\mathbf{1}_{\{z_2\}}.


Hmmm — it seems as if the actions are the wrong way around. Maybe we can do them the other way around?

Well anyway, let us consider the Q map and what it looks like. It acts on j_1(F(G)):
\displaystyle Q(j_1(\mathbf{1}_{\{x\}}))=(I_{F(G)}\otimes\phi)\left(\sum_{z=0}^{p-1}\mathbf{1}_{\{x+z\}}\otimes\mathbf{1}_{\{z\}}\right)=\sum_{z=0}^{p-1}\phi(\mathbf{1}_{\{z\}})\,\mathbf{1}_{\{x+z\}}.


This does seem to encode the walk that begins at x. Perhaps I am doing things slightly wrong — but looking at the “right-shift” formulation, the same problems emerge…

Now looking to the Markov property. Again we’ll just look at f=\mathbf{1}_{\{x\}}. On the left-hand side we have:
\displaystyle Q_{[0,n-1]}\circ j_n(\mathbf{1}_{\{x\}})=\sum_{z_1,\dots,z_n=0}^{p-1}\phi(\mathbf{1}_{\{z_1\}})\,\mathbf{1}_{\{x+z_1+\cdots+z_n\}}\otimes\mathbf{1}_{\{z_n\}}\otimes\cdots\otimes\mathbf{1}_{\{z_2\}}.


Now for the right-hand side. Firstly
T_\phi(\mathbf{1}_{\{x\}})=(I_{F(G)}\otimes\phi)\circ\beta(\mathbf{1}_{\{x\}})=\sum_{z=0}^{p-1}\phi(\mathbf{1}_{\{z\}})\,\mathbf{1}_{\{x+z\}}.


Now we must hit this with j_{n-1}; I’m happy enough that they are equal, actually. Now we need to endorse the interpretation that this is the usual Markov property of classical probability. We can take this to be as simple as:
\mathbb{P}[\xi_n=x_n\,|\,\xi_{n-1}=x_{n-1},\dots,\xi_0=x_0]=\mathbb{P}[\xi_n=x_n\,|\,\xi_{n-1}=x_{n-1}].


Analogous to a classic formula, we can also derive the following semigroup property for transition operators from the Markov property. It is one of the main reasons why Markov chains are easier than more general processes.

Corollary 2.2: Semigroup Property
T_\phi^n=Q_{[0,0]}\circ Q_{[0,1]}\circ\cdots\circ Q_{[0,n-1]}\circ j_n,\qquad n\geq1.


It might be useful to prove this in the commutative case for a random walk on a group G of order |G|=N. Both Q\circ j_n and T_\phi^n map into F(G), and as both are linear it suffices to prove the result for \mathbf{1}_{\{g\}}. As per usual, j_n(\mathbf{1}_{\{g\}}) is given by

\displaystyle\sum_{g_1,\dots,g_n\in G}\left(\mathbf{1}_{\left\{\left(\prod_{k=n}^1 g_k\right)g\right\}}\otimes\bigotimes_{k=1}^n\mathbf{1}_{\{g_k\}}\right).

Now, as far as I am concerned, the action of Q on this is unclear: Q=I_{F(G)}\otimes \phi acts on F(G)\otimes F(G).

On the other hand, it’s pretty easy to show that T_\phi^n(\mathbf{1}_{\{g\}}) is given by:

\displaystyle\sum_{g_1,\dots,g_n\in G}\left(\prod_{k=1}^n\nu(g_k)\right)\mathbf{1}_{\left\{\left(\prod_{k=n}^1g_k\right)g\right\}}.

I can’t reconcile this exactly, but if Q can act like this then there is no problem.
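For the abelian case at least, the explicit sum for T_\phi^n can be checked against the matrix power of T_\phi directly. A sketch (names mine) for G=\mathbb{Z}_4, enumerating all n-tuples (g_1,\dots,g_n):

```python
import numpy as np
from itertools import product

# Check that sum over (g_1..g_n) of (prod nu(g_k)) 1_{g_n + ... + g_1 + g}
# agrees with the matrix power T_phi^n applied to 1_{g}, for G = Z_4.
p = 4
nu = np.array([0.1, 0.4, 0.3, 0.2])
g = 1
n = 3

# Matrix of T_phi: T_phi(1_{h}) = sum_z nu(z) 1_{h+z}.
T = np.zeros((p, p))
for h in range(p):
    for z in range(p):
        T[(h + z) % p, h] += nu[z]

# The explicit sum over all n-tuples (g_1, ..., g_n).
explicit = np.zeros(p)
for gs in product(range(p), repeat=n):
    weight = np.prod([nu[k] for k in gs])
    explicit[(sum(gs) + g) % p] += weight

e_g = np.zeros(p); e_g[g] = 1.0
same = np.allclose(explicit, np.linalg.matrix_power(T, n) @ e_g)
```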

Finally we note that if (\psi\otimes\phi)\circ\beta=\psi, then \Psi\circ\hat{\beta}=\Psi.

Suppose that (\psi\otimes\phi)\circ\beta=\psi. Let b\in B; then there exist elementary tensors b_i\otimes a_i such that

\beta(b)=\sum_{i=1}^nb_i\otimes a_i.

Now apply the product state \psi\otimes\phi:
(\psi\otimes\phi)\circ\beta(b)=\sum_{i=1}^n\psi(b_i)\phi(a_i)=\psi(b).


Now consider b\otimes a embedded into \hat{B}. Then

\hat{\beta}(b\otimes a)=\sum_{i=1}^nb_i\otimes a_i\otimes a

\Rightarrow \Psi\circ\hat{\beta}(b\otimes a)=\underbrace{\left(\sum_{i=1}^n\psi(b_i)\phi(a_i)\right)}_{=\psi(b)}\phi(a)

=\psi(b)\phi(a)=\Psi(b\otimes a),

and the claim is proved \bullet

This implies that the Markov chain is stationary, i.e. correlations between random variables depend only on time differences (???). In particular, the state \psi is then preserved by T_\phi, i.e. \psi\circ T_\phi=\psi.

Suppose that we have a stationary Markov chain, let b\in B, and write \beta(b)=\sum_{i=1}^nb_i\otimes a_i. Now
T_\phi(b)=(I_B\otimes\phi)\circ\beta(b)=\sum_{i=1}^nb_i\,\phi(a_i)


\Rightarrow \psi\circ T_\phi(b)=\sum_{i=1}^n\psi(b_i)\phi(a_i)=\psi(b) \bullet 
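For the \mathbb{Z}_p walk this invariance is easy to check numerically: the uniform measure \psi satisfies \psi\circ T_\phi=\psi, since each column of the matrix of T_\phi sums to 1. A sketch (names mine):

```python
import numpy as np

# Check psi o T_phi = psi for the walk on Z_5 with the uniform state psi.
p = 5
nu = np.array([0.0, 0.5, 0.5, 0.0, 0.0])   # driving measure nu(1) = nu(2) = 1/2

# Matrix of T_phi: T_phi(1_{h}) = sum_z nu(z) 1_{h+z}.
T = np.zeros((p, p))
for h in range(p):
    for z in range(p):
        T[(h + z) % p, h] += nu[z]

psi = np.full(p, 1.0 / p)                  # the uniform probability measure

# psi o T_phi as a functional f -> psi(T_phi f) is the row vector psi^T T.
preserved = np.allclose(psi @ T, psi)
```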

The construction above is called coupling to a shift.