
Just back from a great workshop at Seoul National University, I am going to use this piece to outline, in a relaxed manner, my key goals for my work on random walks on quantum groups for the near future.

In the very short term I want to try and get a much sharper lower bound for my random walk on the Sekine family of quantum groups. I believe the projection onto the ‘middle’ of the M_n(\mathbb{C}) factor might provide something of use. On mature reflection, recognising that the application of the upper bound lemma is dominated by one set of terms in particular, it should be possible to use cruder but more elegant estimates to get the same upper bound with lighter calculations (and also a smaller \alpha; see Section 5.7).

I also want to understand how sharp (or otherwise) the order n^n convergence for the random walk on the dual of S_n is; n^n sounds awfully high. Furthermore, it should be possible to get a better lower bound than the one I have.

It should also be possible to redefine the quantum total variation distance as a supremum over projections, in analogy with subsets via G \supset S\leftrightarrow \mathbf{1}_S. If I can show that a positive linear functional \rho satisfies |\rho(a)|\leq \rho(|a|), then these ideas should go through. More on this soon hopefully. (No, this approach won’t work.)
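For orientation, in the classical (commutative) case the correspondence S\leftrightarrow\mathbf{1}_S does give \|\mu-\nu\|=\sup_{S\subset G}|\mu(S)-\nu(S)|=\frac{1}{2}\sum_{t\in G}|\mu(t)-\nu(t)|. A quick numerical sketch of that classical identity (nothing quantum here; the distributions are made up):

```python
import numpy as np
from itertools import chain, combinations

# two made-up probability distributions on a four-point set
mu = np.array([0.5, 0.2, 0.2, 0.1])
nu = np.array([0.25, 0.25, 0.25, 0.25])

# supremum of |mu(S) - nu(S)| over all subsets S
points = range(len(mu))
subsets = chain.from_iterable(combinations(points, r) for r in range(len(mu) + 1))
sup_over_subsets = max(abs(mu[list(S)].sum() - nu[list(S)].sum()) for S in subsets)

# half the l^1-norm of mu - nu
half_l1 = 0.5 * np.abs(mu - nu).sum()

print(np.isclose(sup_over_subsets, half_l1))  # True: the two expressions agree
```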

The next thing I might like to do is look at a random walk on the Sekine quantum groups with an n-dependent driving probability and see if I can detect the cut-off phenomenon (Chapter 4). This will need good lower bounds for k\ll t_n, some cut-off time.

Going back to the start, the classical problem began around 1904 with the question of Markov:

Which card shuffles mix up a deck of cards and cause it to ‘go random’?

For example, the perfect riffle shuffle does not mix up the cards at all while a riffle shuffle done by an amateur will.
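To make the first claim concrete: a perfect riffle shuffle is a deterministic permutation, and for a 52-card deck eight perfect out-shuffles return the deck to its original order. A small sketch verifying this (the function name is my own):

```python
import numpy as np

def perfect_out_shuffle(deck):
    """Cut exactly in half and interleave perfectly, keeping the
    original top and bottom cards in place (an 'out-shuffle')."""
    half = len(deck) // 2
    shuffled = np.empty_like(deck)
    shuffled[0::2] = deck[:half]
    shuffled[1::2] = deck[half:]
    return shuffled

deck = np.arange(52)
d = deck.copy()
for _ in range(8):
    d = perfect_out_shuffle(d)
print(np.array_equal(d, deck))  # True: eight perfect out-shuffles restore the deck
```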

In the context of random walks on classical groups this question is answered by the Ergodic Theorem 1.3.2: the walk converges to random precisely when the driving probability is not concentrated on a proper subgroup (irreducibility) nor on a coset of a proper normal subgroup (aperiodicity).
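A toy illustration of both failure modes on the cyclic group \mathbb{Z}_6 (my own sketch, with made-up helper names): a driving probability supported on the proper subgroup \{0,2,4\}, or on the coset 1+\{0,2,4\}, never gets close to uniform, while one supported on \{0,1\} converges quickly.

```python
import numpy as np

def tv_to_uniform(nu, k):
    """Total variation distance between the k-th convolution power of nu
    (a probability on Z_n, started at the identity) and the uniform distribution."""
    n = len(nu)
    dist = np.zeros(n)
    dist[0] = 1.0
    for _ in range(k):
        dist = np.array([sum(dist[j] * nu[(i - j) % n] for j in range(n))
                         for i in range(n)])
    return 0.5 * np.abs(dist - 1.0 / n).sum()

# three driving probabilities on Z_6
on_subgroup = np.array([0, 0, 0.5, 0, 0.5, 0])   # supported on the proper subgroup {0, 2, 4}
on_coset    = np.array([0, 0.5, 0, 0.5, 0, 0])   # supported on the coset 1 + {0, 2, 4}
ergodic     = np.array([0.5, 0.5, 0, 0, 0, 0])   # supported on {0, 1}: irreducible and aperiodic

for name, nu in [("subgroup", on_subgroup), ("coset", on_coset), ("ergodic", ergodic)]:
    print(name, round(tv_to_uniform(nu, 50), 4))
# the first two stay at distance 1/2 from random; only the third converges
```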

Necessary and sufficient conditions on the driving probability \nu\in M_p(\mathbb{G}) for the random walk on a quantum group to converge to random are required. It is expected that the conditions may be more difficult than the classical case. However, it may be possible to use Diaconis-Van Daele theory to get some results in this direction. It should be possible to completely analyse some examples (such as the Kac-Paljutkin quantum group of order 8).

This will involve a study of subgroups of quantum groups as well as normal quantum subgroups.

It should be straightforward to extend the Upper Bound Lemma (Lemma 5.3.8) to the case of compact Kac algebras. Once that is done I will want to look at quantum generalisations of ‘natural’ random walks and shuffles.

I intend also to put the PhD thesis on the arXiv. After this I have a number of options as regards publishing what I have, or perhaps waiting a little while until I solve the above problems; this will all depend on how my further study progresses.

 


Slides of a talk given at the Topological Quantum Groups and Harmonic Analysis workshop at Seoul National University, May 2017.

Abstract: A central tool in the study of ergodic random walks on finite groups is the Upper Bound Lemma of Diaconis & Shahshahani. The Upper Bound Lemma uses the representation theory of the group to generate upper bounds for the distance to random and thus can be used to determine convergence rates for ergodic walks. These ideas are generalised to the case of finite quantum groups.

After a long time I have finally completed my PhD studies, having handed in my hardbound thesis (a copy of which you can see here).

It was a very long road but thankfully now the pressure is lifted and I can enjoy my study of quantum groups and random walks thereon for many years to come.

I have finally finished the first draft of my PhD thesis. My advisor Dr Stephen Wills is presently reading through it and will get back to me with his comments in the next few weeks. The project was successful in that I managed to prove the Diaconis-Shahshahani Upper Bound Lemma for finite quantum groups… how successful my application of the Lemma to concrete examples is remains open to debate. First draft of abstract and introduction (without references) below the fold.


Let \mathbb{G} be a finite quantum group described by A=\mathcal{C}(\mathbb{G}) with an involutive antipode, S^2=I_A. (I know this holds in the commutative and cocommutative cases; I am not sure at this point how restrictive it is in general. The compact matrix quantum groups have this property so it isn’t a terrible restriction.) Under the assumption of finiteness, there is a unique Haar state, h:A\rightarrow \mathbb{C}, on A.

Representation Theory

A representation of \mathbb{G} is a linear map \kappa:V\rightarrow V\otimes A that satisfies

\left(\kappa\otimes I_A\right)\circ\kappa =\left(I_V\otimes \Delta\right)\circ \kappa\text{\qquad and \qquad}\left(I_V\otimes\varepsilon\right)\circ \kappa=I_V.

The dimension of \kappa is given by \dim\,V. If V has basis \{e_i\} then we can define the matrix elements of \kappa by

\displaystyle\kappa\left(e_j\right)=\sum_i e_i\otimes\rho_{ij}.

One property of these that we will use is that \varepsilon\left(\rho_{ij}\right)=\delta_{i,j}.

Two representations \kappa_1:V_1\rightarrow V_1\otimes A and \kappa_2:V_2\rightarrow V_2\otimes A are said to be equivalent, \kappa_1\equiv \kappa_2, if there is an invertible intertwiner between them. An intertwiner between \kappa_1 and \kappa_2 is a map T\in L\left(V_1,V_2\right) such that

\displaystyle\kappa_2\circ T=\left(T\otimes I_A\right)\circ \kappa_1.

We can show that every representation is equivalent to a unitary representation.

Timmermann shows that if \{\kappa_\alpha\}_{\alpha} is a maximal family of pairwise inequivalent irreducible representations, then \{\rho_{ij}^\alpha\}_{\alpha,i,j} is a basis of A. When we refer to “the matrix elements” we always refer to such a family. We define the span of \{\rho_{ij}\} as \mathcal{C}\left(\kappa\right), the space of matrix elements of \kappa.

Given a representation \kappa, we define its conjugate, \overline{\kappa}:\overline{V}\rightarrow\overline{V}\otimes A, where \overline{V} is the conjugate vector space of V, by

\displaystyle\overline{\kappa}\left(\bar{e_j}\right)=\sum_i \bar{e_i}\otimes\rho_{ij}^*,

so that the matrix elements of \overline{\kappa} are \{\rho_{ij}^*\}.

Timmermann shows that the matrix elements have the following orthogonality relations:

  • If \alpha and \beta are inequivalent then h\left(a^*b\right)=0, for all a\in \mathcal{C}\left(\kappa_\alpha\right) and b\in\mathcal{C}\left(\kappa_\beta\right).
  • If \kappa is such that the conjugate, \overline{\kappa}, is equivalent to a unitary representation (as is automatic in the finite-dimensional case), then we have

\displaystyle h\left(\rho_{ij}^*\rho_{kl}\right)=\frac{\delta_{i,k}\delta_{j,l}}{d_\alpha}.

This second relation is more complicated without the S^2=I_A assumption and refers to the entries and trace of an intertwiner F from \kappa to the corepresentation with matrix elements \{S^2\left(\rho_{ij}\right)\}. If S^2=I_A, then this intertwiner is simply the identity on V, and so the entries \left[F\right]_{ij}=\delta_{i,j} and the trace is d=\dim V.
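In the commutative case A=\mathcal{C}(G) for a finite group G, the Haar state is the average over the group and these relations reduce to the classical Schur orthogonality relations. A quick numerical check for the two-dimensional irreducible unitary representation of S_3\cong D_3 (the particular matrices are just one convenient model):

```python
import numpy as np
from itertools import product

# a unitary model of the two-dimensional irreducible representation of S_3 ~ D_3
theta = 2 * np.pi / 3
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # order-three rotation
s = np.array([[1.0,  0.0],
              [0.0, -1.0]])                         # reflection
group = [np.linalg.matrix_power(r, k) @ np.linalg.matrix_power(s, e)
         for k in range(3) for e in range(2)]       # all six group elements

d = 2
for i, j, k, l in product(range(d), repeat=4):
    # the Haar state on C(S_3) averages over the group;
    # h(rho_ij^* rho_kl) should be delta_ik delta_jl / d
    val = np.mean([np.conj(g[i, j]) * g[k, l] for g in group])
    expected = (1.0 if (i == k and j == l) else 0.0) / d
    assert abs(val - expected) < 1e-12

print("orthogonality relations verified for the 2-dimensional irrep of S_3")
```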

Denote by \text{Irr}(\mathbb{G}) the set of unitary equivalence classes of irreducible unitary representations of \mathbb{G}. For each \alpha\in\text{Irr}(\mathbb{G}), let \kappa_\alpha:V_{\alpha}\rightarrow V_{\alpha}\otimes A be a representative of the class \alpha where V_\alpha is the finite dimensional vector space on which \kappa_\alpha acts.

Diaconis-Van Daele Fourier Theory


The following runs a thread through what I’ve looked at over the past year: Progression Report.

I have continued to work through Murphy: http://books.google.com/books?id=emNvQgAACAAJ&dq=gerald+murphy+c*+algebras+and+operator+theory&h

I managed to get through two sections last week: Compact Hilbert Space Operators and The Spectral Theorem. I also have 9 of 12 chapter 2 exercises completed. I have been writing my study up here and this is proving fruitful on three counts:

  1. I can put questions in red for my supervisor to see
  2. I am not happy putting up something on this page that I haven’t justified to myself. This means I have to fill in some extra steps (in blue)
  3. I should have a nice set of notes to peruse should I need them

Unfortunately this week will be mostly concerned with preparing lectures for two modules that I will be lecturing in CIT:

MATH6014

MATH6037

I have continued to work through Murphy: http://books.google.com/books?id=emNvQgAACAAJ&dq=gerald+murphy+c*+algebras+and+operator+theory&h

Before the Christmas break I finished off the chapter 1 exercises.

Chapter 2: C*-Algebras and Hilbert Space Operators.

2.1 C*-Algebras

Initially we defined a C*-algebra, A, as a complete normed algebra, together with a conjugate-linear involution * that satisfies the C*-equation:

\|a^*a\|=\|a\|^2, \forall\,a\in A

Self-adjoint or Hermitian elements are defined by the property a^*=a. As a consequence of this and the C*-equation, the spectral radius of a self-adjoint element, \nu(a), is equal to its norm, \|a\|. As a corollary, of all the norms that can be put on a *-algebra, at most one makes it into a C*-algebra, i.e. at most one satisfies the C*-equation.
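As a quick finite-dimensional illustration (my own sketch in the C*-algebra M_5(\mathbb{C}), not something from Murphy): for a self-adjoint matrix the spectral radius equals the operator norm, and the C*-equation holds.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
a = x + x.conj().T                          # a self-adjoint element of M_5(C)

spectral_radius = np.max(np.abs(np.linalg.eigvalsh(a)))
operator_norm = np.linalg.norm(a, 2)        # the largest singular value

print(np.isclose(spectral_radius, operator_norm))                         # nu(a) = ||a||
print(np.isclose(np.linalg.norm(a.conj().T @ a, 2), operator_norm ** 2))  # C*-equation
```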

In the previous chapter we have seen that an algebra, A, can be unitised to form a new algebra, \tilde{A}, which contains A as a subspace. In general, the norm obtained by extending the norm on A to \tilde{A} does not make \tilde{A} into a C*-algebra. However, Theorem 2.1.6 shows that there does exist a (unique) norm on \tilde{A} making it a C*-algebra. In many arguments we may now assume that a C*-algebra is unital, replacing it with the unitisation \tilde{A} if necessary.

One such result which depends on this fact is that the spectrum of a self-adjoint element is real.

A central result in this chapter is that every abelian C*-algebra is isomorphic to C_0(X), for some locally compact Hausdorff space, X. In fact X is the character space \Phi(A) (as with Belton, this is via the Gelfand transformation). This identification allows the development of the powerful functional calculus. Briefly, if a is a normal element of a C*-algebra A (a^*a=aa^*), and z is the inclusion map \sigma(a)\rightarrow \mathbb{C}, then there exists a unique *-homomorphism \varphi:C(\sigma(a))\rightarrow A such that \varphi(z)=a. This unique *-homomorphism is called the functional calculus at a. This particular section ended with the Belton result that if X is a compact Hausdorff space, \Phi(C(X))\cong X (via x\mapsto \delta_x).
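In the concrete C*-algebra M_n(\mathbb{C}), the functional calculus at a normal element amounts to applying the function to the eigenvalues in a diagonalisation. A small sketch of that special case (the function name is mine):

```python
import numpy as np

def functional_calculus(a, f):
    """Apply a function to a normal matrix through its eigenvalues:
    a = v diag(w) v^{-1}  |->  f(a) = v diag(f(w)) v^{-1}."""
    w, v = np.linalg.eig(a)
    return v @ np.diag(f(w)) @ np.linalg.inv(v)

# a normal element of M_3(C) with spectrum {1+i, 2, -i/2}
rng = np.random.default_rng(1)
m = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u, _ = np.linalg.qr(m)                                   # a unitary
a = u @ np.diag([1.0 + 1.0j, 2.0, -0.5j]) @ u.conj().T

print(np.allclose(functional_calculus(a, lambda z: z), a))           # phi(z) = a
print(np.allclose(functional_calculus(a, lambda z: z ** 2), a @ a))  # phi(z^2) = a^2
```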

2.2 Positive Elements of C*-Algebras

This section introduces a partial order on A_{\text{SA}} (the set of self-adjoint elements of A). Namely, an element a\in A_{\text{SA}} is positive if \sigma(a)\subset \mathbb{R}^+, and then a\leq b if and only if b-a is positive.

As a consequence of the Gelfand transformation and the functional calculus, we can show that positive elements of a C*-algebra possess unique positive square roots. Another prominent result is that for an arbitrary element a\in A, the element a^*a is positive.
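A finite-dimensional sketch of both facts, in M_4(\mathbb{C}) (the square root is built from the spectral decomposition; the names are my own):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
p = a.conj().T @ a                                   # a*a

print(np.all(np.linalg.eigvalsh(p) >= -1e-12))       # sigma(a*a) lies in R^+

# the positive square root, built from the spectral decomposition of p
w, v = np.linalg.eigh(p)
sqrt_p = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

print(np.allclose(sqrt_p @ sqrt_p, p))               # (p^{1/2})^2 = p
print(np.all(np.linalg.eigvalsh(sqrt_p) >= -1e-12))  # p^{1/2} is itself positive
```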

2.3 Operators and Sesquilinear Forms

As a first move, we prove that bounded operators on Hilbert spaces have adjoints. Next, projections and partial isometries are examined. This leads on to the polar decomposition theorem: if T is a continuous linear operator on a Hilbert space H, there exists a unique partial isometry S with \ker S=\ker T such that T=S|T|, where |T|=(T^*T)^{1/2}. The rest of the section focusses on the connection between operators and sesquilinear forms.
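A sketch of the polar decomposition in M_4(\mathbb{C}). Here the random T is almost surely invertible, so the partial isometry is simply the unitary T|T|^{-1}; the general Hilbert-space construction needs more care.

```python
import numpy as np

rng = np.random.default_rng(4)
t = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# |T| = (T*T)^{1/2}
w, v = np.linalg.eigh(t.conj().T @ t)
abs_t = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

# since this random T is (almost surely) invertible, S = T|T|^{-1} is a unitary
s = t @ np.linalg.inv(abs_t)

print(np.allclose(s @ abs_t, t))                       # T = S|T|
print(np.allclose(s.conj().T @ s, np.eye(4)))          # S*S = I
```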

2.4 Compact Hilbert Space Operators

At first this section looks at some of the basic properties of these objects, e.g. if T is compact then so are |T| and T^*. Hence K(H) is self-adjoint, and as a closed ideal in B(H) it is a C*-algebra. We see that normal compact operators are diagonalisable.

We look at the finite rank operators, F(H) and see that they are dense in K(H). Next the operator x\otimes y is examined:

(x\otimes y)(z)=\langle z,y\rangle x

These are rank-one, and the x\otimes x are rank-one projections if x is a unit vector. This leads on to the fact that F(H) is linearly spanned by these rank-one projections.
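A minimal numpy model of x\otimes y (the helper name rank_one is mine; the inner product is taken linear in its first argument, matching the formula above):

```python
import numpy as np

def rank_one(x, y):
    """The operator x (x) y, acting by z |-> <z, y> x, with the inner
    product linear in its first argument."""
    return np.outer(x, np.conj(y))

rng = np.random.default_rng(5)
x, y, z = (rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3))

inner = lambda a, b: np.sum(a * np.conj(b))              # <a, b>, linear in a
print(np.allclose(rank_one(x, y) @ z, inner(z, y) * x))  # (x (x) y)z = <z, y>x

u = x / np.linalg.norm(x)
p = rank_one(u, u)
print(np.allclose(p @ p, p) and np.allclose(p, p.conj().T))  # u (x) u is a projection
print(np.linalg.matrix_rank(p))                              # of rank one
```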

This is a synopsis of what I covered up until recently (up to p.56). As an experiment I am attempting to do my study of Murphy by way of fully presenting the details on this webpage. I am unsure of whether or not this is too time consuming. Presently I am on page 63 and I will have to cover the rest of the chapter material (10 pages) in one day or similar if I am going to consider this tactic feasible.

I have continued to work through Murphy: http://books.google.com/books?id=emNvQgAACAAJ&dq=gerald+murphy+c*+algebras+and+operator+theory&h

I have finished off section 1.4 including Atkinson’s Theorem and a first look at the unilateral shift. I have done exercises 1-7. In terms of progress, I am on p.31 of 265, with 13 exercises left in this section. Following discussions with my supervisor, I may be able to leave out sections 3.2, 3.5, 4.4, 5.2-6 and the whole of chapter 7.

I have continued to work through Murphy: http://books.google.com/books?id=emNvQgAACAAJ&dq=gerald+murphy+c*+algebras+and+operator+theory&h

I have finished off my revision of sections 1.2 (The Spectrum and the Spectral Radius) and 1.3 (The Gelfand Representation). Section 1.4 is a new topic for me: Compact and Fredholm Operators. A linear map T:X\rightarrow Y between Banach spaces is compact if T(B_1^X[0]) is totally bounded. As a corollary, all linear maps on finite-dimensional spaces are compact. The transpose T^*:Y^*\rightarrow X^* is introduced by Murphy in this chapter, and I have seen that if T is compact, then so is T^*. A linear map T is Fredholm if \ker T is finite dimensional and T(X) has finite codimension in Y. In terms of progress, I am on p.25 of 265.
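As a rough illustration of the compactness definition (my own toy example, not from Murphy): the diagonal operator \mathrm{diag}(1,\tfrac{1}{2},\tfrac{1}{3},\dots) on \ell^2 is a norm-limit of finite-rank truncations, so the image of the closed unit ball is totally bounded. Working with a finite truncation as a computational proxy:

```python
import numpy as np

N = 500                                   # a finite computational proxy for l^2
T = np.diag(1.0 / np.arange(1, N + 1))    # diag(1, 1/2, 1/3, ...)

for n in (5, 50, 200):
    T_n = T.copy()
    T_n[n:, n:] = 0.0                     # rank-n truncation: a finite-rank operator
    err = np.linalg.norm(T - T_n, 2)      # operator norm of the tail, equal to 1/(n+1)
    print(n, err)
# the truncations converge to T in norm, which is why T is compact
```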
