Taken from Hopf Algebras by Abe. This is not even nearly finished; however, I pressed publish instead of save draft… oh well.

In this section, we give the definition of Hopf algebras and present some examples. We begin by defining coalgebras, which are in a dual relationship with algebras, then bialgebras and Hopf algebras as algebraic systems in which the structures of algebras and coalgebras are interrelated by certain laws.

1.1 Coalgebras

We define a coalgebra dually to an algebra. Given a vector space A and linear maps \Delta:A\rightarrow A\otimes A and \varepsilon:A\rightarrow\mathbb{C}, we call (A,\Delta,\varepsilon), or just A, a coalgebra when we have:

(\Delta\otimes I_A)\circ \Delta=(I_A\otimes\Delta)\circ\Delta,

(the coassociative law).

(\varepsilon\otimes I_A)\circ\Delta=(I_A\otimes\varepsilon)\circ\Delta=I_A,

(the counitary property), where we make the identifications \mathbb{C}\otimes A\cong A\cong A\otimes\mathbb{C}.

The maps \Delta and \varepsilon are called the comultiplication map and the counit map of A, and together they are said to be the structure maps of the coalgebra A.

Now suppose we are given two coalgebras (A,\Delta_A,\varepsilon_A) and (B,\Delta_B,\varepsilon_B). A linear map \sigma:A\rightarrow B satisfying the conditions

\Delta_B\circ \sigma=(\sigma\otimes\sigma)\circ \Delta_A,\qquad \varepsilon_B\circ \sigma=\varepsilon_A

is called a coalgebra morphism.

For vector spaces A and B we define a linear map \tau:A\otimes B\rightarrow B\otimes A by \tau(x\otimes y)=y\otimes x for x\in A,\ y\in B. If a coalgebra A satisfies

\tau\circ \Delta=\Delta,

then A is said to be cocommutative. If A and B are two coalgebras, the tensor product A\otimes B becomes a coalgebra with structure maps

\Delta_{A\otimes B}=(I_A\otimes\tau\otimes I_B)\circ (\Delta_A\otimes\Delta_B),

\varepsilon_{A\otimes B}=\varepsilon_A\otimes\varepsilon_B,

which we call the tensor product of A and B. If A,\,B are cocommutative, then A\otimes B is a direct product of cocommutative coalgebras, where the canonical projections \pi_1:A\otimes B\rightarrow A and \pi_2:A\otimes B\rightarrow B are respectively given by \pi_1(a\otimes b)=a\,\varepsilon_B(b) and \pi_2(a\otimes b)=\varepsilon_A(a)\,b for a\in A,\ b\in B.

When a subspace A_0\subset A of a coalgebra A satisfies the condition \Delta A_0\subset A_0\otimes A_0, then A_0 becomes a coalgebra with the restrictions of \Delta,\,\varepsilon to A_0 as the structure maps. Such an A_0 is called a subcoalgebra of A.

For vector spaces A and B, if we define a map \chi:A^*\otimes B^*\rightarrow (A\otimes B)^* by

\chi(\rho\otimes\tau)(x\otimes y)=\rho(x)\tau(y),

where \rho\in A^*,\ \tau\in B^*,\ x\in A and y\in B, then \chi is an injective linear map. (Sketch for a simple tensor: suppose that \chi(\rho\otimes\tau)=0. That is to say that for all x\in A,\ y\in B,

\chi(\rho\otimes\tau)(x\otimes y)=\rho(x)\tau(y)=0,

and since \mathbb{C} has no zero divisors, either \rho(x)=0 for all x\in A or \tau(y)=0 for all y\in B. Hence \rho=0 or \tau=0, and so \rho\otimes\tau=0. A general element of A^*\otimes B^* is a sum \sum_i\rho_i\otimes\tau_i with the \rho_i linearly independent, and a similar argument applies.) Given a coalgebra A, let A^* denote the dual of A and define

\nabla:A^*\otimes A^*\overset{\chi}{\rightarrow}(A\otimes A)^*\overset{\Delta^*}{\rightarrow}A^*,

\eta:\mathbb{C}\cong \mathbb{C}^*\overset{\varepsilon^*}{\rightarrow}A^*,

Then (A^*,\nabla,\eta) becomes an algebra, which we call the dual algebra of A.

I’m not sure exactly what \Delta^* and \varepsilon^* are here. I’m supposing that they are the adjoints of the linear maps \Delta and \varepsilon, but I haven’t ever gone through the construction via the inducement of a map \hat{T}:B^*\rightarrow A^* from a linear map T:A\rightarrow B — which is an approach mentioned here. The only natural inducement I can think of is as follows. Let T:U\rightarrow V be a linear map between vector spaces U and V. Now define a linear map \tilde{T}:V^*\rightarrow U^* by \rho(\cdot)\mapsto\rho(T\cdot).

Now let B:U\times V\rightarrow\mathbb{C} be a bilinear form. Such a form induces linear maps \Phi_{UV}:U\rightarrow V^* and \Phi_{VU}:V\rightarrow U^*:

\Phi_{UV}:U\rightarrow V^*,\quad u\mapsto \Phi_{UV}(u),\quad\text{where } \Phi_{UV}(u)(v)=B(u,v),

and similarly for \Phi_{VU}.

If U and V are inner product spaces, I can also define a bilinear map \sigma:U\times V\rightarrow \mathbb{C} using the inner product of V and T:

\sigma(u,v)=\langle Tu,v\rangle_V.

By fixing u\in U, I can define a linear functional \sigma_u\in V^*, and hence we have a map U\rightarrow V^*, u\mapsto\sigma_u, given by:

\sigma_u(v)=\langle Tu,v\rangle_V.

I’m happy enough here to invoke Riesz representation theorem and the existence of T^*:V\rightarrow U such that

\sigma_u(v)=\langle Tu,v\rangle_V=\langle u,T^*v\rangle_U=\sigma_v(u),

where, for fixed v\in V, I define \sigma_v\in U^* in the obvious way. So what I have is T:U\rightarrow V and T^*:V\rightarrow U, but I really want to identify T^* with a \hat{T}:V^*\rightarrow U^*, and this will hopefully tell me what \Delta^* and \varepsilon^* are. Let \rho\in V^* and define \hat{T}:V^*\rightarrow U^* by

\hat{T}(\rho)=\rho\circ T.
I think this is the identification I need. For example, take \rho\in (A\otimes A)^*. In this construction, \Delta^*\equiv\hat{\Delta}, so that

\hat{\Delta}(\rho)=\rho\circ\Delta,

and, identifying \rho=\langle\cdot,z\rangle_{A\otimes A} via the inner product, \hat{\Delta}(\rho)=\langle\cdot,\Delta^*z\rangle_A, where \Delta^*:A\otimes A\rightarrow A is the inner product adjoint of \Delta. Similarly \varepsilon^*\equiv\hat{\varepsilon}, where \hat{\varepsilon}(z)(a)=z(\varepsilon(a)) for z\in\mathbb{C}^*.

In general, \chi is not necessarily an isomorphism. Thus we cannot define a coalgebra structure on the dual space A^* of an algebra A in a similar fashion. However, if A is a finite dimensional vector space, then \chi:A^*\otimes A^*\rightarrow (A\otimes A)^* turns out to be a vector space isomorphism; if \nabla,\,\eta are the structure maps of A, then by setting \Delta_*=\chi^{-1}\circ\nabla^* and \varepsilon_*=\eta^*, we see that (A^*,\Delta_*,\varepsilon_*) becomes a coalgebra.

Firstly, how can \Delta_* be a comultiplication for A^*? We require \Delta_*:A^*\rightarrow A^*\otimes A^*\cong(A\otimes A)^*, and the composite \chi^{-1}\circ\nabla^* does exactly this: the transpose \nabla^* maps A^* into (A\otimes A)^*, and \chi^{-1} carries (A\otimes A)^* back to A^*\otimes A^* — note that \chi^{-1} doesn’t act on A^* itself. Let \rho\in A^* and \tau_1\otimes\tau_2\in A^*\otimes A^*. We know that

\langle \nabla(\tau_1\otimes\tau_2),\rho\rangle_{A^*}=\langle \tau_1\otimes\tau_2,\nabla^*\rho\rangle_{A^*\otimes A^*}.

Rather than trying to invert \nabla, we can turn statements about \nabla into statements about \Delta — which has coassociativity. If we note that \nabla=\hat{\Delta}\circ\chi, then \nabla^*=\chi^*\circ\hat{\Delta}^*, and the coassociativity of \Delta_*=\chi^{-1}\circ\nabla^* amounts to showing that

(\Delta_*\otimes I)\circ \Delta_*(\rho)=(I\otimes\Delta_*)\circ\Delta_*(\rho),\quad \text{for all } \rho\in A^*.

Could we express \hat{\Delta}^* in terms of \Delta alone? We start by noting that \hat{\Delta}:(A\otimes A)^*\rightarrow A^* is given by

\theta\mapsto \theta\circ \Delta.

While this is indeed progress and I’m getting better at visualising what’s going on I’ll leave it there for now…

This is called the dual coalgebra of A.

Example 2.1 (Modified)

Let V be a vector space with an orthonormal basis E=\{e_\lambda\}_{\lambda\in\Lambda}. Define linear maps:

\Delta:V\rightarrow V\otimes V,\qquad \varepsilon:V\rightarrow\mathbb{C}

for e\in E by \Delta(e)=e\otimes e,\ \varepsilon(e)=1; then (V,\Delta,\varepsilon) becomes a coalgebra (\checkmark). The dual space V^* can be identified with F(E), the algebra of functions on E, where the dual algebra structure of V is given by

(fg)(e)=f(e)g(e),\qquad \eta(k)(e)=k,

for f,\,g\in F(E),\ e\in E and k\in\mathbb{C}.

First let us examine \nabla: V^*\otimes V^*\rightarrow V^*. Take an element f\otimes g\in V^*\otimes V^*. Now \chi(f\otimes g)\in (V\otimes V)^* and \hat{\Delta}\chi(f\otimes g)(e)=\chi(f\otimes g)(\Delta e). Since \Delta e=e\otimes e by definition, this yields \nabla(f\otimes g)(e)=f(e)g(e). As a check on the adjoint picture, we can also verify that \Delta^*(e_j\otimes e_j)=e_j. For all e_i,\,e_j\in E we have

\langle e_i,\Delta^*(e_j\otimes e_j)\rangle_V=\langle \Delta e_i,e_j\otimes e_j\rangle_{V\otimes V}

=\langle e_i\otimes e_i,e_j\otimes e_j\rangle_{V\otimes V}=\langle e_i,e_j\rangle_V\langle e_i,e_j\rangle_V=\delta_{ij}.

Writing \Delta^*(e_j\otimes e_j)=\sum_{\lambda\in\Lambda}a_\lambda e_\lambda, it follows that a_\lambda=\delta_{\lambda j}, so \Delta^*(e_j\otimes e_j)=e_j and we are done.

Now let us look at the map \hat{\varepsilon}=\eta:\mathbb{C}\cong\mathbb{C}^*\rightarrow V^*. Let a\in \mathbb{C}^*, so that for e\in E we want

\eta(a)(e)=\hat{\varepsilon}(a)(e)=a(\varepsilon(e))=a(1).

Identifying \mathbb{C}^*\cong\mathbb{C}, the only way this makes any kind of sense is if \eta(a)=a\,1_E, where 1_E is the constant function 1 on E.
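As a sanity check on Example 2.1, here is a small numerical sketch of my own (not from Abe) with a three-element basis: recording linear functionals by their values on E, the dual product \nabla(f\otimes g)(e)=(f\otimes g)(\Delta e) comes out as pointwise multiplication of functions on E.

```python
# Basis E for V; elements of V* are recorded by their values on E.
E = [0, 1, 2]

def delta(e):
    """Comultiplication of Example 2.1 on a basis element: Delta(e) = e (x) e."""
    return {(e, e): 1}

def dual_product(f, g):
    """nabla(f (x) g)(e) = (f (x) g)(Delta e): the dual algebra product."""
    return {e: sum(c * f[a] * g[b] for (a, b), c in delta(e).items())
            for e in E}

f = {0: 2, 1: 3, 2: 5}   # a function on E, i.e. an element of V*
g = {0: 7, 1: 11, 2: 13}
fg = dual_product(f, g)  # the pointwise product of f and g
```

Because each basis element is group-like, the only surviving term in the sum is f(e)g(e), which is exactly the function-algebra structure on F(E).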

Example 2.2

Let V be a vector space with a countable orthonormal basis \{e_n\}_{n\in\mathbb{N}} and define

\Delta e_n=\sum_{i=0}^{n}e_i\otimes e_{n-i},\qquad \varepsilon(e_n)=\delta_{0n},

to obtain the coalgebra (V,\Delta,\varepsilon) (\checkmark). We will briefly look at the structure of the dual algebra of V. If we let x_i\in V^* be defined by

x_i(e_j)=\langle x_i,e_j\rangle=\delta_{ij},\qquad i,\,j=0,1,2,\dots,

then \rho\in V^* can be written

\rho=\sum_{i=0}^\infty a_ix_i,\quad\text{where } a_i\in\mathbb{C}.

With regard to multiplication on V^*, we have

\langle x_i x_j,e_k\rangle=\langle x_i\otimes x_j,\Delta e_k\rangle=\left\langle x_i\otimes x_j,\sum_{l=0}^ke_l\otimes e_{k-l}\right\rangle

=\sum_{l=0}^k\langle x_i,e_l\rangle\langle x_j,e_{k-l}\rangle=\delta_{j,k-i}=\delta_{i+j,k}.

Thus the relation x_ix_j=x_{i+j} holds, and we get x_i=x_1^i for i=0,1,2,\dots. Therefore we see that V^* is isomorphic to the formal power series ring \mathbb{C}[[x_1]] (note that the infinite sums \sum_i a_ix_1^i above are allowed).
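A small computational check of Example 2.2 (a sketch of mine, truncating the basis at e_8): the dual product determined by \Delta e_n=\sum_{i=0}^n e_i\otimes e_{n-i} satisfies x_i x_j = x_{i+j}, so that powers of x_1 generate.

```python
N = 8  # truncate the basis at e_0, ..., e_N

def delta(n):
    """Delta(e_n) = sum_{i=0}^{n} e_i (x) e_{n-i}, as a dict on index pairs."""
    return {(i, n - i): 1 for i in range(n + 1)}

def x(i):
    """Dual basis functional x_i(e_j) = delta_ij, recorded on the basis."""
    return {j: 1 if j == i else 0 for j in range(N + 1)}

def dual_product(f, g):
    """(f g)(e_k) = (f (x) g)(Delta e_k): the dual algebra product."""
    return {k: sum(c * f[a] * g[b] for (a, b), c in delta(k).items())
            for k in range(N + 1)}

x2x3 = dual_product(x(2), x(3))  # should be x_5
p = x(0)                         # x_0 is the unit (it equals the counit)
for _ in range(4):               # compute x_1^4, which should be x_4
    p = dual_product(p, x(1))
```

The only pair (a, b) in \Delta e_k on which x_i\otimes x_j is nonzero is (i, k-i) with j = k-i, recovering \delta_{i+j,k}.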

Example 2.3

 Let n be a natural number and set E=\{e_{ij}:1\leq i,j\leq n\} and V the vector space with basis E. Defining

\Delta(e_{ij})=\sum_{k=1}^{n}e_{ik}\otimes e_{kj},\qquad \varepsilon(e_{ij})=\delta_{ij},

we obtain a coalgebra (V,\Delta,\varepsilon) (\checkmark). The dual space V^* of V is a vector space of dimension n^2, and if we define f_{ij}\in V^* by

\langle f_{ij},e_{kl}\rangle=\delta_{ik}\delta_{jl},

then \{f_{ij}:1\leq i,\,j\leq n\} is a basis for V^*. By identifying A=\sum_{i,j=1}^na_{ij}f_{ij} with the n\times n square matrix [a_{ij}], V^* becomes the algebra M_n(\mathbb{C}) of all n\times n square matrices with complex entries.
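The identification with M_n(\mathbb{C}) can be checked directly (again a sketch of mine): the dual product of the f_{ij} obeys the matrix-unit relations f_{ij}f_{kl}=\delta_{jk}f_{il}.

```python
n = 3

def delta(i, j):
    """Delta(e_ij) = sum_k e_ik (x) e_kj, as a dict on pairs of indices."""
    return {((i, k), (k, j)): 1 for k in range(n)}

def f(i, j):
    """Dual basis: f_ij(e_kl) = delta_ik delta_jl, recorded on the basis."""
    return {(k, l): 1 if (k, l) == (i, j) else 0
            for k in range(n) for l in range(n)}

def dual_product(a, b):
    """(a b)(e_pq) = (a (x) b)(Delta e_pq): the dual algebra product."""
    return {(p, q): sum(c * a[u] * b[v]
                        for (u, v), c in delta(p, q).items())
            for p in range(n) for q in range(n)}
```

The single surviving term in \sum_k a(e_{pk})b(e_{kq}) is precisely the matrix multiplication rule.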

Notation for Coalgebra Operations

In general, the notation used for operations of coalgebras is not as concise as that for operations of algebras. The following notation is effective in simplifying various types of operations. Given a coalgebra (A,\Delta,\varepsilon) and a\in A, we can write

\Delta(a)=\sum_{i=1}^na_{1i}\otimes a_{2i}, for some a_{1i},\,a_{2i}\in A.

We write this formally as

\Delta(a)= a_{(1)}\otimes a_{(2)},

and for linear operators (or functionals) f,\,g, we write

(f\otimes g)\Delta(a)=f(a_{(1)})\otimes g(a_{(2)}).

Moreover, since the coassociative law holds, we have

 (\Delta\otimes I_A)\circ\Delta(a)=(I_A\otimes\Delta)\circ\Delta(a)=a_{(1)}\otimes a_{(2)}\otimes a_{(3)},

and in general we define \Delta_1=\Delta, and

\Delta_n=(\underbrace{I_A\otimes\cdots\otimes I_A}_{n-1 \text{ times}}\otimes \Delta)\circ \Delta_{n-1},

for n>1, and write

\Delta_n(a)=a_{(1)}\otimes a_{(2)}\otimes\cdots\otimes a_{(n+1)}.
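To make this notation concrete, here is a small sketch of mine using the coalgebra of Example 2.2: both bracketings (\Delta\otimes I)\circ\Delta and (I\otimes\Delta)\circ\Delta are computed and agree, so a_{(1)}\otimes a_{(2)}\otimes a_{(3)} is unambiguous.

```python
def delta(n):
    """Delta(e_n) = sum_{i=0}^n e_i (x) e_{n-i} (the coalgebra of Example 2.2)."""
    return {(i, n - i): 1 for i in range(n + 1)}

def delta_left(t):
    """Apply (Delta (x) I) to an element of V (x) V (a dict on index pairs)."""
    out = {}
    for (a, b), c in t.items():
        for (p, q), d in delta(a).items():
            out[(p, q, b)] = out.get((p, q, b), 0) + c * d
    return out

def delta_right(t):
    """Apply (I (x) Delta) to an element of V (x) V."""
    out = {}
    for (a, b), c in t.items():
        for (p, q), d in delta(b).items():
            out[(a, p, q)] = out.get((a, p, q), 0) + c * d
    return out

# Coassociativity: both bracketings of Delta_2(e_5) agree, giving the
# unambiguous a_(1) (x) a_(2) (x) a_(3).
lhs = delta_left(delta(5))
rhs = delta_right(delta(5))
```

Both sides enumerate e_i\otimes e_j\otimes e_k over i+j+k=5, each exactly once.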

Using this method of notation, the counitary property may be expressed as

\varepsilon(a_{(1)})a_{(2)}=a=a_{(1)}\varepsilon(a_{(2)}).
Exercise 2.1

Prove the following equalities

\Delta (a)=\varepsilon(a_{(1)})\Delta(a_{(2)})=\Delta(a_{(1)})\varepsilon(a_{(2)}),

\varepsilon(a_{(1)})a_{(2)}\varepsilon(a_{(3)})=a.

 Solution : First write 

\Delta(a)=a_{(1)}\otimes a_{(2)}.

Now we note that, in this notation,

\varepsilon(a_{(1)})\Delta(a_{(2)})=(\varepsilon\otimes I\otimes I)(I\otimes\Delta)\circ\Delta(a).

Now use coassociativity and the counitary property to write

(\varepsilon\otimes I\otimes I)(I\otimes\Delta)\circ\Delta=(\varepsilon\otimes I\otimes I)(\Delta\otimes I)\circ \Delta=(((\varepsilon\otimes I)\circ\Delta)\otimes I)\circ\Delta=(I\otimes I)\circ\Delta.

This yields:

\varepsilon(a_{(1)})\Delta(a_{(2)})=\Delta(a).

After we make the identification \mathbb{C}\otimes A\otimes A\cong A\otimes A, this is the first equality.

By applying coassociativity in the same way we have

\Delta(a_{(1)})\varepsilon(a_{(2)})=(I\otimes I\otimes\varepsilon)(\Delta\otimes I)\circ\Delta(a)=(I\otimes I\otimes\varepsilon)(I\otimes\Delta)\circ\Delta(a).

A quick calculation shows that

(I\otimes I\otimes\varepsilon)(I\otimes \Delta)\circ \Delta(a)=a_{(1)}\otimes a_{(2)}\varepsilon(a_{(3)})=(I\otimes((I\otimes\varepsilon)\circ\Delta))\circ\Delta(a)=\Delta(a),

by the counitary property. This is the second equality.

For the third equality, the left-hand side is given by:

\varepsilon(a_{(1)})a_{(2)}\varepsilon(a_{(3)})=(\varepsilon\otimes I\otimes \varepsilon)(I\otimes \Delta)\circ\Delta(a).

Now by using coassociativity and the counitary property twice:

(\varepsilon\otimes I\otimes \varepsilon)(I\otimes \Delta)\circ \Delta=(\varepsilon\otimes I\otimes \varepsilon)(\Delta\otimes I)\circ\Delta=(((\varepsilon\otimes I)\circ\Delta)\otimes\varepsilon)\circ\Delta

=(I\otimes\varepsilon)\circ\Delta=I.

So all three equalities are proven.

Theorem 2.1.1

Given a vector space H, suppose that there are linear maps

\nabla:H\otimes H\rightarrow H,\qquad \eta:\mathbb{C}\rightarrow H,\qquad \Delta:H\rightarrow H\otimes H,\qquad \varepsilon:H\rightarrow\mathbb{C}

such that (H,\nabla,\eta) is an algebra and (H,\Delta,\varepsilon) is a coalgebra. Then the following are equivalent.

  1. \nabla and \eta are coalgebra morphisms.
  2. \Delta and \varepsilon are algebra morphisms.
  3. \Delta(gh)=g_{(1)}h_{(1)}\otimes g_{(2)}h_{(2)},\ \Delta(1_H)=1_{H\otimes H},\ \varepsilon(gh)=\varepsilon(g)\varepsilon(h),\ \varepsilon(1_H)=1.

Proof : The conditions under which \Delta is an algebra morphism are (see here):

(a)   \Delta\circ\nabla=(\nabla\otimes\nabla)\circ(I_H\otimes\tau \otimes I_H)\circ(\Delta\otimes\Delta)

(b)    \Delta\circ\eta=\eta\otimes\eta

The conditions under which \varepsilon is an algebra morphism are

(c)    \varepsilon\otimes\varepsilon=\varepsilon\circ\nabla

(d)    1=\varepsilon\circ\eta.

On the other hand, \nabla is a coalgebra morphism exactly when it satisfies conditions (a) and (c) (\checkmark); and \eta is a coalgebra morphism if it satisfies conditions (b) and (d) (\checkmark). This fact allows us to conclude that 1. \Leftrightarrow 2.

Now assume 2. The equation

(\nabla\otimes\nabla)\circ (I_H\otimes\tau\otimes I_H)\circ(\Delta\otimes\Delta)=\Delta\circ\nabla

implies that \Delta(gh)=g_{(1)}h_{(1)}\otimes g_{(2)}h_{(2)}.

From the equation \eta\otimes\eta=\Delta\circ\eta we get that \Delta(1_H)=1_{H\otimes H}.

The equation \nabla_{\mathbb{C}}\circ(\varepsilon\otimes\varepsilon)=\varepsilon\circ\nabla_H yields \varepsilon(gh)=\varepsilon(g)\varepsilon(h).

Finally, the equation 1=\varepsilon\circ\eta yields \varepsilon(1_H)=1. Clearly 3. \Rightarrow 2. and we are done \bullet

When a vector space H together with linear maps \nabla,\,\eta,\,\Delta,\,\varepsilon satisfies one of the equivalent conditions of Theorem 2.1.1, then (H,\nabla,\eta,\Delta,\varepsilon), or simply H, is called a bialgebra. Given two bialgebras H and K, when a linear map \sigma:H\rightarrow K is an algebra morphism and is also a coalgebra morphism, then \sigma is called a bialgebra morphism.

If a subspace K of a bialgebra H is a subalgebra as well as a subcoalgebra, then K becomes a bialgebra, and is called a sub-bialgebra of H. Moreover, if a bialgebra H is finite dimensional, then a bialgebra structure may be defined on its dual H^*, which we call the dual bialgebra of H.

Example 2.4

Let S be a semigroup with identity, and let \mathbb{C}S be the free complex vector space generated by S. In the sense of Example 2.1, \mathbb{C}S has a coalgebra structure when we consider S as a basis for \mathbb{C}S. With regard to these two structures, \mathbb{C}S admits a bialgebra structure. Such a bialgebra is called a semigroup bialgebra. In particular, when S is a group, it is called a group bialgebra. The dual space (\mathbb{C}S)^* can be identified with F(S), and in particular, when S is finite, then (\mathbb{C}S)^* becomes a dual bialgebra of \mathbb{C}S. The algebra structure of (\mathbb{C}S)^* is the one described in Example 1.5, and its coalgebra structure is given for f\in F(S),\ x,\,y\in S, and the identity element e\in S by

\langle \Delta(f),x\otimes y\rangle=\langle f,xy\rangle,\qquad \varepsilon(f)=\langle f,e\rangle.

I’m not one hundred percent sure what these \langle\cdot,\cdot\rangle are — do they just mean:

\Delta(f)(x\otimes y)=f(xy) and \varepsilon(f)=f(e)?
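The bialgebra compatibility (condition 3. of Theorem 2.1.1) can be checked numerically for a group bialgebra — a sketch of mine with G = \mathbb{Z}/4, written additively: \Delta is an algebra morphism, i.e. \Delta(uv)=\Delta(u)\Delta(v) for arbitrary elements of \mathbb{C}G.

```python
n = 4  # the group G = Z/4, written additively

def mult(u, v):
    """Multiplication on CG, extended bilinearly from the group law."""
    out = {}
    for g, cg in u.items():
        for h, ch in v.items():
            k = (g + h) % n
            out[k] = out.get(k, 0) + cg * ch
    return out

def delta(u):
    """Delta(g) = g (x) g on group elements, extended linearly."""
    return {(g, g): c for g, c in u.items()}

def mult2(s, t):
    """Componentwise multiplication on CG (x) CG."""
    out = {}
    for (g1, g2), c in s.items():
        for (h1, h2), d in t.items():
            k = ((g1 + h1) % n, (g2 + h2) % n)
            out[k] = out.get(k, 0) + c * d
    return out

u = {0: 1, 1: 2}
v = {2: 3, 3: -1}
lhs = delta(mult(u, v))          # Delta(uv)
rhs = mult2(delta(u), delta(v))  # Delta(u) Delta(v)
```

On basis elements the identity is just gh\otimes gh=(g\otimes g)(h\otimes h); the check above confirms it survives bilinear extension.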

When an element a of a coalgebra A is such that \varepsilon(a)=1 and \Delta(a)=a\otimes a, then a is said to be a group-like element. We denote the set of all group-like elements by g(A).

Theorem 2.1.2

Let A be a coalgebra. 

(i) The elements of g(A) are linearly independent. Thus \mathbb{C}g(A) may be regarded as a subcoalgebra of A.

(ii) When H is a bialgebra, g(H) is a semigroup with respect to multiplication, and the subspace of H generated by g(H) is a sub-bialgebra of H isomorphic to the semigroup bialgebra \mathbb{C}g(H) of g(H).

Proof : Suppose that the elements in g(A) are linearly dependent, and let n+1 be the least number of elements of g(A) that can form a linearly dependent set. Then there exist g,g_1,\dots,g_n\in g(A) such that g_1,\dots,g_n are linearly independent and g can be written

g=\lambda_1g_1+\cdots+\lambda_ng_n,\qquad \lambda_i\in\mathbb{C},\ \lambda_i\neq 0.


\Delta g=g\otimes g=\sum_{i,j=1}^n\lambda_i\lambda_jg_i\otimes g_j, and also

\Delta g=\sum_{i=1}^n\lambda_i\Delta g_i=\sum_{i=1}^n\lambda_i g_i\otimes g_i.

Hence we may write

\sum_{i=1}^n(\lambda_i^2-\lambda_i)(g_i\otimes g_i)+\sum_{i\neq j}\lambda_i\lambda_j(g_i\otimes g_j)=0.

Since \{g_i\otimes g_j\}_{i,j\in\{1,\dots,n\}} is a set of linearly independent elements of A\otimes A (\checkmark), this implies that \lambda_i^2=\lambda_i for each i — and also that the mixed terms \lambda_i\lambda_j (i\neq j) all vanish. Since every \lambda_i\neq 0, we must have n=1 and g=g_1, which contradicts the fact that the n+1 elements g,g_1,\dots,g_n were distinct (note that the case n+1=2 causes no problem \checkmark). Hence the elements of g(A) are linearly independent. It is apparent that \mathbb{C}g(A) is a subcoalgebra of A.

To prove part (ii), the first thing is to show that g(H) is closed under multiplication, i.e. that \nabla maps g(H)\otimes g(H) into g(H). Note that associativity is clear, since the multiplication is that of H. Using the equation

\Delta\circ\nabla=(\nabla\otimes\nabla)\circ(I_H\otimes \tau\otimes I_H)\circ (\Delta\otimes \Delta),

we can show that \Delta(\nabla(g\otimes h))=(\nabla(g\otimes h))\otimes (\nabla(g\otimes h)) for all g,\,h\in g(H); since also \varepsilon(gh)=\varepsilon(g)\varepsilon(h)=1, we have gh\in g(H) as required \bullet

An element c of a bialgebra A (resp. a coalgebra B which has only one group-like element, i.e. g(B)=\{g\}) which satisfies \Delta c=c\otimes 1_A+1_A\otimes c (resp. \Delta(c)=c\otimes g+g\otimes c) is called a primitive element. The set of all primitive elements of A is denoted by P(A). Now we have the following.

Theorem 2.1.3

If H is a bialgebra, then P(H) is a subspace of H, and for x,\,y\in P(H), we have [x,y]=xy-yx\in P(H). Thus P(H) has the structure of a Lie algebra. Moreover, if x\in P(H), then \varepsilon(x)=0.

Proof : If x,\,y\in P(H), then

\Delta([x,y])=\Delta x\Delta y-\Delta y\Delta x

=(x\otimes 1+1\otimes x)(y\otimes 1+1\otimes y)

-(y\otimes 1+1\otimes y)(x\otimes 1+1\otimes x)

=[x,y]\otimes 1+1\otimes [x,y].

Thus [x,y]\in P(H). For x\in P(H) we have

(I\otimes \varepsilon)\Delta x=x\otimes\varepsilon(1)+1\otimes \varepsilon(x)=x\otimes 1+\varepsilon(x)(1\otimes 1),

and by the counitary property this must equal x\otimes 1. Hence \varepsilon(x)=0 \bullet
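Theorem 2.1.3 needs a noncommutative bialgebra to say anything interesting about the bracket, so here is a sketch of mine (not from Abe): the free algebra on letters 'x', 'y' with x, y declared primitive, where we verify that [x,y]=xy-yx is again primitive.

```python
# Elements of H are dicts word -> coefficient (words are tuples of
# letters); elements of H (x) H are dicts keyed by pairs of words.

def mult(u, v):
    """Concatenation product on the free algebra, extended bilinearly."""
    out = {}
    for w1, c1 in u.items():
        for w2, c2 in v.items():
            out[w1 + w2] = out.get(w1 + w2, 0) + c1 * c2
    return {w: c for w, c in out.items() if c}

def mult2(s, t):
    """Componentwise product on H (x) H."""
    out = {}
    for (a1, a2), c in s.items():
        for (b1, b2), d in t.items():
            k = (a1 + b1, a2 + b2)
            out[k] = out.get(k, 0) + c * d
    return {k: c for k, c in out.items() if c}

def delta(u):
    """The unique algebra morphism with Delta(letter) = letter (x) 1 + 1 (x) letter."""
    out = {}
    for w, c in u.items():
        t = {((), ()): c}
        for letter in w:
            t = mult2(t, {((letter,), ()): 1, ((), (letter,)): 1})
        for k, d in t.items():
            out[k] = out.get(k, 0) + d
    return {k: c for k, c in out.items() if c}

x, y = {('x',): 1}, {('y',): 1}
xy, yx = mult(x, y), mult(y, x)
bracket = {w: xy.get(w, 0) - yx.get(w, 0) for w in set(xy) | set(yx)}

# Delta([x,y]) = [x,y] (x) 1 + 1 (x) [x,y]: the mixed terms cancel.
primitive_form = {**{(w, ()): c for w, c in bracket.items()},
                  **{((), w): c for w, c in bracket.items()}}
```

The cancellation of the mixed terms x\otimes y and y\otimes x is exactly the computation in the proof above.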

Example 2.6

Let S=\{a_0,a_1,\dots\}, and let A=\mathbb{C}S be the coalgebra defined in Example 2.2. Defining a multiplication by

a_ia_j={i+j\choose i}a_{i+j},

A becomes an algebra. With respect to these two structures, A becomes a bialgebra. By setting

d_i=i!\,a_i,

we obtain d_i d_j=d_{i+j}, which implies d_i=d_1^i for i=0,1,2,\dots. Thus as an algebra, A is isomorphic to the polynomial ring \mathbb{C}[d_1] in one variable. In this situation, P(A)=\mathbb{C}d_1.
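A quick check of the rescaling (my own sketch; d_i = i!\,a_i is the choice that makes the binomial coefficient cancel): d_id_j=i!j!\binom{i+j}{i}a_{i+j}=(i+j)!\,a_{i+j}=d_{i+j}.

```python
from math import comb, factorial

def mult(u, v):
    """a_i a_j = C(i+j, i) a_{i+j}, extended bilinearly; dicts index the a_i."""
    out = {}
    for i, ci in u.items():
        for j, cj in v.items():
            out[i + j] = out.get(i + j, 0) + ci * cj * comb(i + j, i)
    return out

def d(i):
    """The rescaled basis d_i = i! a_i."""
    return {i: factorial(i)}

d2d3 = mult(d(2), d(3))   # should be d_5
p = d(0)
for _ in range(4):        # compute d_1^4, which should be d_4
    p = mult(p, d(1))
```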

1.2 Hopf Algebras

Given a coalgebra C and an algebra A, for f,\,g\in L(C,A) the map

f\star g=\nabla_A\circ(f\otimes g)\circ\Delta_C

is said to be the convolution of f and g. If x\in g(C), then (f\star g)(x)=f(x)g(x), which is simply the definition of the product of two functions on g(C) via multiplication on A. L(C,A) becomes an algebra with structure maps

\nabla_{L(C,A)}(f\otimes g)=f\star g,\qquad \eta_{L(C,A)}(\alpha)=\alpha\,\eta_A\circ\varepsilon_C.
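Here is a small sketch of mine of the convolution product: C is the group-like coalgebra on the basis \{0,1\} (so \Delta(e)=e\otimes e) and A=M_2(\mathbb{C}); linear maps C\rightarrow A are given by their values on the basis, and on group-like elements convolution is just the pointwise matrix product.

```python
def matmul(X, Y):
    """2x2 matrix product (matrices as nested lists)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def delta(e):
    """Comultiplication of the group-like coalgebra: Delta(e) = e (x) e."""
    return {(e, e): 1}

# Two linear maps C -> M_2(C), recorded by their values on the basis:
f = {0: [[1, 0], [0, 1]], 1: [[0, 1], [1, 0]]}
g = {0: [[2, 0], [0, 3]], 1: [[1, 1], [0, 1]]}

def conv(f, g, e):
    """(f * g)(e) = nabla_A o (f (x) g) o Delta_C on a basis element."""
    out = [[0, 0], [0, 0]]
    for (a, b), c in delta(e).items():
        P = matmul(f[a], g[b])
        out = [[out[i][j] + c * P[i][j] for j in range(2)]
               for i in range(2)]
    return out
```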

Given a bialgebra H, let H^A and H^C respectively denote H regarded simply as an algebra and a coalgebra, and view L(H^C,H^A) as an algebra via convolution as defined above. When the identity map I_H of H is an invertible element of L(H^C,H^A) with respect to multiplication on L(H^C,H^A), the inverse S of I_H is called the antipode of H. The antipode S is the element which satisfies one of the following equivalent conditions:

S\star I_H=I_H\star S=\eta\circ\varepsilon

\nabla\circ(S\otimes I_H)\circ\Delta=\nabla\circ(I_H\otimes S)\circ\Delta=\eta\circ\varepsilon

A bialgebra with antipode is called a Hopf algebra. Let H,\,K be Hopf algebras and let S_H,\,S_K be the antipodes of H,\,K respectively. When a bialgebra morphism \sigma:H\rightarrow K satisfies the condition

S_K\circ \sigma=\sigma\circ S_H,

\sigma is called a Hopf algebra morphism.

Example 2.7

Denote by \mathbb{C}G the group bialgebra of a group G (see Example 2.4). We define a linear map S:\mathbb{C}G\rightarrow \mathbb{C}G by S(x)=x^{-1} for x\in G. Then

(I\star S)(x)=xS(x)=xx^{-1}=e=\varepsilon(x)e=\eta\circ\varepsilon(x),\qquad x\in G.

Thus S is the antipode of \mathbb{C}G, so that \mathbb{C}G becomes a Hopf algebra. S is an anti-automorphism of \mathbb{C}G as an algebra and S^2 is the identity map of \mathbb{C}G.
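A sketch of mine of Example 2.7 for G = \mathbb{Z}/3, written additively: elements of \mathbb{C}G are dicts g \mapsto coefficient, with \Delta(g)=g\otimes g, \varepsilon(g)=1 and S(g)=-g, and we check that \nabla\circ(S\otimes I)\circ\Delta=\eta\circ\varepsilon on a general element.

```python
n = 3  # the group G = Z/3

def delta(u):
    """Delta(g) = g (x) g, extended linearly."""
    return {(g, g): c for g, c in u.items()}

def eps(u):
    """Counit: eps(g) = 1, extended linearly."""
    return sum(u.values())

def S(u):
    """Antipode on CG: S(g) = g^{-1} = -g, extended linearly."""
    return {(-g) % n: c for g, c in u.items()}

def conv_S_id(u):
    """nabla o (S (x) I) o Delta applied to u in CG."""
    out = {}
    for (g, h), c in delta(u).items():
        for g1, c1 in S({g: 1}).items():
            k = (g1 + h) % n
            out[k] = out.get(k, 0) + c * c1
    return {k: c for k, c in out.items() if c}

u = {0: 2, 1: -1, 2: 5}   # 2e - g + 5g^2
result = conv_S_id(u)     # should be eps(u) * e
```

Each group-like summand contributes S(g)g = e, so the whole expression collapses to \varepsilon(u)\,e, which is \eta\circ\varepsilon(u).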

Theorem 2.1.4

The following properties hold for an antipode S of a Hopf algebra H.

  1. S(gh)=S(h)S(g), for all g,\,h\in H.
  2. S(1_H)=1_H; namely S\circ\eta=\eta.
  3. \varepsilon\circ S=\varepsilon.
  4. \tau\circ(S\otimes S)\circ\Delta=\Delta\circ S; in other words, \Delta(S(h))=S(h_{(2)})\otimes S(h_{(1)}).
  5. The following conditions are equivalent:
  • h\in H implies S(h_{(2)})h_{(1)}=\eta\circ\varepsilon(h).
  • h\in H implies h_{(2)}S(h_{(1)})=\eta\circ\varepsilon(h).
  • S\circ S=I_H.
  6. If H is commutative or cocommutative, then S^2=I_H.


1. and 2. imply that S is an anti-algebra morphism; 3. and 4. imply that S is an anti-coalgebra morphism.

Proof : 1. Define elements \mu,\,\nu,\,\rho\in L((H\otimes H)^C,H^A) in the following manner. For g,\,h\in H, we write

\mu(g\otimes h)=gh,\qquad \nu(g\otimes h)=S(h)S(g),\qquad \rho(g\otimes h)=S(gh).

Now, if we show that \rho\star\mu=\mu\star \nu=\eta\circ\varepsilon, then we get \rho=\nu, which would prove 1.

The step I was missing: \eta\circ\varepsilon is the unit of the convolution algebra L((H\otimes H)^C,H^A), and \star is associative, so

\rho=\rho\star(\mu\star\nu)=(\rho\star\mu)\star\nu=\nu.

To show \rho\star\mu=\mu\star\nu=\eta\circ\varepsilon is just a calculation; for example

(\rho\star\mu)(g\otimes h)=S(g_{(1)}h_{(1)})g_{(2)}h_{(2)}=S((gh)_{(1)})(gh)_{(2)}=\eta\circ\varepsilon(gh)=\eta\circ\varepsilon_{H\otimes H}(g\otimes h).

2. Since \varepsilon(1_H)=1 and \Delta(1_H)=1_H\otimes 1_H, we have

1_H=\eta\circ\varepsilon(1_H)=\nabla\circ(S\otimes I_H)\circ\Delta(1_H)=S(1_H)1_H=S(1_H).

3. From the fact that \varepsilon\circ\eta\circ\varepsilon(h)=\varepsilon(h)\varepsilon(1_H)=\varepsilon(h) and…