.

The algebra of functions is a multimatrix algebra:

.

As it happens, where , the counit on is given by , that is , dual to .

To help with intuition, making the incorrect assumption that is a classical group (so that is commutative — it’s not), because , the statement , implies that for a real coefficient ,

,

as for classical groups .

That is, the condition is a quantum analogue of .

Consider a random walk on a classical (the algebra of functions on is commutative) *finite* group driven by a .

The following is a very non-algebra-of-functions-y proof that implies that the convolution powers of converge.

*Proof:* Let be the smallest subgroup of on which is supported:

.

We claim that the random walk on driven by is *ergodic* (see Theorem 1.3.2).

The driving probability is not supported on any proper subgroup of , by the definition of .

If is supported on a coset of a proper normal subgroup , say , then because , this coset must be , but this also contradicts the definition of .

Therefore, converges to the uniform distribution on .
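
A purely classical sanity check of this convergence (a hypothetical example, not the group in the post): iterate the convolution on the cyclic group Z_6, with a driving probability supported on {2, 4}, so that the subgroup generated by the support is H = {0, 2, 4}.

```python
# Hypothetical classical example: a random walk on the cyclic group Z_6
# driven by a probability nu supported on {2, 4}. The subgroup generated
# by the support is H = {0, 2, 4}, and the convolution powers of nu
# converge to the uniform distribution on H.
n = 6
nu = {2: 0.5, 4: 0.5}

def convolve(mu, rho):
    """Convolution of two probabilities on Z_n."""
    out = {}
    for g, p in mu.items():
        for h, q in rho.items():
            s = (g + h) % n
            out[s] = out.get(s, 0.0) + p * q
    return out

mu = dict(nu)
for _ in range(50):    # mu is now nu^{*51}
    mu = convolve(mu, nu)

print({g: round(p, 6) for g, p in sorted(mu.items())})
```

The odd elements never acquire mass, and the weights on 0, 2, 4 each tend to 1/3.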

Apart from the big reason — that this proof talks about points galore — this kind of proof is not available in the quantum case because there exist that converge, but not to the Haar state on any quantum subgroup. A quick look at the paper of Zhang shows that some such states have the quantum analogue of .

So we have some questions:

- Is there a proof of the classical result (above) in the language of the algebra of functions on , that necessarily bypasses talk of points and of subgroups?
- And can this proof be adapted to the quantum case?
- Is the claim perhaps true for all finite quantum groups but not all compact quantum groups?

Test 2, worth 15% of your final grade, based on Chapter 3: Algebra, will take place on Friday 30 November in the usual lecture venue of B214.

Here is a provisional sample test. If we don’t cover something by 20 November inclusive it won’t be on the test: this is so you will have two full tutorials before the test.

I will give you a copy of the sample today, Friday 9 November. The sample is to give you an idea of the length of the test. You know the layout from Test 1 (i.e. you write your answers on the paper). You will be allowed to use a calculator for all questions.

For those who might have done poorly, or not particularly well, in Test 1: I strongly advise you that attending tutorials alone will not be sufficient preparation for this test, and you will have to devote extra time outside classes to study, i.e. to do exercises.

In Week 8 we finished talking about equations and started studying quadratics.

In Week 9 we will finish talking about quadratics and begin studying exponents.

Please feel free to ask me questions about the exercises via email or, even better, on this webpage — especially those of you who struggled in the test.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.


I will have the assignments with me tomorrow if you want to see your work.

Some comments on common mistakes.

Assessment 2 is on p.136. It has a hand-in time of 16:00 Monday 26 November.

As suggested in class, I would advise you to complete this assignment early if you can, freeing up time in your tutorial to get work done on Chapter 3: Probability & Statistics.

We looked at the Poisson distribution, the Normal distribution, and started discussing Sampling.

We will complete Chapter 3 by looking at Sampling and Hypothesis Testing. We may have an extra tutorial during one of the lectures.

Please feel free to ask me questions about the exercises via email or, even better, on this webpage — especially those of you who struggled in the test.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

The 15% Test 2 will take place at 16:00 on Monday 26 October, Week 11, in B263. There is a sample test in the notes, p.146. Chapter 3: Differentiation is going to be examined. A Summary of Vectors (p.144): you will want to know this stuff very well. You will be given a copy of these tables.

I strongly advise you that attending tutorials alone will not be sufficient preparation for this test, and you will have to devote extra time outside classes to study, i.e. to do exercises.

We looked at Implicit Differentiation and Partial Differentiation. If you are interested in a very “mathsy” approach to curves you can look at this.

We will look at applications of partial differentiation to differentials and error analysis. We might start Chapter 4 on (Further) Integration. A good revision of integration/antidifferentiation may be found here.

Please feel free to ask me questions about the exercises via email or even better on this webpage.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

*TL;DR: The strategy to antidifferentiate a function that I present is as follows:*

- Direct
- Manipulation
- -Substitution
- Parts

Roughly, if a real-valued function is positive *on * then the *integral* of on is the area under the curve between and . This can be made as rigorous as required: for example see here and here.

*If this is the graph of then is the area shaded.*

Using the Second Fundamental Theorem of Calculus, to calculate integrals one needs to *antidifferentiate*. Recall that to differentiate a function we find the function , the *derivative *of , whose value at , , is the slope of the tangent to at . For example, the derivative of is and we write

.

This is an *operator *that takes as an input a function (of ), and outputs another function of (the derivative of the input).

Now *anti*differentiating is doing this in reverse. So an *anti*derivative of is and, for the moment, we may write

.

*Antidifferentiating is like running differentiation backwards*

The only problem is that this isn’t quite the whole story… because e.g. has more than one *anti*derivative, for example to name but two. Indeed every function given by is an antiderivative of . To make all this precise we need the language of equivalence classes, but for the purposes of integration it doesn’t matter whether you use , , or — you get the same answer (and this is one sense in which these functions are all *equivalent*).

This is usually called the *constant of integration*. Were it up to me I would call it the constant of *antidifferentiation*.

Therefore we may as well think of the antiderivative of — for the purposes of integration — as , i.e. just with no (the *why* will be revealed).
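
The point about the constant can be checked symbolically; a minimal sketch in sympy with a generic family (an illustrative choice, not necessarily the function in the text):

```python
import sympy as sp

x, c = sp.symbols('x c')
F = x**2 + c             # a one-parameter family of antiderivatives of 2*x
print(sp.diff(F, x))     # 2*x: the constant c disappears on differentiating
```

Every member of the family has the same derivative, which is exactly why antidifferentiation only determines a function up to a constant.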

*If is an antiderivative of (so that ), then*

.

*Equivalently,*

.

*In words, to calculate an integral, find an antiderivative, then do ‘(substitute) top limit minus bottom limit’.*

*To calculate the integral of we need to antidifferentiate it: we need to find an such that *

We said earlier it doesn’t matter which antiderivative we use. It can be shown that all antiderivatives of a function differ by a constant (their graphs are the same except shifted up or down — same derivatives means same slopes: they are ‘parallel’) and so all are of the form for a constant . So if is an antiderivative of and we use instead of we find the integral equal to

,

i.e. the same thing as we would have had without using the .

Therefore, all in all, we need to be able to antidifferentiate if we want to calculate integrals.
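
The ‘top limit minus bottom limit’ cancellation of the constant can be verified symbolically; the integrand below is illustrative, not the one in the text:

```python
import sympy as sp

x, c = sp.symbols('x c')
F = x**3/3 + c                       # any antiderivative of x**2
val = F.subs(x, 2) - F.subs(x, 0)    # 'top limit minus bottom limit'
print(val)                           # 8/3: the c's cancel
```

Whatever value c takes, it appears in both terms and subtracts away, so any antiderivative gives the same definite integral.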

Perhaps, amongst other reasons, because antidifferentiation is not a ‘perfect’ inverse of differentiation (we get a ‘family’ of antiderivatives rather than just one) the notation is not used. Instead the notation is used (note the lack of limits). This can be read as ‘antidifferentiate’ (), with respect to (). This ‘antiderivative operator’ isn’t exactly an operator in the sense that , but it makes a little sense to write something like:

.

The strategy to antidifferentiate a function that I present is as follows:

- Direct
- Manipulation
- -Substitution
- Parts

Historically, ‘integrals’ were being calculated as long ago as the third century BC by the likes of Archimedes, long before derivatives were by the likes of Newton and Leibniz in the 17th century, but in a modern learning environment one learns about differentiating and derivatives first.

Therefore there should be a number of derivatives that you are familiar with, for example

etc.

Usually such derivatives will be presented in a table of derivatives. Of course, running them backwards (constants omitted):

(or rather )

etc.

gives a table of antiderivatives. A lot of antiderivatives may of course be found in a table of derivatives: just run things backwards.

Probably the three functions that people assume are not in the tables, and on which they attempt to use more sophisticated techniques than necessary, are

.

All are in the tables. Note finally that linear combinations of functions may be antidifferentiated term-by-term-fixing-the-constants:

.

You can use a single here: if you use two (say ) you end up with — which is just a constant . This is a consequence of the linearity of differentiation:
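
Linearity, i.e. antidifferentiating term by term while fixing the constants, can be checked with a generic linear combination (an illustrative one, not the example in the text):

```python
import sympy as sp

x = sp.symbols('x')
# Antidifferentiate a linear combination term by term, fixing the constants
# (sympy omits the single constant of integration):
lhs = sp.integrate(3*x**2 + 5*sp.cos(x), x)
rhs = 3*sp.integrate(x**2, x) + 5*sp.integrate(sp.cos(x), x)
print(lhs)    # x**3 + 5*sin(x)
```

Both routes give the same antiderivative, up to the single constant of integration.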

Find the following

(a)

(b)

(c)

(a) Using term-by-term-fixing-the-constants, and the table-found antiderivatives:

we find

.

(b) Fixing the constant by *(a bit of a manipulation — see below)* and using the table antiderivative:

,

we find

.

(c) This is the table antiderivative:

with . We antidifferentiate directly (note ):

.

There are a whole host of functions that need a bit of manipulation before they can be antidifferentiated directly. It would be impossible to list all the possible manipulations one might need, but here we present a selection of commonly occurring examples.

(a) **Negative Powers**: if you know your way around indices, you know that while multiplying gives positive powers:

,

dividing, by contrast, gives *negative* powers:

.

So write

,

and then the table antiderivative

, ,

may be used. Let us call this the power rule.

So, for example,

.

(b) **Surds**: if you know your way around indices, you know that

,

which may be antidifferentiated using the Power Rule. For example, consider

(c) **Multiply Out**: functions such as or can be written as a linear combination. For example,

.

Another example,

.

(d) **Divide in**: functions such as or for polynomials . For an example of the first,

.

The other type is harder but if you know how to do polynomial long division you can tackle antiderivatives such as

(e) **Trigonometric** manipulations: there is a wealth of trigonometric identities that can be used to simplify a function. For example, perhaps isn’t in your table of antiderivatives but what you can do is use the identity

,

to rewrite … this still leaves you with , which is not in the tables, but we will see below how to attack it using -substitution.

Another example might be something like

.

If you know your way around trig you can find that and so

.
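
A symbolic check of this kind of trigonometric manipulation, using the standard half-angle identity (an illustrative case, not necessarily the one in the text):

```python
import sympy as sp

x = sp.symbols('x')
# The half-angle identity cos(x)**2 = (1 + cos(2*x))/2 turns a product
# into a sum, which can then be antidifferentiated term by term:
gap = sp.simplify(sp.cos(x)**2 - (1 + sp.cos(2*x))/2)
F = sp.integrate((1 + sp.cos(2*x))/2, x)
print(gap)    # 0, so the identity holds
print(F)      # x/2 + sin(2*x)/4
```

The identity converts something not in the tables into two table-friendly terms.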

Other, slightly more advanced but routine techniques include partial fractions (writing a rational function as a sum of simpler fractions), and completing the square (useful for antidifferentiating some of these so-called simpler fractions). An example of the use of partial fractions:

.

We can antidifferentiate all three of these using a -substitution.

An example of completing the square would be:

,

with again the -substitution being required to finish the antiderivative off.
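
Both techniques can be sketched in sympy; the fractions below are illustrative, not the ones in the text:

```python
import sympy as sp

x = sp.symbols('x')
# Partial fractions: apart() splits a rational function into simpler fractions:
pf = sp.apart(1/(x**2 - 1), x)
print(pf)
# Completing the square: x**2 + 2*x + 2 = (x + 1)**2 + 1, and the
# antiderivative sympy finds reflects this rewriting:
G = sp.integrate(1/(x**2 + 2*x + 2), x)
print(G)
```

Each of the simpler fractions from `apart` is then handled by a -substitution, as described above.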

When we run the rules of differentiation backwards, we get rules for antidifferentiation. -substitution comes from running the Chain Rule backwards. The chain rule says that if is a composition, , then the derivative of is given by:

.

Written as a rule of antidifferentiation we get

.

This is perhaps a difficult pattern to spot so what we do is let , and then get a relationship between and by differentiating :

,

and putting the antiderivative back together:

.

Strictly speaking when we wrote:

,

well, this is nonsense, but it can be shown that it always gives you the correct antiderivative.
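
Despite being ‘nonsense’, the bookkeeping does give correct antiderivatives, which can be confirmed by differentiating the result; an illustrative u-substitution (not the example in the text):

```python
import sympy as sp

x = sp.symbols('x')
# Spot u = x**2 inside cos, with a multiple of du = 2*x dx present:
f = 2*x*sp.cos(x**2)
F = sp.integrate(f, x)
print(F)    # sin(x**2)
```

Differentiating `F` with the chain rule gives back `f` exactly, which is the real justification for the substitution.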

How do you know if an antiderivative requires a -substitution? Well, in terms of this strategy you have failed to find the antiderivative in the tables and none of the manipulations you have done have put the function you want to antidifferentiate in the form of something which can be antidifferentiated using the tables (sometimes you might need a back-substitution: you have but you need to replace with ).

How do you pick the ? Well there are four ways, listed from best to worst:

(a) spot a function inside another, and a constant-multiple of its derivative:

(b) spot a function inside another:

(c) use LIATE. That is, pick the as the first thing you can see in the following list:

- **L**ogs
- **I**nverse Trig
- **A**lgebraic: sums of powers of — highest power first
- **T**rigonometric
- **E**xponential

(d) just try something! Usually there are only two options for — if the first doesn’t work, start again with the other.

Find the following

(a)

(b)

(c)

(a) Let us go through Direct and Manipulation first. Firstly, this is not in the tables. In terms of a manipulation we could say :

.

With nothing left to do we should try a substitution. There are three reasons to try .

- its derivative is and we have a multiple of this ()
- is inside (and originally )
- by LIATE: no logs, no inverse trig, algebraic – yes. Pick the higher power of over .

Proceed with :

,

so that

.

Note that , which can be antidifferentiated directly:

.

The original had a — antidifferentiate with respect to — so we should go back in terms of — and include the :

.

(b) This is not in the tables and the only apparent manipulation is writing

.

So we try a substitution. There are two good reasons to try :

- its derivative is , and we have a multiple of that.
- by LIATE: no logs, inverse trig: yes!

The third way doesn’t work well here because there are two functions ‘inside’ another. In this case you would just try and and see what happens.

We know we should try , but let us try to see how we know when things have gone wrong. Proceeding we have

,

so that

.

This mix of and is BAD and suggests we won’t be able to find the antiderivative.

It might be possible to proceed though: perhaps we could go from to $x=\pm\sqrt{u-1}$ (we will choose … dangerously?) to get

This is worse, suggesting — correctly — that we should have tried . We calculate:

:

,

which is in the tables;

.

(c) This is not in the tables. There are two possible manipulations. One involves writing as a sum and then multiplying this sum by … and then writing those sums as products. It is a good lesson to go through this as it exhibits the principle that for antidifferentiation, sums are easy while products are hard — so try and write things as sums.

But we won’t pursue that here. Instead we will rewrite (for that is what means — a warning though: denotes not the reciprocal but the inverse of the function). So we have

,

and as this is not in the tables and we are not pursuing further manipulations, we will try a -substitution. There are two reasons to pick (note LIATE fails):

- has derivative , which also appears.
- appears inside another function; .

Off we go

,

so that

.

This is in the tables:

.

Consider the following

.

It isn’t in the tables, and there are no obvious manipulations. The substitution yields

,

that is the same thing (and why in general you won’t do the -substitution ). The substitution does actually give something that looks useful. It gives

,

but , and so we have via the back-substitution

,

which is simpler but isn’t in the tables…

We need a change of tack. Note is a product. Derivatives that are products come from products. For example, occurs when you differentiate using the product rule:

.

Of course therefore

,

that is we can run the product rule backwards to generate a new way of antidifferentiating. This is (integration by) Parts. The ‘parts’ basically means that when running the product rule backwards:

,

there are two antidifferentiations, and : you do one and then the other — you break it into two parts.

What we basically do is subtract from both sides to get:

,

commonly written

,

where and .

So given something you want to antidifferentiate

,

you let something and the result be :

.

Now look at the formula again:

,

you have ; how do you get from ? The answer is you integrate… wait why didn’t I say antidifferentiate?

OK, let us slow down. What I want to say is that

.

There are two good ways to see why this is the case, and one involves integration.

- , because .
- . The symbol is an elongated ‘S’ standing for ‘sum’. The means a very small bit of . So really . And what happens when you add up all the small bits of ? Well you get so !

So, you’ll have both and and now you are looking at . Well, you can get by differentiating .

Wait! How do we know how to pick the ? A good way is LIATE: that is, pick the as the first thing you can see in the following list:

- **L**ogs
- **I**nverse Trig
- **A**lgebraic: sums of powers of — highest power first
- **T**rigonometric
- **E**xponential

LIATE doesn’t work really well for -substitution but generally works well for Parts. The reason it works well for Parts is because LIATE is in reverse order of ease of antidifferentiation. Recall you will

- differentiate , and
- antidifferentiate

In a certain sense (in another sense, what I am about to say is incorrect), it is easier to differentiate, in that you know how to differentiate commonly appearing functions, so you want to make the antidifferentiation of as easy as possible. The things at the top of LIATE are the most difficult to antidifferentiate, so we pick from near the top of LIATE: then will be lower down and hence easier to antidifferentiate.

(a)

(b)

(a) We pick by LIATE (no logs, no inverse trig, algebraic — yes, the ‘multiplying’ ) and everything else — — is .

We differentiate :

;

we antidifferentiate :

,

that antiderivative was in the tables.

Now we use

to give

,

and of course is in the tables so we have

.
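
The Parts bookkeeping can be mirrored symbolically. The integrand below, x*exp(x), is a hypothetical stand-in for the worked example (whose formulas are in the images):

```python
import sympy as sp

x = sp.symbols('x')
u = x                      # LIATE: pick the algebraic factor as u
dv = sp.exp(x)             # everything else is dv
v = sp.integrate(dv, x)    # antidifferentiate dv to get v
du = sp.diff(u, x)         # differentiate u to get du
by_parts = u*v - sp.integrate(v*du, x)
print(by_parts)            # x*exp(x) - exp(x)
```

Differentiating `by_parts` with the product rule recovers the original integrand, which is Parts run forwards again.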

(b) We might expect this to be in the tables but it is not. Of course, if it were in the tables, the question would arise: how did they find that antiderivative? Outside trial and error they used Parts. Let (logs — yes), and so the rest of — — is .

We differentiate :

;

antidifferentiating :

.

Now we use

to give

.

With and a -substitution we had

.

Using this example, we have

…

but and are inverses, and so , and we recover

.

Just tying up that little knot.
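
As a final check of this log-via-Parts computation (the choice u = log(x) is the standard one):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(sp.log(x), x)   # Parts with u = log(x), dv = 1 dx
print(F)                         # x*log(x) - x
```

Differentiating gives log(x) + 1 - 1 = log(x), tying up the knot numerically as well.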

There are many, many more techniques. There is a big world out there… click here to have a look.

Let be the algebra of functions on a finite or perhaps compact quantum group (with comultiplication ) and a state on . We say that a quantum group with algebra of functions (with comultiplication ) is a quantum subgroup of if there exists a surjective unital *-homomorphism such that:

.

In the classical case, where the algebras of functions on and are commutative, there is a natural embedding, if is open (always true for finite) (thanks UwF), of ,

,

with for , and otherwise.

Furthermore, has the property that

,

which resembles .

In the case where is a probability on a classical group , supported on a subgroup , it is very easy to see that convolutions remain supported on . Indeed, is the distribution of the random variable

,

where the i.i.d. . Clearly and so is supported on .

We can also prove this using the language of the commutative algebra of functions on , . The state being supported on implies that

.

Consider now two probabilities on but supported on , say . As they are supported on we have

and .

Consider

,

that is, is also supported on , and inductively .
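
A quick classical illustration of support being preserved under convolution (a made-up example on Z_6, with the subgroup H = {0, 3}; the quantum setting of the post is of course not simulated here):

```python
# Hypothetical classical example on the cyclic group Z_6: two probabilities
# supported on the subgroup H = {0, 3} convolve to a probability that is
# still supported on H.
n = 6
H = {0, 3}
mu = {0: 0.25, 3: 0.75}
nu = {0: 0.6, 3: 0.4}

conv = {}
for g, p in mu.items():
    for h, q in nu.items():
        s = (g + h) % n
        conv[s] = conv.get(s, 0.0) + p * q

print(conv)    # mass only on 0 and 3
```

Because H is closed under the group law, no mass can escape it, which is the pointwise shadow of the algebraic argument above.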

Back to quantum groups with non-commutative algebras of functions.

- Can we embed in with a map and do we have , giving the projection-like quality to ?
- Is a suitable definition for being supported on the subgroup ?

If this is the case, the above proof carries through to the quantum case.

- If there is no such embedding, what is the appropriate definition of a being supported on a quantum subgroup ?
- If does not have the property of , in this or another definition, is it still true that being supported on implies that is too?

UwF has recommended that I look at this paper to improve my understanding of the concepts involved.

In Week 7 we delved more into algebra and started talking about equations.

In Week 8 we will finish talking about equations and start studying quadratics.

Test 2, based on Chapter 3: Algebra, will take place around the end of Week 10, start of Week 11.

Please feel free to ask me questions about the exercises via email or, even better, on this webpage — especially those of you who struggled in the test.


I will have the assignments with me tomorrow and next Friday if you want to see your work.

Some comments on common mistakes.

Assessment 2 is on p.136. It has a hand-in time of 16:00 Monday 26 November.

We finished looking at Chapter 2 by looking at the Three Term Taylor Method for approximating solutions of ordinary differential equations.

We started Chapter 3 (Probability and Statistics) by looking at some general concepts in probability and then we looked at random variables with a binomial distribution.

We will look at the Poisson distribution and perhaps the Normal distribution.

We looked at Parametric Differentiation and Related Rates.

We will look at Implicit Differentiation and Partial Differentiation. If you are interested in a very “mathsy” approach to curves you can look at this.

On Chapter 3, not until Week 11: perhaps Monday 26 November.

Please feel free to ask me questions about the exercises via email or even better on this webpage.

I am starting corrections today and will get the results to you as soon as I can. I cannot give an accurate day at this stage: it could be Monday but just as easily could be a few days after this – I can’t make any promises.

Some comments on common mistakes.

Assessment 2 is on p.136. It has a hand-in date of Monday 26 November. We have already covered everything that will be asked, so you have over five weeks to complete the assignment.

On Monday you will be sent a 15-minute survey that you will take on a mobile internet device — such as your mobile phone — during Monday’s lecture.

This survey is part of a larger project the Mathematics Department is undertaking — **Mathematics in Context: Developing Relevancy-Orientated Problems **— in an effort to improve our teaching.

If you do not have an internet-ready device you may leave class early.

Maths classes will be going full steam ahead on Monday 22 October, as well as Wednesday, Thursday and Friday 1, 2, 3 November. I will refer to the next two weeks as Week 7.

In Week 6 we finished looking at cantilevers and then summarised what we learnt about beams. We had one lecture as a tutorial but then looked at numerical approximations to solutions of differential equations that we cannot solve exactly.

After the storm last year I recorded some examples. If you missed some classes this week you could do worse than watch this cantilever example and this summary of beams to catch up.

In Week 7 we will look at the Three Term Taylor Method and begin Chapter 3 on Probability and Statistics.