*This strategy is neither optimal nor exhaustive. It is for students who are struggling with basic integration and anti-differentiation and need something to help them start calculating straightforward integrals and finding anti-derivatives.*

*TL;DR: The strategy to antidifferentiate a function that I present is as follows:*

- *Direct*
- *Manipulation*
- *$u$-Substitution*
- *Parts*

## Some Conceptual Basics

Roughly, if a real-valued function $f$ is positive on $[a,b]$ then the *integral* of $f$ on $[a,b]$, written $\int_a^b f(x)\,dx$, is the area under the curve between $x=a$ and $x=b$. This can be made as rigorous as required: for example see here and here.

*If this is the graph of $y=f(x)$ then $\int_a^b f(x)\,dx$ is the area shaded.*

Using the Second Fundamental Theorem of Calculus, to calculate integrals one needs to *antidifferentiate*. Recall that to differentiate a function $f$ we find the function $f'$, the *derivative* of $f$, whose value at $x=a$, $f'(a)$, is the slope of the tangent to $f$ at $a$. For example, the derivative of $x^2$ is $2x$ and we write

$$\frac{d}{dx}(x^2)=2x.$$

This $\frac{d}{dx}$ is an *operator* that takes as an input a function (of $x$), and outputs another function of $x$ (the derivative of the input).

Now *anti*differentiating is doing this in reverse. So an *anti*derivative of $2x$ is $x^2$ and, for the moment, we may write

$$\left(\frac{d}{dx}\right)^{-1}(2x)=x^2.$$

*Antidifferentiating is like running differentiation backwards*

The only problem is that this isn’t quite the whole story… because e.g. $2x$ has more than one *anti*derivative, for example $x^2$ and $x^2+1$, to name but two. Indeed every function given by $x^2+C$, for a constant $C$, is an antiderivative of $2x$. To make all this precise we need the language of equivalence classes, but actually for the purposes of integration it doesn’t matter if you use $x^2$, $x^2+1$, or $x^2+C$: you get the same answer (and this is one sense in which these functions are all *equivalent*).

This $C$ is usually called the *constant of integration*. Were it up to me I would call it the constant of *antidifferentiation*.

Therefore we may as well think of the antiderivative of $2x$ — for the purposes of integration — as $x^2$, i.e. just with no $+C$ (the *why* will be revealed).

### Second Fundamental Theorem of Calculus

*If $F$ is an antiderivative of $f$ (so that $F'=f$), then*

$$\int_a^b f(x)\,dx = F(b)-F(a).$$

*Equivalently,*

$$\int_a^b F'(x)\,dx = F(b)-F(a).$$

*In words, to calculate an integral, find an antiderivative, then do ‘(substitute) top limit minus bottom limit’.*

*To calculate the integral of $f$ we need to antidifferentiate it: we need to find an $F$ such that $F'=f$.*

We said earlier it doesn’t matter which antiderivative we use. It can be shown that all antiderivatives of a function differ by a constant (their graphs are the same except shifted up or down — same derivatives means same slopes: they are ‘parallel’) and so all are of the form $F(x)+C$ for a constant $C$. So if $F$ is an antiderivative of $f$, and we use $F(x)+C$ instead of $F(x)$, we find the integral equal to

$$(F(b)+C)-(F(a)+C)=F(b)-F(a),$$

i.e. the same thing as we would have had without using the $+C$.
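The cancellation of the constant can be checked numerically. A minimal sketch (not part of the original post), assuming the illustrative antiderivative $F(x)=x^2$ of $f(x)=2x$ and an arbitrary constant $C=5$:

```python
def eval_integral(F, a, b):
    # Second Fundamental Theorem: the integral from a to b is F(b) - F(a)
    return F(b) - F(a)

F1 = lambda x: x ** 2      # an antiderivative of f(x) = 2x
F2 = lambda x: x ** 2 + 5  # another antiderivative: same derivative, shifted by C = 5

area1 = eval_integral(F1, 0, 1)  # 1 - 0 = 1
area2 = eval_integral(F2, 0, 1)  # 6 - 5 = 1: the +C cancels
```

Whatever constant is added, it appears in both $F(b)$ and $F(a)$ and so subtracts away.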

Therefore, all in all, we need to be able to antidifferentiate if we want to calculate integrals.

Perhaps, amongst other reasons, because antidifferentiation is not a ‘perfect’ inverse of differentiation (we get a ‘family’ of antiderivatives rather than just one), the notation $\left(\frac{d}{dx}\right)^{-1}$ is not used. Instead the notation $\int f(x)\,dx$ is used (note the lack of limits). This can be read as ‘antidifferentiate’ ($\int$) $f$, ‘with respect to $x$’ ($dx$). This ‘antiderivative operator’ isn’t exactly an operator in the sense that $\frac{d}{dx}$ is, but it makes a little sense to write something like:

$$\int 2x\,dx = x^2 + C.$$

The strategy to antidifferentiate a function that I present is as follows:

- Direct
- Manipulation
- $u$-Substitution
- Parts

## 1. Direct Antidifferentiation

Historically, ‘integrals’ were calculated in eras BC by the likes of Archimedes, long before derivatives were by the likes of Newton and Leibniz in the 17th century, but in a modern learning environment one learns about differentiation and derivatives first.

Therefore there should be a number of derivatives that you are familiar with, for example

$$\frac{d}{dx}(x^2)=2x,\qquad \frac{d}{dx}(\sin x)=\cos x,\qquad \frac{d}{dx}(e^x)=e^x,$$

etc.

Usually such derivatives will be presented in a table of derivatives. Of course running them backwards ($+C$s omitted):

$$\int 2x\,dx = x^2\ (\text{or rather } x^2+C),\qquad \int \cos x\,dx = \sin x,$$

etc.

gives a table of antiderivatives. A lot of antiderivatives may of course be found in a table of derivatives: just run things backwards.

Probably the three functions that people assume are not in the tables, and attempt to use more sophisticated techniques than necessary on, are

$$\frac{1}{x},\qquad \frac{1}{1+x^2},\qquad \frac{1}{\sqrt{1-x^2}}.$$

All are in the tables. Note finally that linear combinations of functions may be antidifferentiated term-by-term-fixing-the-constants:

$$\int \left(a\,f(x)+b\,g(x)\right)dx = a\int f(x)\,dx + b\int g(x)\,dx.$$

You can use a single $+C$ here: if you use two (say $C_1$ and $C_2$) you end up with $aC_1+bC_2$, which is just a constant $C$. This is a consequence of the linearity of differentiation.
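Term-by-term antidifferentiation can be sanity-checked by differentiating the result numerically. A sketch (the integrand $3x^2+4x$ is a made-up example, not from the post):

```python
def deriv(F, x, h=1e-6):
    # central finite-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda x: 3 * x ** 2 + 4 * x   # linear combination to antidifferentiate
F = lambda x: x ** 3 + 2 * x ** 2  # term-by-term antiderivative (single +C omitted)

# F'(x) should recover f(x) at each sample point
max_err = max(abs(deriv(F, x) - f(x)) for x in (0.5, 1.0, 2.0))
```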

#### Examples

Find the following

(a)

(b)

(c)

*Solutions*

(a) Using term-by-term-fixing-the-constants, and the table-found antiderivatives:

we find

.

(b) Fixing the constant by *(a bit of a manipulation — see below) *and using the table antiderivative:

,

we find

.

(c) This is the table antiderivative:

We antidifferentiate directly:

.

## 2. Manipulation

There are a whole host of functions that need a bit of manipulation before they can be antidifferentiated directly. It would be impossible to list all the possible manipulations one might need, but here we present a selection of commonly occurring examples.

(a) **Negative Powers**: if you know your way around indices, you know that while multiplying gives positive powers:

$$x^2\cdot x^3 = x^{2+3} = x^5,$$

dividing gives, by contrast, *negative* powers:

$$\frac{x^2}{x^3} = x^{2-3} = x^{-1}.$$

So write

$$\frac{1}{x^n} = x^{-n},$$

and then the table antiderivative

$$\int x^n\,dx = \frac{x^{n+1}}{n+1}+C,\qquad n\neq -1,$$

may be used. Let us call this the power rule.

So, for example,

$$\int \frac{1}{x^2}\,dx = \int x^{-2}\,dx = \frac{x^{-1}}{-1}+C = -\frac{1}{x}+C.$$
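A quick numerical check of the power rule on a negative power; a sketch using $f(x)=x^{-2}$ (chosen here purely for illustration):

```python
def deriv(F, x, h=1e-6):
    # central finite-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda x: x ** -2   # 1/x^2 written as a negative power
F = lambda x: -x ** -1  # power rule: x^(-2+1)/(-2+1) = -1/x

max_err = max(abs(deriv(F, x) - f(x)) for x in (0.5, 1.0, 3.0))
```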

(b) **Surds**: If you know your way around indices you know that

$$\sqrt[m]{x^n} = x^{n/m},\ \text{for example}\ \sqrt{x}=x^{1/2},$$

which may be antidifferentiated using the Power Rule. For example, consider

$$\int \sqrt{x}\,dx = \int x^{1/2}\,dx = \frac{x^{3/2}}{3/2}+C = \frac{2}{3}x^{3/2}+C.$$

(c) **Multiply Out**: Functions such as $x(x+1)$ or $(x+1)^2$ can be written as a linear combination. For example,

$$\int x(x+1)\,dx = \int (x^2+x)\,dx = \frac{x^3}{3}+\frac{x^2}{2}+C.$$

Another example,

$$\int (x+1)^2\,dx = \int (x^2+2x+1)\,dx = \frac{x^3}{3}+x^2+x+C.$$

(d) **Divide In**: Functions such as $\frac{x^2+1}{x}$ or $\frac{p(x)}{q(x)}$ for polynomials $p,\,q$. For an example of the first,

$$\int \frac{x^2+1}{x}\,dx = \int\left(x+\frac{1}{x}\right)dx = \frac{x^2}{2}+\ln|x|+C.$$

The other type is harder, but if you know how to do polynomial long division you can tackle antiderivatives such as

$$\int \frac{x^2}{x+1}\,dx.$$

(e) **Trigonometric** manipulations: There is a wealth of trigonometric identities that can be used to simplify a function. For example, perhaps $\cos^2 x$ isn’t in your table of antiderivatives but what you can do is use the identity

$$\cos^2 x = \frac{1+\cos 2x}{2},$$

to rewrite $\int \cos^2 x\,dx$… this does still leave you with $\cos 2x$, which is not in tables, but we will learn below how to attack this using $u$-substitution (see below).

Another example might be something like

$$\int \tan^2 x\,dx.$$

If you know your way around trig you can find that $\tan^2 x = \sec^2 x - 1$ and so

$$\int \tan^2 x\,dx = \int(\sec^2 x - 1)\,dx = \tan x - x + C.$$
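Any such identity can be spot-checked numerically before you rely on it. A sketch checking the standard double-angle identity $\cos^2 x = \frac{1+\cos 2x}{2}$ at many sample points:

```python
import math

# sample points from 0 to about 2*pi
xs = [0.1 * k for k in range(64)]
max_err = max(abs(math.cos(x) ** 2 - (1 + math.cos(2 * x)) / 2) for x in xs)
# max_err sits at the level of floating-point rounding: the identity is exact
```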

Other, slightly more advanced but routine techniques include partial fractions (writing a rational function as a sum of simpler fractions), and completing the square (useful for antidifferentiating some of these so-called simpler fractions). An example of the use of partial fractions:

$$\frac{1}{x(x-1)(x+1)} = -\frac{1}{x}+\frac{1/2}{x-1}+\frac{1/2}{x+1}.$$

We can antidifferentiate the three of these using a $u$-substitution.

An example of completing the square would be:

$$\frac{1}{x^2+2x+5} = \frac{1}{(x+1)^2+4},$$

with again the $u$-substitution being required to finish the antiderivative off.

## 3. $u$-Substitution

When we run the rules of differentiation backwards, we get rules for antidifferentiation. $u$-substitution comes from running the Chain Rule backwards. The Chain Rule says that if $h$ is a composition, $h(x)=f(g(x))$, then the derivative of $h$ is given by:

$$h'(x) = f'(g(x))\cdot g'(x).$$

Written as a rule of antidifferentiation we get

$$\int f'(g(x))\,g'(x)\,dx = f(g(x)) + C.$$

This is perhaps a difficult pattern to spot so what we do is let $u=g(x)$, and then get a relationship between $du$ and $dx$ by differentiating $u$:

$$\frac{du}{dx} = g'(x)\ \Rightarrow\ du = g'(x)\,dx,$$

and putting the antiderivative back together:

$$\int f'(g(x))\,g'(x)\,dx = \int f'(u)\,du = f(u)+C = f(g(x))+C.$$

Strictly speaking when we wrote:

$$du = g'(x)\,dx,$$

well, this is nonsense, but it can be shown that although nonsense, it always gives you the correct antiderivative.
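That the recipe gives correct antiderivatives can be checked by differentiating the answer. A sketch, using the made-up example $\int 2x\cos(x^2)\,dx = \sin(x^2)+C$ (here $u=x^2$, $du=2x\,dx$):

```python
import math

def deriv(F, x, h=1e-6):
    # central finite-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda x: 2 * x * math.cos(x ** 2)  # of the form g'(x) * cos(g(x)) with g(x) = x^2
F = lambda x: math.sin(x ** 2)          # found via the substitution u = x^2

max_err = max(abs(deriv(F, x) - f(x)) for x in (0.0, 0.5, 1.0, 1.5))
```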

How do you know if an antiderivative requires a $u$-substitution? Well, in terms of this strategy, you have failed to find the antiderivative in the tables and none of the manipulations you have done have put the function you want to antidifferentiate in the form of something which can be antidifferentiated using the tables (sometimes you might need a back-substitution: you have $f(u)$ but you need to replace $u$ with $g(x)$).

How do you pick the $u$? Well there are four ways, listed from best to worst:

(a) spot a function inside another, and a constant multiple of its derivative;

(b) spot a function inside another;

(c) use LIATE. That is, pick the $u$ as the first thing you can see in the following list:

- **L**ogs
- **I**nverse Trig
- **A**lgebraic: sums of powers of $x$, highest power first
- **T**rigonometric
- **E**xponential

(d) just try something! Usually there are only two options for $u$ — if the first doesn’t work, start again with the other.

#### Examples

Find the following

(a)

(b) $\int \frac{\tan^{-1}x}{1+x^2}\,dx$

(c)

*Solutions*

(a) Let us go through Direct and Manipulation first. Firstly, this is not in the tables. In terms of a manipulation we could say:

.

With nothing left to do we should try a substitution. There are three reasons to try .

- its derivative is and we have a multiple of this ()
- is inside (and originally )
- by LIATE: no logs, no inverse trig, algebraic – yes. Pick the higher power of over .

Proceed with :

,

so that

.

Note that , which can be antidifferentiated directly:

.

The original had a $dx$ — antidifferentiate with respect to $x$ — so we should go back in terms of $x$ — and include the $+C$:

.

(b) This is not in the tables and the only apparent manipulation is writing

$$\frac{\tan^{-1}x}{1+x^2} = \tan^{-1}x\cdot\frac{1}{1+x^2}.$$

So we try a substitution. There are two good reasons to try $u=\tan^{-1}x$:

- its derivative is $\frac{1}{1+x^2}$, and we have a multiple of that.
- by LIATE: no logs, inverse trig: yes!

The third way doesn’t work great because there are two functions ‘inside’ another. In this case you would just try $u=\tan^{-1}x$ and $u=1+x^2$ and see what happens.

We know we should try $u=\tan^{-1}x$ but let us try $u=1+x^2$ to see how we know things have gone wrong. Proceeding we have

$$\frac{du}{dx} = 2x\ \Rightarrow\ du = 2x\,dx\ \Rightarrow\ dx = \frac{du}{2x},$$

so that

$$\int \frac{\tan^{-1}x}{1+x^2}\,dx = \int \frac{\tan^{-1}x}{u}\cdot\frac{du}{2x}.$$

This mix of $u$ and $x$ is BAD and suggests we won’t be able to find the antiderivative.

It might be possible to proceed though: perhaps we could go from $u=1+x^2$ to $x=\pm\sqrt{u-1}$ (we will choose $x=+\sqrt{u-1}$… dangerously?) to get

$$\int \frac{\tan^{-1}\left(\sqrt{u-1}\right)}{2u\sqrt{u-1}}\,du.$$

This is worse, suggesting — correctly — we should have tried $u=\tan^{-1}x$. We calculate:

$$\frac{du}{dx} = \frac{1}{1+x^2}\ \Rightarrow\ du = \frac{dx}{1+x^2},$$

so that

$$\int \frac{\tan^{-1}x}{1+x^2}\,dx = \int u\,du,$$

which is in the tables;

$$\int u\,du = \frac{u^2}{2}+C = \frac{\left(\tan^{-1}x\right)^2}{2}+C.$$

(c) This is not in the tables. There are two possible manipulations. One involves writing as a sum and then multiplying this sum by … and then writing those sums as products. It is a good lesson to go through this as it exhibits the principle that for antidifferentiation, sums are easy while products are hard — so try and write things as sums.

But we won’t pursue that here. Instead we will rewrite using what the notation means (warning though: notation like $\sin^{-1}x$ means not $\frac{1}{\sin x}$ but the inverse of the function). So we have

,

and as this is not in the tables and we are not pursuing further manipulations, we will try a $u$-substitution. There are two reasons to pick the $u$ (note LIATE fails):

- has derivative , which also appears.
- appears inside another function; .

Off we go

,

so that

.

This is in the tables:

.

## 4. Parts

Consider the following

$$\int xe^x\,dx.$$

It isn’t in the tables, and there are no obvious manipulations. The substitution $u=x$ yields

$$\int ue^u\,du,$$

that is, the same thing (and why in general you won’t do the $u$-substitution $u=x$). The substitution $u=e^x$ does actually give something that looks useful. It gives

$$\int x\,du,$$

but $x=\ln u$, and so we have via the back-substitution

$$\int \ln u\,du,$$

which is simpler but isn’t in the tables…

We need a change of tack. Note $xe^x$ is a product. Derivatives that are products come from products. For example, $xe^x$ occurs when you differentiate $xe^x$ using the product rule:

$$\frac{d}{dx}(x\cdot e^x) = 1\cdot e^x + x\cdot e^x = e^x + xe^x.$$

Of course therefore

$$\int \left(e^x + xe^x\right)dx = xe^x,$$

that is, we can run the product rule backwards to generate a new way of antidifferentiating. This is (integration by) Parts. The ‘parts’ basically means that when running the product rule backwards:

$$\int \left(u'v + uv'\right)dx = uv,$$

there are two antidifferentiations, $\int u'v\,dx$ and $\int uv'\,dx$; you do one and then the other — you break it into two parts.

What we basically do is subtract $\int u'v\,dx$ from both sides to get:

$$\int uv'\,dx = uv - \int u'v\,dx,$$

commonly written

$$\int u\,dv = uv - \int v\,du,$$

where $du = u'\,dx$ and $dv = v'\,dx$.
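The parts formula can be verified on a concrete case by differentiating the claimed answer. A sketch, assuming the example $\int xe^x\,dx = xe^x - e^x + C$ (which is what parts with $u=x$, $dv=e^x\,dx$ produces):

```python
import math

def deriv(F, x, h=1e-6):
    # central finite-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda x: x * math.exp(x)                # the product to antidifferentiate
F = lambda x: x * math.exp(x) - math.exp(x)  # candidate from integration by parts

max_err = max(abs(deriv(F, x) - f(x)) for x in (0.0, 1.0, 2.0))
```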

So given something you want to antidifferentiate

$$\int f(x)\,dx,$$

you let $u$ be something and the rest be $dv$:

$$\int f(x)\,dx = \int u\,dv.$$

Now look at the formula again:

$$\int u\,dv = uv - \int v\,du,$$

you have $dv$; how do you get $v$ from $dv$? The answer is you integrate… wait, why didn’t I say antidifferentiate?

OK, let us slow down. What I want to say is that

$$v = \int dv.$$

There are two good ways to see why this is the case, and one involves integration.

- $\int dv = v$, because differentiating $v$ with respect to $v$ gives $1$.
- $\int dv$. The symbol $\int$ is an elongated ‘S’ standing for ‘sum’. The $dv$ means a very small bit of $v$. So really $\int dv$ is a sum of very small bits of $v$. And what happens when you add up all the small bits of $v$? Well you get $v$, so $\int dv = v$!

So, you’ll have both $u$ and $v$ and now you are looking at $\int v\,du$. Well, you can get $du$ by differentiating $u$.

Wait! How do we know how to pick the $u$? A good way is LIATE: that is, pick the $u$ as the first thing you can see in the following list:

- **L**ogs
- **I**nverse Trig
- **A**lgebraic: sums of powers of $x$, highest power first
- **T**rigonometric
- **E**xponential

LIATE doesn’t work really well for $u$-substitution but generally works well for Parts. The reason it works well for Parts is because LIATE is in reverse order of ease of antidifferentiation. Recall you will

- differentiate $u$, and
- antidifferentiate $dv$.

In a certain sense (in another, what I am about to say is incorrect), it is easier to differentiate, in that you know how to differentiate commonly appearing functions, so you want to make the antidifferentiating of $dv$ as easy as possible. The things at the top of LIATE are the most difficult to antidifferentiate, so we pick $u$ near the top of LIATE: then $dv$ will be lower down and hence easier to antidifferentiate.

#### Examples

(a) $\int xe^x\,dx$

(b) $\int \ln x\,dx$

##### Solutions

(a) We pick $u=x$ by LIATE (no logs, no inverse trig, algebraic — yes, the ‘multiplying’ $x$) and everything else — $e^x\,dx$ — is $dv$.

We differentiate $u$:

$$u = x\ \Rightarrow\ du = dx;$$

we antidifferentiate $dv$:

$$dv = e^x\,dx\ \Rightarrow\ v = \int e^x\,dx = e^x,$$

that antiderivative was in the tables.

Now we use

$$\int u\,dv = uv - \int v\,du$$

to give

$$\int xe^x\,dx = xe^x - \int e^x\,dx,$$

and of course $\int e^x\,dx = e^x$ is in the tables so we have

$$\int xe^x\,dx = xe^x - e^x + C.$$

(b) We might expect this to be in the tables but it is not. Of course, if it were in the tables, the question would arise: how did they find that antiderivative? Outside trial and error, they used Parts. Let $u=\ln x$ (logs — yes), and so the rest of $\ln x\,dx$ — namely $dx$ — is $dv$.

We differentiate $u$:

$$u = \ln x\ \Rightarrow\ du = \frac{1}{x}\,dx;$$

antidifferentiating $dv$:

$$dv = dx\ \Rightarrow\ v = x.$$

Now we use

$$\int u\,dv = uv - \int v\,du$$

to give

$$\int \ln x\,dx = x\ln x - \int x\cdot\frac{1}{x}\,dx = x\ln x - x + C.$$
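As before, the answer can be checked by differentiation. A sketch assuming the result $\int \ln x\,dx = x\ln x - x + C$:

```python
import math

def deriv(F, x, h=1e-6):
    # central finite-difference approximation to F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

f = lambda x: math.log(x)          # natural log
F = lambda x: x * math.log(x) - x  # candidate antiderivative from parts

max_err = max(abs(deriv(F, x) - f(x)) for x in (0.5, 1.0, math.e))
```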

##### Remark

With $\int xe^x\,dx$ and a $u$-substitution ($u=e^x$) we had

$$\int xe^x\,dx = \int \ln u\,du.$$

Using this example, we have

$$\int \ln u\,du = u\ln u - u\,…$$

but $\exp$ and $\ln$ are inverse, and so $u\ln u - u = e^x\cdot\ln(e^x) - e^x = xe^x - e^x$, and we recover

$$\int xe^x\,dx = xe^x - e^x.$$

Just tying up that little knot.

## Beyond Parts?

There are many, many more techniques. There is a big world out there… click here to have a look.
