I will try to have these for you by tomorrow. Unfortunately, if I don’t have them done by tomorrow, it will be Wednesday.

Tuesday 20 March, 13:00 – 14:00 in B212.

We started looking at the inverse Laplace transform after looking at partial fractions.

In our three lectures we will dive into Section 3.4, in the hope that, if you so wish, you can complete (or almost complete) Assignment 2 over Easter. After Easter we should be able to return to two tutorials per week.

Assignment 2 will have a hand-in date of 17:00, 23 April: the Monday of Week 11. Assignment 2 is in the manual, p. 149. Once we get some way into the examples on p. 105, you should be able to make a start.

Please feel free to ask me questions about the exercises via email or even better on this webpage.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>

Takes place Wednesday 21 March, in Week 7.

The test will take place from 19:00 to 20:30, but most students should be able to complete the test in about an hour. It has about 35 marks’ worth of questions: five in all (one very short, and three shortened versions of longer questions).

Anything done in the first five weeks is examinable (see “Independent Learning” below), and it is recommended that you understand what is going on in the summaries on pp. 57-59.

The nine questions from p. 60 on are good revision, but not every possible question is listed there.

We started the class with one more example of Cramer’s Rule, and then started pushing into statistics, looking at everything up to and including standard deviation.

In Maple, we did Lab 3, which was really revision for the Linear Algebra Test.

The test is going to begin at 19:00 sharp and run until 20:30. Class will resume at 20:35 sharp. This seems a very short break but the test is designed so that it shouldn’t take much longer than an hour to complete, so almost everyone should have a solid enough break.

At 20:35 we will continue working on statistics by looking at frequency distributions.

This may or may not be a Maple night (it depends on how far we get in the previous week).

Any students who cannot make this class should email me and request that the class be recorded (I might not be able to record all of the class, but most of it).

We will return to class after Easter, on 11 April.

If you have missed a lab you have two options: either download Maple onto your own machine (instructions may be found here) or come into CIT at another time to use Maple.

Go through the missed lab on your own, doing *all* the exercises in Maple. Save the worksheet and email it to me.

Questions you can do include:

**After Week 5:** P. 44, Q. 1-3 (Q. 4-5 are more abstract). P. 47, Q. 1-3 (Q. 4 is more abstract). P. 56, Q. 1-3 (Q. 4 is more abstract). P. 69, Q. 9 is an important question. A version might be:

Use only determinants to determine whether the following homogeneous system of linear equations has non-zero solutions:

**After Week 6:** P. 74, Q. 1-4; P. 77, Q. 1-3

**After Week 4:** P. 41, Q. 1-4

**After Week 3:** P. 28, Q. 1-5; Q. 6-9 have answers, with Q. 7 a harder question. P. 34 exercises.

**After Week 2:** P. 18, Q. 2

**After Week 1:** P. 18, Q. 1, 3-6. Harder questions are 7 and 8. For those who do not yet have the manual, see here.

I am not suggesting you should do *all* of these. The module descriptor recommends that you do two hours of independent and directed learning every week, but of course this isn’t feasible for everyone.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>I would hope to have these with ye by the end of Week 8.


In the first lecture you sat your first written assignment.

In the second lecture we looked more at boundary value problems (in particular the Shooting Method and Goal Seek). We started talking about Finite Differences.

In VBA we looked at Runge-Kutta methods.

We will continue our look at Finite Differences for differential equations.

In VBA we will look at the shooting method and finite differences for boundary value problems.

The following is the proposed assessment schedule:

- **Week 6**, 20%: First VBA Assessment (more info below)
- **Week 7**, 20%: In-Class Written Test (more info in Week 5)
- **Week 11**, 20%: Second VBA Assessment (more info in Week 9)
- **Week 12**, 40%: Written Assessment(s) (more info in Week 10)

Study should consist of

- doing exercises from the notes
- completing VBA exercises

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>Will now take place Wednesday 21 March, in Week 7.

The test will take place from 19:00 to 20:30, but most students should be able to complete the test in about an hour. It has about 35 marks’ worth of questions: five in all (one very short, and three shortened versions of longer questions).

Anything done in the first five weeks is examinable (see “Independent Learning” below), and it is recommended that you understand what is going on in the summaries on pp. 57-59.

The nine questions from p. 60 on are good revision, but not every possible question is listed there. In next week’s Maple you will get a chance to revise these questions.

We saw how linear systems can be written as matrix equations, and (sometimes) solved using matrix inverses. Then we spoke about determinants, and their use in figuring out if homogeneous linear systems have non-zero solutions. Finally we looked at Cramer’s Rule.
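The 2×2 case of Cramer’s Rule is small enough to sketch in a few lines of code. A minimal Python sketch follows; the system and its coefficients are invented for illustration, not taken from the manual. Note that the same determinant check answers the homogeneous-system question: non-zero solutions exist exactly when the determinant is zero.

```python
# Cramer's Rule for a 2x2 system:
#   a11*x + a12*y = b1
#   a21*x + a22*y = b2
# Hypothetical coefficients below, purely for illustration.

def det2(a11, a12, a21, a22):
    """Determinant of a 2x2 matrix."""
    return a11 * a22 - a12 * a21

def cramer2(a11, a12, b1, a21, a22, b2):
    """Solve the system via Cramer's Rule; fails if the determinant is zero."""
    d = det2(a11, a12, a21, a22)
    if d == 0:
        raise ValueError("determinant is zero: no unique solution")
    x = det2(b1, a12, b2, a22) / d   # first column replaced by (b1, b2)
    y = det2(a11, b1, a21, b2) / d   # second column replaced by (b1, b2)
    return x, y

# 2x + y = 5, x + 3y = 10 has solution x = 1, y = 3.
print(cramer2(2, 1, 5, 1, 3, 10))  # → (1.0, 3.0)
```

The division by the determinant is why a unique solution requires the determinant to be non-zero.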

We will start the class with one more example of Cramer’s Rule, and then start pushing into statistics.

In Maple, we will do Lab 3, which is really revision for the Linear Algebra Test.

The test is going to begin at 19:00 sharp and run until 20:30. Class will resume at 20:35 sharp. This seems a very short break but the test is designed so that it shouldn’t take much longer than an hour to complete, so almost everyone should have a solid enough break.

At 20:35 we will continue working on statistics.

This may or may not be a Maple night (it depends on how far we get in the previous week).

It appears that at most one student will miss the class, which isn’t too bad. So now we go back to the poll to pick between the two nights.

If you have missed a lab you have two options: either download Maple onto your own machine (instructions may be found here) or come into CIT at another time to use Maple.

Go through the missed lab on your own, doing *all* the exercises in Maple. Save the worksheet and email it to me.

Questions you can do include:

**After Week 5:** P. 44, Q. 1-3 (Q. 4-5 are more abstract). P. 47, Q. 1-3 (Q. 4 is more abstract). P. 56, Q. 1-3 (Q. 4 is more abstract). P. 69, Q. 9 is an important question. A version might be:

Use only determinants to determine whether the following homogeneous system of linear equations has non-zero solutions:

**After Week 4:** P. 41, Q. 1-4

**After Week 3:** P. 28, Q. 1-5; Q. 6-9 have answers, with Q. 7 a harder question. P. 34 exercises.

**After Week 2:** P. 18, Q. 2

**After Week 1:** P. 18, Q. 1, 3-6. Harder questions are 7 and 8. For those who do not yet have the manual, see here.

I am not suggesting you should do *all* of these. The module descriptor recommends that you do two hours of independent and directed learning every week, but of course this isn’t feasible for everyone.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>We started looking at “The Engineer’s Transform” — the Laplace Transform. We looked at the first shift theorem, and how the Laplace Transform interacts with differentiation. We started looking at partial fractions.

We will continue looking at partial fractions and the inverse Laplace Transform.

Assignment 2 will have a hand-in date of 17:00, 23 April: the Monday of Week 11. Assignment 2 is in the manual, p. 149. Once we get some way into the examples on p. 105, you should be able to make a start.

Corrections starting today, Wednesday.

Please feel free to ask me questions about the exercises via email or even better on this webpage.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>

Written Assessment 1 takes place Tuesday 13 March at 09:00 in the usual lecture venue.

Here is a copy of last year’s assessment. This should give you an idea of the length and format but not what questions are coming up. There are far more things I could examine.

Roughly, everything up to p. 57 is examinable. More specifically:

Examples 1 & 2 on p. 17; Q. 1-2 on p.19

p.22, Example, Q. 1-4

p.29, Examples 1-4; p.38, Q. 1-6, 8-9; p.48, Q. 1, 5(a)

p.35, Example; p. 36, Examples 1-2; p.38, Q. 6-7, 10-14; p.48, Q. 3

p.43, Examples 1-2; p.48, Q. 2, 4, 5(b)

p.52, Example; p.54, Example; p. 56, Q. 1-14 (some repetition here).

VBA Assessment 1 will take place in Week 6 (6 & 9 March) in your usual lab time. You will not be allowed any resources other than the library of code (p. 124) and formulae (p. 123, parts 1 and 2) at the end of the assessment (both are provided on the assessment paper). More information in last week’s weekly summary.

In lecture we took a very quick look at Runge-Kutta Methods. We also discussed boundary value problems and the Shooting Method.

In the lab time you have/had your VBA assessment.

In the first lecture you will sit your first written assignment. In the second lecture we will look more at boundary value problems. In VBA we will look at Runge-Kutta methods.

The following is the proposed assessment schedule:

- **Week 6**, 20%: First VBA Assessment (more info below)
- **Week 7**, 20%: In-Class Written Test (more info in Week 5)
- **Week 11**, 20%: Second VBA Assessment (more info in Week 9)
- **Week 12**, 40%: Written Assessment(s) (more info in Week 10)

Study should consist of

- doing exercises from the notes
- completing VBA exercises

*“Straight-Line-Graph-Through-The-Origin”*

The words of Mr Michael Twomey, physics teacher, in Coláiste an Spioraid Naoimh, I can still hear them.

There were two main reasons to produce this *straight-line-graph-through-the-origin:*

- to measure some quantity (e.g. acceleration due to gravity, speed of sound, etc.)
- to demonstrate some law of nature (e.g. Newton’s Second Law, Ohm’s Law, etc.)

We were correct to draw this *straight-line-graph-through-the-origin* for measurement, but not always, perhaps, in my opinion, for the demonstration of laws of nature.

The purpose of this piece is to explore this in detail.

Two variables $x$ and $y$ are in direct proportion when there is some (real number) constant $k$ such that $y = kx$.

In this case we say $y$ is *directly proportional to* $x$, written $y \propto x$, and we call $k$ the *constant of proportionality*. If we assume for the moment that both $x$ and $y$ can be equal to zero, then when $x = 0$, $y$ is certainly also equal to zero:

$y = k \cdot 0 = 0$.

Now imagine a plane, with $x$-values on the horizontal axis, and $y$-values on the vertical axis, and the values of $y$ plotted against those of $x$. We certainly have that $(0, 0)$ (aka the *origin*) is on this graph. Now suppose that $(x_1, y_1)$ is another such point.

The tangent of this angle $\theta$, opposite divided by adjacent, is given by:

$\tan\theta = \frac{y_1}{x_1}$,

however $y_1 = k x_1$ by assumption, so that:

$\tan\theta = \frac{k x_1}{x_1} = k$.

Similarly, any other point $(x_2, y_2)$ has $\tan\theta = k$. This means the line segment connecting $(0, 0)$ to any point on the graph, any $(x_i, y_i)$, makes the same angle $\theta$ with the $x$-axis: they all lie on a *straight-line*, and as we noted above, *through-the-origin*:

Consider again the graph with the right-angled triangle. The slope of a line is defined as the rise-over-the-run: aka the tangent of the angle formed with the $x$-axis. Therefore, we have that the slope of this line is given by $k$, the constant of proportionality.

If we have two quantities that are in direct proportion… we need an example:

When light travels from a vacuum into a medium, it bends:

[image robbed from Geometrics.com and edited]

It turns out, using Maxwell’s Equations of Electromagnetism, that $\sin i$ and $\sin r$ (the sines of the angles of incidence and refraction) are directly proportional:

$\sin i \propto \sin r$.

This constant of proportionality, $k$, more usually denoted by $n$, is called the *Refractive Index* of the material.

For the moment, make the totally unrealistic assumption that we can measure $i$ and $r$, and calculate $\sin i$ and $\sin r$, without error. Then we can hit the medium with light, calculate the specific $\sin i$ and $\sin r$, and calculate the refractive index:

$n = \dfrac{\sin i}{\sin r}$.
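Under this (totally unrealistic) no-error assumption, a single measurement determines $n$. A minimal Python sketch; the angle values below are invented for illustration:

```python
import math

def refractive_index(i_deg, r_deg):
    """Refractive index n = sin(i) / sin(r), with angles given in degrees."""
    return math.sin(math.radians(i_deg)) / math.sin(math.radians(r_deg))

# Hypothetical single error-free measurement: i = 45 degrees, r = 28 degrees.
n = refractive_index(45.0, 28.0)
print(round(n, 3))
```

Of course, as the next paragraph says, real measurements of $i$ and $r$ always carry errors, so one measurement is never enough.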

This, however, is completely unrealistic. We cannot measure without error (nor calculate without error).

Recall again that $\sin i \propto \sin r$: a graph of $\sin i$ vs $\sin r$ is a *straight-line-graph-through-the-origin:*

This line represents the true, exact relationship between $\sin r$ and $\sin i$, and if we had its equation, $\sin i = n \sin r$, we would know the refractive index. Let us call this the *line of true behaviour.*

In the real world, any measurement of $i$ and $r$, and subsequent calculation of $\sin i$ and $\sin r$, comes with an error.

For example, suppose, for a number of values of $i$, say $i_1, \dots, i_7$, we measure and record the corresponding $r$, say $r_1, \dots, r_7$. Suppose we calculate the corresponding sines and then plot the coordinates

$(\sin r_j, \sin i_j)$:

Now these errors in measurement and calculation mean that these points do not lie on a line. Teaching MATH6000 here in CIT, I would see some students wanting to draw a line between the first and last data points:

We can get the equation of this line (it’s not too difficult), and the slope of this line approximates the refractive index… but why did we bother making seven measurements when we only used two to draw the line?

When we do this we are completely neglecting the following principle:

Data is only correct *on average*. Individual data points are always wrong.

The second statement should be self-evident: individual data points contain errors and are certainly wrong. When we are dealing not with *discrete* data but with *continuous* data, this statement (*always* wrong) can be quantified (in the sense of *almost always*).

The first statement is a little more difficult; it relies on the Law of Large Numbers. But for the purposes of this piece, just suppose that the errors are just as likely to be positive (under-estimate) as negative (over-estimate); then we might hope that the errors cancel each other out, and we might expect, furthermore, that the more data we have, the more likely it is that these errors do indeed cancel each other out, and the data is correct *on average*.
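This cancellation can be sketched with a quick simulation (the symmetric, uniform errors here are made up for illustration; they are not real measurement data): the average of many symmetric errors shrinks towards zero as the sample grows.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def mean_error(n):
    """Average of n symmetric errors, each uniform on [-1, 1]."""
    return sum(random.uniform(-1.0, 1.0) for _ in range(n)) / n

# With more data, the errors cancel better on average:
# the printed averages shrink towards zero.
for n in (10, 1_000, 100_000):
    print(n, round(mean_error(n), 4))
```

This is only a heuristic picture of the Law of Large Numbers, not a proof.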

So using just two data points is not a good idea: we should try and use all seven. You can at this point come up with various strategies of how to draw this line, this *line of best fit*. Maybe you want to get just as many points above the line as below for example.

Recall that individual data points always contain errors. So, for example, we try to measure $i$, and we try to measure the corresponding $r$: these have measurement errors. Then we calculate $\sin i$ and $\sin r$, and these measurement errors are propagated into errors here.

It turns out, if we make assumptions about these errors (for the purposes of this talk, errors are just as likely to be positive as negative), that if we have many, many data points, they scatter around the true relationship between $\sin r$ and $\sin i$, characterised by the line below, in a very particular way:

The errors have the property that the *sum of the vertical deviations – squared – is as small as possible*. We explain with a picture what vertical deviation means:

*The vertical deviations, briefly just the deviations, are as shown.*

Now we flip this on its head. If we have some data, we know that the line of true behaviour is such that, if we had lots of data, the sum of the squared deviations would be as small as possible.

Therefore, if we find the line that has the property that the sum of squared deviations for our smaller sample of data is a minimum, then this, *line of best fit, *approximates the *line of true behaviour.*

*line of best fit* $\approx$ *line of true behaviour*

The more data we have (subject to various assumptions), the better this approximation.

Now how do we find this line of best fit? An example shows the way:

The quest is to find the line such that the sum of the squared deviations is as small as possible. As we noted above, if $y \propto x$ then the line of best fit is of the form $y = mx$ for some constant $m$: and different values of $m$ give different errors. For example, below we see graphs for various values of $m$ together with the data:

How we find this line of best fit is to consider $m$ as a variable, find a formula for

$S(m)$ = sum of squared deviations = $\sum_{i=1}^{10} (y_i - m x_i)^2$,

and then minimise the value of $S(m)$: in other words, find the line that minimises $S$. Here we calculate, for an arbitrary line (i.e. value of $m$), the ten deviations. The deviation between the data point $(x_i, y_i)$ and the corresponding point on the line, $(x_i, m x_i)$, is given by the absolute difference of the two:

$d_i = |y_i - m x_i|$.

Now note that $|a|^2 = a^2$, and so

$d_i^2 = (y_i - m x_i)^2$.

A little time later we have, in this case, an explicit quadratic in $m$, with coefficients computed from the data.

How to minimise this? If you know any calculus this is an easy problem; however, a rudimentary knowledge of quadratics is enough to help us find the minimum. All quadratics can be written in the form:

$q(m) = a(m - p)^2 + c$.

When $a > 0$, this is minimised when $m = p$. Therefore, $S(m)$ is minimised at the corresponding value of $m$, which for our data is a fraction.

I wouldn’t ordinarily be an advocate for rounding fractions like this; however, the important approximation

*line of best fit* $\approx$ *line of true behaviour*

means that we cannot be so prescriptive. We finish here with the rounded slope as our estimate.

More generally, where we have data $\{(x_i, y_i)\}_{i=1}^{N}$ and we want to find the line (through the origin) of best fit, we have (via $S(m) = \sum_{i=1}^{N} (y_i - m x_i)^2$)

$S(m) = \left(\sum_i x_i^2\right) m^2 - 2\left(\sum_i x_i y_i\right) m + \sum_i y_i^2$,

so that, as before, the quadratic is at a minimum at

$m = \dfrac{\sum_i x_i y_i}{\sum_i x_i^2}$.
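The slope of the least-squares line through the origin, $m = \sum_i x_i y_i / \sum_i x_i^2$, is a one-liner to implement. A sketch in Python; the data below are invented and lie exactly on $y = 2x$, so the fitted slope should come out as exactly 2:

```python
def best_fit_through_origin(xs, ys):
    """Least-squares slope m = sum(x*y) / sum(x*x) for the line y = m*x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Invented noise-free data on y = 2x: the fitted slope is exactly 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(best_fit_through_origin(xs, ys))  # → 2.0
```

With real (noisy) data the fitted slope will of course only approximate the true constant of proportionality, and the approximation improves as more data is added.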

Maxwell’s Equations imply Snell’s Law but Snell’s Law was proposed almost 1000 years before Maxwell!

Suppose you hear about Snell’s Law for the first time. How could someone convince you of it (without running the gamut of Maxwell’s Equations)?

Well, they’d have to do an experiment of course! They would have to allow you to measure different values of $i$ and $r$, calculate the corresponding values of $\sin i$ and $\sin r$, and plot them. For it to be *through-the-origin*, they would have to convince you that $\sin i = 0$ when $\sin r = 0$.

Then you would find the line of best fit: *straight-line-graph-through-the-origin.*

You would see the data lining up well, providing qualitative evidence for $\sin i \propto \sin r$, and hence Snell’s Law.

Then they might show you how to calculate how well the data fits the line of best fit, perhaps by calculating the correlation coefficient.
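The correlation coefficient mentioned here can be computed straight from its textbook definition. A minimal Python sketch (the sample data are invented; perfectly proportional data gives $r = 1$):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length datasets."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented data lying exactly on y = 2x: correlation is perfect.
print(pearson_r([1, 2, 3], [2, 4, 6]))  # → 1.0
```

A value of $r$ close to 1 (or -1) indicates the data fits a line well; it does not by itself show the line passes through the origin.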

Things get slightly more complicated if we have a *straight-line-graph* that doesn’t go through the origin, or a different curve than that of a line. We are, however, able to fit data to a family of curves: curves that minimise the sum of the squared deviations. This is Linear Least Squares, where partial differentiation can help find the *curve-of-best-fit* that approximates the *curve-of-true-behaviour.*

Things get more and more complicated after this but we can leave it there now.

]]>

Due to the weather, Assignment 1 now has a hand-in time and date of 17:30 Monday 5 March (Week 6).

Assignment 2 will have a hand-in date of 17:00 23 April: the Monday of Week 11. Assignment 2 is in the manual.

We finished our study of the method of undetermined coefficients.

We will start looking at “The Engineer’s Transform” — the Laplace Transform.

Please feel free to ask me questions about the exercises via email or even better on this webpage.

]]>

VBA Assessment 1 will take place in Week 6 (6 & 9 March) in your usual lab time. You will not be allowed any resources other than the library of code (p. 124) and formulae (p. 123, parts 1 and 2) at the end of the assessment (both are provided on the assessment paper). More information in last week’s weekly summary.

Written Assessment 1 takes place Tuesday 13 March at 09:00 in the usual lecture venue.

Here is a copy of last year’s assessment. This should give you an idea of the length and format but not what questions are coming up. There are far more things I could examine.

Roughly, everything up to p. 57 is examinable. More specifically:

Examples 1 & 2 on p. 17; Q. 1-2 on p.19

p.22, Example, Q. 1-4

p.29, Examples 1-4; p.38, Q. 1-6, 8-9; p.48, Q. 1, 5(a)

p.35, Example; p. 36, Examples 1-2; p.38, Q. 6-7, 10-14; p.48, Q. 3

p.43, Examples 1-2; p.48, Q. 2, 4, 5(b)

p.52, Example; p.54, Example; p. 56, Q. 1-14 (some repetition here).

We looked at higher-order initial value problems: we covered the theory in class and looked at them in VBA in Lab 4.

In lecture we will look at Runge-Kutta Methods while in the lab time you will have your VBA assessment.

In the first lecture you will sit your first written assignment. In the second lecture we will look at boundary value problems. In VBA we will look at Runge-Kutta methods.

The following is the proposed assessment schedule:

- **Week 6**, 20%: First VBA Assessment (more info below)
- **Week 7**, 20%: In-Class Written Test (more info in Week 5)
- **Week 11**, 20%: Second VBA Assessment (more info in Week 9)
- **Week 12**, 40%: Written Assessment(s) (more info in Week 10)

Study should consist of

- doing exercises from the notes
- completing VBA exercises

We worked with matrix inverses, seeing how the Gauss-Jordan algorithm can be used to calculate the inverse of a matrix. We solved a matrix equation.

Here find a corrected Example 2 from p. 39. In class, I made a slip in the third frame. The row operations are the same.

The final answer is therefore as given in the corrected example.

We also had our second Maple lab.

We will see how linear systems can be written as matrix equations, and solved using matrix inverses. Then we will talk about determinants, and perhaps push towards the end of Chapter 1.

Will take place Wednesday 14 March, in Week 7.

If you have missed the first lab you have two options: either download Maple onto your own machine (instructions may be found here) or come into CIT at another time to use Maple.

Go through the missed lab on your own, doing *all* the exercises in Maple. Save the worksheet and email it to me.

Questions you can do include:

**After Week 4:** P. 41, Q. 1-4

**After Week 3:** P. 28, Q. 1-5; Q. 6-9 have answers, with Q. 7 a harder question. P. 34 exercises.

**After Week 2:** P. 18, Q. 2

**After Week 1:** P. 18, Q. 1, 3-6. Harder questions are 7 and 8. For those who do not yet have the manual, see here.

I am not suggesting you should do *all* of these. The module descriptor recommends that you do two hours of independent and directed learning every week, but of course this isn’t feasible for everyone.

Please see the Student Resources tab on the top of this page for information on the Academic Learning Centre, etc.

]]>