Summer 2011: Question 3

Suppose that n is a positive integer. Pick n intervals I_1,I_2,\dots,I_n on the real number line. Assume that these intervals are pairwise disjoint. Pick n real numbers a_1,a_2,\dots,a_n.

  1. Using ideas of linear algebra, prove that there is a polynomial function p(x) of degree at most n-1, with real number coefficients, so that  \int_{I_j}p(x)\,dx=a_j for each j=1,2,\dots,n.
  2. How many such polynomial functions are there? Justify your answer.
[HINT: Make a linear map. If the integral of a continuous function on an interval vanishes, then the function vanishes somewhere on the interval].

Solution

  1. [Vahid] Let p(x)=\sum_{k=0}^{n-1}e_kx^k and I_i=(c_i,d_i) such that \int_{c_i}^{d_i}p(x)\,dx=a_i. That is, for each i=1,2,\dots,n

\sum_{k=0}^{n-1}e_k\left(\frac{d_i^{k+1}}{k+1}-\frac{c_i^{k+1}}{k+1}\right)=a_i.

This corresponds to n equations in n unknowns:

e_{n-1}\left(\frac{d_1^n}{n}-\frac{c_1^n}{n}\right)+e_{n-2}\left(\frac{d_1^{n-1}}{n-1}-\frac{c_1^{n-1}}{n-1}\right)+\cdots+e_0(d_1-c_1)=a_1.

\vdots

e_{n-1}\left(\frac{d_n^n}{n}-\frac{c_n^n}{n}\right)+e_{n-2}\left(\frac{d_n^{n-1}}{n-1}-\frac{c_n^{n-1}}{n-1}\right)+\cdots+e_0(d_n-c_n)=a_n.

This is equivalent to the matrix equation MX=A, where X is the vector (e_{n-1},\,e_{n-2},\,\dots,\,e_0)^T, A is the vector (a_1,\,a_2,\,\dots,\,a_n)^T, and M is the matrix

[M]_{ij}=\left(\frac{d_i^{n+1-j}}{n+1-j}-\frac{c_i^{n+1-j}}{n+1-j}\right).

A solution to this matrix equation gives a polynomial which satisfies the given condition.
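The construction above can be checked numerically. Here is a small sketch (the intervals and target values below are illustrative choices, not from the problem): build M as defined, solve MX=A, and verify that the resulting polynomial has the prescribed integrals.

```python
import numpy as np

# Illustrative data (assumed, not from the problem): three disjoint intervals
# (c_i, d_i) and target integrals a_i.
n = 3
c = np.array([0.0, 2.0, 5.0])    # left endpoints c_i
d = np.array([1.0, 3.0, 6.0])    # right endpoints d_i
a = np.array([1.0, -2.0, 0.5])   # prescribed integrals a_i

# [M]_{ij} = (d_i^{n+1-j} - c_i^{n+1-j}) / (n+1-j) for i, j = 1, ..., n,
# so column j multiplies the coefficient e_{n-j}.
M = np.array([[(d[i]**(n + 1 - j) - c[i]**(n + 1 - j)) / (n + 1 - j)
               for j in range(1, n + 1)] for i in range(n)])

X = np.linalg.solve(M, a)        # coefficients (e_{n-1}, ..., e_0)

# Verify via the antiderivative P of p: int_{c_i}^{d_i} p = P(d_i) - P(c_i).
P = np.polyint(X)
for i in range(n):
    integral = np.polyval(P, d[i]) - np.polyval(P, c[i])
    assert abs(integral - a[i]) < 1e-9
```

Note that X is ordered highest degree first, matching the convention of np.polyval and np.polyint.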

  2. Exactly one. The matrix M is invertible, so X=M^{-1}A exists and is unique for every choice of a_1,\dots,a_n. To see this, suppose MX=\mathbf{0} for some coefficient vector X, i.e., \int_{I_i}p(x)\,dx=0 for each i. By the hint, p then vanishes at some point of each interval I_i (assuming each interval has positive length). Since the intervals are pairwise disjoint, this gives n distinct roots of p; but \deg p\leq n-1, so p must be the zero polynomial, i.e., X=\mathbf{0}. Hence M has trivial kernel and is therefore invertible. This completes the existence claim in part 1 and shows that there is exactly one such polynomial.

Summer 2010: Question 1

Suppose that T:V\rightarrow V is a linear map. Suppose that v_1,v_2,\dots,v_p are eigenvectors of T with eigenvalues \lambda_1,\lambda_2,\dots,\lambda_p, and that these eigenvalues are all distinct. Prove that the eigenvectors v_1,v_2,\dots,v_p are linearly independent.

Solution

We use induction on the number of eigenvectors. Let P(k) be the proposition that the eigenvectors v_1,v_2,\dots,v_k are linearly independent.

P(1) is true because the set \{v_1\} is linearly independent if v_1\neq 0 — and eigenvectors are non-zero by definition.

Assume P(k). That is, assume

\sum_{i=1}^ka_iv_i=\mathbf{0} \Rightarrow a_i=0 \text{ for } i=1,\dots,k.

Consider P(k+1). Suppose \sum_{i=1}^{k+1} a_iv_i=\mathbf{0}, and apply T-\lambda_{k+1}I to both sides:

(T-\lambda_{k+1}I)(\sum_{i=1}^{k+1}a_iv_i)=(T-\lambda_{k+1}I)\mathbf{0}

\Rightarrow \sum_{i=1}^{k+1}a_i(\lambda_i-\lambda_{k+1})v_i

=a_1(\lambda_1-\lambda_{k+1})v_1+\cdots+a_k(\lambda_k-\lambda_{k+1})v_k+\underbrace{a_{k+1}(\lambda_{k+1}-\lambda_{k+1})v_{k+1}}_{=\mathbf{0}}

=\mathbf{0}.

By the inductive hypothesis P(k), we have a_i(\lambda_i-\lambda_{k+1})=0 for each i=1,\dots,k. Since the eigenvalues are distinct, \lambda_i\neq\lambda_{k+1} for i\leq k, so a_i=0 for each such i. Therefore we are left with

\sum_{i=1}^{k+1} a_iv_i=a_{k+1}v_{k+1}=\mathbf{0}\Rightarrow a_{k+1}=0

as well, since the eigenvector v_{k+1} is nonzero. This establishes P(k+1).

Hence, by induction, the eigenvectors v_1,\dots,v_p are linearly independent \bullet
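As a numerical illustration of the theorem (not part of the proof): for a sample matrix with the distinct eigenvalues 2, 3, 5, the matrix whose columns are the computed eigenvectors has full rank, i.e., the eigenvectors are linearly independent.

```python
import numpy as np

# Sample upper-triangular matrix with distinct eigenvalues 2, 3, 5
# (an assumed example, chosen only for illustration).
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

eigvals, eigvecs = np.linalg.eig(T)      # columns of eigvecs are eigenvectors
print(np.linalg.matrix_rank(eigvecs))    # 3: the eigenvectors are independent
```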
