week 1 | 8-25 | 8-27 | 8-29 |
week 2 | 9-1 | 9-3 | 9-5 |
week 3 | 9-8 | 9-10 | 9-12 |
week 4 | 9-15 | 9-17 | 9-19 |
week 5 | 9-22 | 9-24 | 9-26 |
week 6 | 9-29 | 10-1 | 10-3 |
week 7 | 10-6 | 10-8 | 10-10 |
week 8 | 10-13 | 10-15 | 10-17 Exam |
week 9 | 10-20 | 10-22 | 10-24 |
week 10 | 10-27 | 10-29 | 10-31 |
week 11 | 11-3 | 11-5 | 11-7 |
week 12 | 11-10 | 11-12 | 11-14 |
week 13 | 11-17 | 11-19 | 11-21 |
week 14 (No Classes) | 11-24 | 11-26 | 11-28 |
week 15 | 12-1 | 12-3 | 12-5 |
week 16 | 12-8 | 12-10 | 12-12 |
The geometry of complex arithmetic:
If z = a+bi = ||z||(cos(t) +i sin(t)) and w = c+di = ||w||(cos(s) +i sin(s)) then
z+w = (a+c)+(b+d)i, which corresponds geometrically to the "vector" sum of z and w in the plane, and
zw = ||z||(cos(t) + i sin(t)) ||w||(cos(s) + i sin(s))
   = ||z|| ||w|| (cos(t) + i sin(t))(cos(s) + i sin(s))
   = ||z|| ||w|| (cos(t)cos(s) - sin(t)sin(s) + (sin(t)cos(s) + sin(s)cos(t)) i)
   = ||z|| ||w|| (cos(t+s) + i sin(t+s))
So the magnitude of the product is the product of the magnitudes of z and w, and the angle of the product is the sum of their angles.
Notation: cos(t) + i sin(t) is sometimes written as cis(t).
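A quick numerical check of this (a minimal sketch in Python; the particular z and w below are arbitrary choices, not from the notes):

```python
import cmath

z, w = 3 + 4j, 1 - 2j
r_z, t = cmath.polar(z)            # ||z|| and its angle t
r_w, s = cmath.polar(w)            # ||w|| and its angle s
r_p, angle_p = cmath.polar(z * w)

print(abs(r_p - r_z * r_w))                                    # ~0: magnitudes multiply
print(abs(cmath.exp(1j * angle_p) - cmath.exp(1j * (t + s))))  # ~0: angles add (mod 2 pi)
```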
Note: If we consider the series for e^x = 1 + x + x^2/2! + x^3/3! + ...
then e^(ix) = 1 + ix + (ix)^2/2! + (ix)^3/3! + ... = 1 + ix - x^2/2! - ix^3/3! + ... = cos(x) + i sin(x).
Thus e^(iπ) = cos(π) + i sin(π) = -1. Thus ln(-1) = πi.
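These identities can be spot-checked numerically (a sketch using Python's cmath module; rounding keeps the answers from being exact):

```python
import cmath, math

print(cmath.exp(1j * math.pi))   # approximately -1 (a tiny imaginary part remains from rounding)
print(cmath.log(-1))             # approximately pi*i, printed as 3.14159...j
```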
Matrices with complex number entries.
If r and s are complex numbers in the matrix A (think, for instance, of the 2 by 2 diagonal matrix with r and s on the diagonal), then as n gets large:
if ||r|| < 1 and ||s|| < 1, the powers of A will get close to the zero matrix; if r = s = 1, the powers of A will always be A; and otherwise the powers of A will diverge.
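A minimal numerical sketch of the first claim, assuming (as the simplest case, my choice) that A is the 2 by 2 diagonal matrix with r and s on the diagonal and ||r||, ||s|| < 1:

```python
import numpy as np

r, s = 0.5 + 0.3j, -0.2 + 0.6j          # both have magnitude less than 1
A = np.diag([r, s])

A_n = np.linalg.matrix_power(A, 50)     # A raised to a large power n
print(np.max(np.abs(A_n)))              # very close to 0: the powers approach the zero matrix
```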
Polynomials with complex coefficients.
Because multiplication and addition make sense for complex numbers,
we can consider polynomials with coefficients that are complex numbers
and use a complex number for the variable, making a complex polynomial
a function from the complex numbers to the complex numbers.
This can be visualized using one plane for the domain of the polynomial
and a second plane for the co-domain, target, or range of the polynomial.
The Fundamental Theorem of Algebra: If f is a nonconstant polynomial with complex number coefficients, then there is at least one complex number z* where f(z*) = 0.
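The theorem is illustrated (not proved) by a numerical root finder, which accepts complex coefficients; the polynomial below is an arbitrary example:

```python
import numpy as np

# f(z) = z^3 + (2 - i) z + 5i, a nonconstant polynomial with complex coefficients
coeffs = [1, 0, 2 - 1j, 5j]
roots = np.roots(coeffs)
print(roots)                            # three complex numbers z*
print(np.polyval(coeffs, roots[0]))     # approximately 0, so f(z*) = 0
```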
How are these questions related to Motivation Question I?
Do examples: F[X] = { f in F∞ : f(n) = 0 for all but a finite number of n } < F∞.
(Internal) Sums, Intersections, and Direct Sums of Subspaces
Suppose U1, U2, ... , Un are all subspaces of V.
Definition: U1+ U2+ ... + Un = {v in V where v = u1+ u2+ ... + un for uk in Uk, k = 1,2,...,n}, called the internal sum of the subspaces.
Facts: (i) U1+ U2+ ... + Un < V.
(ii) Uk < U1+ U2+ ... + Un for each k, k= 1,2,...,n.
(iii) If W<V and Uk < W for each k, k= 1,2,...,n, then U1+ U2+ ... + Un <W.
So ...
U1+ U2+ ... + Un is the smallest subspace of V that contains Uk for each k, k = 1,2,...,n.
Examples: U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 + U2 = R3.
Let Uk = {f in P(F): f(x) = akx^k where ak is in F}. Then U0+ U1+ U2+ ... + Un = {f : f(x) = a0 + a1x + a2x^2 + ... + anx^n where a0, a1, a2, ..., an are in F}.
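The first example can be checked numerically: pick a basis for each plane and verify that the four vectors together span R3, i.e. have rank 3 (a sketch; the basis vectors were chosen by hand to satisfy each plane's equation):

```python
import numpy as np

U1_basis = np.array([[1, -1, 0], [2, 0, -1]])   # two solutions of x + y + 2z = 0
U2_basis = np.array([[1, -3, 0], [0, 1, 1]])    # two solutions of 3x + y - z = 0

stacked = np.vstack([U1_basis, U2_basis])
print(np.linalg.matrix_rank(stacked))           # 3, so U1 + U2 = R3
```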
Definition: U1^ U2^ ... ^ Un = {v in V where v is in Uk for all k = 1,2,...,n}, called the intersection of the subspaces.
Facts: (i) U1^ U2^ ... ^ Un < V.
(ii) U1^U2^ ... ^ Un < Uk for each k, k= 1,2,...,n.
(iii) If W<V and W < Uk for each k, k= 1,2,...,n, then W<U1^ U2^ ... ^ Un .
So ...
U1^ U2^ ... ^ Un is the largest subspace of V that is contained in Uk for each k, k= 1,2,...,n.
Examples: U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 ^ U2 = {(x,y,z): x+y+2z=0 and 3x+y-z=0} = ...
Let Uk = {f in P(F): f(x) = akx^k where ak is in F}. Then Uj ^ Uk = {0} for j not equal to k.
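For the plane example above, U1 ^ U2 is exactly the null space of the 2 by 3 coefficient matrix, so it can be computed directly (a sketch using scipy; the single output column spans the line of intersection):

```python
import numpy as np
from scipy.linalg import null_space

# Rows hold the coefficients of x + y + 2z = 0 and 3x + y - z = 0
A = np.array([[1.0, 1.0, 2.0],
              [3.0, 1.0, -1.0]])
basis = null_space(A)          # one column: a direction vector for the line U1 ^ U2
print(basis.ravel())
print(A @ basis)               # approximately the zero vector
```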
9-12-03
Direct Sums: Suppose U1, U2, ... , Un are all subspaces of V and U1+ U2+ ... + Un = V. We say V is the direct sum of U1, U2, ... , Un if for any v in V, the expression of v as v = u1+ u2+ ... + un for uk in Uk is unique, i.e., if v = u1'+ u2'+ ... + un' for uk' in Uk then u1 = u1', u2 = u2', ... , un = un'. In these notes we will write V = U1# U2# ... # Un.
Examples: Uk = {v in Fn: v = (0,...,0,a,0,...,0) where a is in F and sits in the kth place on the list}. Then U1# U2# ... # Un = Fn.
Theorem: (Prop 1.8) V = U1# U2# ... # Un if and only if (i) U1+ U2+ ... + Un = V AND (ii) 0 = u1+ u2+ ... + un for uk in Uk implies u1 = u2 = ... = un = 0.
Theorem: (Prop 1.9) V = U#W if and only if V = U+W and U^W={0}.
Examples using subspaces and direct sums in applications:
Suppose A is a square matrix (n by n) with entries in the field F.
For c in F, let Wc = { v in Fn where vA = cv}.
9-15-03
Fact: For any A and any c, Wc< Fn . [Comment: for most c, Wc= {0}. ]
Definition: If Wc is not the trivial subspace, then c is called an eigenvalue or characteristic value for the matrix A, and nonzero elements of Wc are called eigenvectors or characteristic vectors for A.
Application 1: Consider the Coke and Pepsi matrices:
Questions: For which c is Wc non-trivial?
Example A. vA = cv, where
A = ( 5/6 1/6
      1/4 3/4 )
Example B. vB = cv, where
B = ( 2/3 1/3
      1/4 3/4 )
To answer this question we need to find (x,y) [not (0,0)] so that
Example A:
(x,y) ( 5/6 1/6
        1/4 3/4 ) = c(x,y)
Example B:
(x,y) ( 2/3 1/3
        1/4 3/4 ) = c(x,y)
Is R2 = Wc1 + Wc2 for these subspaces? Is this sum direct?
Focusing on Example B we consider for which c will the matrix equation have a nontrivial solution (x,y)?
We consider the equations: 2/3 x +1/4 y = cx and 1/3 x+3/4 y = cy.
Multiplying by 12 to get rid of the fractions and bringing the cx and cy to the left side we find that
(8-12c)x + 3 y = 0 and 4x + (9-12c)y = 0
Multiplying by 4 and (8-12c) then subtracting the first equation from the second we have
((8-12c)(9-12c) - 12 )y = 0. For this system to have a nontrivial solution, it must be that
((8-12c)(9-12c) - 12) = 0 or 72 - (108+96)c + 144c^2 - 12 = 0 or 60 - 204c + 144c^2 = 0.
Clearly one root of this equation is 1, so factoring we have (1-c)(60-144c) = 0 and c = 1 and c = 5/12 are the two solutions... so there are exactly two distinct eigenvalues for example B,
c = 1 and c = 5/12, and two nontrivial eigenspaces W1 and W5/12.
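The same two eigenvalues drop out of a numerical computation (a sketch; numpy reports them in floating point and in no particular order):

```python
import numpy as np

B = np.array([[2/3, 1/3],
              [1/4, 3/4]])
print(np.linalg.eigvals(B))    # approximately 1.0 and 0.41666... = 5/12
```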
General Claim: If c is different from k, then Wc ^ Wk = {0}
Proof:?
generalize?
What does this mean for vn when n is large?
Does the distribution of vn when n is large depend on v0?
9-17-03
Application 2: For c a real number let
Wc = {f in C∞(R) where f '(x)=c f(x)} < C∞(R).
What is this subspace explicitly?
Let V={f in C∞(R) where f ''(x) - f(x) = 0} < C∞(R).
Claim: V = W1 # W-1
Begin? We'll come back to this later in the course!
If c is different from k, then Wc ^ Wk = {0}.
Proof:...
Back to looking at things from the point of view of individual vectors:
Linear combinations:
Def'n. Suppose S is a set of vectors in a vector space V over the field F. We say that a vector v in V is a linear combination of vectors in S if there are vectors u1, u2, ... , un in S and scalars a1, a2, ..., an in F where v = a1u1+ a2u2+ ... + anun .
Comment: For Axler: S is a finite set.
Def'n. Span (S) = {v in V where v is a linear combination of vectors in S}
If Span(S) = V we say that S spans V. A vector space that is spanned by some finite set of vectors is called a "finite dimensional" v.s.
Linear Independence.
Def'n. A set of vectors S is linearly dependent if there are vectors u1, u2, ... , un in S and scalars a1, a2, ..., an NOT ALL 0 in F where 0 = a1u1+ a2u2+ ... + anun .
A set of vectors S is linearly independent if it is not linearly dependent.
Other ways to characterize linearly independent.
A set of vectors S is linearly independent if whenever there are vectors u1, u2, ... , un in S and scalars a1, a2, ..., an in F where 0 = a1u1+ a2u2+ ... + anun , the scalars are all 0, i.e. a1 = a2 = ... =an = 0 .
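For finitely many vectors in Fn there is a practical test (a sketch, not from the text, using real entries): the list u1, ..., un is linearly independent exactly when the matrix having those vectors as rows has rank n.

```python
import numpy as np

vectors = np.array([[1, 2, 0],
                    [0, 1, 1],
                    [1, 3, 1]])            # third row = first row + second row
rank = np.linalg.matrix_rank(vectors)
print(rank == len(vectors))                # False: this list is linearly dependent
```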
9-19-03
Examples: Suppose A is an n by m matrix: the row space of A= span ( row vectors of A) , the column space of A = Span(column vectors of A).
Relate to R(A)
Recall R(A) = "the range space of A" = { w in Fm where for some v in Fn, vA = w } < Fm.
w is in R(A) if and only if w is a linear combination of the row vectors, i.e., R(A) = the row space of A.
If you consider Av instead of vA, then R*(A) = the column space of A.
"Infinite dimensional" v.s. examples: P(F), F∞, C∞ (R)
P(F) was shown to be infinite dimensional. [If p is in SPAN(p1,...,pn), then the degree of p is no larger than the maximum of the degrees of {p1,...,pn}. So P(F) cannot equal SPAN(p1,...,pn) for any finite set of polynomials, i.e., P(F) is NOT finite dimensional.]
Some Standard examples.
2.4 Linear Dependence Lemma: Suppose S is a finite linearly dependent set indexed by 1, 2, ..., n and v1 is not 0. Then for some index j, vj is in span(v1,...,v(j-1)) and Span(S) = Span(S - {vj}).
Proof: See LA p25.
2.6 Theorem: Suppose S is a finite set of vectors with V = Span(S) and T is a linearly independent set of vectors in V. Then T is also finite and n(T) <= n(S).
Proof: See LA p25-26.
Comments:
- (2.4) shows how to construct a basis for a nontrivial finite dimensional v.s. V. Start with a finite set of vectors S that spans V. We can assume S has some non-zero vector in it. Put that element first.
- If S is linearly independent you are done. If not, apply (2.4) repeatedly until the resulting set of vectors is linearly independent. This must happen since at worst you will be left with v1, which was not 0. Thus we have proven
Theorem 2.10: Every finite spanning list in a vector space can be reduced to a basis.
and the Corollary (2.11). Every finite dimensional vector space has a basis.
Comment:The proof of Theorem 2.6 also shows that given T, a linearly independent subset of V and S, a finite set where SPAN(S) = V, one can step by step replace the elements of S with elements of T at the beginning of the list of vectors, so that eventually you have a new set S' where Span(S') = V and T contained in S'. Now by applying repeatedly the Lemma to S', one will arrive at a set B that is a basis for V with T contained in B. This proves
Theorem 2.12: Every linearly independent subset of a finite dimensional vector space can be extended to a basis of the vector space.
Prop: A Subspace of a finite dimensional vs is finite dimensional.
Problem 2.12: Suppose p0,...,pm are in Pm(F) and pi(2) = 0 for all i.
Prove {p0,...,pm} is linearly dependent.
Proof: Suppose {p0,...,pm} is linearly independent.
Notice that by the assumption for any coefficients
(a0p0+...+ampm)(2) = a0p0(2)+...+ampm(2) = 0, and since u(x) = 1 has u(2) = 1, u (= 1) is not in SPAN(p0,...,pm).
Thus SPAN(p0,...,pm) is not Pm(F).
But SPAN ( 1,x, ..., xm) = Pm(F) .
By repeatedly applying Lemma 2.4 to these two sets of m+1 polynomials as in Theorem 2.6, we have SPAN (p0,...,pm)=Pm(F), a contradiction. So {p0,...,pm} is not linearly independent.
End of proof.
Bases- def'n.
Definition: A set B is called a basis for the vector space V over F if (i) B is linearly independent and (ii) SPAN( B) = V.
Prop. If V is finite dimensional vs and B and B' are bases for V, then n(B) = n(B').
Proof: fill in ... based on 2.6.
Define dimension of a finite dimensional v.s. over F.
9-22: Filled in much above on Bases and the proofs of theorems about bases.
9-24
Discuss dim({0}).
What is the Span of the empty set? Characterize SPAN(S) as the intersection of all subspaces that contain S. Then Span(empty set) = the intersection of all subspaces = {0}. The empty set is linearly independent! ... so the empty set is a basis for {0} and the dimension of {0} is 0!
2.8: bases and representation of vectors in a f.d.v.s.
Suppose B is a finite basis for V with its elements in a list, (u1, u2, ... , un). If v is in V, then there are unique scalars a1, a2, ..., an in F where v = a1u1+ a2u2+ ... + anun. The scalars are called the coordinates of v w.r.t. B, and we will write v = [a1, a2, ..., an]B.
Examples: In R2, P4(R).
Connect to the Coke and Pepsi example: find a basis of eigenvectors using the B example for R2. [Use the on-line technology]
Example B
(x,y) ( 2/3 1/3
        1/4 3/4 ) = c(x,y)
We considered the equations: 2/3 x + 1/4 y = cx and 1/3 x + 3/4 y = cy and showed that
there are exactly two distinct eigenvalues for example B,
c= 1 and c = 5/12 and two non trivial eigenspaces W1 and W5/12 .
Now we can use technology to find eigenvectors in each of these subspaces.
A matrix calculator gave as a result that the eigenvalue 1 had an eigenvector (1, 4/3) while 5/12 had an eigenvector (1, -1). These two vectors are a basis for R2.
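A numpy version of that calculation (a sketch; since vectors here act on the left, the left eigenvectors of B are the ordinary eigenvectors of B transposed, and numpy rescales them to unit length, so they come out as scalar multiples of (1, 4/3) and (1, -1)):

```python
import numpy as np

B = np.array([[2/3, 1/3],
              [1/4, 3/4]])
values, vectors = np.linalg.eig(B.T)       # columns of `vectors` are left eigenvectors of B
for c, v in zip(values, vectors.T):
    print(c, v, v @ B - c * v)             # the last entry is approximately (0, 0)
```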
Dimension Results: Suppose Dim(V) = n and S is a set of vectors with n(S) = n. Then
(1) If S is Linearly independent, then S is a basis.
(2) If Span(S) = V, then S is a basis.
Proof: (1) S is contained in a basis, B. If B is larger than S, then B has more than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
(2) S contains a basis, B. If B is smaller than S, then B has fewer than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
IRMC
9-26
2.18: If U, W <V are finite dimensional, then so is U+W and
dim(U+W) = Dim(U) + Dim(W) - Dim(U^W).
Proof: (idea) build up bases of U and W from U^W.... then check that the union of these bases is a basis for U+W.
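The two planes from the earlier examples illustrate 2.18 numerically: dim(U1) = dim(U2) = 2, dim(U1^U2) = 1, so dim(U1+U2) should be 2 + 2 - 1 = 3. A sketch reusing the hand-picked bases from before:

```python
import numpy as np

U1_basis = np.array([[1, -1, 0], [2, 0, -1]])    # basis of {x + y + 2z = 0}
U2_basis = np.array([[1, -3, 0], [0, 1, 1]])     # basis of {3x + y - z = 0}

dim_sum = np.linalg.matrix_rank(np.vstack([U1_basis, U2_basis]))
print(dim_sum)                                   # 3 = 2 + 2 - 1, as 2.18 predicts
```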
Linear Transformations: V and W vector spaces over F.
Definition: A function T:V -> W is a linear transformation if for any x, y in V and a in F, T(x+y) = T(x) + T(y) and T(ax) = a T(x).
Examples: T(x,y) = (3x+2y,x-3y) is a linear transformation T: R2 -> R2.
G(x,y) = (3x+2y, x^2 - 2y) is not a linear transformation.
G(1,1) = (5, -1) , G(2,2) = (10, 0)... 2*(1,1) = (2,2) but 2* (5,-1) is not (10,0)!
Notice that T(x,y) can be thought of as the result of a matrix multiplication:
(x,y) ( 3 1
        2 -3 )
So the two key properties are a direct consequence of the properties of matrix multiplication: (v+w)A = vA + wA and (cv)A = c(vA).
For A a k by n matrix: TA (argument on the left) and AT (argument on the right) are linear transformations, TA : Fk -> Fn and AT : Fn -> Fk.
TA(x) = xA for x in Fk, and AT(y) = A[y]tr for y in Fn, where [y]tr indicates the entries of the vector treated as a one-column matrix.
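A sketch of the two conventions using the matrix displayed above for T (the sample vector x is an arbitrary choice):

```python
import numpy as np

A = np.array([[3, 1],
              [2, -3]])
x = np.array([5, 7])

print(x @ A)     # TA(x) = xA = (3*5 + 2*7, 5 - 3*7) = (29, -16), i.e. T(5, 7)
print(A @ x)     # AT(x) = A[x]tr, the other convention; generally a different vector
```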
The set of all linear transformations from V to W is denoted L(V,W).
More notes on Chapter 1 and 2
1.9:V = U # W if and only if V = U+W and U^W={0}.
Proof: => Suppose v is in U^W. Then 0 = v + (-v) with v in U and -v in W. Since V = U#W, the expression of 0 as such a sum is unique, so v = 0. Thus U^W = {0}.
Note: This argument extends to V as the direct sum of any family of subspaces.
<= Suppose u is in U and w is in W and u + w = 0. Then u = -w, so u is also in W, and thus u is in U^W = {0}. So u = 0 and then w = 0. Since V = U+W, we have by 1.8 that V = U#W. EOP
2.19 If V is a f.d.v.s. and U1, ..., Un are subspaces with V = U1 +...+ Un and dim(V) = dim(U1)+...+ dim(Un), then V = U1 #...# Un.
Proof outline: Choose bases for U1, ..., Un and let B be the union of these sets. Since V = U1 +...+ Un, every vector in V is a linear combination of elements from B. But B has exactly dim(U1)+...+ dim(Un) = dim(V) elements in it, so B is a basis for V. Now suppose 0 = u1+ u2+ ... + un for uk in Uk. Then each ui can be expressed as a linear combination of the basis vectors for Ui, and since the entire linear combination is 0 and B is a basis, each coefficient is 0. So u1=...=un=0 and V = U1 #...# Un. EOP
How do you find a basis for the SPAN(S) in Rn?
Outline of use of row operations...
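One way to carry that outline through (a sketch using sympy; the vectors in S are an arbitrary example): row operations do not change the row space, so the nonzero rows of the reduced row echelon form are a basis for SPAN(S).

```python
from sympy import Matrix

S = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [3, 6, 1, 4]])      # rows are the vectors of S in R4; the third is the sum of the first two
rref, pivots = S.rref()
basis = [row for row in rref.tolist() if any(row)]   # the nonzero rows of the rref
print(basis)                    # two vectors: a basis for SPAN(S)
```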
Back to linear transformations:
Consequences of the definition: If T:V->W is a linear transformation, then for any x and y in V and a in F,
(i) T(0) = 0.
(ii) T(-x) = -T(x)
(iii) T(x+ay) = T(x) + aT(y).
Quick test: If T:V->W is a function and (iii) holds for any x and y in V and a in F, then the function is a linear transformation.
Visualize with Winplot?
Why is this called a "linear" transformation:
The geometry of linear: A line in R2 is {(x,y): Ax + By = C where A and B are not both 0} = {(x,y): (x,y) = (a,b) + t(u,v)} = L, the line through (a,b) in the direction of (u,v).
Suppose T is a linear transformation.
Let T(L) = L' = {(x',y'): (x',y') = T(x,y) for some (x,y) in L}.
T((a,b) + t(u,v)) = T(a,b) + t T(u,v).
If T(u,v) = (0,0) then L' = T(L) = {T(a,b)}.
If not, then L' is also a line through T(a,b) in the direction of T(u,v).
Coke/Pepsi example B: T(x,y) =(2/3 x +1/4 y, 1/3 x+3/4 y)
T(v0) = v1, T(v1) = v2, ..., T(vk) = vk+1.
T(v*)=v* means a nonzero v* is an eigenvector with eigenvalue 1. T(1, 4/3) = (1,4/3). Also T(3/7, 4/7) = T[(3/7)(1,4/3)] = 3/7T(1,4/3) =3/7(1,4/3) =(3/7,4/7).
T(1,-1) =(5/12,-5/12 )= (5/12)(1,-1) means that (1,-1) is an eigenvector with eigenvalue 5/12.
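This connects back to the question about vn for large n. A numerical sketch (the starting distribution v0 is an arbitrary choice): repeated application of T pushes any starting distribution toward the eigenvalue-1 eigenvector rescaled to have entries summing to 1, namely (3/7, 4/7).

```python
import numpy as np

B = np.array([[2/3, 1/3],
              [1/4, 3/4]])
v = np.array([0.9, 0.1])       # an arbitrary starting distribution v0

for _ in range(50):
    v = v @ B                  # v_{k+1} = T(v_k)
print(v)                       # approximately (3/7, 4/7) = (0.4286..., 0.5714...)
```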
D ... Differentiation: on polynomials, on ... Example: (D(f))(x) = f'(x), or D(f) = f'.
T(f)(x) = f''(x) - f(x) or T(f) = DD(f) - f = (DD-Id) f.
Wednesday 10-1
Theorem: T : V -> W linear and B a basis of V give, by restriction, a function S(T): B -> W.
Conversely, suppose S: B -> W is any function; then there is a unique linear transformation T(S): V -> W such that S(T(S)) = S, i.e., T(S) agrees with S on B.
Proof: Let T(S)(v) be defined as follows: Suppose v is expressed (uniquely) as a linear combination of elements of B, i.e., v = a1u1+ a2u2+ ... + anun; then let T(S)(v) = a1S(u1)+ a2S(u2)+ ... + anS(un).
This is well defined since the representation of v is unique. It is left to show that T(S) is linear. Clearly, if u is in B then S(T(S))(u) = S(u).
Example: T: P(F) -> P(F) ... determined by S(x^n) = n x^(n-1).
Or another example: S(x^n) = (1/(n+1)) x^(n+1).
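Restricting the first example to P3(F), with the basis (1, x, x^2, x^3) listed in that order (my choice, not from the notes), makes the theorem concrete: writing coordinates as row vectors, the k-th row of the matrix holds the coordinates of S(x^k), and multiplying by that matrix differentiates. A sketch:

```python
import numpy as np

# Row k = coordinates of S(x^k) = k x^(k-1) with respect to (1, x, x^2, x^3)
D = np.array([[0, 0, 0, 0],    # S(1)   = 0
              [1, 0, 0, 0],    # S(x)   = 1
              [0, 2, 0, 0],    # S(x^2) = 2x
              [0, 0, 3, 0]])   # S(x^3) = 3x^2

p = np.array([5, -1, 4, 2])    # coordinates of 5 - x + 4x^2 + 2x^3
print(p @ D)                   # (-1, 8, 6, 0): coordinates of the derivative -1 + 8x + 6x^2
```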
Algebraic structure on L(V,W)
Definition of the sum and scalar multiplication:
T, U in L(V,W), a in F: (T+U)(v) = T(v) + U(v).
Fact: T+U is also linear.
(aT)(v) = a T(v).
Fact: aT is also linear.
Check: L(V,W) is a vector space over F.
Composition: T:V -> W and U : W -> Z both linear, then UT:V->Z where UT(v) = U(T(v)) is linear.
Note: If T':V-> W and U':W->Z are also linear, then U(T+T') = UT + UT' and (U+U')T = UT + U'T. If S:Z->Y is also linear then S(UT) = (SU)T.
Key focus: L(V,V) , the set of linear "operators" on V.... also called L(V).
If T and U are in L(V) then UT is also in L(V). This is the key example of what is called a "Linear Algebra"... a vector space with an extra internal operation usually described as the product. That satisfies the distributive and associative properties.
Key Spaces related to T:V->W
Null Space of T= kernel of T = {v in V where T(v) = 0 [ in W] }= N(T) < V
Range of T = Image of T = T(V) = {w in W where w = T(v) for some v in V} <W.
Recall definition of "injective" or "1:1" function.
Theorem: T is 1:1 (injective) if and only if N(T) = {0}
Proof: => Suppose T is 1:1. We know that T(0) = 0, so if T(v) = 0, then v = 0. Thus 0 is the only element of N(T), i.e., N(T) = {0}.
<= Suppose N(T) = {0}. If T(v) = T(w) then T(v-w) = T(v) - T(w) = 0, so v-w is in N(T). That must mean that v-w = 0, so v = w and T is 1:1.
Friday 10-3
More details to follow on this lecture:
The first part of the lecture discussed the importance of the Null Space of T, N(T), in understanding what T does in general.
Example 1. D:P(R) -> P(R)... D(f) = f'. Then N(D) = { f: f(x) = C for some constant C.} [from calculus 109!]
Notice: If f'(x) = g'(x) then f(x) = g(x) + C for some C.
Proof: Consider D(f(x) - g(x)) = Df(x) - Dg(x) = 0, so f(x) - g(x) is in N(D), i.e., f(x) - g(x) is a constant C.
Example 2: Solving a system of homogeneous linear equations. This was connected to finding the null space of a linear transformation associated with a matrix. Then what about a non-homogeneous system with the same matrix? Result: If z is a solution of the non-homogeneous system of linear equations and z' is another solution, then z' = z + n where n is a solution to the homogeneous system.
General Proposition: T:V->W. If b is a vector in W and a is in V with T(a) = b, then T^(-1)(b) = {v in V: v = a + n where n is in N(T)} = a + N(T).
Comment: a + N(T) is called the coset of a mod N(T)...these are analogous to lines in R2. More on this later in the course.
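A numerical sketch of the proposition for a matrix system (written in the usual column-vector convention Ax = b; the matrix and right-hand side are arbitrary choices): one particular solution plus anything in the null space is again a solution.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0, 2.0],
              [3.0, 1.0, -1.0]])                      # 2 equations, 3 unknowns
b = np.array([3.0, 2.0])

particular, *_ = np.linalg.lstsq(A, b, rcond=None)    # one solution a with Aa = b
n = null_space(A)[:, 0]                               # a nonzero solution of the homogeneous system

z = particular + 2.5 * n       # an element of a + N(T) ...
print(A @ z - b)               # ... is again a solution: approximately (0, 0)
```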
Major result of the day: Suppose T:V->W and V is
a finite dimensional v.s. over F. Then N(T) and R(T) are also finite dimensional
and Dim(V) = Dim (N(T)) + Dim(R(T)).
Proof: Done in class- see text: Outline: start with a basis C for N(T)
and extend this to a basis B for V. Show that T(B-C) is a basis
for R(T).
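For a matrix transformation (T(v) = vA on row vectors, the convention of these notes) this is the familiar rank-nullity count, easy to spot-check numerically; the random 5 by 7 matrix of rank 3 below is an arbitrary example:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))   # 5 x 7 matrix of rank 3

dim_range = np.linalg.matrix_rank(A)      # Dim(R(T)) = rank of A
dim_null = null_space(A.T).shape[1]       # Dim(N(T)) = dim of {v : vA = 0}
print(dim_null + dim_range == 5)          # True: Dim(V) = Dim(N(T)) + Dim(R(T))
```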
Next: Monday. Oct.6. Matrices and Linear transformations. (with Dr. B).
[Matrix templates from class, entries not recorded: MBB(T), MBB(T^n) = [MBB(T)]^n, MEE(T), MBE(Id), MBB(T).]
The Division Algorithm, [proof?]