week 1 | 8-27 | 8-29 |
week 2 | 9-1 No Class | 9-3 | 9-5 |
week 3 | 9-8 | 9-10 | 9-12 |
week 4 | 9-15 | 9-17 | 9-19 |
week 5 | 9-22 | 9-24 | 9-26 |
week 6 | 9-29 | 10-1 | 10-3 |
week 7 | 10-6 | 10-8 | 10-10 |
week 8 | 10-13 | 10-15 | 10-17 |
week 9 [exam!] | 10-20 | 10-22 | 10-24 |
week 10 | 10-27 | 10-29 | 10-31 |
week 11 | 11-3 | 11-5 | 11-7 |
week 12 | 11-10 | 11-12 | 11-14 |
week 13 | 11-17 | 11-19 | 11-21 |
week 14 | No Classes |
week 15 | 12-1 | 12-3 | 12-5 |
week 16 | 12-8 | 12-10 | 12-12 |
`A^n =A` if `n` is odd. |
|
|
`A^n` `->` ( ? ) as `n` gets large, for A = ( ? ).
The geometry of complex arithmetic:
If z = a+bi = ||z||(cos(t) +i sin(t)) and w = c+di = ||w||(cos(s) +i sin(s)) then
z+w = (a+c)+(b+d)i which corresponds geometrically to the "vector " sum of z and w in the plane, and
zw = ||z||(cos(t) + i sin(t)) ||w||(cos(s) + i sin(s)) = ||z|| ||w|| (cos(t) + i sin(t))(cos(s) + i sin(s))
= ||z|| ||w|| (cos(t)cos(s) - sin(t)sin(s) + (sin(t)cos(s) + cos(t)sin(s)) i)
= ||z|| ||w|| (cos(t+s) + i sin(t+s)).
So the magnitude of the product is the product of the magnitudes of z and w, and the angle of the product is the sum of their angles.
Notation: cos(t) + i sin(t) is sometimes written as cis(t).
Note: If we consider the series for `e^x = 1 + x + x^2/2! + x^3/3! + ...`
then `e^{ix} = 1 + ix + (ix)^2/2! + (ix)^3/3! + ... = 1 + ix - x^2/2! - i x^3/3! + ...`;
grouping the real and imaginary terms gives `e^{ix} = cos(x) + i sin(x)`.
Thus `e^{i pi} = cos(pi) + i sin(pi) = -1`. So `ln(-1) = i pi`.
Furthermore: `e^{a+bi} = e^a*e^{bi} = e^a ( cos (b) + sin(b) i) `
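A quick numerical check of these facts (a minimal sketch of ours using Python's built-in cmath module; the particular magnitudes and angles are arbitrary choices):

```python
import cmath

# z = ||z|| cis(t) and w = ||w|| cis(s), built from polar data
z = cmath.rect(2.0, 0.7)   # magnitude 2, angle 0.7 radians
w = cmath.rect(3.0, 1.1)   # magnitude 3, angle 1.1 radians

zw = z * w
print(abs(zw), abs(z) * abs(w))       # magnitudes: both 6.0
print(cmath.phase(zw), 0.7 + 1.1)     # angles: both 1.8

# Euler's formula: e^{i pi} = -1 (up to rounding error)
print(cmath.exp(1j * cmath.pi))
```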
Matrices with complex number entries.
If r and s are the complex numbers in the matrix A, then as n gets large:
if ||r|| < 1 and ||s|| < 1, the powers of A get close to the
zero matrix; if r = s = 1, the powers of A are always A; and
otherwise the powers of A diverge.
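A small numerical illustration of this (a sketch, assuming A is the diagonal matrix with r and s on the diagonal; numpy and the specific values of r and s are our choices):

```python
import numpy as np

r, s = 0.6 + 0.3j, 0.5 - 0.2j          # both magnitudes are less than 1
A = np.diag([r, s])
print(np.abs(np.linalg.matrix_power(A, 50)).max())   # essentially 0: A^n approaches the zero matrix

r, s = 1 + 1j, 2.0                      # magnitudes larger than 1
B = np.diag([r, s])
print(np.abs(np.linalg.matrix_power(B, 50)).max())   # enormous: the powers diverge
```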
Polynomials with complex coefficients.
Because multiplication and addition make sense for complex numbers,
we can consider polynomials with coefficients that are complex numbers
and use a complex number for the variable, making a complex polynomial
a function from the complex numbers to the complex numbers.
This can be visualized using one plane for the domain of the polynomial
and a second plane for the co-domain, target, or range of the polynomial.
The Fundamental Theorem of Algebra: If f is a nonconstant
polynomial with complex number coefficients, then there
is at least one complex number z* where f(z*) = 0.
For more on complex numbers see: Dave's Short Course on Complex Numbers.
How are these questions related to Motivation Question I?
Do Examples F[X] = { f in F∞, where f(n) = 0 for all but a finite number of n.} < F∞
(Internal) Sums , Intersections, and Direct Sums of Subspaces
Suppose U1, U2, ... , Un are all subspaces of V.
Definition: U1 + U2 + ... + Un = {v in V where v = u1 + u2 + ... + un for uk in Uk, k = 1,2,...,n}, called the (internal) sum of the subspaces.
Facts: (i) U1+ U2+ ... + Un < V.
(ii) Uk < U1+ U2+ ... + Un for each k, k= 1,2,...,n.
(iii) If W<V and Uk < W for each k, k= 1,2,...,n, then U1+ U2+ ... + Un <W.
So ...
U1 + U2 + ... + Un is the smallest subspace of V that contains Uk for each k, k = 1,2,...,n.
Examples:
U1 = {(x,y,z): x+y+2z=0}, U2 = {(x,y,z): 3x+y-z=0}. U1 + U2 = R^3.
Let Uk = {f in P(F): f(x) = `a_k x^k` where `a_k` is in F}. Then U0 + U1 + U2 + ... + Un = {f : f(x) = `a_0 + a_1 x + a_2 x^2 + ... + a_n x^n` where `a_0, a_1, a_2, ..., a_n` are in F}.
Definition: U1 `cap` U2`cap` ... `cap` Un = {v in V where v is in Uk , for all k = 1,2,...,n} called the intersection of the subspaces.
Facts:(i) U1`cap` U2`cap` ... `cap` Un < V.
(ii) U1`cap`U2`cap` ... `cap` Un < Uk for each k, k= 1,2,...,n.
(iii) If W<V and W < Uk for each k, k= 1,2,...,n, then W<U1`cap` U2`cap` ... `cap` Un .
So ...
U1`cap` U2`cap` ... `cap` Un is the largest subspace of V that is contained in Uk for each k, k= 1,2,...,n.
Examples: U1 = {(x,y,z): x+y+2z=0} U2 = {(x,y,z): 3x+y-z=0}. U1 `cap` U2 = {(x,y,z): x+y+2z=0 and 3x+y-z=0}= ...
Let Uk = {f in P(F): f(x) = `a_k x^k` where `a_k` is in F}; then Uj `cap` Uk = {0} for j not equal to k...
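To finish computing the R^3 intersection in the example above, one can solve the two equations simultaneously; a sketch using sympy (our choice of tool):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# U1 cap U2: solve x + y + 2z = 0 and 3x + y - z = 0 simultaneously
print(sp.solve([x + y + 2*z, 3*x + y - z], [x, y], dict=True))
# [{x: 3*z/2, y: -7*z/2}] : the intersection is the line spanned by (3, -7, 2)
```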
9-26
Suppose V is a v.s. over F and `U_1` and `U_2` are subspaces of V. We say that V is the direct sum of `U_1` and `U_2`, and we write
V = `U_1` `oplus` `U_2`, if (1) V = `U_1` + `U_2` and (2) `U_1 cap U_2` = {0}.
Prop: Suppose V = `U_1` `oplus` `U_2` and `v in V`, with v = `u_1 + u_2 = w_1 + w_2` where `u_i` and `w_i` are in `U_i` for i = 1 and 2.
Then `u_i = w_i` for i = 1,2.
Conversely, if V = `U_1` + `U_2` and if v = `u_1 + u_2 = w_1 + w_2` with `u_i` and `w_i` in `U_i` for i = 1 and 2
implies `u_i = w_i` for i = 1, 2, then V = `U_1` `oplus` `U_2`.
Proof: From the hypothesis, `u_1 + (- w_1) = w_2 + (- u_2)` is in `U_1` and in `U_2`, so it is in `U_1 cap U_2` = {0}. Thus ... `u_i = w_i` for i = 1, 2.
Conversely: if `v in U_1 cap U_2` then v = v + 0 = 0 + v, so v = 0. Thus V = `U_1` `oplus` `U_2`.
To generalize the direct sum to U1, U2, ... , Un, we would start by assuming V = U1 + U2 + ... + Un.
We might try to generalize the intersection property by assuming that `U_i cap U_j` = {0} for all i and j that are not equal. This won't work.
9-29
Discuss Exercise: If U and W are subspaces of V and U `uu` W is also a subspace of V, then either U < W or W < U.
Direct Sums: Suppose U1, U2, ... , Un are all subspaces of V and U1+ U2+ ... + Un = V, we say V is the direct sum of U1, U2, ... , Un if for any v in V, the expression of v as v = u1+ u2+ ... + un for uk in Uk is unique, i.e., if v = u1'+ u2'+ ... + un' for uk' in Uk then u1 = u1', u2=u2', ... , un=un'. In these notes we will write V = U1 `oplus` U2 `oplus`...`oplus` Un
Examples: Uk = {v in F^n: v = (0, ..., 0, a, 0, ..., 0) where a is in F and sits in the kth place on the list}. Then U1 `oplus` U2 `oplus` ... `oplus` Un = V.
Theorem: V = U1 `oplus` U2 `oplus` ... `oplus` Un if and only if (i) U1 + U2 + ... + Un = V AND (ii) 0 = u1 + u2 + ... + un for uk in Uk implies u1 = u2 = ... = un = 0.
Theorem: V = U`oplus`W if and only if V = U+W and U`cap`W={0}.
Examples using subspaces and direct sums in applications:
Suppose A is a square matrix (n by n) with entries in the field F.
For c in F, let Wc = {v in F^n where vA = cv}.
Fact: For any A and any c, Wc < F^n. [Comment: for most c, Wc = {0}.]
Definition: If Wc is not the trivial subspace, then c is called an eigenvalue or characteristic value for the matrix A, and nonzero elements of Wc are called eigenvectors or characteristic vectors for A.
Application 1: Consider the Coke and Pepsi matrices:
Questions: For which c is Wc non-trivial?
Example A. vA = cv, where
A = ( 5/6  1/6 )
    ( 1/4  3/4 )
Example B. vB = cv, where
B = ( 2/3  1/3 )
    ( 1/4  3/4 )
To answer this question we need to find (x,y) [not (0,0)] so that (x,y)A = c(x,y) or (x,y)B = c(x,y).
Is R^2 = Wc1 + Wc2 for these subspaces? Is this sum direct?
Example A: (x, y) A = c (x, y)
Example B: (x, y) B = c (x, y)
Focusing on Example B we consider for which c will the matrix equation have a nontrivial solution (x,y)?
We consider the equations: 2/3 x +1/4 y = cx and 1/3 x+3/4 y = cy.
Multiplying by 12 to get rid of the fractions and bringing the cx and cy to the left side we find that
(8-12c)x + 3 y = 0 and 4x + (9-12c)y = 0
Multiplying the first equation by 4 and the second by (8-12c), then subtracting the first equation from the second, we have
((8-12c)(9-12c) - 12 )y = 0. For this system to have a nontrivial solution, it must be that
((8-12c)(9-12c) - 12) = 0 or `72 - (108+96) c + 144c^2 - 12 = 0` or
`60 -204c +144c^2 = 0`.
Clearly one root of this equation is 1, so factoring we have (1-c)(60-144c) = 0, and c = 1 and c = 5/12 are the two solutions... so there are exactly two distinct eigenvalues for Example B,
c = 1 and c = 5/12, and two nontrivial eigenspaces W_1 and W_{5/12}.
General Claim: If c is different from k, then Wc `cap` Wk = {0}
Proof:?
Generalize?
What does this mean for v_n when n is large?
Does the distribution of v_n when n is large depend on v_0?
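A numerical experiment addressing these two questions for the matrix B above (a sketch; the starting vectors are arbitrary probability vectors of our choosing):

```python
import numpy as np

B = np.array([[2/3, 1/3],
              [1/4, 3/4]])

for v0 in (np.array([1.0, 0.0]), np.array([0.25, 0.75])):
    v = v0
    for _ in range(60):
        v = v @ B                      # v_{n+1} = v_n B
    print(v)                           # both runs print approximately [0.4286 0.5714] = (3/7, 4/7)
```

Both starting vectors settle on the same limiting vector, a multiple of the eigenvector for c = 1, suggesting the long-run distribution does not depend on v_0.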
Application 2: For c a real number let
Wc = {f in C∞(R) where f '(x)=c f(x)} < C∞(R).
What is this subspace explicitly?
Let V={f in C∞(R) where f ''(x) - f(x) = 0} < C∞(R).
Claim: V = W_1 `oplus` W_{-1}
Begin? We'll come back to this later in the course!
If c is different from k, then Wc `cap` Wk = {0}
Proof:...
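A hedged check of the claim with sympy's ODE solver (our choice of tool): the general solution of f''(x) - f(x) = 0 is a combination of e^x (an element of W_1) and e^(-x) (an element of W_{-1}).

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

# the homogeneous linear ODE f''(x) - f(x) = 0
print(sp.dsolve(f(x).diff(x, 2) - f(x), f(x)))
# Eq(f(x), C1*exp(-x) + C2*exp(x)) : a sum of an element of W_{-1} and an element of W_1
```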
Back to looking at things from the point of view of individual vectors:
Linear combinations:
Def'n. Suppose S is a set of vectors in a vector space V over the field F. We say that a vector v in V is a linear combination of vectors in S if there are vectors u1, u2, ... , un in S and scalars a1, a2, ..., an in F where v = a1u1+ a2u2+ ... + anun .
Comment: For many introductory textbooks: S is a finite set.
Recall. Span (S) = {v in V where v is a linear combination of vectors in S}
If S is finite and Span (S) = V we say that S spans V and V is a "finite dimensional" v.s.
Linear Independence.
Def'n. A set of vectors S is linearly dependent if there are vectors u1, u2, ... , un in S and scalars `alpha_1, alpha_2, ..., alpha_n in F` NOT ALL 0 where `0 = alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n` .
A set of vectors S is linearly independent if it is not linearly dependent.
Other ways to characterize linearly independent.
A set of vectors S is linearly independent if whenever there are vectors u1, u2, ... , un in S and scalars `alpha_1, alpha_2, ..., alpha_n in F` where `0 = alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n`, the scalars are all 0, i.e. `alpha_1, alpha_2, ..., alpha_n = 0`.
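For vectors in F^n this can be tested computationally: the set is linearly independent exactly when the matrix whose rows are the vectors has rank equal to the number of vectors. A sketch with numpy (the particular vectors are our example):

```python
import numpy as np

# Rows of S are the vectors being tested; row 3 = 2*(row 1) + (row 2)
S = np.array([[1, 0, 2],
              [0, 1, -1],
              [2, 1, 3]])

print(np.linalg.matrix_rank(S))   # 2, less than the 3 rows, so this set is linearly dependent
```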
Examples: Suppose A is an n by m matrix: the row space of A = Span(row vectors of A), the column space of A = Span(column vectors of A).
Relate to R(A).
Recall R(A) = "the range space of A" = { w in F^m where for some v in F^n, vA = w } < F^m.
w is in R(A) if and only if w is a linear combination of the row vectors, i.e., R(A) = the row space of A.
If you consider Av instead of vA, then R*(A) = the column space of A.
"Infinite dimensional" v.s. examples: P(F), F∞, C∞ (R)
F[X] was shown to be infinite dimensional. [If p is in SPAN(p1,...,pn), then the degree of p is no larger than the maximum of the degrees of {p1,...,pn}. So F[X] cannot equal SPAN(p1,...,pn) for any finite set of polynomials, i.e., F[X] is NOT finite dimensional.]
Some Standard examples.
Bases- def'n.
Definition: A set B is called a basis for the vector space V over F if (i) B is linearly independent and (ii) SPAN( B) = V.
Bases and representation of vectors in a f.d.v.s.
10-8
Suppose B is a finite basis for V with its elements in a list, (u1, u2, ... , un) .
If v is in V, then there are unique scalars `alpha_1, alpha_2, ..., alpha_n` in F where v = `alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n`.
The scalars are called the coordinates of v w.r.t. B, and we will write
v = [`alpha_1, alpha_2, ..., alpha_n`]_B.
Linear Independence Theorems
Theorem 1: Suppose S is a linearly independent set and v1 is not an element of Span(S); then S `cup` {v1} is also linearly independent.
Proof Outline: Suppose there are vectors u1, u2, ... , un in S and scalars `alpha_1, alpha_2, ..., alpha_n, alpha in F` where `0 = alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n + alpha` v1. If `alpha` is not 0 then
`v1= -alpha^{-1}( alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n) in` Span(S), contradicting the hypothesis. So `alpha = 0`. But then `0 = alpha_1u_1+ alpha_2u_2+ ... + alpha_n u_n` and since S is linearly independent,
`alpha_1, alpha_2, ..., alpha_n = 0`. Thus S `cup` {v1} is linearly independent. EOP.
Theorem 2: Suppose S is a finite set of vectors with V = Span (S) and T is a subset of vectors in V. If n( T) > n(S) then T is linearly dependent.
Proof Outline: Suppose n(S) = N. Then by the assumption ... [Proof works by finding N homogeneous linear equations with N+1 unknowns.]
10-10
Theorem 3: Every finite dimensional vector space has a basis.
Proof outline: How to construct a basis, B, for a nontrivial finite dimensional v.s. V. Since V is finite dimensional, it has a finite subset S with Span(S) = V.
Start with the empty set. This is linearly independent. Call this B0. If Span(B0) = V then you are done: B0 is a basis.
- If Span(B0) is not V, then there is a vector v1 in V where v1 is not in Span(B0). Apply Theorem 1 to obtain `B1 = B0 cup {v1}`, which is linearly independent. If Span(B1) = V, then B1 is a basis for V. Otherwise continue using Theorem 1 repeatedly; if the process never stopped, the resulting set of vectors would eventually have more elements than the spanning set S, which by Theorem 2 would make it linearly dependent, a contradiction. So at some stage of the process, Span(Bk) = V, and Bk is a basis for V.
Comment: The proof of the Theorem also shows that given T, a linearly independent subset of a finite dimensional vector space V, one can step by step add elements to T, so that eventually you have a new set S where S is linearly independent with Span(S) = V and T contained in S. In other words, we can construct a set B that is a basis for V with T contained in B. This proves
Corollary: Every Linearly independent subset of a finite dimensional vector space can be extended to a basis of the vector space.
Theorem 4. If V is a finite dimensional v.s. and B and B' are bases for V, then n(B) = n(B').
Proof: fill in ... based on Theorem 2: n(B) <= n(B') and n(B') <= n(B), so...
Definition: The dimension of a finite dimensional v.s. over F is the number of elements in a(ny) basis for V.
Discuss dim({0}).
The empty set is linearly independent! ... so the empty set is a basis for {0} and the dimension of {0} is 0!
What is the span of the empty set? Characterize SPAN(S) as the intersection of all subspaces that contain S. Then Span(empty set) = the intersection of all subspaces of V = {0}.
Prop: A subspace of a finite dimensional v.s. is finite dimensional.
Suppose dim(V) = n and S is a set of vectors with n(S) = n. Then
(1) If S is Linearly independent, then S is a basis.
(2) If Span(S) = V, then S is a basis.
Proof: (1) S is contained in a basis, B. If B is larger than S, then B has more than n elements, which contradicts the fact that any basis for V has exactly n elements. So B = S and S is a basis.
(2) Outline: V has a basis of n elements, B. Suppose S is linearly dependent; then some proper subset of S with fewer than n elements spans V, and then by Theorem 2 the n-element basis B would be linearly dependent. Hence B could not be a basis, a contradiction. Thus, S is a basis.
IRMC
Theorem: Sums, intersections and dimension: If U, W <V are finite dimensional, then so is U+W and
dim(U+W) = Dim(U) + Dim(W) - Dim(U`cap`W).
Proof: (idea) build up bases of U and W from U`cap`W.... then check that the union of these bases is a basis for U+W
Problem 2.12: Suppose p0,...,pm are in Pm(F) and pi(2) = 0 for all i.
Prove {p0,...,pm} is linearly dependent.
Proof: Suppose {p0,...,pm} is linearly independent.
Notice that by the assumption, for any coefficients,
(a0 p0 + ... + am pm)(2) = a0 p0(2) + ... + am pm(2) = 0, and since u(x) = 1 has u(2) = 1, u (= 1) is not in SPAN(p0,...,pm).
Thus SPAN(p0,...,pm) is not Pm(F).
But SPAN ( 1,x, ..., xm) = Pm(F) .
By repeatedly applying the Lemma to these two sets of m+1 polynomials as in Theorem 2.6, we have SPAN (p0,...,pm)=Pm(F), a contradiction. So {p0,...,pm} is not linearly independent.
End of proof.
Examples: In R^2, P_4(R).
Connect to the Coke and Pepsi example: find a basis of eigenvectors using Example B for R^2. [Use the on-line technology.]
Example B
(x, y) ( 2/3  1/3 )
       ( 1/4  3/4 )  =  c (x, y)
We considered the equations: 2/3 x +1/4 y = cx and 1/3 x+3/4 y = cy and showed that
there are exactly two distinct eigenvalues for Example B,
c = 1 and c = 5/12, and two nontrivial eigenspaces W_1 and W_{5/12}.
Now we can use technology to find eigenvectors in each of these subspaces.
A matrix calculator gave as a result that the eigenvalue 1 has an eigenvector (1, 4/3) while 5/12 has an eigenvector (1, -1). These two vectors are a basis for R^2.
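The same computation can be reproduced with numpy (a sketch; since the notes use vB = cv, i.e. left eigenvectors, we take eigenvectors of the transpose, and numpy returns them normalized, so they are scalar multiples of (1, 4/3) and (1, -1)):

```python
import numpy as np

B = np.array([[2/3, 1/3],
              [1/4, 3/4]])

# Left eigenvectors of B (vB = cv) are right eigenvectors of B transpose
vals, vecs = np.linalg.eig(B.T)
print(vals)   # the eigenvalues 1 and 5/12 (order may vary)
print(vecs)   # columns are the corresponding eigenvectors, proportional to (1, 4/3) and (1, -1)
```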
Linear Transformations: V and W vector spaces over F.
Definition: A function T: V `->` W is a linear transformation if for any x, y in V and a in F, T(x+y) = T(x) + T(y) and T(ax) = a T(x).
Examples: T(x,y) = (3x+2y, x-3y) is a linear transformation T: R^2 -> R^2.
G(x,y) = (3x+2y, x^2 - 2y) is not a linear transformation:
G(1,1) = (5, -1), G(2,2) = (10, 0) ... 2*(1,1) = (2,2) but 2*(5,-1) is not (10,0)!
Notice that T(x,y) can be thought of as the result of a matrix multiplication
(x, y) ( 3   1 )
       ( 2  -3 )  =  (3x + 2y, x - 3y)
so the two key properties are a direct consequence of the properties of matrix multiplication: (v+w)A = vA + wA and (cv)A = c(vA).
For A a k by n matrix: T_A (acting on the left argument) and A_T (acting on the right) are linear transformations on F^k and F^n.
T_A(x) = xA for x in F^k, and A_T(y) = A [y]^tr for y in F^n, where [y]^tr indicates the entries of the vector treated as a one-column matrix.
The set of all linear transformations from V to W is denoted L(V,W).
V = U `oplus` W if and only if V = U+W and U`cap`W={0}.
Proof: (=>) Suppose v is in U `cap` W. Then v = v + 0 with v in U and 0 in W, and also v = 0 + v with 0 in U and v in W. Since V = U `oplus` W, such a representation is unique, so v = 0. Thus U `cap` W = {0}.
Note: This argument extends to V as the direct sum of any family of subspaces.
(<=) Suppose u is in U and w is in W and u + w = 0. Then u = -w, so u is also in W, and thus u is in U `cap` W = {0}. So u = 0 and then w = 0. Since V = U + W, we have by 1.8, V = U `oplus` W. EOP
2.19 If V is a f.d.v.s. and U1, ..., Un are subspaces with V = U1 + ... + Un and
dim(V) = dim(U1) + ... + dim(Un), then V = U1 `oplus` ... `oplus` Un.
Proof outline: Choose bases for U1, ..., Un and let B be the union of these sets. Since V = U1 + ... + Un, every vector in V is a linear combination of elements from B. But B has at most dim(U1) + ... + dim(Un) = dim(V) elements in it, so B is a basis for V. Now suppose 0 = u1 + u2 + ... + un for uk in Uk. Each ui can be expressed as a linear combination of the basis vectors for Ui, and since B is a basis and the entire linear combination is 0, each coefficient is 0. So u1 = ... = un = 0 and V = U1 `oplus` ... `oplus` Un. EOP
How do you find a basis for the SPAN(S) in Rn?
Outline of use of row operations...
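A sketch of that outline using sympy (our choice of tool; the set S is a made-up example): put the vectors of S as the rows of a matrix and row reduce; the nonzero rows of the reduced matrix form a basis for Span(S).

```python
import sympy as sp

# Rows of M are the vectors of S
M = sp.Matrix([[1, 2, 1],
               [2, 4, 2],
               [1, 0, 3]])

R, pivots = M.rref()
basis = [R.row(i) for i in range(len(pivots))]   # the nonzero rows of the reduced matrix
print(basis)   # here two vectors, so dim Span(S) = 2
```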
10-17
Back to linear transformations:
Consequences of the definition: If T:V->W is a linear transformation, then for any x and y in V and a in F,
(i) T(0) = 0.
(ii) T(-x) = -T(x)
(iii) T(x+ay) = T(x) + aT(y).
Quick test: If T:V->W is a function and (iii) holds for any x and y in V and a in F, then the function is a linear transformation.
D: Differentiation is a linear transformation: on polynomials, on ... Example: (D(f))(x) = f'(x), or D(f) = f'.
(D(f + `alpha` g))(x) = (f+`alpha`g)' (x) = f'(x) + `alpha`g'(x) = (f'+`alpha`g') (x) or
D(f+`alpha`g) = f'+ `alpha`g'= D(f) +`alpha` D(g).
Theorem: T: V -> W linear and B a basis give S(T): B -> W, the restriction of T to B.
Conversely, suppose S: B -> W; then there is a unique linear transformation T(S): V -> W such that S(T(S)) = S.
Proof: Let T(S)(v) be defined as follows: Suppose v is expressed (uniquely) as a linear combination of elements of B, i.e. v = a1u1 + a2u2 + ... + anun ... then let T(S)(v) = a1S(u1) + a2S(u2) + ... + anS(un).
This is well defined since the representation of v is unique. Left to show T is linear. Clearly... if u is in B then S(T(S))(u) = S(u).
Example: T: P(F) `->` P(F).... S(x^n) = `n x^{n-1}`.
Or another example: S(x^n) = `1/(n+1) x^{n+1}`.
Key Spaces related to T:V->W
Null Space of T= kernel of T = {v in V where T(v) = 0 [ in W] }= N(T) < V
Range of T = Image of T = T(V) = {w in W where w = T(v) for some v in V} <W.
10-20
Major result of the day: Suppose T:V->W and V is a finite dimensional v.s. over F. Then N(T) and R(T) are also finite dimensional and Dim(V) = Dim (N(T)) + Dim(R(T)).
Proof: Done in class; see text. Outline: start with a basis C for N(T) and extend this to a basis B for V. Show that T(B - C) is a basis for R(T). Visualize with Winplot?
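A numerical instance of the theorem (a sketch; the matrix and the convention T(v) = vA are our choices). With T(v) = vA for v in F^3, dim V = 3, and dim N(T) + dim R(T) should equal 3:

```python
import sympy as sp

# T(v) = v A maps F^3 to F^4; row 3 of A = row 1 + row 2
A = sp.Matrix([[1, 0, 2, 1],
               [0, 1, 1, 1],
               [1, 1, 3, 2]])

dim_range = A.rank()               # dim R(T) = dim of the row space = 2
dim_null = len(A.T.nullspace())    # dim N(T) = dim of the left null space of A = 1
print(dim_range + dim_null)        # 3 = dim V, as the theorem states
```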
10-22
Algebraic structure on L(V,W)
Definition of the sum and scalar multiplication:
T, U in L(V,W), a in F: (T+U)(v) = T(v) + U(v).
Fact: T+U is also linear.
(aT)(v) = a T(v).
Fact: aT is also linear.
Check: L(V,W) is a vector space over F.
Composition: T: V -> W and U: W -> Z both linear, then UT: V -> Z where UT(v) = U(T(v)) is linear.
Note: If T': V -> W and U': W -> Z are also linear, then U(T+T') = UT + UT' and (U+U')T = UT + U'T. If S: Z -> Y is also linear then S(UT) = (SU)T.
Key focus: L(V,V) , the set of linear "operators" on V.... also called L(V).
If T and U are in L(V) then UT is also in L(V). This is the key example of what is called a "Linear Algebra"... a vector space with an extra internal operation, usually described as the product, that satisfies the distributive and associative properties and has an "identity", namely Id(v) = v for all v `in V`. [Id T = T Id = T for all T `in L(V)`.]
If T `in` L(V), then `T^n in` L(V).
Example: V = `C^{oo}`(R). D: V `->` V is defined by D(f) = f '. Then `D^2 + 4D + 3Id` = (D + 3Id)(D + Id) = T `in` L(V). Finding N(T) is solving the "homogeneous linear differential equation" f ''(x) + 4f '(x) + 3f (x) = 0.
10-24
Linear Transformations and Bases
We proved that if V and W are finite dimensional then so is L(V,W) and dim(L(V,W)) = dim(V) Dim(W).
We did this using bases for V and W to find a basis for L(V,W). That basis for L(V,W) also established a function from L(V,W) to the matrices that is a linear transformation! More details will be supplied for this lecture later.
Matrices and Linear transformations.
Footnote on notation for matrices: If the basis for V is B and for W is C and T: V -> W,
the matrix of T with respect to those bases can be denoted M_B^C(T). Note: this follows a convention on the representation of a transformation.
The matrix for a vector v is denoted M_B(v). If we treat this as a row vector we have M_C(T(v)) = M_B(v) M_B^C(T).
This can be transposed, using column vectors for the matrices of the vectors, and with this transposed view we have:
M_C(T(v)) = M_B^C(T) M_B(v)
The function M : L(V,W) -> Mat (m,n; F) is a linear transformation.
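A concrete instance of this convention (a sketch of ours): take T = D, differentiation from P_3(R) to P_2(R), with bases B = (1, x, x^2, x^3) and C = (1, x, x^2); column j of the matrix holds the C-coordinates of D applied to the j-th vector of B.

```python
import numpy as np

# Matrix of D with respect to B = (1, x, x^2, x^3) and C = (1, x, x^2):
# D(1) = 0, D(x) = 1, D(x^2) = 2x, D(x^3) = 3x^2 give the columns below.
M = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

p = np.array([2, 5, 0, -1])   # B-coordinates of p(x) = 2 + 5x - x^3
print(M @ p)                  # [ 5  0 -3], the C-coordinates of p'(x) = 5 - 3x^2
```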
10-27
Recall definition of "injective" or "1:1" function.
Recall definition of "surjective" or "onto" function.
Theorem: T is 1:1 (injective) if and only if N(T) = {0}
Proof: (=>) Suppose T is 1:1. We know that T(0) = 0, so if T(v) = 0 = T(0), then v = 0. Thus 0 is the only element of N(T), i.e. N(T) = {0}.
(<=) Suppose N(T) = {0}. If T(v) = T(w) then T(v-w) = T(v) - T(w) = 0, so v - w is in N(T)... that must mean that v - w = 0, so v = w and T is 1:1.
Theorem: T is onto if and only if the Range of T = W.
Theorem: T is onto if and only if for any (some) basis, B, of V, Span(T(B)) = W.
Theorem: If V and W are finite dimensional v.s. / F, dimV = dim W, T : V `->` W is linear, then T is 1:1 if and only if T is onto.
Proof: We know that dim V = dim N(T) + dim R(T).
=> If T is 1:1, then dim N(T) = 0, so dim V = dim R(T) . Thus dim R(T) = dim W and T is onto.
<= If T is onto, then dimR(T) = dim W. So dim N(T) = 0 and thus N(T) = {0} and T is 1:1.
The importance of the null space of T, N(T), lies in understanding what T does in general.
Example 1. D:P(R) -> P(R)... D(f) = f'. Then N(D) = { f: f(x) = C for some constant C.} [from calculus 109!]
Notice: If f'(x) = g'(x) then f(x) = g(x) + C for some C.
Proof: consider D(f - g) = D(f) - D(g) = 0, so f - g is in N(D), i.e. f(x) - g(x) = C for some constant C.
Example 2: Solving a system of homogeneous linear equations. This was connected to finding the null space of a linear transformation associated with a matrix. Then what about a non-homogeneous system with the same matrix? Result: If z is a solution of the non-homogeneous system of linear equations and z' is another solution, then z' = z + n where n is a solution to the homogeneous system.
General Proposition: T: V -> W. If b is a vector in W and a is in V with T(a) = b, then T^{-1}({b}) = {v in V: v = a + n where n is in N(T)} = a + N(T).
Comment: a + N(T) is called the coset of a mod N(T)...these are analogous to lines in R2. More on this later in the course.
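A small computational sketch of this proposition (the matrix and right-hand side are our example, written in the column-vector convention Ax = b): the full solution set of a consistent non-homogeneous system is one particular solution plus the null space.

```python
import sympy as sp

A = sp.Matrix([[1, 2, 1],
               [2, 4, 3]])
b = sp.Matrix([1, 3])
x1, x2, x3 = sp.symbols('x1 x2 x3')

# every solution of the non-homogeneous system A x = b ...
print(sp.linsolve((A, b), [x1, x2, x3]))   # {(-2*x2, x2, 1)}: the particular solution (0, 0, 1) plus N(A)
# ... where N(A), the solution set of the homogeneous system, is spanned by (-2, 1, 0)
print(A.nullspace())
```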
Suppose T is a linear transformation and L is the line {(x,y) = (a,b) + t(u,v)}:
Let T(L) = L' = {(x', y'): (x', y') = T(x,y) for (x,y) in L}.
For (x,y) = (a,b) + t(u,v) on L, T(x,y) = T(a,b) + t T(u,v).
If T(u,v) = (0,0) then L' = T(L) = {T(a,b)}.
If not, then L' is also a line through T(a,b) in the direction of T(u,v).
[View this in winplot?]
The Division Algorithm, [proof?]