
Part IA — Vectors and Matrices

Based on lectures by N. Peake

Notes taken by Dexter Chua

Michaelmas 2014

These notes are not endorsed by the lecturers, and I have modified them (often significantly) after lectures. They are nowhere near accurate representations of what was actually lectured, and in particular, all errors are almost surely mine.

Complex numbers
Review of complex numbers, including complex conjugate, inverse, modulus, argument and Argand diagram. Informal treatment of complex logarithm, $n$-th roots and complex powers. De Moivre's theorem. [2]

Vectors
Review of elementary algebra of vectors in $\mathbb{R}^3$, including scalar product. Brief discussion of vectors in $\mathbb{R}^n$ and $\mathbb{C}^n$; scalar product and the Cauchy-Schwarz inequality. Concepts of linear span, linear independence, subspaces, basis and dimension. Suffix notation: including summation convention, $\delta_{ij}$ and $\varepsilon_{ijk}$. Vector product and triple product: definition and geometrical interpretation. Solution of linear vector equations. Applications of vectors to geometry, including equations of lines, planes and spheres. [5]

Matrices
Elementary algebra of $3 \times 3$ matrices, including determinants. Extension to $n \times n$ complex matrices. Trace, determinant, non-singular matrices and inverses. Matrices as linear transformations; examples of geometrical actions including rotations, reflections, dilations, shears; kernel and image. [4]
Simultaneous linear equations: matrix formulation; existence and uniqueness of solutions, geometric interpretation; Gaussian elimination. [3]
Symmetric, anti-symmetric, orthogonal, hermitian and unitary matrices. Decomposition of a general matrix into isotropic, symmetric trace-free and antisymmetric parts. [1]

Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors; geometric significance. [2]
Proof that the eigenvalues of a hermitian matrix are real, and that distinct eigenvalues give an orthogonal basis of eigenvectors. The effect of a general change of basis (similarity transformations). Diagonalization of general matrices: sufficient conditions; examples of matrices that cannot be diagonalized. Canonical forms for $2 \times 2$ matrices. [5]
Discussion of quadratic forms, including change of basis. Classification of conics, cartesian and polar forms. [1]
Rotation matrices and Lorentz transformations as transformation groups. [1]

Contents

  • 0 Introduction
  • 1 Complex numbers
    • 1.1 Basic properties
    • 1.2 Complex exponential function
    • 1.3 Roots of unity
    • 1.4 Complex logarithm and power
    • 1.5 De Moivre's theorem
    • 1.6 Lines and circles in $\mathbb{C}$
  • 2 Vectors
    • 2.1 Definition and basic properties
    • 2.2 Scalar product
      • 2.2.1 Geometric picture ($\mathbb{R}^2$ and $\mathbb{R}^3$ only)
      • 2.2.2 General algebraic definition
    • 2.3 Cauchy-Schwarz inequality
    • 2.4 Vector product
    • 2.5 Scalar triple product
    • 2.6 Spanning sets and bases
      • 2.6.1 2D space
      • 2.6.2 3D space
      • 2.6.3 $\mathbb{R}^n$ space
      • 2.6.4 $\mathbb{C}^n$ space
    • 2.7 Vector subspaces
    • 2.8 Suffix notation
    • 2.9 Geometry
      • 2.9.1 Lines
      • 2.9.2 Plane
    • 2.10 Vector equations
  • 3 Linear maps
    • 3.1 Examples
      • 3.1.1 Rotation in $\mathbb{R}^3$
      • 3.1.2 Reflection in $\mathbb{R}^3$
    • 3.2 Linear Maps
    • 3.3 Rank and nullity
    • 3.4 Matrices
      • 3.4.1 Examples
      • 3.4.2 Matrix Algebra
      • 3.4.3 Decomposition of an $n \times n$ matrix
      • 3.4.4 Matrix inverse
    • 3.5 Determinants
      • 3.5.1 Permutations
      • 3.5.2 Properties of determinants
      • 3.5.3 Minors and Cofactors


0 Introduction

Vectors and matrices are the language in which a lot of mathematics is written. In physics, many quantities such as position and momentum are expressed as vectors. Heisenberg also formulated quantum mechanics in terms of vectors and matrices. In statistics, one might pack the results of all experiments into a single vector, and work with one large vector instead of many small quantities. In group theory, matrices are used to represent the symmetries of space (as well as many other groups).

So what is a vector? Vectors are very general objects, and can in theory represent very complex objects. However, in this course, our focus is on vectors in $\mathbb{R}^n$ or $\mathbb{C}^n$. We can think of each of these as an array of $n$ real or complex numbers. For example, $(1, 6, 4)$ is a vector in $\mathbb{R}^3$. These vectors are added in the obvious way; for example, $(1, 6, 4) + (3, 5, 2) = (4, 11, 6)$. We can also multiply vectors by numbers, say $2(1, 6, 4) = (2, 12, 8)$. Often, these vectors represent points in an $n$-dimensional space.

Matrices, on the other hand, represent functions between vectors, i.e. functions that take in a vector and output another vector. These, however, are not arbitrary functions. Instead, matrices represent linear functions: functions that satisfy $f(\lambda\mathbf{x} + \mu\mathbf{y}) = \lambda f(\mathbf{x}) + \mu f(\mathbf{y})$ for arbitrary numbers $\lambda, \mu$ and vectors $\mathbf{x}, \mathbf{y}$. It is important to note that the function $\mathbf{x} \mapsto \mathbf{x} + \mathbf{c}$ for some constant vector $\mathbf{c}$ is not linear according to this definition, even though it might look linear.

It turns out that each linear function from $\mathbb{R}^n$ to $\mathbb{R}^m$ can be represented uniquely by an $m \times n$ array of numbers, which is what we call the matrix. Expressing a linear function as a matrix allows us to conveniently study many of its properties, which is why we usually talk about matrices instead of the function itself.
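To make this concrete, here is a small numpy sketch (my own illustration, not part of the lectures): it performs the vector operations above, checks the linearity condition for a matrix map, and shows that translation by a constant fails it.

```python
import numpy as np

x = np.array([1, 6, 4])
y = np.array([3, 5, 2])
print(x + y)    # [ 4 11  6] -- componentwise addition
print(2 * x)    # [ 2 12  8] -- scalar multiplication

# A matrix A represents a linear map f(v) = A @ v (here an arbitrary 2x3 example).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # a map from R^3 to R^2
lam, mu = 2.0, -1.5
print(np.allclose(A @ (lam*x + mu*y), lam*(A @ x) + mu*(A @ y)))  # True

# Translation v -> v + c is NOT linear: it fails the same test.
c = np.array([1.0, 1.0, 1.0])
g = lambda v: v + c
print(np.allclose(g(lam*x + mu*y), lam*g(x) + mu*g(y)))  # False
```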


1 Complex numbers

In $\mathbb{R}$, not every polynomial equation has a solution. For example, there does not exist any $x$ such that $x^2 + 1 = 0$, since for any $x$, $x^2$ is non-negative, so $x^2 + 1$ can never be $0$. To solve this problem, we introduce the "number" $i$ that satisfies $i^2 = -1$. Then $i$ is a solution to the equation $x^2 + 1 = 0$. Similarly, $-i$ is also a solution to the equation.

We can add and multiply numbers with $i$. For example, we can obtain numbers such as $3 + i$ or $1 + 3i$. These numbers are known as complex numbers. It turns out that by adding this single number $i$, every polynomial equation will have a root. In fact, for an $n$th order polynomial equation, we will later see that there are always $n$ roots, if we account for multiplicity. We will go into the details in Chapter 5.

Apart from solving equations, complex numbers have a lot of rather important applications. For example, they are used in electronics to represent alternating currents, and form an integral part of the formulation of quantum mechanics.

1.1 Basic properties

Definition (Complex number). A complex number is a number $z \in \mathbb{C}$ of the form $z = a + ib$ with $a, b \in \mathbb{R}$, where $i^2 = -1$. We write $a = \operatorname{Re}(z)$ and $b = \operatorname{Im}(z)$.

We have
$$z_1 \pm z_2 = (a_1 + ib_1) \pm (a_2 + ib_2) = (a_1 \pm a_2) + i(b_1 \pm b_2)$$
$$z_1 z_2 = (a_1 + ib_1)(a_2 + ib_2) = (a_1 a_2 - b_1 b_2) + i(b_1 a_2 + a_1 b_2)$$
$$z^{-1} = \frac{1}{a + ib} = \frac{a - ib}{a^2 + b^2}$$

Definition (Complex conjugate). The complex conjugate of $z = a + ib$ is $a - ib$. It is written as $\bar{z}$ or $z^*$.

It is often helpful to visualize complex numbers in a diagram:

Definition (Argand diagram). An Argand diagram is a diagram in which a complex number $z = x + iy$ is represented by the vector $\mathbf{p} = \begin{pmatrix} x \\ y \end{pmatrix}$. Addition of complex numbers corresponds to vector addition, and $\bar{z}$ is the reflection of $z$ in the $x$-axis.

[Diagram: Argand diagram with axes $\operatorname{Re}$ and $\operatorname{Im}$, showing $z_1$, $z_2$, $\bar{z}_2$ and $z_1 + z_2$.]


1.2 Complex exponential function

The exponential is defined by the power series $\exp(z) = \sum_{n=0}^{\infty} \frac{z^n}{n!}$. To manipulate products of such series, we need to rearrange double sums along diagonals:

Lemma. $\displaystyle \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} a_{mn} = \sum_{r=0}^{\infty} \sum_{m=0}^{r} a_{r-m,m}$.

Proof.
$$\begin{aligned}
\sum_{n=0}^{\infty} \sum_{m=0}^{\infty} a_{mn} &= a_{00} + a_{01} + a_{02} + \cdots \\
&\quad + a_{10} + a_{11} + a_{12} + \cdots \\
&\quad + a_{20} + a_{21} + a_{22} + \cdots \\
&= (a_{00}) + (a_{10} + a_{01}) + (a_{20} + a_{11} + a_{02}) + \cdots \\
&= \sum_{r=0}^{\infty} \sum_{m=0}^{r} a_{r-m,m}
\end{aligned}$$

This is not exactly a rigorous proof, since we should not hand-wave about infinite sums so casually. In fact, we did not even show that the definition of $\exp(z)$ is well-defined for all numbers $z$, since the sum might diverge. All this will be done in the IA Analysis I course.

Theorem. $\exp(z_1)\exp(z_2) = \exp(z_1 + z_2)$

Proof.
$$\begin{aligned}
\exp(z_1)\exp(z_2) &= \sum_{n=0}^{\infty} \sum_{m=0}^{\infty} \frac{z_1^m}{m!}\,\frac{z_2^n}{n!} \\
&= \sum_{r=0}^{\infty} \sum_{m=0}^{r} \frac{z_1^{r-m}}{(r-m)!}\,\frac{z_2^m}{m!} \\
&= \sum_{r=0}^{\infty} \frac{1}{r!} \sum_{m=0}^{r} \frac{r!}{(r-m)!\,m!}\, z_1^{r-m} z_2^m \\
&= \sum_{r=0}^{\infty} \frac{(z_1 + z_2)^r}{r!}
\end{aligned}$$
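As an informal sanity check (not part of the original notes), the identity is easy to verify numerically with Python's cmath module:

```python
import cmath

# Two arbitrary complex numbers chosen for illustration.
z1, z2 = 1.2 + 0.7j, -0.3 + 2.1j
lhs = cmath.exp(z1) * cmath.exp(z2)
rhs = cmath.exp(z1 + z2)
print(abs(lhs - rhs) < 1e-12)  # True: exp(z1) exp(z2) = exp(z1 + z2)
```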

Again, to define the sine and cosine functions, instead of referring to "angles" (since it doesn't make much sense to refer to complex "angles"), we again use a series definition.

Definition (Sine and cosine functions). Define, for all $z \in \mathbb{C}$,
$$\sin z = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\, z^{2n+1} = z - \frac{1}{3!}z^3 + \frac{1}{5!}z^5 + \cdots$$
$$\cos z = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\, z^{2n} = 1 - \frac{1}{2!}z^2 + \frac{1}{4!}z^4 + \cdots$$

One very important result is the relationship between exp, sin and cos.

Theorem. $e^{iz} = \cos z + i\sin z$.

Alternatively, since $\sin(-z) = -\sin z$ and $\cos(-z) = \cos z$, we have
$$\cos z = \frac{e^{iz} + e^{-iz}}{2}, \qquad \sin z = \frac{e^{iz} - e^{-iz}}{2i}.$$


Proof.
$$\begin{aligned}
e^{iz} &= \sum_{n=0}^{\infty} \frac{i^n}{n!}\, z^n \\
&= \sum_{n=0}^{\infty} \frac{i^{2n}}{(2n)!}\, z^{2n} + \sum_{n=0}^{\infty} \frac{i^{2n+1}}{(2n+1)!}\, z^{2n+1} \\
&= \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\, z^{2n} + i \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\, z^{2n+1} \\
&= \cos z + i\sin z
\end{aligned}$$

Thus we can write $z = r(\cos\theta + i\sin\theta) = re^{i\theta}$.
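A quick numerical check of both facts (my own sketch, not from the notes), using cmath for the modulus and argument:

```python
import cmath

z = 0.8 - 1.3j
lhs = cmath.exp(1j * z)
rhs = cmath.cos(z) + 1j * cmath.sin(z)
print(abs(lhs - rhs) < 1e-12)   # True: e^{iz} = cos z + i sin z

# Polar form z = r e^{i theta}:
r, theta = abs(z), cmath.phase(z)
print(abs(z - r * cmath.exp(1j * theta)) < 1e-12)  # True
```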

1.3 Roots of unity

Definition (Roots of unity). The $n$th roots of unity are the roots of the equation $z^n = 1$ for $n \in \mathbb{N}$. Since this is a polynomial of order $n$, there are $n$ roots of unity. In fact, the $n$th roots of unity are $\exp\left(\frac{2\pi ik}{n}\right)$ for $k = 0, 1, 2, \ldots, n-1$.

Proposition. If $\omega = \exp\left(\frac{2\pi i}{n}\right)$, then $1 + \omega + \omega^2 + \cdots + \omega^{n-1} = 0$.

Proof. Two proofs are provided:

(i) Consider the equation $z^n = 1$. Its roots are precisely $1, \omega, \omega^2, \ldots, \omega^{n-1}$, and the sum of all roots is minus the coefficient of $z^{n-1}$. Since the coefficient of $z^{n-1}$ is $0$, the sum of all roots $= 1 + \omega + \omega^2 + \cdots + \omega^{n-1} = 0$.

(ii) Since $\omega^n - 1 = (\omega - 1)(1 + \omega + \cdots + \omega^{n-1})$ and $\omega \neq 1$, dividing by $\omega - 1$ gives $1 + \omega + \cdots + \omega^{n-1} = (\omega^n - 1)/(\omega - 1) = 0$, since $\omega^n = 1$.
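A one-line numerical confirmation (my own sketch, not from the notes), here for $n = 7$:

```python
import cmath

n = 7
omega = cmath.exp(2j * cmath.pi / n)
total = sum(omega**k for k in range(n))   # 1 + w + ... + w^{n-1}
print(abs(total) < 1e-12)                 # True: the sum vanishes
```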

1.4 Complex logarithm and power

Definition (Complex logarithm). The complex logarithm $w = \log z$ is a solution to $e^w = z$. Writing $z = re^{i\theta}$, we have $\log z = \log(re^{i\theta}) = \log r + i\theta$. This can be multi-valued for different values of $\theta$ and, as above, we make it single-valued by selecting the $\theta$ that satisfies $-\pi < \theta \leq \pi$.

Example. $\log 2i = \log 2 + i\frac{\pi}{2}$

Definition (Complex power). The complex power $z^\alpha$ for $z, \alpha \in \mathbb{C}$ is defined as $z^\alpha = e^{\alpha\log z}$. This, again, can be multi-valued, as $z^\alpha = e^{\alpha\log|z|}\, e^{i\alpha\theta}\, e^{2in\pi\alpha}$ (there are finitely many values if $\alpha \in \mathbb{Q}$, infinitely many otherwise). Nevertheless, we make $z^\alpha$ single-valued by insisting $-\pi < \theta \leq \pi$.
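Python's cmath happens to use exactly this principal branch, so the example and the definition can be checked directly (an illustrative sketch, not from the notes):

```python
import cmath

z = 2j
w = cmath.log(z)           # principal value: -pi < Im(w) <= pi
print(w)                   # (0.6931...+1.5707...j) = log 2 + i*pi/2
print(abs(cmath.exp(w) - z) < 1e-12)  # True: e^w = z

# Principal complex power z^alpha = e^{alpha log z}
alpha = 0.5 + 1j
print(abs(z**alpha - cmath.exp(alpha * cmath.log(z))) < 1e-12)  # True
```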

1.5 De Moivre's theorem

Theorem (De Moivre's theorem).
$$\cos n\theta + i\sin n\theta = (\cos\theta + i\sin\theta)^n.$$


1.6 Lines and circles in $\mathbb{C}$

For a circle with center $c \in \mathbb{C}$ and radius $\rho \in \mathbb{R}^+$, a point $z$ is on the circle iff its distance to $c$ is $\rho$, i.e. $|z - c| = \rho$. Recalling that $|z|^2 = z\bar{z}$, we obtain
$$|z - c| = \rho \iff |z - c|^2 = \rho^2 \iff (z - c)(\bar{z} - \bar{c}) = \rho^2 \iff z\bar{z} - \bar{c}z - c\bar{z} = \rho^2 - c\bar{c}$$

Theorem. The general equation of a circle with center $c \in \mathbb{C}$ and radius $\rho \in \mathbb{R}^+$ can be given by $z\bar{z} - \bar{c}z - c\bar{z} = \rho^2 - c\bar{c}$.
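As a quick check (my own sketch, not from the notes), a random point on a circle of chosen center and radius satisfies the equation:

```python
import cmath, random

c, rho = 1 + 2j, 3.0                   # arbitrary center and radius
theta = random.uniform(-cmath.pi, cmath.pi)
z = c + rho * cmath.exp(1j * theta)    # a point on the circle
lhs = z * z.conjugate() - c.conjugate() * z - c * z.conjugate()
rhs = rho**2 - c * c.conjugate()
print(abs(lhs - rhs) < 1e-9)           # True
```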


2 Vectors

We might have first learned vectors as arrays of numbers, and then defined addition and multiplication in terms of the individual numbers in the vector. This, however, is not what we are going to do here. The array of numbers is just a representation of the vector, instead of the vector itself. Here, we will define vectors in terms of what they are, and then the various operations will be defined axiomatically according to their properties.

2.1 Definition and basic properties

Definition (Vector space). A vector space over $\mathbb{R}$ or $\mathbb{C}$ is a collection of vectors $\mathbf{v} \in V$, together with two operations: addition of two vectors, and multiplication of a vector by a scalar (i.e. a number from $\mathbb{R}$ or $\mathbb{C}$, respectively). Vector addition has to satisfy the following axioms:

(i) $\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}$ (commutativity)

(ii) $(\mathbf{a} + \mathbf{b}) + \mathbf{c} = \mathbf{a} + (\mathbf{b} + \mathbf{c})$ (associativity)

(iii) There is a vector $\mathbf{0}$ such that $\mathbf{a} + \mathbf{0} = \mathbf{a}$. (identity)

(iv) For all vectors $\mathbf{a}$, there is a vector $-\mathbf{a}$ such that $\mathbf{a} + (-\mathbf{a}) = \mathbf{0}$. (inverse)

Scalar multiplication has to satisfy the following axioms:

(i) $\lambda(\mathbf{a} + \mathbf{b}) = \lambda\mathbf{a} + \lambda\mathbf{b}$.

(ii) $(\lambda + \mu)\mathbf{a} = \lambda\mathbf{a} + \mu\mathbf{a}$.

(iii) $\lambda(\mu\mathbf{a}) = (\lambda\mu)\mathbf{a}$.

(iv) $1\mathbf{a} = \mathbf{a}$.

Often, vectors have a length and direction. The length is denoted by $|\mathbf{v}|$. In this case, we can think of a vector as an "arrow" in space. Note that $\lambda\mathbf{a}$ is either parallel ($\lambda \geq 0$) or anti-parallel ($\lambda \leq 0$) to $\mathbf{a}$.

Definition (Unit vector). A unit vector is a vector with length $1$. We write a unit vector as $\hat{\mathbf{v}}$.

Example. $\mathbb{R}^n$ is a vector space with component-wise addition and scalar multiplication. Note that the vector space $\mathbb{R}$ is a line, but not all lines are vector spaces. For example, the line $x + y = 1$ is not a vector space, since it does not contain $\mathbf{0}$.

2.2 Scalar product

In a vector space, we can define the scalar product of two vectors, which returns a scalar (i.e. a real or complex number). We will first look at the usual scalar product defined for $\mathbb{R}^n$, and then define the scalar product axiomatically.


Example. Instead of the usual $\mathbb{R}^n$ vector space, we can consider the set of all real (integrable) functions as a vector space. We can define the following inner product:
$$\langle f \mid g \rangle = \int_0^1 f(x)\, g(x)\, \mathrm{d}x.$$

2.3 Cauchy-Schwarz inequality

Theorem (Cauchy-Schwarz inequality). For all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$,
$$|\mathbf{x} \cdot \mathbf{y}| \leq |\mathbf{x}||\mathbf{y}|.$$

Proof. Consider the expression $|\mathbf{x} - \lambda\mathbf{y}|^2$. We must have
$$\begin{aligned}
|\mathbf{x} - \lambda\mathbf{y}|^2 &\geq 0 \\
(\mathbf{x} - \lambda\mathbf{y}) \cdot (\mathbf{x} - \lambda\mathbf{y}) &\geq 0 \\
\lambda^2|\mathbf{y}|^2 - \lambda(2\mathbf{x} \cdot \mathbf{y}) + |\mathbf{x}|^2 &\geq 0.
\end{aligned}$$
Viewing this as a quadratic in $\lambda$, we see that the quadratic is non-negative and thus cannot have two distinct real roots. Thus the discriminant $\Delta \leq 0$. So
$$\begin{aligned}
4(\mathbf{x} \cdot \mathbf{y})^2 &\leq 4|\mathbf{y}|^2|\mathbf{x}|^2 \\
(\mathbf{x} \cdot \mathbf{y})^2 &\leq |\mathbf{x}|^2|\mathbf{y}|^2 \\
|\mathbf{x} \cdot \mathbf{y}| &\leq |\mathbf{x}||\mathbf{y}|.
\end{aligned}$$

Note that we proved this using only the axioms of the scalar product. So this result holds for all possible scalar products on any (real) vector space.

Example. Let $\mathbf{x} = (\alpha, \beta, \gamma)$ and $\mathbf{y} = (1, 1, 1)$. Then by the Cauchy-Schwarz inequality, we have
$$\alpha + \beta + \gamma \leq \sqrt{3}\sqrt{\alpha^2 + \beta^2 + \gamma^2}.$$
Squaring both sides and expanding, this gives
$$\alpha^2 + \beta^2 + \gamma^2 \geq \alpha\beta + \beta\gamma + \gamma\alpha,$$
with equality iff $\alpha = \beta = \gamma$.

Corollary (Triangle inequality).
$$|\mathbf{x} + \mathbf{y}| \leq |\mathbf{x}| + |\mathbf{y}|.$$

Proof.
$$|\mathbf{x} + \mathbf{y}|^2 = (\mathbf{x} + \mathbf{y}) \cdot (\mathbf{x} + \mathbf{y}) = |\mathbf{x}|^2 + 2\mathbf{x} \cdot \mathbf{y} + |\mathbf{y}|^2 \leq |\mathbf{x}|^2 + 2|\mathbf{x}||\mathbf{y}| + |\mathbf{y}|^2 = (|\mathbf{x}| + |\mathbf{y}|)^2.$$
So $|\mathbf{x} + \mathbf{y}| \leq |\mathbf{x}| + |\mathbf{y}|$.
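Both inequalities are easy to test numerically on random vectors (an illustrative sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
print(abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y))             # Cauchy-Schwarz
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))  # triangle inequality
```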


2.4 Vector product

Apart from the scalar product, we can also define the vector product. However, this is defined only on $\mathbb{R}^3$, not on vector spaces in general.

Definition (Vector/cross product). Consider $\mathbf{a}, \mathbf{b} \in \mathbb{R}^3$. Define the vector product
$$\mathbf{a} \times \mathbf{b} = |\mathbf{a}||\mathbf{b}|\sin\theta\, \hat{\mathbf{n}},$$
where $\hat{\mathbf{n}}$ is a unit vector perpendicular to both $\mathbf{a}$ and $\mathbf{b}$. Since there are two (opposite) unit vectors that are perpendicular to both of them, we pick $\hat{\mathbf{n}}$ to be the one that is perpendicular to $\mathbf{a}, \mathbf{b}$ in a right-handed sense.

[Diagram: vectors $\mathbf{a}$ and $\mathbf{b}$ spanning a plane, with $\mathbf{a} \times \mathbf{b}$ perpendicular to both in the right-handed sense.]

The vector product satisfies the following properties:

(i) $\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a}$.

(ii) $\mathbf{a} \times \mathbf{a} = \mathbf{0}$.

(iii) $\mathbf{a} \times \mathbf{b} = \mathbf{0} \Rightarrow \mathbf{a} = \lambda\mathbf{b}$ for some $\lambda \in \mathbb{R}$ (or $\mathbf{b} = \mathbf{0}$).

(iv) $\mathbf{a} \times (\lambda\mathbf{b}) = \lambda(\mathbf{a} \times \mathbf{b})$.

(v) $\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = \mathbf{a} \times \mathbf{b} + \mathbf{a} \times \mathbf{c}$.

If we have a triangle $OAB$, its area is given by
$$\frac{1}{2}\left|\overrightarrow{OA}\right|\left|\overrightarrow{OB}\right|\sin\theta = \frac{1}{2}\left|\overrightarrow{OA} \times \overrightarrow{OB}\right|.$$
We define the vector area as $\frac{1}{2}\overrightarrow{OA} \times \overrightarrow{OB}$, which is often a helpful notion when we want to do calculus with surfaces.

There is a convenient way of calculating vector products:

Proposition.
$$\mathbf{a} \times \mathbf{b} = (a_1\hat{\mathbf{i}} + a_2\hat{\mathbf{j}} + a_3\hat{\mathbf{k}}) \times (b_1\hat{\mathbf{i}} + b_2\hat{\mathbf{j}} + b_3\hat{\mathbf{k}}) = (a_2 b_3 - a_3 b_2)\hat{\mathbf{i}} + \cdots = \begin{vmatrix} \hat{\mathbf{i}} & \hat{\mathbf{j}} & \hat{\mathbf{k}} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}$$
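Expanding the determinant gives the component formula, which we can check against numpy's built-in cross product (an illustrative sketch, not from the notes):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
# Components from expanding the determinant row by row.
formula = np.array([a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0]])
print(np.allclose(np.cross(a, b), formula))   # True
print(np.allclose(a @ np.cross(a, b), 0.0))   # a x b is perpendicular to a
```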

2.5 Scalar triple product

Definition (Scalar triple product). The scalar triple product is defined as
$$[\mathbf{a}, \mathbf{b}, \mathbf{c}] = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).$$


2.6.2 3D space

We can extend the above definitions of spanning sets and linearly independent sets to $\mathbb{R}^3$. Here we have

Theorem. If $\mathbf{a}, \mathbf{b}, \mathbf{c} \in \mathbb{R}^3$ are non-coplanar, i.e. $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) \neq 0$, then they form a basis of $\mathbb{R}^3$.

Proof. For any $\mathbf{r}$, write $\mathbf{r} = \lambda\mathbf{a} + \mu\mathbf{b} + \nu\mathbf{c}$. Taking the scalar product with $\mathbf{b} \times \mathbf{c}$ on both sides, one obtains $\mathbf{r} \cdot (\mathbf{b} \times \mathbf{c}) = \lambda\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) + \mu\mathbf{b} \cdot (\mathbf{b} \times \mathbf{c}) + \nu\mathbf{c} \cdot (\mathbf{b} \times \mathbf{c}) = \lambda[\mathbf{a}, \mathbf{b}, \mathbf{c}]$. Thus $\lambda = [\mathbf{r}, \mathbf{b}, \mathbf{c}]/[\mathbf{a}, \mathbf{b}, \mathbf{c}]$. The values of $\mu$ and $\nu$ can be found similarly. Thus each $\mathbf{r}$ can be written as a linear combination of $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$. By the formula derived above, it follows that if $\alpha\mathbf{a} + \beta\mathbf{b} + \gamma\mathbf{c} = \mathbf{0}$, then $\alpha = \beta = \gamma = 0$. Thus they are linearly independent.

Note that while we came up with formulas for $\lambda$, $\mu$ and $\nu$, we did not actually prove that these coefficients indeed work. This is rather unsatisfactory. We could, of course, expand everything out and show that this indeed works, but in IB Linear Algebra, we will prove a much more general result, saying that if we have an $n$-dimensional space and a set of $n$ linearly independent vectors, then they form a basis. In $\mathbb{R}^3$, the standard basis is $\hat{\mathbf{i}}, \hat{\mathbf{j}}, \hat{\mathbf{k}}$, or $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$.
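We can at least verify the coefficient formulas numerically on a concrete non-coplanar triple (my own sketch; the vectors are arbitrary examples):

```python
import numpy as np

def triple(a, b, c):
    """Scalar triple product [a, b, c] = a . (b x c)."""
    return a @ np.cross(b, c)

a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 2.0, 1.0])
c = np.array([1.0, 1.0, 0.0])
assert abs(triple(a, b, c)) > 1e-12        # non-coplanar, so a basis

r = np.array([3.0, -1.0, 2.0])
lam = triple(r, b, c) / triple(a, b, c)    # lambda = [r,b,c]/[a,b,c]
mu  = triple(a, r, c) / triple(a, b, c)    # and similarly for mu, nu
nu  = triple(a, b, r) / triple(a, b, c)
print(np.allclose(lam*a + mu*b + nu*c, r))  # True: the coefficients work
```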

2.6.3 $\mathbb{R}^n$ space

In general, we can define

Definition (Linearly independent vectors). A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \ldots, \mathbf{v}_m\}$ is linearly independent if
$$\sum_{i=1}^{m} \lambda_i \mathbf{v}_i = \mathbf{0} \Rightarrow (\forall i)\ \lambda_i = 0.$$

Definition (Spanning set). A set of vectors $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3, \ldots, \mathbf{u}_m\} \subseteq \mathbb{R}^n$ is a spanning set of $\mathbb{R}^n$ if
$$(\forall \mathbf{x} \in \mathbb{R}^n)(\exists \lambda_i)\ \sum_{i=1}^{m} \lambda_i \mathbf{u}_i = \mathbf{x}$$

Definition (Basis vectors). A basis of $\mathbb{R}^n$ is a linearly independent spanning set. The standard basis of $\mathbb{R}^n$ is $\mathbf{e}_1 = (1, 0, 0, \ldots, 0)$, $\mathbf{e}_2 = (0, 1, 0, \ldots, 0)$, $\ldots$, $\mathbf{e}_n = (0, 0, 0, \ldots, 1)$.

Definition (Orthonormal basis). A basis $\{\mathbf{e}_i\}$ is orthonormal if $\mathbf{e}_i \cdot \mathbf{e}_j = 0$ for $i \neq j$ and $\mathbf{e}_i \cdot \mathbf{e}_i = 1$ for all $i$. Using the Kronecker delta symbol, which we will define later, we can write this condition as $\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$.

Definition (Dimension of vector space). The dimension of a vector space is the number of vectors in its basis. (Exercise: show that this is well-defined.)

We usually denote the components of a vector $\mathbf{x}$ by $x_i$. So we have $\mathbf{x} = (x_1, x_2, \ldots, x_n)$.


Definition (Scalar product). The scalar product of $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ is defined as
$$\mathbf{x} \cdot \mathbf{y} = \sum_{i=1}^{n} x_i y_i.$$
The reader should check that this definition coincides with the $|\mathbf{x}||\mathbf{y}|\cos\theta$ definition in the case of $\mathbb{R}^2$ and $\mathbb{R}^3$.

2.6.4 $\mathbb{C}^n$ space

$\mathbb{C}^n$ is very similar to $\mathbb{R}^n$, except that we have complex numbers. As a result, we need a different definition of the scalar product. If we still defined $\mathbf{u} \cdot \mathbf{v} = \sum u_i v_i$, then letting $\mathbf{u} = (0, i)$ would give $\mathbf{u} \cdot \mathbf{u} = -1 < 0$. This would be bad if we want to use the scalar product to define a norm.

Definition ($\mathbb{C}^n$). $\mathbb{C}^n = \{(z_1, z_2, \ldots, z_n) : z_i \in \mathbb{C}\}$. It has the same standard basis as $\mathbb{R}^n$, but the scalar product is defined differently. For $\mathbf{u}, \mathbf{v} \in \mathbb{C}^n$,
$$\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^{n} u_i^* v_i.$$
The scalar product has the following properties:

(i) $\mathbf{u} \cdot \mathbf{v} = (\mathbf{v} \cdot \mathbf{u})^*$

(ii) $\mathbf{u} \cdot (\lambda\mathbf{v} + \mu\mathbf{w}) = \lambda(\mathbf{u} \cdot \mathbf{v}) + \mu(\mathbf{u} \cdot \mathbf{w})$

(iii) $\mathbf{u} \cdot \mathbf{u} \geq 0$ and $\mathbf{u} \cdot \mathbf{u} = 0$ iff $\mathbf{u} = \mathbf{0}$

Instead of linearity in the first argument, here we have $(\lambda\mathbf{u} + \mu\mathbf{v}) \cdot \mathbf{w} = \lambda^*\mathbf{u} \cdot \mathbf{w} + \mu^*\mathbf{v} \cdot \mathbf{w}$.
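In numpy, np.vdot implements exactly this conjugating scalar product, and the $(0, i)$ example above shows why the naive dot product fails (an illustrative sketch, not from the notes):

```python
import numpy as np

u = np.array([0.0, 1j])
print(u @ u)           # (-1+0j): the naive dot product fails as a norm
print(np.vdot(u, u))   # (1+0j):  np.vdot conjugates its first argument

v = np.array([1.0 + 1j, 2.0 - 1j])
print(np.isclose(np.vdot(u, v), np.conj(np.vdot(v, u))))  # True: u.v = (v.u)*
```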

Example.
$$\begin{aligned}
\sum_{k=1}^{4} (-i)^k \left|\mathbf{x} + i^k \mathbf{y}\right|^2 &= \sum_{k=1}^{4} (-i)^k \langle \mathbf{x} + i^k \mathbf{y} \mid \mathbf{x} + i^k \mathbf{y} \rangle \\
&= \sum_{k=1}^{4} (-i)^k \left( \langle \mathbf{x} + i^k \mathbf{y} \mid \mathbf{x} \rangle + i^k \langle \mathbf{x} + i^k \mathbf{y} \mid \mathbf{y} \rangle \right) \\
&= \sum_{k=1}^{4} (-i)^k \left( \langle \mathbf{x} \mid \mathbf{x} \rangle + (-i)^k \langle \mathbf{y} \mid \mathbf{x} \rangle + i^k \langle \mathbf{x} \mid \mathbf{y} \rangle + i^k(-i)^k \langle \mathbf{y} \mid \mathbf{y} \rangle \right) \\
&= \sum_{k=1}^{4} \left[ (-i)^k \left(|\mathbf{x}|^2 + |\mathbf{y}|^2\right) + (-1)^k \langle \mathbf{y} \mid \mathbf{x} \rangle + \langle \mathbf{x} \mid \mathbf{y} \rangle \right] \\
&= \left(|\mathbf{x}|^2 + |\mathbf{y}|^2\right) \sum_{k=1}^{4} (-i)^k + \langle \mathbf{y} \mid \mathbf{x} \rangle \sum_{k=1}^{4} (-1)^k + \langle \mathbf{x} \mid \mathbf{y} \rangle \sum_{k=1}^{4} 1 \\
&= 4 \langle \mathbf{x} \mid \mathbf{y} \rangle,
\end{aligned}$$
since $\sum_{k=1}^{4} (-i)^k = 0$ and $\sum_{k=1}^{4} (-1)^k = 0$.
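This polarization identity can be checked numerically on random complex vectors (an illustrative sketch, not from the notes; np.vdot matches the conjugate-in-the-first-slot convention used here):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = lambda a, b: np.vdot(a, b)        # <a|b>, conjugate-linear in a
lhs = sum((-1j)**k * np.linalg.norm(x + 1j**k * y)**2 for k in range(1, 5))
print(np.isclose(lhs, 4 * inner(x, y)))   # True
```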

We can prove the Cauchy-Schwarz inequality for complex vector spaces using the same proof as in the real case, except that this time we have to first multiply $\mathbf{y}$ by some $e^{i\theta}$ so that $\mathbf{x} \cdot (e^{i\theta}\mathbf{y})$ is a real number. The factor of $e^{i\theta}$ will drop out at the end when we take the modulus signs.

2.7 Vector subspaces

Definition (Vector subspace). A vector subspace of a vector space $V$ is a subset of $V$ that is also a vector space under the same operations. Both $V$ and $\{\mathbf{0}\}$ are subspaces of $V$. All others are proper subspaces.

A useful criterion is that a subset $U \subseteq V$ is a subspace iff

(i) $\mathbf{x}, \mathbf{y} \in U \Rightarrow (\mathbf{x} + \mathbf{y}) \in U$.

(ii) $\mathbf{x} \in U \Rightarrow (\lambda\mathbf{x}) \in U$ for all scalars $\lambda$.

(iii) $\mathbf{0} \in U$.

2.8 Suffix notation

Suffix notation lets us write vector equations component by component. The rules for how suffixes may appear in a term are:

(i) Suffix appears once in a term: free suffix

(ii) Suffix appears twice in a term: dummy suffix and is summed over

(iii) Suffix appears three times or more: WRONG!

Example. $[(\mathbf{a} \cdot \mathbf{b})\mathbf{c} - (\mathbf{a} \cdot \mathbf{c})\mathbf{b}]_i = a_j b_j c_i - a_j c_j b_i$, with summation over $j$ understood.

It is possible for an object to have more than one index. These objects are known as tensors, which will be studied in depth in the IA Vector Calculus course. Here we will define two important tensors:

Definition (Kronecker delta).
$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

We have
$$\begin{pmatrix} \delta_{11} & \delta_{12} & \delta_{13} \\ \delta_{21} & \delta_{22} & \delta_{23} \\ \delta_{31} & \delta_{32} & \delta_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I.$$

So the Kronecker delta represents the identity matrix.

Example.

(i) $a_i \delta_{i1} = a_1$. In general, $a_i \delta_{ij} = a_j$ ($i$ is dummy, $j$ is free).

(ii) $\delta_{ij}\delta_{jk} = \delta_{ik}$

(iii) $\delta_{ii} = n$ if we are in $\mathbb{R}^n$.

(iv) $a_p \delta_{pq} b_q = a_p b_p$, with $p, q$ both dummy suffices and summed over.

Definition (Alternating symbol $\varepsilon_{ijk}$). Consider rearrangements of $1, 2, 3$. We can divide them into even and odd permutations. Even permutations include $(1, 2, 3)$, $(2, 3, 1)$ and $(3, 1, 2)$. These are permutations obtained by performing two (or no) swaps of the elements of $(1, 2, 3)$. (Alternatively, it is any "rotation" of $(1, 2, 3)$.) The odd permutations are $(2, 1, 3)$, $(1, 3, 2)$ and $(3, 2, 1)$. They are the permutations obtained by one swap only. Define
$$\varepsilon_{ijk} = \begin{cases} +1 & ijk \text{ is an even permutation} \\ -1 & ijk \text{ is an odd permutation} \\ 0 & \text{otherwise (i.e. repeated suffices)} \end{cases}$$
$\varepsilon_{ijk}$ has 3 free suffices. We have $\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = +1$ and $\varepsilon_{213} = \varepsilon_{132} = \varepsilon_{321} = -1$, while $\varepsilon_{112} = \varepsilon_{111} = \cdots = 0$.

We have

(i) $\varepsilon_{ijk}\delta_{jk} = \varepsilon_{ijj} = 0$

(ii) If $a_{jk} = a_{kj}$ (i.e. $a$ is symmetric), then $\varepsilon_{ijk}a_{jk} = \varepsilon_{ijk}a_{kj} = -\varepsilon_{ikj}a_{kj}$. Since $\varepsilon_{ijk}a_{jk} = \varepsilon_{ikj}a_{kj}$ (we simply renamed the dummy suffices), we have $\varepsilon_{ijk}a_{jk} = 0$.


Proposition. $(\mathbf{a} \times \mathbf{b})_i = \varepsilon_{ijk} a_j b_k$

Proof. By direct expansion of the formula.

Theorem. $\varepsilon_{ijk}\varepsilon_{ipq} = \delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}$

Proof. Proceed by exhaustion:
$$\text{RHS} = \begin{cases} +1 & \text{if } j = p \text{ and } k = q \\ -1 & \text{if } j = q \text{ and } k = p \\ 0 & \text{otherwise} \end{cases}$$
On the LHS, summing over $i$, the only non-zero terms are those with $j, k \neq i$ and $p, q \neq i$. If $j = p$ and $k = q$, the LHS is $(-1)^2$ or $(+1)^2 = 1$. If $j = q$ and $k = p$, the LHS is $(+1)(-1)$ or $(-1)(+1) = -1$. All other possibilities result in $0$.
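The exhaustion can also be delegated to a machine (an illustrative sketch, not from the notes): build the $\varepsilon$ tensor explicitly and check all $3^4$ index combinations.

```python
import numpy as np
from itertools import product

eps = np.zeros((3, 3, 3))
eps[0,1,2] = eps[1,2,0] = eps[2,0,1] = +1   # even permutations
eps[1,0,2] = eps[0,2,1] = eps[2,1,0] = -1   # odd permutations
delta = np.eye(3)

ok = all(
    np.isclose(sum(eps[i,j,k] * eps[i,p,q] for i in range(3)),
               delta[j,p]*delta[k,q] - delta[j,q]*delta[k,p])
    for j, k, p, q in product(range(3), repeat=4)
)
print(ok)  # True: eps_ijk eps_ipq = delta_jp delta_kq - delta_jq delta_kp
```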

Equally, we have $\varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}$ and $\varepsilon_{ijk}\varepsilon_{pjq} = \delta_{ip}\delta_{kq} - \delta_{iq}\delta_{kp}$.

Proposition. $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a})$

Proof. In suffix notation, we have
$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = a_i(\mathbf{b} \times \mathbf{c})_i = \varepsilon_{ijk} b_j c_k a_i = \varepsilon_{jki} b_j c_k a_i = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a})$$

Theorem (Vector triple product).
$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\mathbf{c}$$

Proof.
$$\begin{aligned}
[\mathbf{a} \times (\mathbf{b} \times \mathbf{c})]_i &= \varepsilon_{ijk} a_j (\mathbf{b} \times \mathbf{c})_k = \varepsilon_{ijk}\varepsilon_{kpq} a_j b_p c_q = \varepsilon_{ijk}\varepsilon_{pqk} a_j b_p c_q \\
&= (\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}) a_j b_p c_q = a_j b_i c_j - a_j c_i b_j = (\mathbf{a} \cdot \mathbf{c}) b_i - (\mathbf{a} \cdot \mathbf{b}) c_i
\end{aligned}$$

Similarly, $(\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = (\mathbf{a} \cdot \mathbf{c})\mathbf{b} - (\mathbf{b} \cdot \mathbf{c})\mathbf{a}$.
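Both triple product identities are easy to spot-check on random vectors (an illustrative sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in R^3
lhs = np.cross(a, np.cross(b, c))
rhs = (a @ c) * b - (a @ b) * c
print(np.allclose(lhs, rhs))     # a x (b x c) = (a.c)b - (a.b)c

lhs2 = np.cross(np.cross(a, b), c)
rhs2 = (a @ c) * b - (b @ c) * a
print(np.allclose(lhs2, rhs2))   # (a x b) x c = (a.c)b - (b.c)a
```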

Spherical trigonometry

Proposition. $(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{a} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{a})(\mathbf{b} \cdot \mathbf{c}) - (\mathbf{a} \cdot \mathbf{b})(\mathbf{a} \cdot \mathbf{c})$.

Proof.
$$\begin{aligned}
\text{LHS} &= (\mathbf{a} \times \mathbf{b})_i (\mathbf{a} \times \mathbf{c})_i = \varepsilon_{ijk} a_j b_k\, \varepsilon_{ipq} a_p c_q = (\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}) a_j b_k a_p c_q \\
&= a_j b_k a_j c_k - a_j b_k a_k c_j = (\mathbf{a} \cdot \mathbf{a})(\mathbf{b} \cdot \mathbf{c}) - (\mathbf{a} \cdot \mathbf{b})(\mathbf{a} \cdot \mathbf{c})
\end{aligned}$$
