QR decomposition
A = QR. Left-multiplying both sides by Qᵀ gives QᵀA = QᵀQR = R, since QᵀQ = I.
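As a quick check, here is a minimal numpy sketch; the matrix values are made up purely for illustration:

```python
import numpy as np

# Hypothetical full-column-rank matrix, chosen just for the demo.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

Q, R = np.linalg.qr(A)          # reduced QR: Q is 3x2, R is 2x2

# Left-multiplying A by Q-transpose recovers R, since Q.T @ Q = I.
print(np.allclose(Q.T @ A, R))  # True
```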
The Gram–Schmidt process is a method for orthonormalising a set of vectors in an inner product space, producing a new set that spans the same k-dimensional subspace as the original set of vectors. When the matrix is full column rank, it yields the QR decomposition, which is composed of an orthogonal matrix Q and an upper-triangular matrix R.
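Below is a minimal sketch of classical Gram–Schmidt producing the Q and R factors. The example matrix is a made-up full-column-rank input; in practice the modified variant is preferred for numerical stability:

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt QR of a full-column-rank matrix A.
    A minimal sketch; modified Gram-Schmidt is numerically safer."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projection coefficient onto q_i
            v -= R[i, j] * Q[:, i]        # subtract the component along q_i
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]             # normalise to unit length
    return Q, R

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])  # made-up input
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))  # True: the factors reproduce A
```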
An orthogonal matrix is indicated with the letter Q. Each of its columns has a magnitude (norm) of 1, and its columns are pairwise orthogonal (have a 0 dot product). Compactly, the dot products ⟨Qi, Qj⟩ are the entries of QᵀQ = I, which means that if column Qi is the same as column Qj the dot product is 1, and otherwise it is 0.
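A short numpy sketch of these two properties; the Q here is built from a random matrix via QR purely for illustration:

```python
import numpy as np

# Build an orthogonal Q from a random matrix via QR (demo assumption).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

print(np.allclose(Q.T @ Q, np.eye(4)))            # columns are orthonormal
print(np.allclose(np.linalg.norm(Q, axis=0), 1))  # each column has norm 1
```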
w is a vector and v is its reference vector. We decompose w into two vectors, w||v and w⊥v, with w = w⊥v + w||v: w⊥v is the component of w perpendicular to v, and w||v is the component parallel to v. Since w⊥v = w − w||v, it suffices to find the parallel component, which the projection formula gives as w||v = (⟨w,v⟩ / ⟨v,v⟩)·v.
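A minimal numpy sketch of the decomposition, with made-up values for w and v:

```python
import numpy as np

w = np.array([2.0, 3.0])    # hypothetical vector to decompose
v = np.array([4.0, 0.0])    # hypothetical reference vector

w_par = (w @ v) / (v @ v) * v     # projection formula: <w,v>/<v,v> * v
w_perp = w - w_par                # the remainder is perpendicular to v

print(w_par, w_perp)              # [2. 0.] [0. 3.]
print(np.isclose(w_perp @ v, 0))  # True: the components are orthogonal
```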
Aᵀ(b − Ax) = 0 (the zero vector, because b − Ax is a vector). The above expands to Aᵀb − AᵀAx = 0, i.e. AᵀAx = Aᵀb. If the matrix has full column rank (or, of course, full rank), AᵀA is invertible, so we can left-multiply by (AᵀA)⁻¹ to get the identity matrix: (AᵀA)⁻¹AᵀAx = (AᵀA)⁻¹Aᵀb, which leaves us with x = (AᵀA)⁻¹Aᵀb.
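A minimal numpy sketch of solving the normal equations; A and b are made-up values, and np.linalg.lstsq is used only as a cross-check:

```python
import numpy as np

# Hypothetical overdetermined system Ax = b (more equations than unknowns).
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: solve (A^T A) x = A^T b, valid when A has full column rank.
x = np.linalg.solve(A.T @ A, A.T @ b)

# The residual b - Ax is orthogonal to the column space of A.
print(np.allclose(A.T @ (b - A @ x), 0))                     # True
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # matches lstsq
```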
A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is 0. The easiest way to compute the inverse of a matrix by hand uses RREF: augment the matrix with its identity, [A | I], and row-reduce; once the left half becomes the identity, the right half is A⁻¹.
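Below is a minimal sketch of this augment-and-row-reduce idea (Gauss–Jordan elimination, with partial pivoting added for stability); the example matrix is made up:

```python
import numpy as np

def inverse_via_rref(A):
    """Invert A by row-reducing the augmented matrix [A | I].
    A minimal sketch; raises on singular input."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # augment with the identity
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))  # partial pivoting
        if np.isclose(M[pivot, col], 0):
            raise ValueError("matrix is singular (determinant is 0)")
        M[[col, pivot]] = M[[pivot, col]]          # swap the pivot row up
        M[col] /= M[col, col]                      # scale pivot entry to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]     # clear the rest of the column
    return M[:, n:]   # the right half is now the inverse

A = np.array([[2.0, 1.0], [5.0, 3.0]])             # made-up invertible matrix
print(np.allclose(inverse_via_rref(A) @ A, np.eye(2)))  # True
```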
Gaussian elimination is one of the main algorithms for solving systems of linear equations. An equation in one unknown is called a point equation and has only one solution. An equation in two unknowns resolves to a line, is called a linear equation, and has an infinite number of solutions.
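A minimal sketch of forward elimination followed by back substitution, assuming a square system with nonzero pivots; the two example lines are made up:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination then back substitution.
    A minimal sketch assuming nonzero pivots (no row swaps)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for col in range(n):                      # forward elimination
        for row in range(col + 1, n):
            factor = A[row, col] / A[col, col]
            A[row] -= factor * A[col]
            b[row] -= factor * b[col]
    x = np.zeros(n)
    for row in range(n - 1, -1, -1):          # back substitution
        x[row] = (b[row] - A[row, row+1:] @ x[row+1:]) / A[row, row]
    return x

# Two lines:  x + y = 3  and  2x - y = 0  meet at a single point.
A = np.array([[1.0, 1.0], [2.0, -1.0]])
b = np.array([3.0, 0.0])
print(gaussian_elimination(A, b))  # [1. 2.]
```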
Given a line 'a' and a point 'b', we want to find the scaled version λa of 'a' that is as close to 'b' as possible. The right answer is where the segment from λa to b meets the line a at a right angle. This segment is the difference of λa and b, namely b − λa, and the right-angle condition aᵀ(b − λa) = 0 gives λ = aᵀb / aᵀa.
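A small numeric sketch with made-up values for a and b, verifying the right-angle condition:

```python
import numpy as np

a = np.array([3.0, 1.0])    # direction of the line (hypothetical values)
b = np.array([2.0, 4.0])    # the point to project (hypothetical values)

lam = (a @ b) / (a @ a)     # lambda = a^T b / a^T a
closest = lam * a           # the point on the line nearest to b

# The connecting vector b - lambda*a is at a right angle to the line.
print(np.isclose(a @ (b - closest), 0))  # True
```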
The null space of a matrix is the set of vectors which, post-multiplying column-wise or pre-multiplying row-wise, produce the zero vector. It always contains the trivial zero vector; the null space is nontrivial when some nonzero vector v satisfies Av = 0. Every scalar multiple λv of such a basis vector v is then also in the null space, so the set of all combinations of the basis vectors is the whole null space.
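One standard way to compute a null-space basis is via the SVD (a common technique, not necessarily this post's method); the rank-deficient example matrix is made up:

```python
import numpy as np

# Hypothetical rank-1 matrix: the second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# Rows of Vt whose singular value is (near) zero span the null space.
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-10].T

v = null_basis[:, 0]
print(np.allclose(A @ v, 0))          # True: Av is the zero vector
print(np.allclose(A @ (3.0 * v), 0))  # any scalar multiple lambda*v is too
```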
Column Space – the vector space C(A) spanned by all the columns of the matrix A; in row notation it can also be written as R(Aᵀ). If a vector is contained in the column space of a certain matrix, then it is a linear combination of the columns, and the coefficients/weights needed to build it from those columns exist, i.e. the system Ax = b is consistent.
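A minimal sketch of a membership test: b is in C(A) exactly when appending b to A does not increase the rank. The matrix and vector are made-up values:

```python
import numpy as np

# Hypothetical matrix and candidate vector.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 3.0, 5.0])   # equals 2*col1 + 3*col2, so it is in C(A)

def in_column_space(A, b):
    """b is in C(A) exactly when appending b does not raise the rank."""
    return (np.linalg.matrix_rank(np.column_stack([A, b]))
            == np.linalg.matrix_rank(A))

print(in_column_space(A, b))                # True
x, *_ = np.linalg.lstsq(A, b, rcond=None)   # the weights on the columns
print(x)                                    # [2. 3.]
```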
Yair Shinar for Clarity and Solutions