Column Space – The vector space spanned by the columns of a matrix A, denoted C(A). In row notation it can also be written as R(Aᵀ), the row space of the transpose. If a vector is contained in the column space of a matrix, then that vector can be expressed as a linear combination of the columns, and the coefficients/weights needed for that combination exist.
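As a minimal NumPy sketch of the membership test (the matrix and vector below are hypothetical examples): if least squares recovers weights that reproduce b exactly, b lies in C(A).

```python
import numpy as np

# A hypothetical 3x2 matrix whose two columns span a plane in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# b is built to lie in C(A): it is 2*col1 + 3*col2.
b = A @ np.array([2.0, 3.0])

# Least squares finds the coefficients/weights; if A @ w reproduces b,
# then b is in the column space of A.
w, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
in_col_space = np.allclose(A @ w, b)
print(w, in_col_space)
```

For a vector outside the plane spanned by the columns, the same test would leave a nonzero residual and `in_col_space` would be False.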

Eigendecomposition reveals the most important directions in a dataset, the principal components (the basis of PCA). Eigendecomposition can be performed only on square matrices. It extracts two features: the eigenvalue (a scalar λ that pre-multiplies the eigenvector) and the eigenvector (a vector that the matrix only scales, without rotating), related by Av = λv.
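A short NumPy sketch of the defining relation Av = λv, using a hypothetical diagonal matrix so the eigenpairs are easy to read off:

```python
import numpy as np

# Eigendecomposition only applies to square matrices.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose COLUMNS
# are the corresponding eigenvectors.
vals, vecs = np.linalg.eig(A)

# Defining property: A @ v equals lambda * v for each eigenpair.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
print(vals)
```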

Matrix shifting is the act of adding a scalar multiple of the identity matrix, A + λI, in order to inflate a reduced-rank matrix to full rank, which then allows easier work with the matrix (for example, it becomes invertible). The tradeoff is that shifting slightly distorts the original data; keeping λ small keeps that loss minimal.
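A quick NumPy sketch of the rank inflation, using a hypothetical singular matrix (its third row is the sum of the first two) and a small hypothetical shift λ = 0.01:

```python
import numpy as np

# Rank-deficient 3x3 matrix: row 3 = row 1 + row 2, so rank is 2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
rank_before = np.linalg.matrix_rank(A)

# Shift: add a small scalar multiple of the identity, A + lambda*I.
lam = 0.01
A_shifted = A + lam * np.eye(3)
rank_after = np.linalg.matrix_rank(A_shifted)

print(rank_before, rank_after)
```

The shifted matrix is full rank (and invertible), at the cost of every diagonal entry being nudged away from its true value by λ.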

The Frobenius dot product ⟨A, B⟩ is achieved by computing the Hadamard (elementwise) product of two matrices with the same dimensionality and summing up all the elements. Another way to achieve the Frobenius dot product is to vectorize both matrices column-wise (vec(A), vec(B)) into two vectors of the same size and then perform an ordinary dot product between them.
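Both routes can be checked in a few lines of NumPy (the matrices below are hypothetical examples), along with the equivalent trace shortcut trace(AᵀB):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Way 1: Hadamard (elementwise) multiply, then sum all elements.
f1 = np.sum(A * B)

# Way 2: vectorize column-wise (Fortran order) and take an ordinary dot product.
f2 = np.dot(A.flatten(order="F"), B.flatten(order="F"))

# Way 3 (equivalent shortcut): the trace of A^T B.
f3 = np.trace(A.T @ B)

print(f1, f2, f3)  # all three agree
```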

Symbolized by a dot inside a circle (⊙), the Hadamard product can be performed only on two matrices of the same size. It simply multiplies each element by the corresponding element, just like in addition and subtraction. Another thing to know is that elementwise matrix division is also possible (⊘), provided the divisor matrix contains no zeros.
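In NumPy the `*` and `/` operators act elementwise on same-sized arrays, so both operations are one-liners (the matrices below are hypothetical examples):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[2.0, 4.0], [6.0, 8.0]])

# Hadamard product A ⊙ B: corresponding elements multiplied.
H = A * B

# Elementwise division A ⊘ B works the same way; B must contain no zeros.
D = A / B

print(H)
print(D)
```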

A symmetric matrix is a square matrix that is equal to its transpose. Creating a symmetric matrix from a square matrix is easy: just add the matrix to its transpose and divide by two, S = (A + Aᵀ)/2. Creating a symmetric matrix from a non-square matrix is done by multiplying the matrix with its own transpose: both AᵀA and AAᵀ are symmetric.
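Both constructions are easy to verify in NumPy (the matrices below are hypothetical examples):

```python
import numpy as np

# From a square matrix: S = (A + A^T) / 2 is always symmetric.
A = np.array([[1.0, 4.0], [2.0, 3.0]])
S = (A + A.T) / 2
print(np.allclose(S, S.T))

# From a non-square (3x2) matrix: both A^T A and A A^T are symmetric.
B = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
BtB = B.T @ B   # 2x2
BBt = B @ B.T   # 3x3
print(np.allclose(BtB, BtB.T), np.allclose(BBt, BBt.T))
```

Note that the two products differ in size (AᵀA is n×n, AAᵀ is m×m), so which one to use depends on the dimensionality you need.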

Applying (pre-multiplying) a matrix to a vector gives you a rotation, a scale (stretch/compress), or both a scale and a rotation. This is called a vector transformation, and the (pre-multiplying) matrix is called the transformation matrix. No-rotation case (pure scaling; eigenvalue/eigenvector): when a vector transformation by a matrix only scales the vector without rotating it, that vector is an eigenvector of the matrix and the scaling factor is its eigenvalue.
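The two extremes can be contrasted with a hypothetical rotation matrix and a hypothetical diagonal (pure-scaling) matrix:

```python
import numpy as np

# A 90-degree rotation matrix: rotates every vector, scales none.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# A diagonal matrix: purely scales its axis-aligned eigenvectors.
M = np.array([[3.0, 0.0],
              [0.0, 2.0]])

v = np.array([1.0, 0.0])
rotated = R @ v   # direction changes: [1, 0] -> [0, 1]
scaled = M @ v    # only length changes: v is an eigenvector, eigenvalue 3

print(rotated, scaled)
```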

Properties: The result is always a column vector when post-multiplying by a column vector: A(m×n) * u(n×1) = v(m×1). This takes weighted combinations of the columns of the matrix, where the weights are determined by the elements of the vector. The result is always a row vector when pre-multiplying by a row vector: u(1×m) * A(m×n) = v(1×n), a weighted combination of the rows of the matrix.
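The weighted-combination view of both products can be checked directly in NumPy (the matrix and vectors below are hypothetical examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # 3x2

# Post-multiplying by a column vector: A @ u is a weighted sum of A's COLUMNS.
u = np.array([2.0, 3.0])
v = A @ u                         # shape (3,): a column vector
same_as_cols = np.allclose(v, 2.0 * A[:, 0] + 3.0 * A[:, 1])

# Pre-multiplying by a row vector: w @ A is a weighted sum of A's ROWS.
w = np.array([1.0, 0.0, 2.0])
r = w @ A                         # shape (2,): a row vector
same_as_rows = np.allclose(r, 1.0 * A[0, :] + 2.0 * A[2, :])

print(same_as_cols, same_as_rows)
```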