
Linear Algebra Formulas

Vector Operations
Matrix Operations
Vector Properties
Determinants
Eigenvalues and Eigenvectors
Matrix Properties
Matrix Inverses
Systems of Equations
Linear Transformations

Vector Operations






Vector Addition



Formula:

\mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{bmatrix}
This operation adds two vectors component-wise. It's fundamental in vector algebra and represents combining two vectors to get a resultant vector. Vector addition is used in physics for combining forces, velocities, and other vector quantities.
Where:
\mathbf{u}, \mathbf{v}: Vectors in \mathbb{R}^n
u_i, v_i: Components of vectors \mathbf{u} and \mathbf{v}
Example: For \mathbf{u} = \begin{bmatrix}1 \\ 3\end{bmatrix} and \mathbf{v} = \begin{bmatrix}2 \\ 4\end{bmatrix}, \mathbf{u} + \mathbf{v} = \begin{bmatrix}1+2 \\ 3+4\end{bmatrix} = \begin{bmatrix}3 \\ 7\end{bmatrix}.
Applications: Adding forces, velocities, or any quantities represented by vectors.
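
As a rough illustration, the example above can be reproduced in NumPy:

```python
import numpy as np

u = np.array([1, 3])
v = np.array([2, 4])
print(u + v)  # [3 7] -- component-wise addition, matching the formula
```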





Scalar Multiplication



Formula:

c\mathbf{v} = \begin{bmatrix} c \cdot v_1 \\ c \cdot v_2 \\ \vdots \\ c \cdot v_n \end{bmatrix}
This operation multiplies each component of a vector by a scalar. It scales the vector by stretching or compressing it without changing its direction (unless the scalar is negative, which also reverses the direction).
Where:
c: Scalar (real number)
\mathbf{v}: Vector in \mathbb{R}^n
v_i: Components of vector \mathbf{v}
Example: For c = 3 and \mathbf{v} = \begin{bmatrix}2 \\ -1\end{bmatrix}, c\mathbf{v} = \begin{bmatrix}3 \cdot 2 \\ 3 \cdot (-1)\end{bmatrix} = \begin{bmatrix}6 \\ -3\end{bmatrix}.
Applications: Scaling vectors in physics and engineering applications.
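
A minimal NumPy sketch of the scalar-multiplication example:

```python
import numpy as np

c = 3
v = np.array([2, -1])
print(c * v)  # [ 6 -3] -- every component scaled by c
```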





Dot Product (Inner Product)



Formula:

\mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + \dots + u_nv_n
The dot product computes a scalar from two vectors. It measures the extent to which two vectors point in the same direction and is used to find angles between vectors and projections.
Where:
\mathbf{u}, \mathbf{v}: Vectors in \mathbb{R}^n
u_i, v_i: Components of vectors \mathbf{u} and \mathbf{v}
Example: For \mathbf{u} = \begin{bmatrix}1 \\ 2\end{bmatrix} and \mathbf{v} = \begin{bmatrix}3 \\ 4\end{bmatrix}, \mathbf{u} \cdot \mathbf{v} = (1)(3) + (2)(4) = 11.
Applications: Calculating work, projections, and determining orthogonality.
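
The dot-product example, sketched in NumPy:

```python
import numpy as np

u = np.array([1, 2])
v = np.array([3, 4])
print(np.dot(u, v))  # 11 = 1*3 + 2*4
```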





Cross Product



Formula:

\mathbf{u} \times \mathbf{v} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix}
The cross product of two vectors in \mathbb{R}^3 results in a vector that is perpendicular to both, with a magnitude equal to the area of the parallelogram they span.
Where:
\mathbf{u}, \mathbf{v}: Vectors in \mathbb{R}^3
u_i, v_i: Components of vectors \mathbf{u} and \mathbf{v}
Example: For \mathbf{u} = \begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} and \mathbf{v} = \begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}, \mathbf{u} \times \mathbf{v} = \begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}.
Applications: Calculating torque, rotational motion, and normal vectors.
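
A short NumPy check of the cross-product example:

```python
import numpy as np

u = np.array([1, 0, 0])
v = np.array([0, 1, 0])
print(np.cross(u, v))  # [0 0 1] -- perpendicular to both u and v
```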





Norm Of A Vector



Formula:

\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}
The norm (or length) of a vector is a measure of its magnitude in space.
Where:
\mathbf{v}: Vector in \mathbb{R}^n
v_i: Components of \mathbf{v}
Example: For \mathbf{v} = \begin{bmatrix}3 \\ 4\end{bmatrix}, \|\mathbf{v}\| = \sqrt{3^2 + 4^2} = 5.
Applications: Calculating distances, normalizing vectors.
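
The norm example, sketched in NumPy:

```python
import numpy as np

v = np.array([3, 4])
print(np.linalg.norm(v))  # 5.0 = sqrt(3^2 + 4^2)
```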





Unit Vector



Formula:

\mathbf{u} = \frac{\mathbf{v}}{\|\mathbf{v}\|}
A unit vector has a magnitude of 1 and indicates direction. Any nonzero vector can be converted to a unit vector by dividing it by its norm.
Where:
\mathbf{v}: Original vector
\mathbf{u}: Unit vector in the direction of \mathbf{v}
Example: For \mathbf{v} = \begin{bmatrix}3 \\ 4\end{bmatrix}, \mathbf{u} = \begin{bmatrix} \frac{3}{5} \\ \frac{4}{5} \end{bmatrix}.
Applications: Direction representation, projections.
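
A minimal NumPy sketch of normalizing the example vector:

```python
import numpy as np

v = np.array([3, 4])
u = v / np.linalg.norm(v)  # divide by the norm to get magnitude 1
print(u)                   # [0.6 0.8]
```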





Projection Of A Vector



Formula:

\text{proj}_{\mathbf{v}} \mathbf{u} = \left( \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^2} \right) \mathbf{v}
The projection of \mathbf{u} onto \mathbf{v} is the component of \mathbf{u} in the direction of \mathbf{v}. It is used in decomposing vectors and in least squares approximations.
Where:
\mathbf{u}, \mathbf{v}: Vectors in \mathbb{R}^n
\|\mathbf{v}\|: Norm of \mathbf{v}
Example: Projecting \mathbf{u} = \begin{bmatrix}2 \\ 3\end{bmatrix} onto \mathbf{v} = \begin{bmatrix}1 \\ 0\end{bmatrix} yields \begin{bmatrix}2 \\ 0\end{bmatrix}.
Applications: Shadow computations, component analysis.
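
The projection example, checked numerically with NumPy:

```python
import numpy as np

u = np.array([2, 3])
v = np.array([1, 0])
proj = (np.dot(u, v) / np.dot(v, v)) * v  # (u . v / ||v||^2) v
print(proj)  # [2. 0.]
```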

Matrix Operations






Matrix Addition



Formula:

\mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11} + b_{11} & \dots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \dots & a_{mn} + b_{mn} \end{bmatrix}
Matrices of the same dimensions can be added by adding their corresponding elements. This operation is fundamental in linear algebra and has applications in systems of equations and transformations.
Where:
\mathbf{A}, \mathbf{B}: Matrices of size m \times n
a_{ij}, b_{ij}: Elements of matrices \mathbf{A} and \mathbf{B}
Example: For \mathbf{A} = \begin{bmatrix}1 & 2 \\ 3 & 4\end{bmatrix} and \mathbf{B} = \begin{bmatrix}5 & 6 \\ 7 & 8\end{bmatrix}, \mathbf{A} + \mathbf{B} = \begin{bmatrix}6 & 8 \\ 10 & 12\end{bmatrix}.
Applications: Combining linear transformations, solving matrix equations.
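
A short NumPy sketch of the matrix-addition example:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A + B)  # [[ 6  8] [10 12]] -- element-wise sum
```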





Scalar Multiplication Of A Matrix



Formula:

c\mathbf{A} = \begin{bmatrix} c \cdot a_{11} & \dots & c \cdot a_{1n} \\ \vdots & \ddots & \vdots \\ c \cdot a_{m1} & \dots & c \cdot a_{mn} \end{bmatrix}
Each element of the matrix is multiplied by the scalar. This operation scales the matrix and is essential in linear transformations and eigenvalue problems.
Where:
c: Scalar (real number)
\mathbf{A}: Matrix of size m \times n
a_{ij}: Elements of matrix \mathbf{A}
Example: For c = 2 and \mathbf{A} = \begin{bmatrix}1 & -1 \\ 0 & 3\end{bmatrix}, c\mathbf{A} = \begin{bmatrix}2 & -2 \\ 0 & 6\end{bmatrix}.
Applications: Scaling transformations, adjusting system responses.
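
The matrix scalar-multiplication example, reproduced in NumPy:

```python
import numpy as np

c = 2
A = np.array([[1, -1], [0, 3]])
print(c * A)  # [[ 2 -2] [ 0  6]]
```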





Matrix Multiplication



Formula:

(\mathbf{AB})_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}
Matrix multiplication combines two matrices to produce a new matrix. It's not commutative in general and represents the composition of linear transformations.
Where:
\mathbf{A}: Matrix of size m \times n
\mathbf{B}: Matrix of size n \times p
(\mathbf{AB})_{ij}: Element in row i, column j of the product matrix
Example: For \mathbf{A} = \begin{bmatrix}1 & 2\end{bmatrix} and \mathbf{B} = \begin{bmatrix}3 \\ 4\end{bmatrix}, \mathbf{AB} = (1)(3) + (2)(4) = 11.
Applications: Transformations, solving systems of equations.
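
A minimal NumPy sketch of the matrix-multiplication example:

```python
import numpy as np

A = np.array([[1, 2]])    # 1x2 matrix
B = np.array([[3], [4]])  # 2x1 matrix
print(A @ B)              # [[11]] -- sum over k of a_ik * b_kj
```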

Determinants






Determinant Of A 2x2 Matrix



Formula:

\det(\mathbf{A}) = ad - bc
The determinant of a 2x2 matrix gives the scaling factor of the linear transformation represented by the matrix. It's also used to determine invertibility.
Where:
\mathbf{A}: Matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix}
a, b, c, d: Elements of matrix \mathbf{A}
Example: For \mathbf{A} = \begin{bmatrix}2 & 3 \\ 1 & 4\end{bmatrix}, \det(\mathbf{A}) = (2)(4) - (3)(1) = 5.
Applications: Checking invertibility, area scaling.
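
The determinant example, checked in NumPy:

```python
import numpy as np

A = np.array([[2, 3], [1, 4]])
print(np.linalg.det(A))  # approximately 5.0 = 2*4 - 3*1
```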

Matrix Inverses






Inverse Of A 2x2 Matrix



Formula:

\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}
The inverse of a matrix reverses the effect of the original matrix. For a 2x2 matrix, the inverse exists if and only if the determinant is non-zero.
Where:
\mathbf{A}: Matrix \begin{bmatrix} a & b \\ c & d \end{bmatrix}
\det(\mathbf{A}): Determinant of \mathbf{A}
\mathbf{A}^{-1}: Inverse of matrix \mathbf{A}
Example: For \mathbf{A} = \begin{bmatrix}2 & 1 \\ 5 & 3\end{bmatrix}, \det(\mathbf{A}) = 1, so \mathbf{A}^{-1} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}.
Applications: Solving linear systems, undoing transformations.
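
A short NumPy sketch of the inverse example:

```python
import numpy as np

A = np.array([[2, 1], [5, 3]])
print(np.linalg.inv(A))  # [[ 3. -1.] [-5.  2.]], since det(A) = 1
```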

Eigenvalues And Eigenvectors






Eigenvalues And Eigenvectors



Formula:

\mathbf{A}\mathbf{x} = \lambda\mathbf{x}
An eigenvalue is a scalar that indicates how much the eigenvector is stretched or compressed during the transformation represented by \mathbf{A}. Eigenvectors point in directions that are invariant under the transformation.
Where:
\mathbf{A}: Square matrix
\mathbf{x}: Eigenvector
\lambda: Eigenvalue
Example: For \mathbf{A} = \begin{bmatrix}2 & 0 \\ 0 & 3\end{bmatrix}, the eigenvalues are \lambda = 2, 3 with corresponding eigenvectors along the x and y axes.
Applications: Stability analysis, quantum mechanics, principal component analysis.
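
The eigenvalue example, sketched in NumPy:

```python
import numpy as np

A = np.array([[2, 0], [0, 3]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [2. 3.]
print(eigenvectors)  # columns are the eigenvectors (here the x and y axes)
```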

Systems Of Equations






Cramer's Rule



Formula:

x_i = \frac{\det(\mathbf{A}_i)}{\det(\mathbf{A})}
Cramer's Rule provides a method to solve a system of linear equations using determinants, applicable when the coefficient matrix has a non-zero determinant, so the system has a unique solution.
Where:
\mathbf{A}: Coefficient matrix
\mathbf{A}_i: Matrix \mathbf{A} with its i-th column replaced by the constants vector \mathbf{b}
\det(\mathbf{A}): Determinant of \mathbf{A}
x_i: Solution for variable x_i
Example: For \begin{cases} 2x + y = 5 \\ x - y = 1 \end{cases}, \det(\mathbf{A}) = \det\begin{bmatrix}2 & 1 \\ 1 & -1\end{bmatrix} = -3, so x = \frac{\det\begin{bmatrix}5 & 1 \\ 1 & -1\end{bmatrix}}{-3} = \frac{-6}{-3} = 2 and y = \frac{\det\begin{bmatrix}2 & 5 \\ 1 & 1\end{bmatrix}}{-3} = \frac{-3}{-3} = 1.
Applications: Solving small systems of equations.
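
A rough NumPy sketch of Cramer's Rule for the example system, with the column-replacement step spelled out explicitly:

```python
import numpy as np

A = np.array([[2, 1], [1, -1]])  # coefficient matrix
b = np.array([5, 1])             # constants vector

det_A = np.linalg.det(A)
x = np.linalg.det(np.column_stack((b, A[:, 1]))) / det_A  # replace column 1 with b
y = np.linalg.det(np.column_stack((A[:, 0], b))) / det_A  # replace column 2 with b
print(x, y)  # approximately 2.0 1.0
```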

Matrix Properties






Rank Of A Matrix



Formula:

The rank is the maximum number of linearly independent rows or columns.
The rank indicates the dimension of the vector space spanned by the rows or columns. It determines the solvability of linear systems.
Where:
\mathbf{A}: Matrix
Example: A 3 \times 3 matrix with rank 2 has linearly dependent rows or columns.
Applications: Analyzing solutions of linear systems, determining invertibility.
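
An illustrative NumPy sketch with a rank-2 matrix (the specific matrix here is my own choice, not from the text):

```python
import numpy as np

# 3x3 matrix whose third row is the sum of the first two, so the rows are dependent
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [5, 7, 9]])
print(np.linalg.matrix_rank(A))  # 2
```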

Linear Transformations






Linear Transformation



Formula:

T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v})
A function is a linear transformation if it preserves vector addition and scalar multiplication. Linear transformations can be represented by matrices.
Where:
T: Linear transformation
\mathbf{u}, \mathbf{v}: Vectors
c, d: Scalars
Example: Rotation, reflection, and scaling transformations are linear.
Applications: Computer graphics, differential equations.
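
A rough NumPy check of the linearity property for a rotation matrix (the specific matrix, vectors, and scalars are illustrative choices):

```python
import numpy as np

# 90-degree rotation as a matrix; verify T(c*u + d*v) == c*T(u) + d*T(v)
T = np.array([[0, -1], [1, 0]])
u, v = np.array([1, 3]), np.array([2, 4])
c, d = 2, -1
print(np.allclose(T @ (c*u + d*v), c*(T @ u) + d*(T @ v)))  # True
```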

Vector Properties






Orthogonality



Formula:

\mathbf{u} \cdot \mathbf{v} = 0
Two vectors are orthogonal if their dot product is zero, meaning they are perpendicular in space.
Where:
\mathbf{u}, \mathbf{v}: Vectors
Example: The vectors \begin{bmatrix}1 \\ 0\end{bmatrix} and \begin{bmatrix}0 \\ 1\end{bmatrix} are orthogonal.
Applications: Projections, defining coordinate systems.
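
The orthogonality example, checked in NumPy:

```python
import numpy as np

u = np.array([1, 0])
v = np.array([0, 1])
print(np.dot(u, v) == 0)  # True -- the vectors are orthogonal
```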