
Linear Algebra Terms and Definitions

Essential concepts defining matrices, their structure, and basic classifications.
Go to Matrices Basic Terms section →
Practical uses of matrices in solving systems and representing data.
Go to Matrix Applications section →
Basic arithmetic operations with matrices including addition, multiplication, and scalar operations.
Go to Matrix Operations section →
Key characteristics of matrices like determinant, rank, trace, and eigenvalues that describe their behavior.
Go to Matrix Properties section →
Operations that convert matrices into special forms or decompose them into simpler components.
Go to Matrix Transformations section →
Matrices with special structure, such as diagonal, triangular, and orthogonal forms, that have specific applications.
Go to Special Matrices section →
Core vector arithmetic and geometric operations including addition, multiplication, dot/cross products that form the foundation for manipulating vectors.
Go to Vector Operations section →
Primary vector concepts covering structure, magnitude, direction and fundamental types like unit and zero vectors.
Go to Vectors section →
Practical uses of vectors in describing physical quantities, gradients, and positions.
Go to Vectors Applications section →
Fundamental concepts that define vectors and their components, including basic properties and representations in space.
Go to Vectors Basic Terms section →
Geometric meaning and visualization of vectors through angles, directions, and spatial relationships.
Go to Vectors Geometric Interpretations section →
Concepts related to perpendicular vectors and methods to create orthogonal/orthonormal vector sets.
Go to Vectors Orthogonality section →
Operations that change vectors while preserving certain properties, including linear transformations and their matrix representations.
Go to Vectors Transformations section →

Vectors

Vector



Definition:

A mathematical object that has both magnitude ($|\vec{v}|$) and direction in space.
$$\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \quad \text{or} \quad \vec{v} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$
Each element $v_i$ represents displacement along the corresponding axis.
2D vector: $\vec{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ represents 2 units along the x-axis and 3 along the y-axis.

3D vector: $\vec{v} = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$ represents displacements along the x, y, and z axes.
Denoted by an arrow overhead ($\vec{v}$) or bold type ($\mathbf{v}$).

Vector Magnitude (Norm)



Definition:

The length of a vector, denoted $|\vec{v}|$ or $\|\vec{v}\|$.
For an n-dimensional vector:
$$|\vec{v}| = \sqrt{\sum_{i=1}^n v_i^2}$$
2D vector: $|\vec{v}| = \sqrt{v_1^2 + v_2^2}$

3D vector: $|\vec{v}| = \sqrt{v_1^2 + v_2^2 + v_3^2}$
Always non-negative: $|\vec{v}| \geq 0$, with equality only for the zero vector.
For $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$:
$$|\vec{v}| = \sqrt{3^2 + 4^2} = \sqrt{25} = 5$$
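
A minimal numerical check of the formula above, sketched with NumPy (the library choice is an assumption, not something this page prescribes):

```python
import numpy as np

v = np.array([3.0, 4.0])

# Square root of the sum of squared components
manual = np.sqrt(np.sum(v**2))

# Equivalent built-in Euclidean norm
builtin = np.linalg.norm(v)

print(manual, builtin)  # 5.0 5.0
```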

Unit Vector



Definition:

A vector with magnitude 1: $|\vec{u}| = 1$.
Obtained by normalizing a vector:
$$\vec{u} = \frac{\vec{v}}{|\vec{v}|}$$
Standard unit vectors in $\mathbb{R}^3$:
$$\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$
Normalizing $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$:
$$\vec{u} = \frac{1}{5}\begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix}$$
Unit vectors preserve direction while normalizing magnitude.
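
The same normalization, sketched in NumPy:

```python
import numpy as np

v = np.array([3.0, 4.0])
u = v / np.linalg.norm(v)  # divide by the magnitude

print(u)                  # [0.6 0.8]
print(np.linalg.norm(u))  # 1.0 (unit length, direction unchanged)
```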

Zero Vector (Null Vector)



Definition:

A vector whose components are all zero, written $\vec{0}$ or $\mathbf{0}$.
Column vector form:
$$\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
Row vector form:
$$\begin{bmatrix} 0 & 0 & \cdots & 0 \end{bmatrix}$$
Key property: $\vec{0} + \vec{v} = \vec{v}$ for any vector $\vec{v}$.

Column Vector



Definition:

An $n \times 1$ matrix (a vertical array of numbers):
$$\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$
Examples:
$$\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix} \quad \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
Can be transposed to get a row vector:
$$\vec{v}^T = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$

Row Vector



Definition:

A $1 \times n$ matrix (a horizontal array of numbers):
$$\vec{v} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$
Examples:
$$\begin{bmatrix} 2 & -1 & 4 \end{bmatrix} \quad \begin{bmatrix} x & y & z \end{bmatrix}$$
Can be transposed to get a column vector:
$$\vec{v}^T = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

Linear Combination



Definition:

A vector $\vec{v}$ is a linear combination of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ if it can be written as $\vec{v} = c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n$ for some scalars $c_1, c_2, \ldots, c_n$.
The set of all possible linear combinations forms:
- A line (one nonzero vector)
- A plane (two linearly independent vectors)
- A space (three linearly independent vectors)
Vector $\vec{v} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}$ as a linear combination:
$$\vec{v} = 2\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 3\begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
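
As a numerical check (a sketch; NumPy is an assumed choice), the coefficients of the combination above can be recovered by solving a linear system whose columns are the given vectors:

```python
import numpy as np

# Columns are v1 = [1, 2] and v2 = [1, 1]
B = np.array([[1.0, 1.0],
              [2.0, 1.0]])
v = np.array([5.0, 7.0])

c = np.linalg.solve(B, v)  # solve B c = v for the coefficients
print(c)  # [2. 3.]  ->  v = 2*v1 + 3*v2
```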

Vector Projection



Definition:

The orthogonal projection of vector $\vec{v}$ onto vector $\vec{u}$ is the component of $\vec{v}$ parallel to $\vec{u}$, given by: $\text{proj}_{\vec{u}}\vec{v} = \frac{\vec{v} \cdot \vec{u}}{|\vec{u}|^2}\vec{u}$
For vectors $\vec{v}$ and $\vec{u}$:
$$\text{proj}_{\vec{u}}\vec{v} = \frac{\vec{v} \cdot \vec{u}}{\vec{u} \cdot \vec{u}}\vec{u} = (\vec{v} \cdot \hat{u})\hat{u}$$
The projection decomposes $\vec{v}$ into:
- A parallel component: $\text{proj}_{\vec{u}}\vec{v}$ (along $\vec{u}$)
- A perpendicular component: $\vec{v} - \text{proj}_{\vec{u}}\vec{v}$
For $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ onto $\vec{u} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$:
$$\text{proj}_{\vec{u}}\vec{v} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$$
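
A short sketch of the projection formula in NumPy (an assumed library choice):

```python
import numpy as np

def project(v, u):
    # proj_u(v) = (v . u) / (u . u) * u
    return (np.dot(v, u) / np.dot(u, u)) * u

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])

parallel = project(v, u)
perpendicular = v - parallel
print(parallel)                         # [3. 0.]
print(np.dot(parallel, perpendicular))  # 0.0 (the two components are orthogonal)
```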

Linearly Independent Vectors



Definition:

A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ is linearly independent if the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n = \vec{0}$ is satisfied only when all $c_i = 0$.
- In $\mathbb{R}^2$, two vectors are linearly independent if neither is a scalar multiple of the other
- In $\mathbb{R}^3$, three vectors are linearly independent if none lies in the plane spanned by the other two
Linearly independent vectors:
$$\vec{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Linearly dependent vectors:
$$\vec{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$
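
Independence can be checked numerically via the rank of the matrix whose columns are the vectors (a sketch, assuming NumPy):

```python
import numpy as np

# Stack candidate vectors as columns; full column rank <=> independent
independent = np.column_stack(([1.0, 0.0], [0.0, 1.0]))
dependent = np.column_stack(([1.0, 2.0], [2.0, 4.0]))

print(np.linalg.matrix_rank(independent))  # 2 -> linearly independent
print(np.linalg.matrix_rank(dependent))    # 1 -> linearly dependent
```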

Linearly Dependent Vectors



Definition:

A set of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ is linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that $c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n = \vec{0}$.
- In $\mathbb{R}^2$, two vectors are linearly dependent if one is a scalar multiple of the other
- In $\mathbb{R}^3$, three vectors are linearly dependent if one lies in the plane spanned by the other two
$$\vec{v}_1 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \quad \vec{v}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$

Here $2\vec{v}_2 = \vec{v}_1$, making them linearly dependent.

Vector Space



Definition:

A set $V$ with vectors $\vec{u}, \vec{v} \in V$ and scalars $c$ is a vector space if it is closed under addition ($\vec{u} + \vec{v} \in V$) and scalar multiplication ($c\vec{v} \in V$) and satisfies the vector space axioms.
For all $\vec{u}, \vec{v}, \vec{w} \in V$ and scalars $c, d$:
- Commutativity: $\vec{u} + \vec{v} = \vec{v} + \vec{u}$
- Associativity: $(\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w})$
- Zero vector: there exists $\vec{0}$ such that $\vec{v} + \vec{0} = \vec{v}$
- Additive inverse: there exists $-\vec{v}$ such that $\vec{v} + (-\vec{v}) = \vec{0}$
- Distributivity: $c(\vec{u} + \vec{v}) = c\vec{u} + c\vec{v}$
Common vector spaces:
- $\mathbb{R}^n$: n-dimensional real vectors
- Matrices of a fixed size
- Polynomials of degree $\leq n$

Vector Subspace



Definition:

A subset $W$ of a vector space $V$ is a subspace if it is closed under addition and scalar multiplication: for all $\vec{u}, \vec{v} \in W$ and scalar $c$, both $\vec{u} + \vec{v} \in W$ and $c\vec{v} \in W$.
Any subspace must:
- Contain the zero vector
- Be closed under linear combinations
- Form a vector space itself
Common subspaces of $\mathbb{R}^3$:
- Any plane through the origin
- Any line through the origin
- The zero subspace $\{\vec{0}\}$

Span



Definition:

The span of vectors $\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\}$ is the set of all their linear combinations: $\text{span}\{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n\} = \{c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n \mid c_i \in \mathbb{R}\}$
The span represents:
- A line through the origin (one nonzero vector)
- A plane through the origin (two linearly independent vectors)
- All of $\mathbb{R}^3$ (three linearly independent vectors)
$$\text{span}\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\} = \mathbb{R}^2$$

$$\text{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \end{bmatrix}\right\} = \text{a line through the origin}$$

Basis



Definition:

A basis of a vector space $V$ is a linearly independent set of vectors that spans $V$. Any vector $\vec{v} \in V$ has a unique representation $\vec{v} = c_1\vec{v}_1 + c_2\vec{v}_2 + \ldots + c_n\vec{v}_n$.

- Linearly independent and spans the entire space
- Number of vectors = dimension of the space
Standard basis for $\mathbb{R}^3$:
$$\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\}$$

Dimension



Definition:

The dimension of a vector space $V$ is the number of vectors in any basis of $V$, denoted $\dim(V)$.

- Equal to the maximum number of linearly independent vectors
- $\dim(\mathbb{R}^n) = n$
Examples:
- $\dim(\mathbb{R}^2) = 2$ (plane)
- $\dim(\mathbb{R}^3) = 3$ (space)
- $\dim(\{\vec{0}\}) = 0$ (zero space)
- $\dim(\text{line}) = 1$

Orthogonal Vectors



Definition:

Two vectors $\vec{u}$ and $\vec{v}$ are orthogonal if their dot product is zero: $\vec{u} \cdot \vec{v} = 0$.
Orthogonal vectors are perpendicular to each other, forming a 90° angle.
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

$$\vec{u} \cdot \vec{v} = 1(0) + 0(1) = 0$$

Orthonormal Vectors



Definition:

A set of vectors is orthonormal if the vectors are orthogonal to each other and each has unit length: $\vec{u}_i \cdot \vec{u}_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta.
- Orthogonal to each other
- Each vector has magnitude 1
- Form an orthonormal basis if they span the space
The standard basis vectors are orthonormal:
$$\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$

Vectors Basic Terms

Components



Definition:

The individual elements of a vector, e.g., $(v_1, v_2, \ldots, v_n)$.

Vector Operations

Vector Addition



Definition:

For vectors $u, v$ in $\mathbb{R}^n$: $(u + v)_i = u_i + v_i$
- $u + v = v + u$
- $u + (v + w) = (u + v) + w$
- $u + 0 = u$
$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \\ 9 \end{bmatrix}$$

Scalar Multiplication



Definition:

For a scalar $c$ and vector $v$: $(cv)_i = cv_i$
- $c(u + v) = cu + cv$
- $(c + d)v = cv + dv$
- $(cd)v = c(dv)$
$$3\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ -3 \\ 12 \end{bmatrix}$$

Dot Product



Definition:

For vectors $u, v$ in $\mathbb{R}^n$: $u \cdot v = \sum_{i=1}^n u_i v_i$
- $u \cdot v = v \cdot u$
- $u \cdot u = \|u\|^2$
- $(cu) \cdot v = c(u \cdot v)$
- $u \cdot v = \|u\|\|v\|\cos\theta$, where $\theta$ is the angle between the vectors
$$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1(4) + 2(5) + 3(6) = 32$$
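
A sketch of the dot product and the angle formula in NumPy (an assumed choice):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

d = np.dot(u, v)  # 1*4 + 2*5 + 3*6 = 32

# Recover the angle from u . v = |u||v| cos(theta)
cos_theta = d / (np.linalg.norm(u) * np.linalg.norm(v))
print(d, np.degrees(np.arccos(cos_theta)))  # 32.0, roughly 12.9 degrees
```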

Cross Product



Definition:

For $u, v$ in $\mathbb{R}^3$: $u \times v = \|u\|\|v\|\sin\theta\, \mathbf{n}$, where $\mathbf{n}$ is the unit vector perpendicular to both
$$\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \times \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix}$$
- $u \times v = -(v \times u)$
- $u \times u = \vec{0}$
- $\|u \times v\| = \|u\|\|v\|\sin\theta$
Applications:
- Normal vectors
- Torque calculation
- Area of a parallelogram: $\|u \times v\|$
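
A quick NumPy sketch of the component formula (right-hand rule: $x \times y = z$):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])  # x-axis
v = np.array([0.0, 1.0, 0.0])  # y-axis

w = np.cross(u, v)
print(w)                  # [0. 0. 1.] -> the z-axis
print(np.linalg.norm(w))  # 1.0 = area of the parallelogram spanned by u and v
```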

Vectors Orthogonality

Gram-Schmidt Process



Definition:

Produces an orthonormal basis $\{u_1, \ldots, u_n\}$ from linearly independent vectors $\{v_1, \ldots, v_n\}$
$$u_1 = \frac{v_1}{\|v_1\|}, \quad u_k = \frac{v_k - \sum_{i=1}^{k-1}\text{proj}_{u_i}(v_k)}{\left\|v_k - \sum_{i=1}^{k-1}\text{proj}_{u_i}(v_k)\right\|}$$
For $v_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, v_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$:
$$u_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad u_2 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$$
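
A sketch of the classical Gram-Schmidt loop (assuming NumPy; numerically, the modified variant or a QR factorization is usually preferred):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors, one at a time."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w -= np.dot(w, u) * u  # subtract projection onto each prior direction
        basis.append(w / np.linalg.norm(w))
    return basis

u1, u2 = gram_schmidt([[1, 1, 0], [1, 0, 1]])
print(u1)              # ~[0.707  0.707  0.   ] = (1,1,0)/sqrt(2)
print(u2)              # ~[0.408 -0.408  0.816] = (1,-1,2)/sqrt(6)
print(np.dot(u1, u2))  # ~0.0 (orthogonal)
```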

Vectors Geometric Interpretations

Direction Cosines



Definition:

For a vector $v$, the cosines of its angles with the coordinate axes: $\cos\alpha = \frac{v_x}{\|v\|}, \quad \cos\beta = \frac{v_y}{\|v\|}, \quad \cos\gamma = \frac{v_z}{\|v\|}$
$$\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1$$
The vector $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ has direction cosines:
$$\cos\alpha = \cos\beta = \cos\gamma = \frac{1}{\sqrt{3}}$$

Vectors Transformations

Linear Transformation



Definition:

A function $T: V \to W$ where $T(au + bv) = aT(u) + bT(v)$
- Preserves linear combinations
- Can be represented by matrix multiplication
- $T(0) = 0$
For a transformation $T: \mathbb{R}^n \to \mathbb{R}^m$:
$$T(x) = Ax$$
where $A$ is an $m \times n$ matrix
Examples:
- Rotation
- Scaling
- Projection
- Reflection

Matrix Representation



Definition:

For a transformation $T$ with basis $\mathcal{B} = \{v_1, \ldots, v_n\}$: $[T]_{\mathcal{B}} = [T(v_1) \cdots T(v_n)]$
- $[T(v)]_{\mathcal{B}} = [T]_{\mathcal{B}}[v]_{\mathcal{B}}$
- $[T]_{\mathcal{C}} = P^{-1}[T]_{\mathcal{B}}P$ for a change-of-basis matrix $P$
Rotation by $\theta$ in $\mathbb{R}^2$:
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
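
A sketch of the rotation matrix acting as a linear transformation (NumPy assumed):

```python
import numpy as np

def rotation(theta):
    # Matrix of "rotate by theta" in the standard basis of R^2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation(np.pi / 2)  # 90 degrees counterclockwise
x = np.array([1.0, 0.0])

print(R @ x)        # ~[0. 1.]: the x-axis maps to the y-axis
print(R @ (2 * x))  # linearity: T(2v) = 2 T(v)
```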

Vectors Applications

Gradient Vector



Definition:

For a scalar function $f(x_1, \ldots, x_n)$: $\nabla f = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)$
- Points in the direction of steepest increase
- Perpendicular to level curves/surfaces
- Product rule: $\nabla(fg) = f\nabla g + g\nabla f$
For $f(x,y) = x^2 + xy + y^2$:
$$\nabla f = (2x + y, \; x + 2y)$$
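
A finite-difference check of this gradient (a sketch, assuming NumPy; at $(1, 2)$ the analytic gradient is $(4, 5)$):

```python
import numpy as np

def f(x, y):
    return x**2 + x*y + y**2

def grad_f(x, y):
    return np.array([2*x + y, x + 2*y])  # analytic gradient

# Central differences approximate the partial derivatives
h = 1e-6
numeric = np.array([(f(1 + h, 2) - f(1 - h, 2)) / (2*h),
                    (f(1, 2 + h) - f(1, 2 - h)) / (2*h)])
print(grad_f(1, 2), numeric)  # both ~[4. 5.]
```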

Position Vector



Definition:

The vector $\vec{r} = (x, y, z)$ from the origin to the point $P(x, y, z)$
- Its length gives the distance from the origin: $\|\vec{r}\|$
- Its direction gives the orientation in space
- Velocity: $\vec{v} = \frac{d\vec{r}}{dt}$
- Acceleration: $\vec{a} = \frac{d^2\vec{r}}{dt^2}$
The point $P(3, 4, 5)$ has position vector:
$$\vec{r} = 3\hat{i} + 4\hat{j} + 5\hat{k} = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}$$

Matrices Basic Terms

Matrix



Definition:

A rectangular array of elements $a_{ij}$ arranged in $m$ rows and $n$ columns
$$A = [a_{ij}]_{m\times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$$
- Size: $m \times n$
- Elements: $a_{ij}$, where $i$ is the row index and $j$ is the column index

Row Matrix



Definition:

A matrix of size $1 \times n$ (a single row)
$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}$$
Uses:
- Row vector representation
- Linear combinations

Column Matrix



Definition:

A matrix of size $m \times 1$ (a single column)
$$A = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix}$$
Uses:
- Vector representation
- Solutions to linear systems

Square Matrix



Definition:

A matrix with an equal number of rows and columns, often associated with special properties like determinants and eigenvalues.
A square matrix of size $n \times n$ with arbitrary elements:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$

Zero Matrix



Definition:

A matrix whose elements are all equal to zero. Also called a null matrix.
A zero matrix can have any dimensions $m \times n$:
$$O = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$$
A square zero matrix $O_n$ has an equal number of rows and columns.

Main Diagonal



Definition:

In a square matrix, the main diagonal (also called the principal or leading diagonal) consists of the elements whose row index equals their column index.
For an $n \times n$ matrix $A = [a_{ij}]$, the main diagonal contains the elements with $i = j$:
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$$
Here $a_{11}, a_{22}, \ldots, a_{nn}$ form the main diagonal.
In a 2×2 matrix, the elements $a_{11}$ and $a_{22}$ form the main diagonal:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$$
In a 3×3 matrix, the elements $a_{11}$, $a_{22}$, and $a_{33}$ form the main diagonal:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

Special Matrices

Triangular Matrix



Definition:

A square matrix where all the elements either above or below the main diagonal are zero.
2×2 examples:
Upper triangular:
$$\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}$$
Lower triangular:
$$\begin{bmatrix} 1 & 0 \\ 4 & 3 \end{bmatrix}$$
3×3 examples:
Upper triangular:
$$\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$$
Lower triangular:
$$\begin{bmatrix} 1 & 0 & 0 \\ 4 & 5 & 0 \\ 7 & 8 & 9 \end{bmatrix}$$
General upper triangular matrix:
$$U = \begin{bmatrix} \color{blue}u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & \color{blue}u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & \color{blue}u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \color{blue}u_{nn} \end{bmatrix}$$
General lower triangular matrix:
$$L = \begin{bmatrix} \color{blue}l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & \color{blue}l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & \color{blue}l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & \color{blue}l_{nn} \end{bmatrix}$$

Upper Triangular Matrix



Definition:

A square matrix with zeros below the main diagonal: all elements $a_{ij} = 0$ where $i > j$.
General form of an upper triangular matrix:
$$U = \begin{bmatrix} \color{red}u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & \color{red}u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & \color{red}u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \color{red}u_{nn} \end{bmatrix}$$
where the diagonal elements $u_{ii}$ can be any real numbers
2×2 example:
$$\begin{bmatrix} \color{red}4 & 2 \\ 0 & \color{red}3 \end{bmatrix}$$
3×3 example:
$$\begin{bmatrix} \color{red}1 & 5 & 3 \\ 0 & \color{red}4 & 2 \\ 0 & 0 & \color{red}6 \end{bmatrix}$$

Lower Triangular Matrix



Definition:

A square matrix with zeros above the main diagonal: all elements $a_{ij} = 0$ where $i < j$.
General form of a lower triangular matrix:
$$L = \begin{bmatrix} \color{red}l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & \color{red}l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & \color{red}l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & \color{red}l_{nn} \end{bmatrix}$$
where the diagonal elements $l_{ii}$ can be any real numbers
2×2 example:
$$\begin{bmatrix} \color{red}3 & 0 \\ 2 & \color{red}4 \end{bmatrix}$$
3×3 example:
$$\begin{bmatrix} \color{red}1 & 0 & 0 \\ 4 & \color{red}2 & 0 \\ 7 & 5 & \color{red}3 \end{bmatrix}$$

Identity Matrix



Definition:

A square matrix with 1s on the main diagonal and 0s elsewhere.
The identity matrix of size $n \times n$:
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$
Identity matrix of size 2×2:
$$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
Identity matrix of size 3×3:
$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
Identity matrix of size 4×4:
$$I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Anti-symmetric (Skew-symmetric) Matrix



Definition:

A square matrix that equals the negative of its transpose: $A = -A^T$. All diagonal elements must be zero.
General form:
$$A = \begin{bmatrix} \color{red}0 & a_{12} & a_{13} & \cdots & a_{1n} \\ -a_{12} & \color{red}0 & a_{23} & \cdots & a_{2n} \\ -a_{13} & -a_{23} & \color{red}0 & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{1n} & -a_{2n} & -a_{3n} & \cdots & \color{red}0 \end{bmatrix}$$
where $a_{ij} = -a_{ji}$ for all $i, j$
2×2 example:
$$\begin{bmatrix} \color{red}0 & 2 \\ -2 & \color{red}0 \end{bmatrix}$$
3×3 example:
$$\begin{bmatrix} \color{red}0 & 3 & 1 \\ -3 & \color{red}0 & 2 \\ -1 & -2 & \color{red}0 \end{bmatrix}$$

Diagonal Matrix



Definition:

A square matrix whose elements off the main diagonal are all zero (nonzero entries can appear only on the main diagonal).
A diagonal matrix of size $n \times n$ contains arbitrary values on the main diagonal:
$$D_n = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}$$

Symmetric Matrix



Definition:

A matrix equal to its transpose: $A = A^T$.
The elements of a symmetric matrix mirror across the main diagonal:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{12} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{13} & a_{23} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \cdots & a_{nn} \end{bmatrix}$$
where $a_{12} = a_{21}, a_{13} = a_{31}, \ldots, a_{ij} = a_{ji}$

Elementary Matrix



Definition:

A matrix representing one elementary row operation applied to the identity matrix
$$E = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
swaps rows 1 and 2
$$E = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
multiplies row 1 by 2
$$E = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
adds 3 times row 1 to row 2

Orthogonal Matrix



Definition:

A square matrix $A$ where $A^T = A^{-1}$; equivalently, $AA^T = A^TA = I$
Properties:
- $\det(A) = \pm 1$
- $A^TA = AA^T = I$
- $(AB)^T(AB) = I$ for orthogonal $A, B$
Rotation matrices:
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$
Reflection matrices:
$$\begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix}$$
- Preserve lengths and angles
- Used in rotations and reflections
- Important in orthogonal transformations

Scalar Matrix



Definition:

A square matrix in which all elements are zero except those on the main diagonal, which are all equal to the same constant.
A scalar matrix is a diagonal matrix whose diagonal entries are all equal:
$$\lambda I = \begin{bmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \end{bmatrix}$$
where $\lambda$ is any real number
A 2×2 scalar matrix example:
$$\begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}$$
A 3×3 scalar matrix example:
$$\begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}$$

Singular Matrix



Definition:

A square matrix with $\det(A) = 0$
- $\text{rank}(A) < n$ for an $n \times n$ matrix
- The system $Ax = b$ has no unique solution
- Non-invertible: $A^{-1}$ does not exist
$$\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$$
has linearly dependent rows/columns

Positive Definite Matrix



Definition:

A symmetric matrix $A$ where $x^TAx > 0$ for all nonzero $x$
- $\lambda_i > 0$ for all eigenvalues
- $A^{-1}$ exists and is positive definite
- $x^TAx > 0$ for all $x \neq 0$
$$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$$
with eigenvalues 1 and 3
- $\det(A_k) > 0$ for all leading principal minors $A_k$
- A Cholesky decomposition exists: $A = LL^T$
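
A sketch of both tests on the example above (NumPy assumed):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# All eigenvalues of a positive definite matrix are positive
print(np.linalg.eigvalsh(A))  # [1. 3.]

# Cholesky factorization A = L L^T succeeds exactly when A is positive definite
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))  # True
```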

Block Matrix



Definition:

A matrix partitioned into submatrices $A_{ij}$
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$$
where each $A_{ij}$ is itself a matrix
Blockwise multiplication: $(AB)_{ij} = \sum_k A_{ik}B_{kj}$
Blockwise inverse (when it exists):
$$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} P & Q \\ R & S \end{bmatrix}$$

Sparse Matrix



Definition:

A matrix with mostly zero elements, typically $O(n)$ nonzero elements
Storage schemes keep only the nonzero elements together with their indices:
- Compressed Row Storage (CRS)
- Compressed Column Storage (CCS)
$$\begin{bmatrix} 1 & 0 & 2 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ 5 & 0 & 0 & 0 \end{bmatrix}$$
Applications:
- Network adjacency matrices
- Finite element analysis
- Solving large systems
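
A sketch of compressed row storage using SciPy (an assumed library choice; `csr_matrix` implements the CRS scheme above):

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[1, 0, 2, 0],
                  [0, 0, 3, 0],
                  [0, 0, 0, 4],
                  [5, 0, 0, 0]])

S = csr_matrix(dense)     # stores only the nonzeros, row by row
print(S.nnz)              # 5 nonzero elements out of 16
print(S.data, S.indices)  # nonzero values and their column indices
```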

Matrix Operations

Transposition



Definition:

An operation that flips a matrix over its main diagonal, switching rows and columns: $(A^T)_{ij} = A_{ji}$
For a matrix $A$ of size $n \times m$, its transpose $A^T$ is of size $m \times n$:
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \rightarrow A^T = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{bmatrix}$$
Properties:
- $(A^T)^T = A$
- $(AB)^T = B^T A^T$
- $(A + B)^T = A^T + B^T$

Matrix Addition



Definition:

For matrices $A, B$ of the same size: $(A+B)_{ij} = A_{ij} + B_{ij}$
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$$
- Commutative: $A + B = B + A$
- Associative: $(A + B) + C = A + (B + C)$

Scalar Addition (Matrix)



Definition:

For a scalar $c$ and matrix $A$: $(c+A)_{ij} = c + A_{ij}$
$$2 + \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 4 & 6 \end{bmatrix}$$
Commutative: $c + A = A + c$

Scalar Multiplication (Matrix)



Definition:

For a scalar $c$ and matrix $A$: $(cA)_{ij} = cA_{ij}$
$$2\begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ 4 & 8 \end{bmatrix}$$
- Associative: $a(bA) = (ab)A$
- Distributive: $a(A + B) = aA + aB$ and $(a + b)A = aA + bA$

Matrix Multiplication



Definition:

For matrices $A_{m\times n}$ and $B_{n\times p}$: $(AB)_{ij} = \sum_{k=1}^n a_{ik}b_{kj}$
$$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1(5) + 2(7) & 1(6) + 2(8) \\ 3(5) + 4(7) & 3(6) + 4(8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$
- Not commutative: $AB \neq BA$ in general
- Associative: $(AB)C = A(BC)$
- Distributive: $A(B + C) = AB + AC$
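
The same product in NumPy (an assumed choice), which also shows the non-commutativity:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

print(A @ B)  # [[19 22] [43 50]]
print(B @ A)  # [[23 34] [31 46]] -> AB != BA in general
```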

Matrix Properties

Determinant



Definition:

For a square matrix $A$, denoted $\det(A)$ or $|A|$
$$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$
$$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$
For square matrices:
- $\det(AB) = \det(A)\det(B)$
- $\det(A^T) = \det(A)$
- $A$ is invertible $\iff \det(A) \neq 0$
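
A one-line numerical check of the 2×2 formula (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.det(A))  # 1*4 - 2*3 = -2.0 (up to floating-point error)
```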

Inverse Matrix



Definition:

For a square matrix $A$, its inverse $A^{-1}$ satisfies $AA^{-1} = A^{-1}A = I$
For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, if $\det(A) \neq 0$:
$$A^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
Properties:
- $(A^{-1})^{-1} = A$
- $(AB)^{-1} = B^{-1}A^{-1}$
- $\det(A^{-1}) = \frac{1}{\det(A)}$
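
Applying the 2×2 formula numerically (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)  # [[-2.   1. ] [ 1.5 -0.5]] = (1/(1*4 - 2*3)) * [[4, -2], [-3, 1]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^{-1} = I
```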

Rank



Definition:

The number of linearly independent rows/columns, denoted $\text{rank}(A)$
- $\text{rank}(A) \leq \min(m,n)$ for $A_{m\times n}$
- $\text{rank}(A) = \text{rank}(A^T)$
- $\text{rank}(AB) \leq \min(\text{rank}(A), \text{rank}(B))$
Full rank: the matrix has the maximum possible rank, $\text{rank}(A) = \min(m,n)$

Trace



Definition:

The sum of the main diagonal elements: $\text{tr}(A) = \sum_{i=1}^n a_{ii}$
$$\text{tr}\begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} = 2 + 3 = 5$$
- $\text{tr}(A + B) = \text{tr}(A) + \text{tr}(B)$
- $\text{tr}(AB) = \text{tr}(BA)$
- $\text{tr}(cA) = c\,\text{tr}(A)$

Matrix Size



Definition:

A matrix with $m$ rows and $n$ columns is denoted $A_{m\times n}$ or $A \in \mathbb{R}^{m\times n}$
- Square matrix: $m = n$
- Rectangular matrix: $m \neq n$
- Column vector: $n = 1$
- Row vector: $m = 1$
For multiplication $AB$: $A_{m\times n}B_{n\times p} = C_{m\times p}$

Eigenvalues



Definition:

A scalar $\lambda$ satisfying $Av = \lambda v$ for some nonzero vector $v$ (an eigenvector)
Found by solving the characteristic equation:
$$\det(A - \lambda I) = 0$$
Properties:
- $\text{tr}(A) = \sum \lambda_i$
- $\det(A) = \prod \lambda_i$
- $\lambda(A^{-1}) = \frac{1}{\lambda(A)}$ if $A$ is invertible
For $A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$:
$$\det\begin{bmatrix} 3-\lambda & 1 \\ 0 & 2-\lambda \end{bmatrix} = 0 \implies \lambda = 2, 3$$
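
The same eigenvalues computed numerically (a sketch, assuming NumPy; the ordering of the returned eigenvalues is not guaranteed):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [3. 2.] (order may vary)

# Each column of `eigenvectors` satisfies A v = lambda v
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # True
```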

Eigenvectors



Definition:

A nonzero vector $v$ satisfying $Av = \lambda v$ for an eigenvalue $\lambda$
- Eigenvectors $v_1, v_2$ with distinct eigenvalues $\lambda_1, \lambda_2$ are linearly independent
- If $v$ is an eigenvector, so is any nonzero scalar multiple $kv$
For $A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$ with $\lambda = 3$:
$$\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \implies v = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

Matrix Transformations

Echelon Form



Definition:

A matrix where each row's leading nonzero entry (pivot) lies to the right of the pivots in the rows above
$$\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$$
- The leading entry in any row is to the right of the leading entries above it
- All zero rows are at the bottom
- Each leading entry has zeros below it

Reduced Row Echelon Form



Definition:

A row echelon form in which each pivot is 1 and is the only nonzero entry in its column
$$\begin{bmatrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \end{bmatrix}$$
- Leading 1s (pivots) are the only nonzero entries in their columns
- Each pivot is 1
- Unique for each matrix
- Used in solving systems of equations
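
A sketch of computing a reduced row echelon form with SymPy (an assumed library choice), applied to the augmented matrix of $x + 2y = 5$, $3x - y = 1$:

```python
from sympy import Matrix

M = Matrix([[1, 2, 5],
            [3, -1, 1]])

rref_matrix, pivot_columns = M.rref()
print(rref_matrix)    # Matrix([[1, 0, 1], [0, 1, 2]]) -> x = 1, y = 2
print(pivot_columns)  # (0, 1)
```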

Adjoint



Definition:

For a matrix $A$, the adjoint $\text{adj}(A)$ is the transpose of the cofactor matrix
For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$:
$$\text{adj}(A) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
- $A\,\text{adj}(A) = \text{adj}(A)\,A = \det(A)I$
- $A^{-1} = \frac{1}{\det(A)}\text{adj}(A)$ when $\det(A) \neq 0$

LU Decomposition



Definition:

A factorization $A = LU$ where $L$ is lower triangular and $U$ is upper triangular
$$\begin{bmatrix} 2 & 1 \\ 8 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 4 & 1 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$$
Uses:
- Solve $Ax = b$ efficiently
- Compute the determinant as the product of the diagonal entries of $L$ and $U$
- Factorize once, then solve for multiple right-hand sides $b$
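
A sketch using SciPy (an assumed library; `scipy.linalg.lu` uses partial pivoting, so its factors may differ from the hand-worked ones above by a row permutation $P$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0],
              [8.0, 7.0]])

P, L, U = lu(A)  # factorization A = P L U with a permutation matrix P
print(L)  # unit lower triangular
print(U)  # upper triangular
print(np.allclose(P @ L @ U, A))  # True
```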

QR Decomposition



Definition:

A factorization $A = QR$ where $Q$ is orthogonal ($Q^TQ = I$) and $R$ is upper triangular
- $Q^TQ = QQ^T = I$
- $R_{ii} \geq 0$ for the standard QR
Uses:
- Solve least squares problems
- Compute eigenvalues
- Solve systems of equations
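
A sketch with NumPy (an assumed choice; note that `np.linalg.qr` does not force $R_{ii} \geq 0$):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns of Q are orthonormal
print(np.allclose(Q @ R, A))            # True: A = QR
print(R)                                # upper triangular
```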

Diagonalization



Definition:

A factorization $A = PDP^{-1}$ where $D$ is a diagonal matrix of eigenvalues
A matrix $A$ is diagonalizable if:
- $n$ linearly independent eigenvectors exist
- The geometric multiplicity of each eigenvalue equals its algebraic multiplicity
$$D = \begin{bmatrix} \lambda_1 & 0 & \cdots \\ 0 & \lambda_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$$
where the columns of $P$ are the corresponding eigenvectors
Uses:
- Compute $A^n$ easily: $A^n = PD^nP^{-1}$
- Solve systems of differential equations
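
A sketch of diagonalizing the earlier example and using it for matrix powers (NumPy assumed):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigenvalues)

print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True: A = P D P^{-1}

# Powers via the diagonal factor: A^5 = P D^5 P^{-1}
A5 = P @ np.diag(eigenvalues**5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```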

Matrix Applications

Augmented Matrix



Definition:

The matrix $[A|b]$ representing the system $Ax = b$
The system $x + 2y = 5$, $3x - y = 1$ becomes:
$$\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & -1 & 1 \end{array}\right]$$
Uses:
- Solve systems using row operations
- Find inverse matrices
- Gaussian elimination
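
Solving that system numerically (a sketch, assuming NumPy; the elimination happens inside the solver):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system above
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])

x = np.linalg.solve(A, b)
print(x)  # [1. 2.] -> x = 1, y = 2
```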