
Linear Algebra



Introduction to Linear Algebra

Linear algebra is a field of mathematics that focuses on studying vectors, matrices, and the relationships between them, forming the mathematical framework for analyzing structures and transformations in multidimensional spaces. It introduces powerful tools to understand and solve problems where quantities interact linearly, making it fundamental to numerous disciplines.

This field begins with vectors—quantities that have both magnitude and direction—and their operations, such as addition and scaling. It extends to matrices, which are grid-like arrangements of numbers used to represent systems of equations or transformations. Learning how to manipulate matrices and understand their properties is a key part of linear algebra.

Students also explore vector spaces, the environments in which vectors live, and subspaces, which reveal structure and constraints within these spaces. Concepts like linear independence, span, and basis give insight into how vectors relate and interact. The study of linear transformations, which describe how vectors change under operations like rotations or scaling, helps build a deeper understanding of the subject.

To help students navigate these foundational concepts, we created a dedicated Matrix Theory section that provides an in-depth exploration of matrices - from basic definitions and notations to various matrix types and properties. This interactive guide covers matrix structure, indexing, special cases of square matrices, and key matrix properties, with clear mathematical notation and visual examples throughout. The section serves as both a comprehensive learning resource and a quick reference for understanding these essential building blocks of linear algebra.

Eigenvalues and eigenvectors, pivotal concepts in linear algebra, allow students to uncover hidden properties of transformations. Techniques like solving systems of equations, matrix decomposition, and understanding projections or orthogonality are practical outcomes of this study.

Ultimately, linear algebra provides a foundation for solving abstract and applied problems, developing skills to think logically, recognize patterns, and simplify complex systems. It equips students with a versatile toolkit for further studies in mathematics, sciences, engineering, and beyond.

Linear Algebra Formulas

Navigate through an essential collection of linear algebra formulas that power mathematical analysis and transformations. This guide presents key formulas across vector operations, matrix calculations, eigenvalues, and linear transformations - each equipped with clear notation, detailed explanations, and practical examples. You will find precise mathematical representations, component breakdowns, and specific use cases for over 15 fundamental formula categories. The organized structure helps you quickly locate and understand the tools you need, whether for solving equations, analyzing transformations, or applying linear algebra concepts in real-world scenarios. Perfect for students needing formula clarification, researchers requiring quick mathematical reference, or practitioners applying linear algebra in their work.


Vector Component Form

\mathbf{v} = (v_1, v_2, \ldots, v_n) \in \mathbb{R}^n

Standard Basis Decomposition

\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + \cdots + v_n \mathbf{e}_n

Direction Cosines

\cos\alpha = \frac{v_1}{\|\mathbf{v}\|}, \quad \cos\beta = \frac{v_2}{\|\mathbf{v}\|}, \quad \cos\gamma = \frac{v_3}{\|\mathbf{v}\|}

Direction Cosines Identity

\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1

Vector Addition

\mathbf{a} + \mathbf{b} = (a_1 + b_1,\ a_2 + b_2,\ \ldots,\ a_n + b_n)

Vector Subtraction

\mathbf{a} - \mathbf{b} = \mathbf{a} + (-\mathbf{b}) = (a_1 - b_1,\ a_2 - b_2,\ \ldots,\ a_n - b_n)

Scalar Multiplication of Vectors

c\mathbf{a} = (ca_1,\ ca_2,\ \ldots,\ ca_n)
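The componentwise rules above translate directly into code. A minimal sketch in plain Python (the helper names are illustrative, not from any library):

```python
# Componentwise vector operations on plain Python lists.
def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

def vec_sub(a, b):
    return [x - y for x, y in zip(a, b)]

def vec_scale(c, a):
    return [c * x for x in a]

a, b = [1, 2, 3], [4, 5, 6]
s = vec_add(a, b)    # [5, 7, 9]
d = vec_sub(a, b)    # [-3, -3, -3]
t = vec_scale(2, a)  # [2, 4, 6]
```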

Linear Combination

c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k

Span

\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \{c_1 \mathbf{v}_1 + \cdots + c_k \mathbf{v}_k \mid c_i \in \mathbb{R}\}

Euclidean Norm

\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2} = \sqrt{\sum_{i=1}^{n} v_i^2}

Distance Formula

d(\mathbf{a}, \mathbf{b}) = \|\mathbf{a} - \mathbf{b}\| = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2}

Vector Normalization

\hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}

Norm Scaling Property

\|c\mathbf{v}\| = |c|\,\|\mathbf{v}\|

Triangle Inequality

\|\mathbf{a} + \mathbf{b}\| \leq \|\mathbf{a}\| + \|\mathbf{b}\|

Cauchy-Schwarz Inequality

|\mathbf{a} \cdot \mathbf{b}| \leq \|\mathbf{a}\|\,\|\mathbf{b}\|

Dot Product (Algebraic)

\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i

Dot Product (Geometric)

\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta

Angle Between Vectors

\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}

Self Dot Product

\mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 + \cdots + v_n^2 = \|\mathbf{v}\|^2

Orthogonality Condition

\mathbf{a} \cdot \mathbf{b} = 0 \iff \mathbf{a} \perp \mathbf{b}
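The algebraic dot product, the norm, and the angle formula fit together in a few lines. A standard-library sketch, checking the orthogonality condition on a concrete pair:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

a, b = [3, 4], [4, -3]
cos_theta = dot(a, b) / (norm(a) * norm(b))
theta = math.acos(cos_theta)
# dot(a, b) == 12 - 12 == 0, so a and b are orthogonal: theta is pi/2
```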

Scalar Projection

\text{comp}_{\mathbf{b}}\,\mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}

Vector Projection

\text{proj}_{\mathbf{b}}\,\mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|^2}\,\mathbf{b} = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}}\,\mathbf{b}

Orthogonal Decomposition

\mathbf{a} = \text{proj}_{\mathbf{b}}\,\mathbf{a} + \mathbf{a}_{\perp}, \quad \mathbf{a}_{\perp} = \mathbf{a} - \text{proj}_{\mathbf{b}}\,\mathbf{a}
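The projection formula and the orthogonal decomposition can be checked numerically. A small sketch in plain Python (helper names are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def proj(a, b):
    # proj_b(a) = ((a . b) / (b . b)) b
    c = dot(a, b) / dot(b, b)
    return [c * x for x in b]

a, b = [2, 3], [1, 0]
p = proj(a, b)                        # component of a along b
perp = [x - y for x, y in zip(a, p)]  # a_perp = a - proj_b(a)
# p + perp reconstructs a, and perp is orthogonal to b
```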

Cross Product (Component Form)

\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2,\ a_3 b_1 - a_1 b_3,\ a_1 b_2 - a_2 b_1)

Cross Product (Determinant Form)

\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}

Cross Product Magnitude

\|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta

Standard Basis Cross Products

\mathbf{i} \times \mathbf{j} = \mathbf{k}, \quad \mathbf{j} \times \mathbf{k} = \mathbf{i}, \quad \mathbf{k} \times \mathbf{i} = \mathbf{j}

Cross Product Anti-Commutativity

\mathbf{a} \times \mathbf{b} = -(\mathbf{b} \times \mathbf{a})

Parallelism Test (Cross Product)

\mathbf{a} \times \mathbf{b} = \mathbf{0} \iff \mathbf{a} \parallel \mathbf{b}
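The component formula makes the basis identities and anti-commutativity easy to verify. A minimal sketch:

```python
def cross(a, b):
    # Component form of the 3D cross product.
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

i, j = [1, 0, 0], [0, 1, 0]
k = cross(i, j)     # i x j = k -> [0, 0, 1]
anti = cross(j, i)  # j x i = -k -> [0, 0, -1] (anti-commutativity)
```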

Vector Triple Product

\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}

Lagrange Identity

(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})

Scalar Triple Product

\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

Parallelogram Area

\text{Area} = \|\mathbf{a} \times \mathbf{b}\|

Parallelepiped Volume

V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|

Pyramid Volume

V = \tfrac{1}{6}|\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|
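The volume formulas reduce to one scalar triple product. A sketch on the edge vectors of an axis-aligned 2 x 3 x 4 box, where the expected volume is obvious:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Edge vectors of a 2 x 3 x 4 box, from one vertex.
a, b, c = [2, 0, 0], [0, 3, 0], [0, 0, 4]
triple = dot(a, cross(b, c))    # scalar triple product = 24
box_volume = abs(triple)        # parallelepiped volume = 24
pyramid_volume = abs(triple) / 6
```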

Vector Space Axioms

\begin{aligned} &\text{For all } \mathbf{u}, \mathbf{v}, \mathbf{w} \in V \text{ and all } c, d \in \mathbb{F}: \\ &(1)\ \mathbf{u} + \mathbf{v} \in V \quad (2)\ \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \\ &(3)\ (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) \\ &(4)\ \exists\, \mathbf{0} \in V: \mathbf{v} + \mathbf{0} = \mathbf{v} \\ &(5)\ \exists\, -\mathbf{v} \in V: \mathbf{v} + (-\mathbf{v}) = \mathbf{0} \\ &(6)\ c\mathbf{v} \in V \quad (7)\ c(d\mathbf{v}) = (cd)\mathbf{v} \\ &(8)\ c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v} \\ &(9)\ (c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v} \quad (10)\ 1\mathbf{v} = \mathbf{v} \end{aligned}

Scalar-Zero Property

0\mathbf{v} = \mathbf{0}, \quad c\mathbf{0} = \mathbf{0}

Negative One Scalar

(-1)\mathbf{v} = -\mathbf{v}

Subspace Test

W \subseteq V \text{ is a subspace} \iff \begin{cases} W \neq \emptyset \\ \mathbf{u}, \mathbf{v} \in W \Rightarrow \mathbf{u} + \mathbf{v} \in W \\ \mathbf{v} \in W,\ c \in \mathbb{F} \Rightarrow c\mathbf{v} \in W \end{cases}

Subspace Test Combined

W \subseteq V \text{ is a subspace} \iff W \neq \emptyset \text{ and } c\mathbf{u} + d\mathbf{v} \in W \text{ for all } \mathbf{u}, \mathbf{v} \in W,\ c, d \in \mathbb{F}

Span (Set Definition)

\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \left\{c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k \mid c_i \in \mathbb{F}\right\}

Span Membership Criterion

\mathbf{b} \in \text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \iff A\mathbf{c} = \mathbf{b} \text{ is consistent, where } A = [\mathbf{v}_1\ \cdots\ \mathbf{v}_k]

Span Is Smallest Subspace

\text{Span}(K) = \bigcap_{\substack{W \text{ subspace} \\ K \subseteq W}} W

Linear Independence Equation

\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \text{ is independent} \iff \bigl(c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \Rightarrow c_1 = \cdots = c_k = 0\bigr)

Linear Independence Matrix Test

\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \subset \mathbb{R}^m \text{ is independent} \iff A\mathbf{c} = \mathbf{0} \text{ has only the trivial solution, where } A = [\mathbf{v}_1\ \cdots\ \mathbf{v}_k]

Linear Independence Determinant Test

\{\mathbf{v}_1, \ldots, \mathbf{v}_n\} \subset \mathbb{R}^n \text{ is independent} \iff \det[\mathbf{v}_1\ \cdots\ \mathbf{v}_n] \neq 0
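The determinant test is straightforward to run numerically. A sketch using NumPy, with the vectors placed as columns of a square matrix:

```python
import numpy as np

# Put the vectors to test as the columns of a square matrix.
A = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])
independent = not np.isclose(np.linalg.det(A), 0.0)  # det = 1: independent

# Replace the third vector by a combination of the first two:
B = A.copy()
B[:, 2] = A[:, 0] + 2 * A[:, 1]
dependent = bool(np.isclose(np.linalg.det(B), 0.0))  # det = 0: dependent
```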

Max Independent Set Size

|S| > \dim V \Rightarrow S \text{ is dependent}

Wronskian Test

W(f_1, \ldots, f_n)(x) = \det\begin{pmatrix} f_1(x) & \cdots & f_n(x) \\ f_1'(x) & \cdots & f_n'(x) \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}

Basis Definition

\mathcal{B} = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\} \text{ is a basis for } V \iff \begin{cases} \mathcal{B} \text{ is linearly independent} \\ \text{Span}(\mathcal{B}) = V \end{cases}

Unique Basis Representation

\forall\, \mathbf{v} \in V,\ \exists!\, (c_1, \ldots, c_n): \mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n

Coordinate Vector

[\mathbf{v}]_\mathcal{B} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} \quad \text{where } \mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n

Standard Basis of R^n

\mathbf{e}_i = (0, \ldots, 0, \underset{i\text{-th}}{1}, 0, \ldots, 0), \quad i = 1, \ldots, n

Change of Basis Formula

[\mathbf{v}]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}}\, [\mathbf{v}]_\mathcal{B}

Change of Basis Inverse

P_{\mathcal{B} \leftarrow \mathcal{C}} = \bigl(P_{\mathcal{C} \leftarrow \mathcal{B}}\bigr)^{-1}

Coordinate Map Linearity

[\mathbf{u} + \mathbf{v}]_\mathcal{B} = [\mathbf{u}]_\mathcal{B} + [\mathbf{v}]_\mathcal{B}, \qquad [c\mathbf{v}]_\mathcal{B} = c\,[\mathbf{v}]_\mathcal{B}

Dimension Definition

\dim(V) = |\mathcal{B}| \quad \text{for any basis } \mathcal{B} \text{ of } V

Subspace Dimension Inequality

W \subseteq V \Rightarrow \dim(W) \leq \dim(V), \quad \text{with equality} \iff W = V

Dimension Sum Formula

\dim(W_1 + W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2)

Direct Sum Criterion

V = W_1 \oplus W_2 \iff V = W_1 + W_2 \text{ and } W_1 \cap W_2 = \{\mathbf{0}\}

Direct Sum Dimension

V = W_1 \oplus W_2 \Rightarrow \dim(V) = \dim(W_1) + \dim(W_2)

Rank-Nullity Theorem (Matrix Form)

\dim(\text{Col}\,A) + \dim(\text{Null}\,A) = n

Four Fundamental Subspaces Dimensions

\begin{aligned} \dim(\text{Col}\,A) &= r & \dim(\text{Row}\,A) &= r \\ \dim(\text{Null}\,A) &= n - r & \dim(\text{Null}\,A^T) &= m - r \end{aligned}

Row Rank Equals Column Rank

\dim(\text{Row}\,A) = \dim(\text{Col}\,A) = \text{rank}(A)
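The four dimensions above are easy to check numerically for a concrete matrix. A NumPy sketch on a rank-1 example with m = 2 rows and n = 3 columns:

```python
import numpy as np

# Second row is twice the first, so rank(A) = 1.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)   # 1
nullity = n - r                # dim Null(A)   = n - r = 2
left_nullity = m - r           # dim Null(A^T) = m - r = 1
```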

Matrix Equality

A = B \iff a_{ij} = b_{ij} \text{ for all } i, j

Matrix Addition

(A + B)_{ij} = a_{ij} + b_{ij}

Matrix Subtraction

(A - B)_{ij} = a_{ij} - b_{ij}

Scalar Multiplication of Matrices

(cA)_{ij} = c \cdot a_{ij}

Matrix Multiplication

(AB)_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}

Matrix-Vector Product (Column Form)

A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n

Matrix Multiplication Associativity

(AB)C = A(BC)

Matrix Multiplication Distributivity

A(B + C) = AB + AC, \qquad (A + B)C = AC + BC

Matrix Multiplication Non-Commutativity

AB \neq BA \quad \text{in general}

Matrix Power

A^0 = I, \qquad A^k = \underbrace{A \cdot A \cdots A}_{k \text{ factors}}, \qquad A^{-k} = (A^{-1})^k
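Non-commutativity and matrix powers are quick to demonstrate with NumPy's `@` operator and `matrix_power`:

```python
import numpy as np

A = np.array([[1, 1],
              [0, 1]])
B = np.array([[1, 0],
              [1, 1]])

commute = np.array_equal(A @ B, B @ A)    # False: AB and BA differ here
A3 = np.linalg.matrix_power(A, 3)         # repeated product A.A.A
power_ok = np.array_equal(A3, [[1, 3], [0, 1]])
```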

Transpose Definition

(A^T)_{ij} = a_{ji}

Transpose Involution

(A^T)^T = A

Transpose of Sum

(A + B)^T = A^T + B^T

Transpose of Scalar Multiple

(cA)^T = c\, A^T

Transpose of Product

(AB)^T = B^T A^T

Symmetric Matrix Definition

A = A^T \iff a_{ij} = a_{ji} \text{ for all } i, j

Skew-Symmetric Matrix Definition

A^T = -A \iff a_{ij} = -a_{ji}

Symmetric Skew Decomposition

A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T)

Gram Matrix Symmetry

(A^T A)^T = A^T A, \qquad (A A^T)^T = A A^T
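The symmetric/skew decomposition splits any square matrix into two parts with the stated symmetries. A NumPy sketch verifying all three properties:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
S = 0.5 * (A + A.T)   # symmetric part
K = 0.5 * (A - A.T)   # skew-symmetric part

recombined_ok = np.allclose(S + K, A)  # S + K = A
symmetric_ok = np.allclose(S, S.T)     # S = S^T
skew_ok = np.allclose(K, -K.T)         # K^T = -K
```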

Identity Matrix Definition

I_n = [\delta_{ij}], \qquad \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}

Identity Matrix Property

AI = IA = A

Diagonal Matrix Definition

D = \operatorname{diag}(d_1, d_2, \ldots, d_n) \iff d_{ij} = 0 \text{ for } i \neq j

Diagonal Matrix Power

D^k = \operatorname{diag}(d_1^k, d_2^k, \ldots, d_n^k)

Diagonal Matrix Determinant

\det\bigl(\operatorname{diag}(d_1, \ldots, d_n)\bigr) = d_1 d_2 \cdots d_n

Triangular Matrix Determinant

\det(T) = t_{11} \, t_{22} \cdots t_{nn}

Orthogonal Matrix Definition

Q^T Q = Q Q^T = I

Orthogonal Matrix Determinant

\det(Q) = \pm 1

Idempotent Matrix Definition

A^2 = A

Idempotent Rank Trace

A^2 = A \implies \operatorname{rank}(A) = \operatorname{tr}(A)

Nilpotent Matrix Definition

A^k = O \quad \text{for some } k \geq 1

Neumann Series Nilpotent

A^k = O \implies (I - A)^{-1} = I + A + A^2 + \cdots + A^{k-1}

Involutory Matrix Definition

A^2 = I

Cross Product Skew Matrix

\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_\times \mathbf{b}, \qquad [\mathbf{a}]_\times = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}

Inverse Definition

A A^{-1} = A^{-1} A = I

Inverse 2x2 Formula

\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \quad (ad - bc \neq 0)

Inverse via Adjugate

A^{-1} = \frac{1}{\det(A)}\, \operatorname{adj}(A)

Inverse via Row Reduction

[A \mid I] \;\xrightarrow{\text{row ops}}\; [I \mid A^{-1}]

Inverse Involution

(A^{-1})^{-1} = A

Inverse of Product

(AB)^{-1} = B^{-1} A^{-1}

Inverse of Transpose

(A^T)^{-1} = (A^{-1})^T

Inverse of Scalar Multiple

(cA)^{-1} = \frac{1}{c}\, A^{-1} \quad (c \neq 0)

Inverse of Power

(A^k)^{-1} = (A^{-1})^k = A^{-k}

Determinant of Inverse

\det(A^{-1}) = \frac{1}{\det(A)}

Diagonal Matrix Inverse

\operatorname{diag}(d_1, \ldots, d_n)^{-1} = \operatorname{diag}\!\left(\frac{1}{d_1}, \ldots, \frac{1}{d_n}\right) \quad (d_i \neq 0)

Orthogonal Matrix Inverse

Q^{-1} = Q^T

Solve System via Inverse

A\mathbf{x} = \mathbf{b} \implies \mathbf{x} = A^{-1}\mathbf{b}
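Solving through the explicit inverse works, but numerically it is usually better to let the solver factor the matrix instead. A NumPy sketch comparing the two routes:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det = 1, so A is invertible
b = np.array([3.0, 2.0])

A_inv = np.linalg.inv(A)
x_via_inverse = A_inv @ b
x_via_solve = np.linalg.solve(A, b)  # preferred: never forms A^{-1}

same = np.allclose(x_via_inverse, x_via_solve)
solution_ok = np.allclose(A @ x_via_solve, b)
```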

Invertible Matrix Theorem

\begin{aligned} \text{For } A \in \mathbb{R}^{n \times n}, \text{ the following are equivalent:} & \\ (1)\ A \text{ is invertible} \quad (2)\ \det(A) \neq 0 \quad (3)\ \operatorname{rank}(A) = n & \\ (4)\ \text{columns of } A \text{ are linearly independent} & \\ (5)\ \text{rows of } A \text{ are linearly independent} & \\ (6)\ \text{columns of } A \text{ span } \mathbb{R}^n \quad (7)\ \text{columns form a basis of } \mathbb{R}^n & \\ (8)\ A\mathbf{x} = \mathbf{0} \text{ has only the trivial solution} & \\ (9)\ A\mathbf{x} = \mathbf{b} \text{ has a unique solution for every } \mathbf{b} & \\ (10)\ \operatorname{Null}(A) = \{\mathbf{0}\} & \\ (11)\ \operatorname{rref}(A) = I \quad (12)\ A \text{ is a product of elementary matrices} & \\ (13)\ 0 \text{ is not an eigenvalue of } A & \end{aligned}

Singular Matrix Definition

A \text{ singular} \iff \det(A) = 0 \iff \operatorname{rank}(A) < n

Rank Bounds

0 \leq \operatorname{rank}(A) \leq \min(m, n)

Rank of Transpose

\operatorname{rank}(A^T) = \operatorname{rank}(A)

Rank Product Inequality

\operatorname{rank}(AB) \leq \min\bigl(\operatorname{rank}(A), \operatorname{rank}(B)\bigr)

Sylvester Rank Inequality

\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB)

Rank Sum Inequality

\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)

Rank Invariance Invertible

\operatorname{rank}(PAQ) = \operatorname{rank}(A) \quad (P, Q \text{ invertible})

Gram Rank Identity

\operatorname{rank}(A^T A) = \operatorname{rank}(A A^T) = \operatorname{rank}(A)

Rank-One Outer Product

A = \mathbf{u}\mathbf{v}^T \implies \operatorname{rank}(A) = 1 \quad (\mathbf{u}, \mathbf{v} \neq \mathbf{0})

Trace Definition

\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}

Trace Linearity

\operatorname{tr}(cA + dB) = c\operatorname{tr}(A) + d\operatorname{tr}(B)

Trace of Transpose

\operatorname{tr}(A^T) = \operatorname{tr}(A)

Trace Cyclic Property

\operatorname{tr}(AB) = \operatorname{tr}(BA)

Trace Sum of Eigenvalues

\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i

Trace Similarity Invariance

\operatorname{tr}(P^{-1}AP) = \operatorname{tr}(A)

Trace of Commutator

\operatorname{tr}(AB - BA) = 0

Trace Symmetric Skew Product

\operatorname{tr}(SK) = 0 \quad (S^T = S, \; K^T = -K)

Trace Orthonormal Basis

\operatorname{tr}(A) = \sum_{i=1}^{n} \mathbf{q}_i^T A \mathbf{q}_i

Frobenius Inner Product

\langle A, B \rangle_F = \operatorname{tr}(A^T B) = \sum_{i,j} a_{ij} b_{ij}

Frobenius Norm

\|A\|_F = \sqrt{\operatorname{tr}(A^T A)} = \sqrt{\sum_{i,j} a_{ij}^2}
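The cyclic property and the trace formula for the Frobenius norm are both one-liners to verify in NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# tr(AB) = tr(BA) even though AB != BA
cyclic_ok = np.isclose(np.trace(A @ B), np.trace(B @ A))

# ||A||_F = sqrt(tr(A^T A)) matches NumPy's built-in Frobenius norm
fro_from_trace = np.sqrt(np.trace(A.T @ A))
fro_ok = np.isclose(fro_from_trace, np.linalg.norm(A, 'fro'))
```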

Determinant 2x2

\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc

Determinant 3x3

\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})

Determinant Recursive Definition

\det(A) = \begin{cases} a_{11} & n = 1 \\ \displaystyle\sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \, M_{1j} & n \geq 2 \end{cases}

Determinant Permutation Formula

\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \, a_{\sigma(2),2} \cdots a_{\sigma(n),n}

Minor Definition

M_{ij} = \det\!\left(A^{(i,j)}\right) \quad (A^{(i,j)}: A \text{ with row } i \text{ and column } j \text{ deleted})

Cofactor Definition

C_{ij} = (-1)^{i+j} \, M_{ij}

Cofactor Matrix Definition

\operatorname{cof}(A) = \bigl[C_{ij}\bigr]_{n \times n}

Adjugate Definition

\operatorname{adj}(A) = \operatorname{cof}(A)^T

Laplace Row Expansion

\det(A) = \sum_{j=1}^{n} a_{ij} \, C_{ij} \qquad \text{for any fixed row } i

Laplace Column Expansion

\det(A) = \sum_{i=1}^{n} a_{ij} \, C_{ij} \qquad \text{for any fixed column } j
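The recursive definition and Laplace expansion translate directly into a short (if exponentially slow) function. A plain-Python sketch, expanding along the first row:

```python
def det(A):
    """Determinant by Laplace expansion along the first row.

    O(n!) time, so only sensible for small matrices; for real work
    use an LU-based routine such as numpy.linalg.det.
    """
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j; (-1)**j is the cofactor sign
        # (-1)^(1+j) in 1-based indexing.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
d = det(M)   # -3
```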

Determinant Row Swap

\det(B) = -\det(A) \quad (B \text{ obtained from } A \text{ by swapping two rows})

Determinant Row Scaling

\det(B) = k \, \det(A) \quad (B \text{ obtained from } A \text{ by multiplying one row by } k)

Determinant Row Addition

\det(B) = \det(A) \quad (B \text{ obtained from } A \text{ by adding a multiple of one row to another})

Determinant of Transpose

\det(A^T) = \det(A)

Determinant of Product

\det(AB) = \det(A) \, \det(B)

Determinant of Scalar Multiple

\det(kA) = k^n \, \det(A)

Determinant of Power

\det(A^k) = \bigl(\det(A)\bigr)^k

Determinant of Identity

\det(I_n) = 1

Block Triangular Determinant

\det\begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} = \det(A_{11}) \, \det(A_{22})

Vandermonde Determinant

\det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i)

Adjugate Identity

A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) \, I

Cramer's Rule

x_i = \frac{\det(A_i)}{\det(A)} \quad (A_i: A \text{ with column } i \text{ replaced by } \mathbf{b})
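Cramer's rule is a ratio of determinants per unknown. A NumPy sketch on a 2x2 system, building each A_i by column replacement:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # det = 5, so Cramer's rule applies
b = np.array([3.0, 5.0])

det_A = np.linalg.det(A)
x = []
for i in range(A.shape[1]):
    A_i = A.copy()
    A_i[:, i] = b             # A_i: A with column i replaced by b
    x.append(np.linalg.det(A_i) / det_A)

cramer_ok = np.allclose(A @ np.array(x), b)  # the x it produces solves Ax = b
```

Cramer's rule is mainly of theoretical interest; for systems beyond a few unknowns, elimination-based solvers are far cheaper and more stable.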

Determinant Signed Area 2D

\text{signed area}(\mathbf{u}, \mathbf{v}) = \det\begin{pmatrix} u_1 & v_1 \\ u_2 & v_2 \end{pmatrix} = u_1 v_2 - u_2 v_1

Determinant Signed Volume 3D

\text{signed volume}(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \det\begin{pmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{pmatrix} = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})

Determinant Volume Scaling Factor

\operatorname{vol}\bigl(A(S)\bigr) = |\det(A)| \cdot \operatorname{vol}(S)

Triangle Area via Determinant

\text{Area} = \frac{1}{2} \left| \det\begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{pmatrix} \right|

Tetrahedron Volume via Determinant

V = \frac{1}{6} \left| \det\begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{pmatrix} \right| \quad (\mathbf{e}_i: \text{edge vectors from one vertex})

Determinant Product of Eigenvalues

\det(A) = \lambda_1 \, \lambda_2 \cdots \lambda_n

Linear Equation Standard Form

a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b

Linear System Matrix Form

A\mathbf{x} = \mathbf{b}

Vector Equation Form

x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n = \mathbf{b}

Augmented Matrix Construction

[A \mid \mathbf{b}] = \left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right)

Row Echelon Form Definition

\text{REF: } \begin{pmatrix} \boxed{\ast} & \bullet & \bullet & \bullet & \bullet \\ 0 & \boxed{\ast} & \bullet & \bullet & \bullet \\ 0 & 0 & 0 & \boxed{\ast} & \bullet \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

Reduced Row Echelon Form Definition

\text{RREF: } \begin{pmatrix} \boxed{1} & 0 & \bullet & 0 & \bullet \\ 0 & \boxed{1} & \bullet & 0 & \bullet \\ 0 & 0 & 0 & \boxed{1} & \bullet \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}

RREF Uniqueness

\text{RREF}(A) \text{ is unique}

Pivot Definition

\text{pivot} = \text{leading nonzero entry of a row in echelon form}

Elementary Row Operations

\begin{aligned} R_i &\leftrightarrow R_j \quad \text{(swap)} \\ kR_i &\to R_i, \quad k \neq 0 \quad \text{(scaling)} \\ R_i + cR_j &\to R_i \quad \text{(addition)} \end{aligned}

Row Equivalence Preserves Solutions

[A \mid \mathbf{b}] \sim [A' \mid \mathbf{b}'] \;\Longrightarrow\; \text{Sol}(A\mathbf{x} = \mathbf{b}) = \text{Sol}(A'\mathbf{x} = \mathbf{b}')

Free Variables Count

\text{(number of free variables)} = n - \text{rank}(A)

Solvability Rank Criterion

A\mathbf{x} = \mathbf{b} \text{ is consistent} \iff \text{rank}(A) = \text{rank}([A \mid \mathbf{b}])

Solution Structure Decomposition

\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h, \quad \mathbf{x}_h \in \text{Null}(A)

Homogeneous Solution Space Dimension

\dim \text{Null}(A) = n - \text{rank}(A)

Underdetermined Homogeneous Has Nontrivial

n > m \;\Longrightarrow\; A\mathbf{x} = \mathbf{0} \text{ has a nontrivial solution}
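The rank criterion and the free-variable count can both be exercised on one small system. A NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def is_consistent(A, b):
    # Rank criterion: Ax = b is consistent iff rank(A) == rank([A | b]).
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

# Rank-1 system: second equation is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

ok = is_consistent(A, np.array([1.0, 2.0]))   # b respects the dependency
bad = is_consistent(A, np.array([1.0, 3.0]))  # b contradicts it

# Free variables: n - rank(A) = 3 - 1 = 2.
free_vars = A.shape[1] - np.linalg.matrix_rank(A)
```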

Vector Component Form

v=(v1,v2,,vn)Rn\mathbf{v} = (v_1, v_2, \ldots, v_n) \in \mathbb{R}^n

Standard Basis Decomposition

v=v1e1+v2e2++vnen\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + \cdots + v_n \mathbf{e}_n

Direction Cosines

cosα=v1v,cosβ=v2v,cosγ=v3v\cos\alpha = \frac{v_1}{\|\mathbf{v}\|}, \quad \cos\beta = \frac{v_2}{\|\mathbf{v}\|}, \quad \cos\gamma = \frac{v_3}{\|\mathbf{v}\|}

Direction Cosines Identity

cos2α+cos2β+cos2γ=1\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1

Vector Addition

a+b=(a1+b1, a2+b2, , an+bn)\mathbf{a} + \mathbf{b} = (a_1 + b_1,\ a_2 + b_2,\ \ldots,\ a_n + b_n)

Vector Subtraction

ab=a+(b)=(a1b1, a2b2, , anbn)\mathbf{a} - \mathbf{b} = \mathbf{a} + (-\mathbf{b}) = (a_1 - b_1,\ a_2 - b_2,\ \ldots,\ a_n - b_n)

Scalar Multiplication of Vectors

ca=(ca1, ca2, , can)c\mathbf{a} = (ca_1,\ ca_2,\ \ldots,\ ca_n)

Linear Combination

c1v1+c2v2++ckvkc_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_k \mathbf{v}_k

Span

Span{v1,,vk}={c1v1++ckvkciR}\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \{c_1 \mathbf{v}_1 + \cdots + c_k \mathbf{v}_k \mid c_i \in \mathbb{R}\}

Euclidean Norm

v=v12+v22++vn2=i=1nvi2\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2} = \sqrt{\sum_{i=1}^{n} v_i^2}

Distance Formula

d(a,b)=ab=(a1b1)2+(a2b2)2++(anbn)2d(\mathbf{a}, \mathbf{b}) = \|\mathbf{a} - \mathbf{b}\| = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \cdots + (a_n - b_n)^2}

Vector Normalization

v^=vv\hat{\mathbf{v}} = \frac{\mathbf{v}}{\|\mathbf{v}\|}

Norm Scaling Property

cv=cv\|c\mathbf{v}\| = |c|\,\|\mathbf{v}\|

Triangle Inequality

a+ba+b\|\mathbf{a} + \mathbf{b}\| \leq \|\mathbf{a}\| + \|\mathbf{b}\|

Cauchy-Schwarz Inequality

abab|\mathbf{a} \cdot \mathbf{b}| \leq \|\mathbf{a}\|\,\|\mathbf{b}\|

Dot Product (Algebraic)

ab=a1b1+a2b2++anbn=i=1naibi\mathbf{a} \cdot \mathbf{b} = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n = \sum_{i=1}^{n} a_i b_i

Dot Product (Geometric)

ab=abcosθ\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\|\,\|\mathbf{b}\|\cos\theta

Angle Between Vectors

cosθ=abab\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|\,\|\mathbf{b}\|}

Self Dot Product

vv=v12+v22++vn2=v2\mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 + \cdots + v_n^2 = \|\mathbf{v}\|^2

Orthogonality Condition

ab=0    ab\mathbf{a} \cdot \mathbf{b} = 0 \iff \mathbf{a} \perp \mathbf{b}

Scalar Projection

compba=abb\text{comp}_{\mathbf{b}}\,\mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|}

Vector Projection

projba=abb2b=abbbb\text{proj}_{\mathbf{b}}\,\mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{b}\|^2}\,\mathbf{b} = \frac{\mathbf{a} \cdot \mathbf{b}}{\mathbf{b} \cdot \mathbf{b}}\,\mathbf{b}

Orthogonal Decomposition

a=projba+a,a=aprojba\mathbf{a} = \text{proj}_{\mathbf{b}}\,\mathbf{a} + \mathbf{a}_{\perp}, \quad \mathbf{a}_{\perp} = \mathbf{a} - \text{proj}_{\mathbf{b}}\,\mathbf{a}

Cross Product (Component Form)

a×b=(a2b3a3b2, a3b1a1b3, a1b2a2b1)\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2,\ a_3 b_1 - a_1 b_3,\ a_1 b_2 - a_2 b_1)

Cross Product (Determinant Form)

a×b=ijka1a2a3b1b2b3\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}

Cross Product Magnitude

a×b=absinθ\|\mathbf{a} \times \mathbf{b}\| = \|\mathbf{a}\|\,\|\mathbf{b}\|\sin\theta

Standard Basis Cross Products

i×j=k,j×k=i,k×i=j\mathbf{i} \times \mathbf{j} = \mathbf{k}, \quad \mathbf{j} \times \mathbf{k} = \mathbf{i}, \quad \mathbf{k} \times \mathbf{i} = \mathbf{j}

Cross Product Anti-Commutativity

a×b=(b×a)\mathbf{a} \times \mathbf{b} = -(\mathbf{b} \times \mathbf{a})

Parallelism Test (Cross Product)

a×b=0    ab\mathbf{a} \times \mathbf{b} = \mathbf{0} \iff \mathbf{a} \parallel \mathbf{b}

Vector Triple Product

a×(b×c)=(ac)b(ab)c\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = (\mathbf{a} \cdot \mathbf{c})\,\mathbf{b} - (\mathbf{a} \cdot \mathbf{b})\,\mathbf{c}

Lagrange Identity

(a×b)(c×d)=(ac)(bd)(ad)(bc)(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c})

Scalar Triple Product

a(b×c)=a1a2a3b1b2b3c1c2c3\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

Parallelogram Area

Area=a×b\text{Area} = \|\mathbf{a} \times \mathbf{b}\|

Parallelepiped Volume

V=a(b×c)V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|

Pyramid Volume

V=16a(b×c)V = \tfrac{1}{6}|\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|

Vector Space Axioms

$$\begin{aligned} &\text{For all } \mathbf{u}, \mathbf{v}, \mathbf{w} \in V \text{ and all } c, d \in \mathbb{F}: \\ &(1)\ \mathbf{u} + \mathbf{v} \in V \quad (2)\ \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \\ &(3)\ (\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}) \\ &(4)\ \exists\, \mathbf{0} \in V: \mathbf{v} + \mathbf{0} = \mathbf{v} \\ &(5)\ \exists\, -\mathbf{v} \in V: \mathbf{v} + (-\mathbf{v}) = \mathbf{0} \\ &(6)\ c\mathbf{v} \in V \quad (7)\ c(d\mathbf{v}) = (cd)\mathbf{v} \\ &(8)\ c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v} \\ &(9)\ (c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v} \quad (10)\ 1\mathbf{v} = \mathbf{v} \end{aligned}$$

Scalar-Zero Property

$$0\mathbf{v} = \mathbf{0}, \quad c\mathbf{0} = \mathbf{0}$$

Negative One Scalar

$$(-1)\mathbf{v} = -\mathbf{v}$$

Subspace Test

$$W \subseteq V \text{ is a subspace} \iff \begin{cases} W \neq \emptyset \\ \mathbf{u}, \mathbf{v} \in W \Rightarrow \mathbf{u} + \mathbf{v} \in W \\ \mathbf{v} \in W,\ c \in \mathbb{F} \Rightarrow c\mathbf{v} \in W \end{cases}$$

Subspace Test Combined

$$W \subseteq V \text{ is a subspace} \iff W \neq \emptyset \text{ and } c\mathbf{u} + d\mathbf{v} \in W \text{ for all } \mathbf{u}, \mathbf{v} \in W,\ c, d \in \mathbb{F}$$

Span (Set Definition)

$$\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \left\{c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k \mid c_i \in \mathbb{F}\right\}$$

Span Membership Criterion

$$\mathbf{b} \in \text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \iff A\mathbf{c} = \mathbf{b} \text{ is consistent}, \quad \text{where } A = [\mathbf{v}_1\ \cdots\ \mathbf{v}_k]$$

Span Is Smallest Subspace

$$\text{Span}(K) = \bigcap_{\substack{W \text{ subspace} \\ K \subseteq W}} W$$

Linear Independence Equation

$$\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \text{ is independent} \iff \bigl(c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0} \Rightarrow c_1 = \cdots = c_k = 0\bigr)$$

Linear Independence Matrix Test

$$\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} \subset \mathbb{R}^m \text{ is independent} \iff A\mathbf{c} = \mathbf{0} \text{ has only the trivial solution}, \quad \text{where } A = [\mathbf{v}_1\ \cdots\ \mathbf{v}_k]$$

Linear Independence Determinant Test

$$\{\mathbf{v}_1, \ldots, \mathbf{v}_n\} \subset \mathbb{R}^n \text{ is independent} \iff \det[\mathbf{v}_1\ \cdots\ \mathbf{v}_n] \neq 0$$
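The determinant test above translates directly into code. A minimal NumPy sketch (`is_independent` is a hypothetical helper, and the example vectors are chosen for illustration):

```python
# Test linear independence of n vectors in R^n via the determinant criterion.
import numpy as np

def is_independent(*vectors):
    """True iff the given n vectors in R^n are linearly independent."""
    A = np.column_stack(vectors)            # the vectors become columns of A
    return not np.isclose(np.linalg.det(A), 0.0)

indep = is_independent([1, 0, 0], [1, 1, 0], [1, 1, 1])   # triangular, det = 1
dep   = is_independent([1, 2, 3], [2, 4, 6], [0, 1, 0])   # second = 2 * first
```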

Max Independent Set Size

$$|S| > \dim V \Rightarrow S \text{ is dependent}$$

Wronskian Test

$$W(f_1, \ldots, f_n)(x) = \det\begin{pmatrix} f_1(x) & \cdots & f_n(x) \\ f_1'(x) & \cdots & f_n'(x) \\ \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{pmatrix}$$

$$W(f_1, \ldots, f_n)(x_0) \neq 0 \text{ for some } x_0 \implies f_1, \ldots, f_n \text{ are linearly independent}$$

Basis Definition

$$\mathcal{B} = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\} \text{ is a basis for } V \iff \begin{cases} \mathcal{B} \text{ is linearly independent} \\ \text{Span}(\mathcal{B}) = V \end{cases}$$

Unique Basis Representation

$$\forall\, \mathbf{v} \in V,\ \exists!\, (c_1, \ldots, c_n): \mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_n\mathbf{v}_n$$

Coordinate Vector

$$[\mathbf{v}]_\mathcal{B} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} \quad \text{where } \mathbf{v} = c_1\mathbf{v}_1 + \cdots + c_n\mathbf{v}_n$$

Standard Basis of $\mathbb{R}^n$

$$\mathbf{e}_i = (0, \ldots, 0, \underset{i\text{-th}}{1}, 0, \ldots, 0), \quad i = 1, \ldots, n$$

Change of Basis Formula

$$[\mathbf{v}]_\mathcal{C} = P_{\mathcal{C} \leftarrow \mathcal{B}}\, [\mathbf{v}]_\mathcal{B}$$

Change of Basis Inverse

$$P_{\mathcal{B} \leftarrow \mathcal{C}} = \bigl(P_{\mathcal{C} \leftarrow \mathcal{B}}\bigr)^{-1}$$
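The two change-of-basis facts above can be checked concretely. A hedged NumPy sketch, assuming bases are stored as matrix columns (the bases `B` and `C` are arbitrary examples):

```python
# Build a change-of-basis matrix in R^2 and verify the inverse relation.
import numpy as np

# Basis vectors as columns (illustrative choices)
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# P_{C<-B} converts B-coordinates to C-coordinates: it solves C P = B
P_CB = np.linalg.solve(C, B)
P_BC = np.linalg.inv(P_CB)            # P_{B<-C} = (P_{C<-B})^{-1}

v_B = np.array([3.0, 4.0])            # coordinates of some v relative to B
v_C = P_CB @ v_B                      # the same vector in C-coordinates

# Both coordinate vectors describe the same underlying vector ...
assert np.allclose(B @ v_B, C @ v_C)
# ... and converting back recovers the original coordinates
assert np.allclose(P_BC @ v_C, v_B)
```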

Coordinate Map Linearity

$$[\mathbf{u} + \mathbf{v}]_\mathcal{B} = [\mathbf{u}]_\mathcal{B} + [\mathbf{v}]_\mathcal{B}, \qquad [c\mathbf{v}]_\mathcal{B} = c\,[\mathbf{v}]_\mathcal{B}$$

Dimension Definition

$$\dim(V) = |\mathcal{B}| \quad \text{for any basis } \mathcal{B} \text{ of } V$$

Subspace Dimension Inequality

$$W \subseteq V \Rightarrow \dim(W) \leq \dim(V), \quad \text{with equality} \iff W = V$$

Dimension Sum Formula

$$\dim(W_1 + W_2) = \dim(W_1) + \dim(W_2) - \dim(W_1 \cap W_2)$$

Direct Sum Criterion

$$V = W_1 \oplus W_2 \iff V = W_1 + W_2 \text{ and } W_1 \cap W_2 = \{\mathbf{0}\}$$

Direct Sum Dimension

$$V = W_1 \oplus W_2 \Rightarrow \dim(V) = \dim(W_1) + \dim(W_2)$$

Rank-Nullity Theorem (Matrix Form)

$$\dim(\text{Col}\,A) + \dim(\text{Null}\,A) = n \quad (A \in \mathbb{R}^{m \times n})$$

Four Fundamental Subspaces Dimensions

$$\begin{aligned} \dim(\text{Col}\,A) &= r & \dim(\text{Row}\,A) &= r \\ \dim(\text{Null}\,A) &= n - r & \dim(\text{Null}\,A^T) &= m - r \end{aligned} \qquad (r = \operatorname{rank} A,\ A \in \mathbb{R}^{m \times n})$$

Row Rank Equals Column Rank

$$\dim(\text{Row}\,A) = \dim(\text{Col}\,A) = \text{rank}(A)$$
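The rank-nullity theorem and the four subspace dimensions can be confirmed on a small example. A minimal NumPy sketch (the matrix is constructed to have rank 1):

```python
# Check rank-nullity and the four fundamental subspace dimensions.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # m = 2, n = 3; rows proportional => rank 1
m, n = A.shape
r = np.linalg.matrix_rank(A)

dim_col  = r          # dim Col A
dim_row  = r          # dim Row A (row rank = column rank)
dim_null = n - r      # dim Null A
dim_left = m - r      # dim Null A^T
assert dim_col + dim_null == n        # rank-nullity theorem
```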

Matrix Equality

$$A = B \iff a_{ij} = b_{ij} \text{ for all } i, j$$

Matrix Addition

$$(A + B)_{ij} = a_{ij} + b_{ij}$$

Matrix Subtraction

$$(A - B)_{ij} = a_{ij} - b_{ij}$$

Scalar Multiplication of Matrices

$$(cA)_{ij} = c \cdot a_{ij}$$

Matrix Multiplication

$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik}\, b_{kj}$$

Matrix-Vector Product (Column Form)

$$A\mathbf{x} = x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n$$

Matrix Multiplication Associativity

$$(AB)C = A(BC)$$

Matrix Multiplication Distributivity

$$A(B + C) = AB + AC, \qquad (A + B)C = AC + BC$$

Matrix Multiplication Non-Commutativity

$$AB \neq BA \quad \text{in general}$$

Matrix Power

$$A^0 = I, \qquad A^k = \underbrace{A \cdot A \cdots A}_{k \text{ factors}}, \qquad A^{-k} = (A^{-1})^k$$
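The product and power rules above are easy to probe numerically. A short NumPy sketch (the matrices are arbitrary invertible examples):

```python
# Associativity holds, commutativity generally fails, and A^{-k} = (A^{-1})^k.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])
C = np.array([[2.0, 0.0],
              [0.0, 3.0]])

assert np.allclose((A @ B) @ C, A @ (B @ C))      # associativity
noncommutative = not np.allclose(A @ B, B @ A)    # AB != BA for this pair

neg_power = np.linalg.matrix_power(A, -2)         # inverts A, then squares
assert np.allclose(neg_power, np.linalg.inv(A) @ np.linalg.inv(A))
```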

Transpose Definition

$$(A^T)_{ij} = a_{ji}$$

Transpose Involution

$$(A^T)^T = A$$

Transpose of Sum

$$(A + B)^T = A^T + B^T$$

Transpose of Scalar Multiple

$$(cA)^T = c\, A^T$$

Transpose of Product

$$(AB)^T = B^T A^T$$

Symmetric Matrix Definition

$$A = A^T \iff a_{ij} = a_{ji} \text{ for all } i, j$$

Skew-Symmetric Matrix Definition

$$A^T = -A \iff a_{ij} = -a_{ji}$$

Symmetric Skew Decomposition

$$A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T)$$
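The decomposition above splits any square matrix into a symmetric and a skew-symmetric part. A minimal NumPy sketch (the matrix is an arbitrary example):

```python
# Split a square matrix into its symmetric and skew-symmetric parts.
import numpy as np

A = np.array([[1.0, 2.0],
              [5.0, 3.0]])

S = 0.5 * (A + A.T)    # symmetric part
K = 0.5 * (A - A.T)    # skew-symmetric part

assert np.allclose(S, S.T)       # S is symmetric
assert np.allclose(K, -K.T)      # K is skew-symmetric
assert np.allclose(S + K, A)     # together they recover A
```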

Gram Matrix Symmetry

$$(A^T A)^T = A^T A, \qquad (A A^T)^T = A A^T$$

Identity Matrix Definition

$$I_n = [\delta_{ij}], \qquad \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

Identity Matrix Property

$$AI = IA = A$$

Diagonal Matrix Definition

$$D = \operatorname{diag}(d_1, d_2, \ldots, d_n) \iff d_{ij} = 0 \text{ for } i \neq j$$

Diagonal Matrix Power

$$D^k = \operatorname{diag}(d_1^k, d_2^k, \ldots, d_n^k)$$

Diagonal Matrix Determinant

$$\det\bigl(\operatorname{diag}(d_1, \ldots, d_n)\bigr) = d_1 d_2 \cdots d_n$$

Triangular Matrix Determinant

$$\det(T) = t_{11} \, t_{22} \cdots t_{nn}$$

Orthogonal Matrix Definition

$$Q^T Q = Q Q^T = I$$

Orthogonal Matrix Determinant

$$\det(Q) = \pm 1$$

Idempotent Matrix Definition

$$A^2 = A$$

Idempotent Rank Trace

$$A^2 = A \implies \operatorname{rank}(A) = \operatorname{tr}(A)$$

Nilpotent Matrix Definition

$$A^k = O \quad \text{for some } k \geq 1$$

Neumann Series Nilpotent

$$A^k = O \implies (I - A)^{-1} = I + A + A^2 + \cdots + A^{k-1}$$
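The Neumann-series identity can be verified directly for a strictly upper triangular (hence nilpotent) matrix. A minimal NumPy sketch:

```python
# If A^k = O, then (I - A)^{-1} = I + A + ... + A^{k-1}.
import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])      # strictly upper triangular => nilpotent

assert np.allclose(np.linalg.matrix_power(A, 3), 0)   # A^3 = O

I = np.eye(3)
series  = I + A + A @ A              # I + A + A^2
inverse = np.linalg.inv(I - A)
assert np.allclose(series, inverse)
```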

Involutory Matrix Definition

$$A^2 = I$$

Cross Product Skew Matrix

$$\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_\times \mathbf{b}, \qquad [\mathbf{a}]_\times = \begin{pmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{pmatrix}$$

Inverse Definition

$$A A^{-1} = A^{-1} A = I$$

Inverse 2x2 Formula

$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad ad - bc \neq 0$$

Inverse via Adjugate

$$A^{-1} = \frac{1}{\det(A)}\, \operatorname{adj}(A)$$

Inverse via Row Reduction

$$[A \mid I] \;\xrightarrow{\text{row ops}}\; [I \mid A^{-1}]$$
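The $[A \mid I] \to [I \mid A^{-1}]$ procedure can be written out as explicit elementary row operations. A hedged sketch (a simplified Gauss-Jordan implementation with partial pivoting, for illustration only; in practice one would call `np.linalg.inv`):

```python
# Invert a matrix by Gauss-Jordan elimination on the augmented block [A | I].
import numpy as np

def inverse_by_row_reduction(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])       # build [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col])) # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]             # swap rows
        M[col] /= M[col, col]                         # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]        # clear the column
    return M[:, n:]                                   # right block is A^{-1}

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
A_inv = inverse_by_row_reduction(A)
assert np.allclose(A @ A_inv, np.eye(2))
```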

Inverse Involution

$$(A^{-1})^{-1} = A$$

Inverse of Product

$$(AB)^{-1} = B^{-1} A^{-1}$$

Inverse of Transpose

$$(A^T)^{-1} = (A^{-1})^T$$

Inverse of Scalar Multiple

$$(cA)^{-1} = \frac{1}{c}\, A^{-1}, \qquad c \neq 0$$

Inverse of Power

$$(A^k)^{-1} = (A^{-1})^k = A^{-k}$$

Determinant of Inverse

$$\det(A^{-1}) = \frac{1}{\det(A)}$$

Diagonal Matrix Inverse

$$\operatorname{diag}(d_1, \ldots, d_n)^{-1} = \operatorname{diag}\!\left(\frac{1}{d_1}, \ldots, \frac{1}{d_n}\right)$$

Orthogonal Matrix Inverse

$$Q^{-1} = Q^T$$

Solve System via Inverse

$$A\mathbf{x} = \mathbf{b} \implies \mathbf{x} = A^{-1}\mathbf{b} \quad (A \text{ invertible})$$

Invertible Matrix Theorem

$$\begin{aligned} \text{For } A \in \mathbb{R}^{n \times n}, \text{ the following are equivalent:} & \\ (1)\ A \text{ is invertible} \quad (2)\ \det(A) \neq 0 \quad (3)\ \operatorname{rank}(A) = n & \\ (4)\ \text{columns of } A \text{ are linearly independent} & \\ (5)\ \text{rows of } A \text{ are linearly independent} & \\ (6)\ \text{columns of } A \text{ span } \mathbb{R}^n \quad (7)\ \text{columns form a basis of } \mathbb{R}^n & \\ (8)\ A\mathbf{x} = \mathbf{0} \text{ has only the trivial solution} & \\ (9)\ A\mathbf{x} = \mathbf{b} \text{ has a unique solution for every } \mathbf{b} & \\ (10)\ \operatorname{Null}(A) = \{\mathbf{0}\} & \\ (11)\ \operatorname{rref}(A) = I \quad (12)\ A \text{ is a product of elementary matrices} & \\ (13)\ 0 \text{ is not an eigenvalue of } A & \end{aligned}$$

Singular Matrix Definition

$$A \text{ singular} \iff \det(A) = 0 \iff \operatorname{rank}(A) < n$$

Rank Bounds

$$0 \leq \operatorname{rank}(A) \leq \min(m, n)$$

Rank of Transpose

$$\operatorname{rank}(A^T) = \operatorname{rank}(A)$$

Rank Product Inequality

$$\operatorname{rank}(AB) \leq \min\bigl(\operatorname{rank}(A), \operatorname{rank}(B)\bigr)$$

Sylvester Rank Inequality

$$\operatorname{rank}(A) + \operatorname{rank}(B) - n \leq \operatorname{rank}(AB)$$

Rank Sum Inequality

$$\operatorname{rank}(A + B) \leq \operatorname{rank}(A) + \operatorname{rank}(B)$$

Rank Invariance Invertible

$$\operatorname{rank}(PAQ) = \operatorname{rank}(A)$$

Gram Rank Identity

$$\operatorname{rank}(A^T A) = \operatorname{rank}(A A^T) = \operatorname{rank}(A)$$
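The Gram rank identity and the product inequality can be confirmed on small examples. A minimal NumPy sketch (matrices chosen to have known ranks):

```python
# Check rank(A^T A) = rank(A) and rank(AB) <= min(rank A, rank B).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])            # rank 2
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # rank 1

rank_A  = np.linalg.matrix_rank(A)
rank_B  = np.linalg.matrix_rank(B)
rank_AB = np.linalg.matrix_rank(A @ B)

assert np.linalg.matrix_rank(A.T @ A) == rank_A      # Gram rank identity
assert rank_AB <= min(rank_A, rank_B)                # product inequality
```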

Rank-One Outer Product

$$A = \mathbf{u}\mathbf{v}^T \implies \operatorname{rank}(A) = 1 \quad (\mathbf{u}, \mathbf{v} \neq \mathbf{0})$$

Trace Definition

$$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}$$

Trace Linearity

$$\operatorname{tr}(cA + dB) = c\operatorname{tr}(A) + d\operatorname{tr}(B)$$

Trace of Transpose

$$\operatorname{tr}(A^T) = \operatorname{tr}(A)$$

Trace Cyclic Property

$$\operatorname{tr}(AB) = \operatorname{tr}(BA)$$

Trace Sum of Eigenvalues

$$\operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i$$

Trace Similarity Invariance

$$\operatorname{tr}(P^{-1}AP) = \operatorname{tr}(A)$$

Trace of Commutator

$$\operatorname{tr}(AB - BA) = 0$$

Trace Symmetric Skew Product

$$\operatorname{tr}(SK) = 0 \quad (S^T = S, \; K^T = -K)$$

Trace Orthonormal Basis

$$\operatorname{tr}(A) = \sum_{i=1}^{n} \mathbf{q}_i^T A \mathbf{q}_i$$

Frobenius Inner Product

$$\langle A, B \rangle_F = \operatorname{tr}(A^T B) = \sum_{i,j} a_{ij} b_{ij}$$

Frobenius Norm

$$\|A\|_F = \sqrt{\operatorname{tr}(A^T A)} = \sqrt{\sum_{i,j} a_{ij}^2}$$
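The cyclic property of the trace and the Frobenius-norm identity above are quick to check. A minimal NumPy sketch (the matrices are arbitrary examples):

```python
# Verify tr(AB) = tr(BA) and ||A||_F = sqrt(tr(A^T A)).
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [5.0, 2.0]])

assert np.isclose(np.trace(A @ B), np.trace(B @ A))   # cyclic property

fro = np.sqrt(np.trace(A.T @ A))
assert np.isclose(fro, np.linalg.norm(A, 'fro'))      # matches built-in norm
```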

Determinant 2x2

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$

Determinant 3x3

$$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$$

Determinant Recursive Definition

$$\det(A) = \begin{cases} a_{11} & n = 1 \\ \displaystyle\sum_{j=1}^{n} (-1)^{1+j} \, a_{1j} \, M_{1j} & n \geq 2 \end{cases}$$

Determinant Permutation Formula

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \, a_{\sigma(2),2} \cdots a_{\sigma(n),n}$$

Minor Definition

$$M_{ij} = \det\!\left(A^{(i,j)}\right)$$

Cofactor Definition

$$C_{ij} = (-1)^{i+j} \, M_{ij}$$

Cofactor Matrix Definition

$$\operatorname{cof}(A) = \bigl[C_{ij}\bigr]_{n \times n}$$

Adjugate Definition

$$\operatorname{adj}(A) = \operatorname{cof}(A)^T$$

Laplace Row Expansion

$$\det(A) = \sum_{j=1}^{n} a_{ij} \, C_{ij} \qquad \text{for any fixed row } i$$

Laplace Column Expansion

$$\det(A) = \sum_{i=1}^{n} a_{ij} \, C_{ij} \qquad \text{for any fixed column } j$$

Determinant Row Swap

$$\det(B) = -\det(A) \quad (B: \text{two rows of } A \text{ swapped})$$

Determinant Row Scaling

$$\det(B) = k \, \det(A) \quad (B: \text{one row of } A \text{ scaled by } k)$$

Determinant Row Addition

$$\det(B) = \det(A) \quad (B: \text{a multiple of one row of } A \text{ added to another})$$

Determinant of Transpose

$$\det(A^T) = \det(A)$$

Determinant of Product

$$\det(AB) = \det(A) \, \det(B)$$

Determinant of Scalar Multiple

$$\det(kA) = k^n \, \det(A)$$

Determinant of Power

$$\det(A^k) = \bigl(\det(A)\bigr)^k$$

Determinant of Identity

$$\det(I_n) = 1$$

Block Triangular Determinant

$$\det\begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} = \det(A_{11}) \, \det(A_{22})$$

Vandermonde Determinant

$$\det(V) = \prod_{1 \leq i < j \leq n} (x_j - x_i)$$
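The Vandermonde formula can be checked against a directly computed determinant. A minimal NumPy sketch (the nodes `x` are arbitrary distinct values):

```python
# Verify det(V) = prod_{i<j} (x_j - x_i) for a Vandermonde matrix.
from itertools import combinations
import numpy as np

x = np.array([1.0, 2.0, 4.0])
V = np.vander(x, increasing=True)     # rows [1, x_i, x_i^2]

det_V   = np.linalg.det(V)
product = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
assert np.isclose(det_V, product)
```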

Adjugate Identity

$$A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A) \, I$$

Cramers Rule

$$x_i = \frac{\det(A_i)}{\det(A)}, \qquad A_i = A \text{ with column } i \text{ replaced by } \mathbf{b}$$

Determinant Signed Area 2D

$$\text{signed area}(\mathbf{u}, \mathbf{v}) = \det\begin{pmatrix} \mathbf{u} & \mathbf{v} \end{pmatrix} = u_1 v_2 - u_2 v_1$$

Determinant Signed Volume 3D

$$\text{signed volume}(\mathbf{a}, \mathbf{b}, \mathbf{c}) = \det\begin{pmatrix} \mathbf{a} & \mathbf{b} & \mathbf{c} \end{pmatrix} = \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$$

Determinant Volume Scaling Factor

$$\operatorname{vol}\bigl(A(S)\bigr) = |\det(A)| \cdot \operatorname{vol}(S)$$

Triangle Area via Determinant

$$\text{Area} = \frac{1}{2} \left| \det\begin{pmatrix} x_2 - x_1 & x_3 - x_1 \\ y_2 - y_1 & y_3 - y_1 \end{pmatrix} \right| \quad \text{for vertices } (x_1, y_1), (x_2, y_2), (x_3, y_3)$$

Tetrahedron Volume via Determinant

$$V = \frac{1}{6} \left| \det\begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{pmatrix} \right| \quad \text{where } \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 \text{ are edge vectors from one vertex}$$

Determinant Product of Eigenvalues

$$\det(A) = \lambda_1 \, \lambda_2 \cdots \lambda_n$$

Linear Equation Standard Form

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$$

Linear System Matrix Form

$$A\mathbf{x} = \mathbf{b}$$

Vector Equation Form

$$x_1 \mathbf{a}_1 + x_2 \mathbf{a}_2 + \cdots + x_n \mathbf{a}_n = \mathbf{b}$$

Augmented Matrix Construction

$$[A \mid \mathbf{b}] = \left(\begin{array}{cccc|c} a_{11} & a_{12} & \cdots & a_{1n} & b_1 \\ a_{21} & a_{22} & \cdots & a_{2n} & b_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} & b_m \end{array}\right)$$

Row Echelon Form Definition

$$\text{REF: } \begin{pmatrix} \boxed{\ast} & \bullet & \bullet & \bullet & \bullet \\ 0 & \boxed{\ast} & \bullet & \bullet & \bullet \\ 0 & 0 & 0 & \boxed{\ast} & \bullet \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

Reduced Row Echelon Form Definition

$$\text{RREF: } \begin{pmatrix} \boxed{1} & 0 & \bullet & 0 & \bullet \\ 0 & \boxed{1} & \bullet & 0 & \bullet \\ 0 & 0 & 0 & \boxed{1} & \bullet \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}$$

RREF Uniqueness

$$\text{RREF}(A) \text{ is unique}$$

Pivot Definition

$$\text{pivot} = \text{leading nonzero entry of a row in echelon form}$$

Elementary Row Operations

$$\begin{aligned} R_i &\leftrightarrow R_j \quad \text{(swap)} \\ kR_i &\to R_i, \quad k \neq 0 \quad \text{(scaling)} \\ R_i + cR_j &\to R_i \quad \text{(addition)} \end{aligned}$$

Row Equivalence Preserves Solutions

$$[A \mid \mathbf{b}] \sim [A' \mid \mathbf{b}'] \;\Longrightarrow\; \text{Sol}(A\mathbf{x} = \mathbf{b}) = \text{Sol}(A'\mathbf{x} = \mathbf{b}')$$

Free Variables Count

$$\text{(number of free variables)} = n - \text{rank}(A)$$

Solvability Rank Criterion

$$A\mathbf{x} = \mathbf{b} \text{ is consistent} \iff \text{rank}(A) = \text{rank}([A \mid \mathbf{b}])$$

Solution Structure Decomposition

$$\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h, \quad \mathbf{x}_h \in \text{Null}(A)$$

Homogeneous Solution Space Dimension

$$\dim \text{Null}(A) = n - \text{rank}(A)$$

Underdetermined Homogeneous Has Nontrivial

$$n > m \;\Longrightarrow\; A\mathbf{x} = \mathbf{0} \text{ has a nontrivial solution}$$
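The solution-structure results above can be illustrated for an underdetermined system: any solution is a particular solution plus something from the null space. A hedged NumPy sketch (the system is an arbitrary consistent example; the null-space basis is extracted from the SVD, one common approach):

```python
# Particular solution plus null space for an underdetermined system.
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])       # m = 2 < n = 3
b = np.array([4.0, 1.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
assert np.allclose(A @ x_p, b)               # the system is consistent

# Null space basis from the SVD: rows of Vt past the rank span Null(A)
_, s, Vt = np.linalg.svd(A)
r = np.sum(s > 1e-12)
null_basis = Vt[r:]
assert null_basis.shape[0] == A.shape[1] - r          # dim Null A = n - r

# Any x_p + x_h with x_h in Null(A) also solves the system
x_h = null_basis[0]
assert np.allclose(A @ (x_p + 2.5 * x_h), b)
```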
Learn More

Linear Algebra Terms and Definitions

Discover essential linear algebra definitions that form the mathematical foundation for understanding vectors, matrices, and their relationships. This guide breaks down key terms from basic vector concepts like magnitude and direction to advanced matrix classifications and properties. Each definition includes precise mathematical notation, clear explanations, and visual examples to help grasp the concept. Whether you're learning about vector spaces, exploring matrix types like triangular and symmetric matrices, or studying transformations, this organized reference makes complex linear algebra terminology accessible. The page serves as both a learning tool and a quick reference for students and practitioners, featuring interactive mathematical notation and practical examples throughout.

Vector

An ordered list of $n$ real numbers: $\mathbf{v} = (v_1, v_2, \ldots, v_n) \in \mathbb{R}^n$

Scalar

An element of the underlying field — in standard linear algebra, a real number $c \in \mathbb{R}$

Magnitude (Norm)

The length of a [vector](!/linear-algebra/definitions#vector), measured as its distance from the origin: $$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$

Unit Vector

A vector $\hat{\mathbf{u}}$ with $\|\hat{\mathbf{u}}\| = 1$

Dot Product

An operation that takes two [vectors](!/linear-algebra/definitions#vector) and returns a [scalar](!/linear-algebra/definitions#scalar), computed by summing the products of corresponding components: $$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta$$

Cross Product

A binary operation on two vectors in $\mathbb{R}^3$ that produces a vector perpendicular to both inputs: $$\mathbf{u} \times \mathbf{v} = \begin{pmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{pmatrix}$$

Linear Combination

A sum of [vectors](!/linear-algebra/definitions#vector), each multiplied by a [scalar](!/linear-algebra/definitions#scalar) coefficient: $$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$$

Vector Space

A set $V$ equipped with vector addition and scalar multiplication satisfying the [vector space axioms](!/linear-algebra/vector-spaces/axioms)

Subspace

A nonempty subset $W \subseteq V$ that is itself a [vector space](!/linear-algebra/definitions#vector_space) under the same operations

Span

The set of all [linear combinations](!/linear-algebra/definitions#linear_combination) of a given collection of vectors: $$\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \{c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k \mid c_i \in \mathbb{R}\}$$

Linear Independence

Vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly independent if the only solution to $$c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$ is $c_1 = c_2 = \cdots = c_k = 0$

Basis

A set $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ that is [linearly independent](!/linear-algebra/definitions#linear_independence) and [spans](!/linear-algebra/definitions#span) the entire [vector space](!/linear-algebra/definitions#vector_space)

Dimension

The number of vectors in any [basis](!/linear-algebra/definitions#basis) of a [vector space](!/linear-algebra/definitions#vector_space) $V$, denoted $\dim(V)$

Column Space

The set of all vectors expressible as $A\mathbf{x}$ — equivalently, the [span](!/linear-algebra/definitions#span) of the columns of $A$: $$\text{Col}(A) = \{A\mathbf{x} \mid \mathbf{x} \in \mathbb{R}^n\}$$

Null Space (Kernel)

The set of all solutions to the [homogeneous system](!/linear-algebra/definitions#homogeneous_system) $A\mathbf{x} = \mathbf{0}$: $$\text{Nul}(A) = \{\mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{0}\}$$

Row Space

The [span](!/linear-algebra/definitions#span) of the rows of a [matrix](!/linear-algebra/definitions#matrix), equivalently the [column space](!/linear-algebra/definitions#column_space) of its transpose: $$\text{Row}(A) = \text{Col}(A^T)$$

Left Null Space

The [null space](!/linear-algebra/definitions#null_space) of the transpose $A^T$ — the set of all vectors $\mathbf{y}$ satisfying $A^T\mathbf{y} = \mathbf{0}$: $$\text{Nul}(A^T) = \{\mathbf{y} \in \mathbb{R}^m \mid A^T\mathbf{y} = \mathbf{0}\}$$

Matrix

A rectangular array of numbers with $m$ rows and $n$ columns: $A \in \mathbb{R}^{m \times n}$

Square Matrix

A [matrix](!/linear-algebra/definitions#matrix) with equal numbers of rows and columns: $A \in \mathbb{R}^{n \times n}$

Identity Matrix

The [square matrix](!/linear-algebra/definitions#square_matrix) with $1$s on the main diagonal and $0$s elsewhere, denoted $I_n$: $$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

Symmetric Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) satisfying $A = A^T$

Inverse Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$

Singular Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $A$ with $\det(A) = 0$

Rank

The number of [linearly independent](!/linear-algebra/definitions#linear_independence) columns (equivalently, rows) in a [matrix](!/linear-algebra/definitions#matrix): $$\text{rank}(A) = \dim(\text{Col}(A)) = \dim(\text{Row}(A))$$

Trace

The sum of the main diagonal entries of a [square matrix](!/linear-algebra/definitions#square_matrix): $$\text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}$$

Diagonal Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) where $a_{ij} = 0$ for all $i \neq j$

Positive Definite Matrix

A [symmetric matrix](!/linear-algebra/definitions#symmetric_matrix) $A$ satisfying $\mathbf{x}^T A \mathbf{x} > 0$ for all nonzero $\mathbf{x}$

Determinant

A scalar $\det(A) \in \mathbb{R}$ assigned to every [square matrix](!/linear-algebra/definitions#square_matrix), defined recursively via [cofactor](!/linear-algebra/definitions#cofactor) expansion

Minor

The [determinant](!/linear-algebra/definitions#determinant) of the submatrix obtained by deleting row $i$ and column $j$ from a [matrix](!/linear-algebra/definitions#matrix): $$M_{ij} = \det(\hat{A}_{ij})$$

Cofactor

A signed [minor](!/linear-algebra/definitions#minor), with sign determined by the position $(i,j)$: $$C_{ij} = (-1)^{i+j} M_{ij}$$

Adjugate Matrix

The transpose of the matrix of [cofactors](!/linear-algebra/definitions#cofactor) of $A$: $$\text{adj}(A) = C^T$$

System of Linear Equations

A collection of equations $A\mathbf{x} = \mathbf{b}$ where $A$ is an $m \times n$ [matrix](!/linear-algebra/definitions#matrix) and $\mathbf{b} \in \mathbb{R}^m$

Augmented Matrix

The [matrix](!/linear-algebra/definitions#matrix) formed by appending the right-hand side vector $\mathbf{b}$ as an additional column to the coefficient matrix $A$, written $[A \mid \mathbf{b}]$

Row Echelon Form

A matrix where: • all zero rows are at the bottom • each leading entry ([pivot](!/linear-algebra/definitions#pivot)) is to the right of the pivot in the row above • all entries below each pivot are zero

Reduced Row Echelon Form

[Row echelon form](!/linear-algebra/definitions#row_echelon_form) with the additional requirements: • every [pivot](!/linear-algebra/definitions#pivot) is $1$ • each pivot is the only nonzero entry in its column

Pivot

The first nonzero entry in each row of a matrix in [row echelon form](!/linear-algebra/definitions#row_echelon_form)

Homogeneous System

A [system of linear equations](!/linear-algebra/definitions#system_of_linear_equations) in which every equation equals zero: $A\mathbf{x} = \mathbf{0}$

Linear Transformation

A function $T: V \to W$ between [vector spaces](!/linear-algebra/definitions#vector_space) that preserves addition and scalar multiplication: $$T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$$ $$T(c\mathbf{u}) = cT(\mathbf{u})$$

Image (Range)

The set of all output vectors of a [linear transformation](!/linear-algebra/definitions#linear_transformation): $$\text{Im}(T) = \{T(\mathbf{v}) \mid \mathbf{v} \in V\}$$

Matrix Representation

A [matrix](!/linear-algebra/definitions#matrix) $A$ such that $T(\mathbf{v}) = A[\mathbf{v}]_{\mathcal{B}}$ for a chosen [basis](!/linear-algebra/definitions#basis) $\mathcal{B}$

Change of Basis Matrix

A matrix $P$ that converts coordinates from one [basis](!/linear-algebra/definitions#basis) to another: $[\mathbf{v}]_{\mathcal{B}'} = P^{-1}[\mathbf{v}]_{\mathcal{B}}$

Similar Matrices

Matrices $A$ and $B$ are similar if $B = P^{-1}AP$ for some invertible matrix $P$

Eigenvalue

A scalar $\lambda$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some nonzero [vector](!/linear-algebra/definitions#vector) $\mathbf{v}$

Eigenvector

A nonzero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$

Eigenspace

The set of all [eigenvectors](!/linear-algebra/definitions#eigenvector) for a given [eigenvalue](!/linear-algebra/definitions#eigenvalue) $\lambda$, together with the zero vector — equivalently, the [null space](!/linear-algebra/definitions#null_space) of $(A - \lambda I)$: $$E_\lambda = \text{Nul}(A - \lambda I)$$

Characteristic Polynomial

The polynomial whose roots are the [eigenvalues](!/linear-algebra/definitions#eigenvalue) of $A$, obtained by computing: $$p(\lambda) = \det(A - \lambda I)$$

Algebraic Multiplicity

The multiplicity of $\lambda$ as a root of the [characteristic polynomial](!/linear-algebra/definitions#characteristic_polynomial)

Geometric Multiplicity

The [dimension](!/linear-algebra/definitions#dimension) of the [eigenspace](!/linear-algebra/definitions#eigenspace) associated with an [eigenvalue](!/linear-algebra/definitions#eigenvalue) $\lambda$: $$\text{geo. mult.}(\lambda) = \dim(E_\lambda) = \dim(\text{Nul}(A - \lambda I))$$

Singular Value

A nonnegative scalar measuring how much a matrix stretches space along each principal direction, derived from the [eigenvalues](!/linear-algebra/definitions#eigenvalue) of $A^TA$: $$\sigma_i = \sqrt{\lambda_i(A^TA)}$$

Inner Product

A function $\langle \cdot, \cdot \rangle: V \times V \to \mathbb{R}$ satisfying symmetry, linearity, and positive-definiteness

Orthogonal Vectors

Vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $\langle \mathbf{u}, \mathbf{v} \rangle = 0$

Orthogonal Set

A set of vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ where $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = 0$ for all $i \neq j$

Orthonormal Set

An [orthogonal set](!/linear-algebra/definitions#orthogonal_set) where every vector is a [unit vector](!/linear-algebra/definitions#unit_vector): $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = \delta_{ij}$

Orthogonal Complement

The set of all vectors in $V$ that are [orthogonal](!/linear-algebra/definitions#orthogonal_vectors) to every vector in a [subspace](!/linear-algebra/definitions#subspace) $W$: $$W^\perp = \{\mathbf{v} \in V \mid \langle \mathbf{v}, \mathbf{w} \rangle = 0 \text{ for all } \mathbf{w} \in W\}$$

Orthogonal Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $Q$ satisfying $Q^TQ = QQ^T = I$

Vector

An ordered list of $n$ real numbers: $\mathbf{v} = (v_1, v_2, \ldots, v_n) \in \mathbb{R}^n$

Scalar

An element of the underlying field — in standard linear algebra, a real number $c \in \mathbb{R}$

Magnitude (Norm)

The length of a [vector](!/linear-algebra/definitions#vector), measured as its distance from the origin: $$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$$

Unit Vector

A vector $\hat{\mathbf{u}}$ with $\|\hat{\mathbf{u}}\| = 1$

Dot Product

An operation that takes two [vectors](!/linear-algebra/definitions#vector) and returns a [scalar](!/linear-algebra/definitions#scalar), computed by summing the products of corresponding components: $$\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n = \|\mathbf{u}\|\,\|\mathbf{v}\|\cos\theta$$

Cross Product

A binary operation on two vectors in $\mathbb{R}^3$ that produces a vector perpendicular to both inputs: $$\mathbf{u} \times \mathbf{v} = \begin{pmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{pmatrix}$$

Linear Combination

A sum of [vectors](!/linear-algebra/definitions#vector), each multiplied by a [scalar](!/linear-algebra/definitions#scalar) coefficient: $$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$$

Vector Space

A set $V$ equipped with vector addition and scalar multiplication satisfying the [vector space axioms](!/linear-algebra/vector-spaces/axioms)

Subspace

A nonempty subset $W \subseteq V$ that is itself a [vector space](!/linear-algebra/definitions#vector_space) under the same operations

Span

The set of all [linear combinations](!/linear-algebra/definitions#linear_combination) of a given collection of vectors: $$\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \{c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k \mid c_i \in \mathbb{R}\}$$

Linear Independence

Vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ are linearly independent if the only solution to $$c_1\mathbf{v}_1 + \cdots + c_k\mathbf{v}_k = \mathbf{0}$$ is $c_1 = c_2 = \cdots = c_k = 0$

Basis

A set $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ that is [linearly independent](!/linear-algebra/definitions#linear_independence) and [spans](!/linear-algebra/definitions#span) the entire [vector space](!/linear-algebra/definitions#vector_space)

Dimension

The number of vectors in any [basis](!/linear-algebra/definitions#basis) of a [vector space](!/linear-algebra/definitions#vector_space) $V$, denoted $\dim(V)$

Column Space

The set of all vectors expressible as $A\mathbf{x}$ — equivalently, the [span](!/linear-algebra/definitions#span) of the columns of $A$: $$\text{Col}(A) = \{A\mathbf{x} \mid \mathbf{x} \in \mathbb{R}^n\}$$

Null Space (Kernel)

The set of all solutions to the [homogeneous system](!/linear-algebra/definitions#homogeneous_system) $A\mathbf{x} = \mathbf{0}$: $$\text{Nul}(A) = \{\mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{0}\}$$

Row Space

The [span](!/linear-algebra/definitions#span) of the rows of a [matrix](!/linear-algebra/definitions#matrix), equivalently the [column space](!/linear-algebra/definitions#column_space) of its transpose: $$\text{Row}(A) = \text{Col}(A^T)$$

Left Null Space

The [null space](!/linear-algebra/definitions#null_space) of the transpose $A^T$ — the set of all vectors $\mathbf{y}$ satisfying $A^T\mathbf{y} = \mathbf{0}$: $$\text{Nul}(A^T) = \{\mathbf{y} \in \mathbb{R}^m \mid A^T\mathbf{y} = \mathbf{0}\}$$

Matrix

A rectangular array of numbers with $m$ rows and $n$ columns: $A \in \mathbb{R}^{m \times n}$

Square Matrix

A [matrix](!/linear-algebra/definitions#matrix) with equal numbers of rows and columns: $A \in \mathbb{R}^{n \times n}$

Identity Matrix

The [square matrix](!/linear-algebra/definitions#square_matrix) with $1$s on the main diagonal and $0$s elsewhere, denoted $I_n$: $$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

Symmetric Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) satisfying $A = A^T$

Inverse Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$

Singular Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $A$ with $\det(A) = 0$

Rank

The number of [linearly independent](!/linear-algebra/definitions#linear_independence) columns (equivalently, rows) in a [matrix](!/linear-algebra/definitions#matrix): $$\text{rank}(A) = \dim(\text{Col}(A)) = \dim(\text{Row}(A))$$
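In NumPy this is `np.linalg.matrix_rank`, which estimates the rank from singular values. A small sketch with an illustrative matrix, also confirming that column rank equals row rank:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # = 2 * row 1, so it adds nothing to the rank
              [1., 1., 1.]])

r = np.linalg.matrix_rank(A)
print(r)                                # 2
print(np.linalg.matrix_rank(A.T) == r)  # True: rank(A) = rank(A^T)
```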

Trace

The sum of the main diagonal entries of a [square matrix](!/linear-algebra/definitions#square_matrix): $$\text{tr}(A) = a_{11} + a_{22} + \cdots + a_{nn} = \sum_{i=1}^{n} a_{ii}$$

Diagonal Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) where $a_{ij} = 0$ for all $i \neq j$

Positive Definite Matrix

A [symmetric matrix](!/linear-algebra/definitions#symmetric_matrix) $A$ satisfying $\mathbf{x}^T A \mathbf{x} > 0$ for all nonzero $\mathbf{x}$

Determinant

A scalar $\det(A) \in \mathbb{R}$ assigned to every [square matrix](!/linear-algebra/definitions#square_matrix), defined recursively via [cofactor](!/linear-algebra/definitions#cofactor) expansion

Minor

The [determinant](!/linear-algebra/definitions#determinant) of the submatrix obtained by deleting row $i$ and column $j$ from a [matrix](!/linear-algebra/definitions#matrix): $$M_{ij} = \det(\hat{A}_{ij})$$

Cofactor

A signed [minor](!/linear-algebra/definitions#minor), with sign determined by the position $(i,j)$: $$C_{ij} = (-1)^{i+j} M_{ij}$$

Cofactor Matrix (Adjugate)

The matrix $C$ whose entries are the [cofactors](!/linear-algebra/definitions#cofactor) of $A$; its transpose is the adjugate, which satisfies $A\,\text{adj}(A) = \det(A)\,I$: $$\text{adj}(A) = C^T$$
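The minor and cofactor definitions above translate directly into code. The sketch below builds the cofactor matrix by deleting rows and columns (NumPy uses 0-based indices), then checks the classical identity $A\,\text{adj}(A) = \det(A)\,I$:

```python
import numpy as np

def cofactor_matrix(A):
    """C_ij = (-1)^(i+j) * det(A with row i and column j deleted)."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

A = np.array([[1., 2.], [3., 4.]])
adj = cofactor_matrix(A).T   # adj(A) = C^T
print(np.allclose(A @ adj, np.linalg.det(A) * np.eye(2)))  # True
```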

System of Linear Equations

A collection of equations $A\mathbf{x} = \mathbf{b}$ where $A$ is an $m \times n$ [matrix](!/linear-algebra/definitions#matrix) and $\mathbf{b} \in \mathbb{R}^m$

Augmented Matrix

The [matrix](!/linear-algebra/definitions#matrix) formed by appending the right-hand side vector $\mathbf{b}$ as an additional column to the coefficient matrix $A$, written $[A \mid \mathbf{b}]$

Row Echelon Form

A matrix where:
  • all zero rows are at the bottom
  • each leading entry ([pivot](!/linear-algebra/definitions#pivot)) is to the right of the pivot in the row above
  • all entries below each pivot are zero

Reduced Row Echelon Form

[Row echelon form](!/linear-algebra/definitions#row_echelon_form) with the additional requirements:
  • every [pivot](!/linear-algebra/definitions#pivot) is $1$
  • each pivot is the only nonzero entry in its column

Pivot

The first nonzero entry in a nonzero row of a matrix in [row echelon form](!/linear-algebra/definitions#row_echelon_form)
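Echelon forms are computed by Gauss-Jordan elimination. The function below is our own minimal sketch (with partial pivoting for numerical stability), not a library routine:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce A to reduced row echelon form by Gauss-Jordan elimination."""
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        if row >= m:
            break
        pivot = row + np.argmax(np.abs(A[row:, col]))  # partial pivoting
        if abs(A[pivot, col]) < tol:
            continue                       # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]  # swap the pivot row into place
        A[row] /= A[row, col]              # scale so the pivot is 1
        for r in range(m):
            if r != row:
                A[r] -= A[r, col] * A[row]  # zero out the rest of the column
        row += 1
    return A

# rows reduce to [1, 0, -1] and [0, 1, 2]
print(rref(np.array([[1., 2., 3.], [4., 5., 6.]])))
```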

Homogeneous System

A [system of linear equations](!/linear-algebra/definitions#system_of_linear_equations) whose right-hand side is the zero vector: $A\mathbf{x} = \mathbf{0}$

Linear Transformation

A function $T: V \to W$ between [vector spaces](!/linear-algebra/definitions#vector_space) that preserves addition and scalar multiplication: $$T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$$ $$T(c\mathbf{u}) = cT(\mathbf{u})$$
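Every matrix defines a linear transformation $T(\mathbf{v}) = A\mathbf{v}$, and both axioms can be spot-checked numerically. A sketch with random illustrative data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # any matrix defines a linear map
T = lambda v: A @ v

u, v, c = rng.standard_normal(3), rng.standard_normal(3), 2.5
print(np.allclose(T(u + v), T(u) + T(v)))  # True: additivity holds
print(np.allclose(T(c * u), c * T(u)))     # True: homogeneity holds
```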

Image (Range)

The set of all output vectors of a [linear transformation](!/linear-algebra/definitions#linear_transformation): $$\text{Im}(T) = \{T(\mathbf{v}) \mid \mathbf{v} \in V\}$$

Matrix Representation

A [matrix](!/linear-algebra/definitions#matrix) $A$ such that $T(\mathbf{v}) = A[\mathbf{v}]_{\mathcal{B}}$ for a chosen [basis](!/linear-algebra/definitions#basis) $\mathcal{B}$

Change of Basis Matrix

A matrix $P$ that converts coordinates from one [basis](!/linear-algebra/definitions#basis) to another: $[\mathbf{v}]_{\mathcal{B}'} = P^{-1}[\mathbf{v}]_{\mathcal{B}}$

Similar Matrices

Matrices $A$ and $B$ are similar if $B = P^{-1}AP$ for some invertible matrix $P$
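Similar matrices represent the same transformation in different bases, so they share eigenvalues, trace, and determinant. A quick numerical check with illustrative matrices:

```python
import numpy as np

A = np.array([[2., 1.], [0., 3.]])
P = np.array([[1., 1.], [0., 1.]])   # any invertible matrix works here
B = np.linalg.inv(P) @ A @ P         # B is similar to A

print(np.allclose(sorted(np.linalg.eigvals(A)), sorted(np.linalg.eigvals(B))))  # True
print(np.isclose(np.trace(A), np.trace(B)))            # True
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True
```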

Eigenvalue

A scalar $\lambda$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some nonzero [vector](!/linear-algebra/definitions#vector) $\mathbf{v}$

Eigenvector

A nonzero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$ for some scalar $\lambda$

Eigenspace

The set of all [eigenvectors](!/linear-algebra/definitions#eigenvector) for a given [eigenvalue](!/linear-algebra/definitions#eigenvalue) $\lambda$, together with the zero vector — equivalently, the [null space](!/linear-algebra/definitions#null_space) of $(A - \lambda I)$: $$E_\lambda = \text{Nul}(A - \lambda I)$$

Characteristic Polynomial

The polynomial whose roots are the [eigenvalues](!/linear-algebra/definitions#eigenvalue) of $A$, obtained by computing: $$p(\lambda) = \det(A - \lambda I)$$
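Numerically, `np.linalg.eig` returns the eigenvalues (the roots of $\det(A - \lambda I)$) together with a matrix whose columns are eigenvectors. A sketch with an illustrative symmetric matrix whose eigenvalues are $1$ and $3$:

```python
import numpy as np

A = np.array([[2., 1.], [1., 2.]])   # det(A - xI) = (2 - x)^2 - 1

lam, V = np.linalg.eig(A)            # eigenvalues and eigenvector columns
print(np.allclose(sorted(lam), [1.0, 3.0]))  # True

# Each column of V satisfies A v = lambda v
for i in range(2):
    print(np.allclose(A @ V[:, i], lam[i] * V[:, i]))  # True (each eigenpair)
```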

Algebraic Multiplicity

The multiplicity of $\lambda$ as a root of the [characteristic polynomial](!/linear-algebra/definitions#characteristic_polynomial)

Geometric Multiplicity

The [dimension](!/linear-algebra/definitions#dimension) of the [eigenspace](!/linear-algebra/definitions#eigenspace) associated with an [eigenvalue](!/linear-algebra/definitions#eigenvalue) $\lambda$: $$\text{geo. mult.}(\lambda) = \dim(E_\lambda) = \dim(\text{Nul}(A - \lambda I))$$

Singular Value

A nonnegative scalar measuring how much a matrix stretches space along each principal direction, derived from the [eigenvalues](!/linear-algebra/definitions#eigenvalue) of $A^TA$: $$\sigma_i = \sqrt{\lambda_i(A^TA)}$$
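This relationship is easy to verify: compare the singular values from `np.linalg.svd` with the square roots of the eigenvalues of $A^TA$ (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[3., 0.], [4., 5.]])

sigma = np.linalg.svd(A, compute_uv=False)         # singular values, descending
eigs = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A, descending
print(np.allclose(sigma, np.sqrt(eigs)))  # True
```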

Inner Product

A function $\langle \cdot, \cdot \rangle: V \times V \to \mathbb{R}$ satisfying symmetry, linearity, and positive-definiteness

Orthogonal Vectors

Vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $\langle \mathbf{u}, \mathbf{v} \rangle = 0$

Orthogonal Set

A set of vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ where $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = 0$ for all $i \neq j$

Orthonormal Set

An [orthogonal set](!/linear-algebra/definitions#orthogonal_set) where every vector is a [unit vector](!/linear-algebra/definitions#unit_vector): $\langle \mathbf{v}_i, \mathbf{v}_j \rangle = \delta_{ij}$

Orthogonal Complement

The set of all vectors in $V$ that are [orthogonal](!/linear-algebra/definitions#orthogonal_vectors) to every vector in a [subspace](!/linear-algebra/definitions#subspace) $W$: $$W^\perp = \{\mathbf{v} \in V \mid \langle \mathbf{v}, \mathbf{w} \rangle = 0 \text{ for all } \mathbf{w} \in W\}$$

Orthogonal Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) $Q$ satisfying $Q^TQ = QQ^T = I$
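An easy way to obtain an orthogonal matrix numerically is the $Q$ factor of a QR decomposition. The sketch below checks $Q^TQ = QQ^T = I$ and the length-preserving property $\|Q\mathbf{v}\| = \|\mathbf{v}\|$:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q from QR is orthogonal

print(np.allclose(Q.T @ Q, np.eye(3)))  # True
print(np.allclose(Q @ Q.T, np.eye(3)))  # True

v = rng.standard_normal(3)
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # True: length preserved
```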
Learn More

Matrices

Explore matrices in linear algebra through our detailed guide. Starting with matrix definitions and notations, the page explains matrix structure, elements, and indexing. You will learn to distinguish between different matrix types - from basic row and column matrices to more complex square matrices. The guide also covers essential matrix properties and dives into special cases of square matrices like diagonal and triangular forms. Each topic features clear mathematical notation and visual examples to reinforce your understanding of these fundamental concepts.
  • Definitions and Notations - Explains how matrices are written using different types of brackets (square, parentheses, vertical bars) and introduces basic matrix notation conventions.
  • Elements, Structure and Indexing - Covers how matrix elements are organized in rows and columns, explains the 1-based indexing system, and demonstrates how to reference specific elements using row and column indices.
  • Types of Matrices - Describes different classifications of matrices based on their shape (column, row, rectangular, and square matrices) and content type (numeric, variable/symbolic, mixed, and zero matrices).
  • Matrix Properties - Introduces essential characteristics like size/dimension, rank, determinant, eigenvalues/eigenvectors, and trace, explaining their importance in matrix operations and transformations.
  • Square Matrices and Special Cases - Focuses on unique types of square matrices, including those with special diagonal patterns (diagonal, upper triangular, lower triangular) and element relationships (symmetric, skew-symmetric, identity, scalar).
Learn More

Linear Algebra Symbols Reference

Our Linear Algebra Symbols page presents a well-organized collection of notation fundamental to matrix theory and vector spaces. This reference serves as a valuable resource for students and practitioners working with linear systems.
The page features symbols categorized by their mathematical applications, including matrix operations (A⊤, det(A), tr(A)), vector spaces (ℝⁿ, ⟨v,w⟩, ∥v∥), and eigenvalue concepts (Av=λv). It covers advanced topics like matrix decompositions (LU, QR, SVD) and linear transformations, alongside practical notation for representing matrices and vectors in LaTeX.
Every symbol includes its proper mathematical notation, corresponding LaTeX code for typesetting, and a brief explanation of its mathematical significance—making this an indispensable reference for anyone working with linear algebra in academic research or applications.
Learn More

Tools

Determinant Visual Calculator with Steps

Use our visual, interactive Determinant Calculator with step-by-step explanations

Try it