
Linear Algebra



Introduction to Linear Algebra

Linear algebra is a field of mathematics that focuses on studying vectors, matrices, and the relationships between them, forming the mathematical framework for analyzing structures and transformations in multidimensional spaces. It introduces powerful tools to understand and solve problems where quantities interact linearly, making it fundamental to numerous disciplines.

This field begins with vectors—quantities that have both magnitude and direction—and their operations, such as addition and scaling. It extends to matrices, which are grid-like arrangements of numbers used to represent systems of equations or transformations. Learning how to manipulate matrices and understand their properties is a key part of linear algebra.

Students also explore vector spaces, the environments in which vectors live, and subspaces, which reveal structure and constraints within these spaces. Concepts like linear independence, span, and basis give insight into how vectors relate and interact. The study of linear transformations, which describe how vectors change under operations like rotations or scaling, helps build a deeper understanding of the subject.

To help students navigate these foundational concepts, we created a dedicated Matrix Theory section that provides an in-depth exploration of matrices - from basic definitions and notations to various matrix types and properties. This interactive guide covers matrix structure, indexing, special cases of square matrices, and key matrix properties, with clear mathematical notation and visual examples throughout. The section serves as both a comprehensive learning resource and a quick reference for understanding these essential building blocks of linear algebra.

Eigenvalues and eigenvectors, pivotal concepts in linear algebra, allow students to uncover hidden properties of transformations. Techniques like solving systems of equations, matrix decomposition, and understanding projections or orthogonality are practical outcomes of this study.

Ultimately, linear algebra provides a foundation for solving abstract and applied problems, developing skills to think logically, recognize patterns, and simplify complex systems. It equips students with a versatile toolkit for further studies in mathematics, sciences, engineering, and beyond.

Linear Algebra Formulas

Navigate through an essential collection of linear algebra formulas that power mathematical analysis and transformations. This guide presents key formulas across vector operations, matrix calculations, eigenvalues, and linear transformations - each equipped with clear notation, detailed explanations, and practical examples. You will find precise mathematical representations, component breakdowns, and specific use cases for over 15 fundamental formula categories. The organized structure helps you quickly locate and understand the tools you need, whether for solving equations, analyzing transformations, or applying linear algebra concepts in real-world scenarios. Perfect for students needing formula clarification, researchers requiring quick mathematical reference, or practitioners applying linear algebra in their work.


Vector Addition

$$\mathbf{u} + \mathbf{v} = \begin{bmatrix} u_1 + v_1 \\ u_2 + v_2 \\ \vdots \\ u_n + v_n \end{bmatrix}$$

Scalar Multiplication

$$c\mathbf{v} = \begin{bmatrix} c \cdot v_1 \\ c \cdot v_2 \\ \vdots \\ c \cdot v_n \end{bmatrix}$$

Dot Product (Inner Product)

$$\mathbf{u} \cdot \mathbf{v} = u_1v_1 + u_2v_2 + \dots + u_nv_n$$

Cross Product

$$\mathbf{u} \times \mathbf{v} = \begin{bmatrix} u_2v_3 - u_3v_2 \\ u_3v_1 - u_1v_3 \\ u_1v_2 - u_2v_1 \end{bmatrix}$$
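
These component formulas translate directly into array code. A minimal sketch in Python with NumPy (the example vectors are arbitrary and not part of the reference above):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
c = 2.5

print(u + v)           # vector addition: [5. 7. 9.]
print(c * v)           # scalar multiplication: [10. 12.5 15.]
print(np.dot(u, v))    # dot product: 1*4 + 2*5 + 3*6 = 32.0
print(np.cross(u, v))  # cross product (3D only): [-3. 6. -3.]
```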

Matrix Addition

$$\mathbf{A} + \mathbf{B} = \begin{bmatrix} a_{11} + b_{11} & \dots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \dots & a_{mn} + b_{mn} \end{bmatrix}$$

Scalar Multiplication of a Matrix

$$c\mathbf{A} = \begin{bmatrix} c \cdot a_{11} & \dots & c \cdot a_{1n} \\ \vdots & \ddots & \vdots \\ c \cdot a_{m1} & \dots & c \cdot a_{mn} \end{bmatrix}$$

Matrix Multiplication

$$(\mathbf{AB})_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$$
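
The entrywise and summation formulas above map onto array operations in the same way; a short sketch, again with made-up matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
c = 3.0

print(A + B)   # entrywise matrix addition
print(c * A)   # scalar multiplication of a matrix
print(A @ B)   # matrix product

# one entry computed with the explicit sum over k from the formula above
i, j = 0, 1
print(sum(A[i, k] * B[k, j] for k in range(A.shape[1])))  # equals (A @ B)[0, 1] = 22.0
```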

Determinant of a 2x2 Matrix

$$\det(\mathbf{A}) = ad - bc, \quad \text{where } \mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$

Inverse of a 2x2 Matrix

$$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$
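
For the 2x2 case the two closed forms above can be verified directly; a brief sketch with an arbitrary invertible matrix:

```python
import numpy as np

a, b, c, d = 4.0, 7.0, 2.0, 6.0
A = np.array([[a, b],
              [c, d]])

det_A = a * d - b * c                       # ad - bc = 24 - 14 = 10
A_inv = (1.0 / det_A) * np.array([[ d, -b],
                                  [-c,  a]])

print(det_A, np.linalg.det(A))              # both 10.0 (up to rounding)
print(A @ A_inv)                            # the 2x2 identity matrix
```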

Eigenvalues and Eigenvectors

$$\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$$

Cramer's Rule

$$x_i = \frac{\det(\mathbf{A}_i)}{\det(\mathbf{A})}$$

where $\mathbf{A}_i$ is $\mathbf{A}$ with its $i$-th column replaced by the right-hand-side vector $\mathbf{b}$ of the system $\mathbf{A}\mathbf{x} = \mathbf{b}$.
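
Cramer's rule amounts to replacing one column of $\mathbf{A}$ at a time; a sketch that solves a small made-up system this way, checks it against a direct solver, and also verifies the eigenvalue relation $\mathbf{A}\mathbf{x} = \lambda\mathbf{x}$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Cramer's rule: replace the i-th column of A with b
x = np.empty(2)
for i in range(2):
    A_i = A.copy()
    A_i[:, i] = b
    x[i] = np.linalg.det(A_i) / np.linalg.det(A)

print(x)                      # [1. 3.]
print(np.linalg.solve(A, b))  # same solution from a direct solver

# eigenpair check: A v = lambda v for every eigenvalue/eigenvector pair
vals, vecs = np.linalg.eig(A)
print(np.allclose(A @ vecs, vecs * vals))   # True
```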

Rank of a Matrix

The rank is the maximum number of linearly independent rows or columns.

Linear Transformation

$$T(c\mathbf{u} + d\mathbf{v}) = cT(\mathbf{u}) + dT(\mathbf{v})$$

Orthogonality

$$\mathbf{u} \cdot \mathbf{v} = 0$$

Norm of a Vector

$$\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + \dots + v_n^2}$$

Unit Vector

$$\mathbf{u} = \frac{\mathbf{v}}{\|\mathbf{v}\|}$$

Projection of a Vector

$$\text{proj}_{\mathbf{v}} \mathbf{u} = \left( \frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{v}\|^2} \right) \mathbf{v}$$
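
The norm, unit-vector, and projection formulas combine naturally; a short sketch with arbitrary vectors, including an orthogonality check on the residual:

```python
import numpy as np

u = np.array([3.0, 4.0, 0.0])
v = np.array([1.0, 0.0, 0.0])

norm_v = np.linalg.norm(v)              # sqrt(v1^2 + ... + vn^2)
unit_v = v / norm_v                     # unit vector in the direction of v

proj = (np.dot(u, v) / norm_v**2) * v   # projection of u onto v
print(proj)                             # [3. 0. 0.]

residual = u - proj
print(np.dot(residual, v))              # 0.0: the residual is orthogonal to v
```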


Linear Algebra Terms and Definitions

Discover essential linear algebra definitions that form the mathematical foundation for understanding vectors, matrices, and their relationships. This guide breaks down key terms from basic vector concepts like magnitude and direction to advanced matrix classifications and properties. Each definition includes precise mathematical notation, clear explanations, and visual examples to help grasp the concept. Whether you're learning about vector spaces, exploring matrix types like triangular and symmetric matrices, or studying transformations, this organized reference makes complex linear algebra terminology accessible. The page serves as both a learning tool and a quick reference for students and practitioners, featuring interactive mathematical notation and practical examples throughout.

Vector

A mathematical object that has both magnitude ($|\vec{v}|$) and direction in space.

Components

The individual elements of a vector, e.g., $(v_1, v_2, \ldots, v_n)$.

Vector Magnitude (Norm)

Length of a vector, denoted as $|\vec{v}|$ or $\|\vec{v}\|$

Unit Vector

Vector with magnitude of 1: $|\vec{u}| = 1$

Zero Vector (Null Vector)

A vector where all components are zero: $\vec{0}$ or $\mathbf{0}$

Column Vector

An $n \times 1$ matrix (vertical array of numbers): $$\vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

Row Vector

A $1 \times n$ matrix (horizontal array of numbers): $$\vec{v} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$

Vector Addition

For vectors $u,v$ in $\mathbb{R}^n$: $(u + v)_i = u_i + v_i$

Scalar Multiplication

For scalar $c$ and vector $v$: $(cv)_i = cv_i$

Linear Combination

A vector $\vec{v}$ is a linear combination of vectors $\{\vec{v_1}, \vec{v_2}, ..., \vec{v_n}\}$ if it can be written as $\vec{v} = c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n}$ for some scalars $c_1, c_2, ..., c_n$

Dot Product

For vectors $u,v$ in $\mathbb{R}^n$: $u\cdot v = \sum_{i=1}^n u_iv_i$

Cross Product

For $u,v$ in $\mathbb{R}^3$: $u\times v = \|u\|\|v\|\sin\theta\, \mathbf{n}$, where $\theta$ is the angle between $u$ and $v$ and $\mathbf{n}$ is the unit vector perpendicular to both, oriented by the right-hand rule

Vector Projection

Orthogonal projection of vector $\vec{v}$ onto vector $\vec{u}$ is the vector component of $\vec{v}$ parallel to $\vec{u}$, given by: $\text{proj}_{\vec{u}}\vec{v} = \frac{\vec{v} \cdot \vec{u}}{|\vec{u}|^2}\vec{u}$

Linearly Independent Vectors

A set of vectors $\{\vec{v_1}, \vec{v_2}, ..., \vec{v_n}\}$ is linearly independent if the equation $c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n} = \vec{0}$ is satisfied only when all $c_i = 0$

Linearly Dependent Vectors

A set of vectors $\{\vec{v_1}, \vec{v_2}, ..., \vec{v_n}\}$ is linearly dependent if there exist scalars $c_1, c_2, ..., c_n$, not all zero, such that $c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n} = \vec{0}$

Vector Space

A set $V$ with vectors $\vec{u}, \vec{v} \in V$ and scalars $c$ is a vector space if it's closed under addition ($\vec{u} + \vec{v} \in V$) and scalar multiplication ($c\vec{v} \in V$), and satisfies the vector space axioms

Vector Subspace

A subset $W$ of a vector space $V$ is a subspace if it's closed under addition and scalar multiplication: for all $\vec{u}, \vec{v} \in W$ and scalar $c$, both $\vec{u} + \vec{v} \in W$ and $c\vec{v} \in W$

Span

The span of vectors $\{\vec{v_1}, \vec{v_2}, ..., \vec{v_n}\}$ is the set of all their linear combinations: $\text{span}\{\vec{v_1}, \vec{v_2}, ..., \vec{v_n}\} = \{c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n} | c_i \in \mathbb{R}\}$

Basis

A basis of a vector space $V$ is a linearly independent set of vectors that spans $V$. For any vector $\vec{v} \in V$, there exists a unique representation $\vec{v} = c_1\vec{v_1} + c_2\vec{v_2} + ... + c_n\vec{v_n}$

Dimension

The dimension of a vector space $V$ is the number of vectors in any basis of $V$, denoted $\dim(V)$

Orthogonal Vectors

Two vectors $\vec{u}$ and $\vec{v}$ are orthogonal if their dot product is zero: $\vec{u} \cdot \vec{v} = 0$

Orthonormal Vectors

A set of vectors is orthonormal if they are orthogonal to each other and each has unit length: $\vec{u_i} \cdot \vec{u_j} = \delta_{ij}$ where $\delta_{ij}$ is the Kronecker delta

Gram-Schmidt Process

Produces an orthonormal basis $\{u_1,\ldots,u_n\}$ from a set of linearly independent vectors $\{v_1,\ldots,v_n\}$ spanning the same subspace
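
The entry above only names the input and output of the process; a minimal classical Gram-Schmidt sketch in Python with NumPy, assuming the input vectors are linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return an orthonormal basis for the span of `vectors` (assumed linearly independent)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for u in basis:
            w -= np.dot(w, u) * u            # subtract the component along each earlier basis vector
        basis.append(w / np.linalg.norm(w))  # normalize to unit length
    return np.array(basis)

Q = gram_schmidt([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
print(np.round(Q @ Q.T, 10))  # identity matrix: the rows are orthonormal
```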

Direction Cosines

For vector $v$, cosines with axes: $\cos\alpha = \frac{v_x}{\|v\|}, \cos\beta = \frac{v_y}{\|v\|}, \cos\gamma = \frac{v_z}{\|v\|}$

Linear Transformation

Function $T: V \to W$ where $T(au + bv) = aT(u) + bT(v)$

Matrix Representation

For transformation $T$ with basis $\{v_1,\ldots,v_n\}$, $[T]_{\mathcal{B}} = [T(v_1)\cdots T(v_n)]$

Gradient Vector

For scalar function $f(x_1,\ldots,x_n)$: $\nabla f = \left(\frac{\partial f}{\partial x_1},\ldots,\frac{\partial f}{\partial x_n}\right)$

Position Vector

Vector $\vec{r} = (x,y,z)$ from origin to point $P(x,y,z)$

Matrix

A rectangular array of elements $a_{ij}$ in $m$ rows and $n$ columns

Row Matrix

Matrix of size $1 \times n$ (single row)

Column Matrix

Matrix of size $m \times 1$ (single column)

Square Matrix

A [matrix](!/linear-algebra/definitions#matrix) with an equal number of rows and columns, often associated with special properties like determinants and eigenvalues.

Zero Matrix

A [matrix](!/linear-algebra/definitions#matrix) in which all elements are equal to zero. Also called a **null matrix**.

Main Diagonal

In a [square matrix](!/linear-algebra/definitions#square_matrix), the main diagonal (or principal diagonal, or leading diagonal) consists of elements where row index equals column index.

Triangular Matrix

A square matrix where all the elements either above or below the [main diagonal](!/linear-algebra/definitions#main_diagonal) are zero.

Upper Triangular Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) with zeros below the [main diagonal](!/linear-algebra/definitions#main_diagonal). All elements $a_{ij}=0$ where $i > j$.

Lower Triangular Matrix

A square [matrix](!/linear-algebra/definitions#matrix) with zeros above the [main diagonal](!/linear-algebra/definitions#main_diagonal). All elements $a_{ij}=0$ where $i < j$.

Identity Matrix

A square matrix with 1s on the [main diagonal](!/linear-algebra/definitions#main_diagonal) and 0s elsewhere.

Anti-symmetric (Skew-symmetric) Matrix

A square matrix that equals the negative of its transpose: $A = -A^T$. All diagonal elements must be zero.

Diagonal Matrix

A square matrix in which all elements off the [main diagonal](!/linear-algebra/definitions#main_diagonal) are zero.

Symmetric Matrix

A matrix equal to its transpose, $A = A^T$.

Transposition

An operation that flips a matrix over its main diagonal, switching rows and columns: $(A^T)_{ij} = A_{ji}$

Matrix Addition

For matrices $A,B$ of same size, $(A+B)_{ij} = A_{ij} + B_{ij}$

Scalar Addition (Matrix)

For scalar $c$ and matrix $A$, $(c+A)_{ij} = c + A_{ij}$

Scalar Multiplication (Matrix)

For scalar $c$ and matrix $A$, $(cA)_{ij} = cA_{ij}$

Matrix Multiplication

For matrices $A_{m\times n}, B_{n\times p}$, $(AB)_{ij} = \sum_{k=1}^n a_{ik}b_{kj}$

Determinant

A scalar value associated with a square matrix $A$, denoted $\det(A)$ or $|A|$, which determines, among other things, whether $A$ is invertible

Inverse Matrix

For square matrix $A$, its inverse $A^{-1}$ satisfies $AA^{-1} = A^{-1}A = I$

Rank

Number of linearly independent rows/columns, denoted $\text{rank}(A)$

Trace

Sum of main diagonal elements: $\text{tr}(A) = \sum_{i=1}^n a_{ii}$

Echelon Form

Matrix where each row's leading non-zero entry (pivot) is to the right of the pivots in rows above, and any all-zero rows appear at the bottom

Reduced Row Echelon Form

Row echelon form where each pivot is 1 and is the only non-zero entry in its column

Elementary Matrix

A matrix obtained by performing a single elementary row operation on the identity matrix

Orthogonal Matrix

Square matrix $A$ where $A^T = A^{-1}$, equivalently $AA^T = A^TA = I$

Scalar Matrix

A [square matrix](!/linear-algebra/definitions#square_matrix) where all elements are equal to zero except those on the main diagonal, which are all equal to the same constant.

Adjoint

For a square matrix $A$, the adjoint (or adjugate), $\text{adj}(A)$, is the transpose of its cofactor matrix

Matrix Size

Matrix with $m$ rows and $n$ columns denoted as $A_{m\times n}$ or $A \in \mathbb{R}^{m\times n}$

Eigenvalues

Scalar $\lambda$ satisfying $Av = \lambda v$ for nonzero vector $v$ (eigenvector)

Eigenvectors

Nonzero vector $v$ satisfying $Av = \lambda v$ for eigenvalue $\lambda$

Singular Matrix

Square matrix with $\det(A) = 0$

Augmented Matrix

Matrix $[A|b]$ representing system $Ax = b$

LU Decomposition

Matrix $A = LU$ where $L$ is lower triangular and $U$ is upper triangular

QR Decomposition

Matrix $A = QR$ where $Q$ is orthogonal ($Q^TQ = I$) and $R$ is upper triangular

Positive Definite Matrix

Symmetric matrix $A$ where $x^TAx > 0$ for all nonzero $x$

Diagonalization

Matrix factorization $A = PDP^{-1}$ where $D$ is a diagonal matrix of eigenvalues and the columns of $P$ are corresponding eigenvectors
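
A small illustration of the factorization using NumPy's eigendecomposition (the matrix shown is an arbitrary diagonalizable example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # eigenvalues 5 and 2, so A is diagonalizable

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors of A
D = np.diag(eigvals)           # diagonal matrix of eigenvalues

print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True: A = P D P^{-1}
```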

Block Matrix

Matrix partitioned into submatrices $A_{ij}$

Sparse Matrix

Matrix with mostly zero elements, typically $O(n)$ nonzero elements


Matrices

Explore matrices in linear algebra through our detailed guide. Starting with matrix definitions and notations, the page explains matrix structure, elements, and indexing. You will learn to distinguish between different matrix types - from basic row and column matrices to more complex square matrices. The guide also covers essential matrix properties and dives into special cases of square matrices like diagonal and triangular forms. Each topic features clear mathematical notation and visual examples to reinforce your understanding of these fundamental concepts.
  • Definitions and Notations - Explains how matrices are written using different types of brackets (square, parentheses, vertical bars) and introduces basic matrix notation conventions.
  • Elements, Structure and Indexing - Covers how matrix elements are organized in rows and columns, explains the 1-based indexing system, and demonstrates how to reference specific elements using row and column indices (a short code sketch after this list contrasts this with the 0-based indexing used in code).
  • Types of Matrices - Describes different classifications of matrices based on their shape (column, row, rectangular, and square matrices) and content type (numeric, variable/symbolic, mixed, and zero matrices).
  • Matrix Properties - Introduces essential characteristics like size/dimension, rank, determinant, eigenvalues/eigenvectors, and trace, explaining their importance in matrix operations and transformations.
  • Square Matrices and Special Cases - Focuses on unique types of square matrices, including those with special diagonal patterns (diagonal, upper triangular, lower triangular) and element relationships (symmetric, skew-symmetric, identity, scalar).
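
As a side note, the 1-based indexing convention described in the guide differs from the 0-based indexing used by most programming languages; a small NumPy illustration of the correspondence:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # a 2x3 matrix: m = 2 rows, n = 3 columns

# The element written a_12 in 1-based mathematical notation (row 1, column 2)
# is A[0, 1] with the 0-based indexing used here.
print(A[0, 1])   # 2
print(A.shape)   # (2, 3), the size m x n
print(A.T)       # transpose: rows and columns switched
```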