Linear Algebra Terms and Definitions
- Matrices Basic Terms: essential concepts defining matrices, their structure, and basic classifications.
- Matrix Applications: practical uses of matrices in solving systems and representing data.
- Matrix Operations: basic arithmetic operations with matrices, including addition, multiplication, and scalar operations.
- Matrix Properties: key characteristics of matrices, such as determinant, rank, trace, and eigenvalues, that describe their behavior.
- Matrix Transformations: operations that convert matrices into special forms or decompose them into simpler components.
- Special Matrices: matrices with unique properties, such as diagonal, triangular, and orthogonal forms, that have specific applications.
- Vector Operations: core vector arithmetic and geometric operations, including addition, multiplication, and dot/cross products, that form the foundation for manipulating vectors.
- Vectors: primary vector concepts covering structure, magnitude, direction, and fundamental types such as unit and zero vectors.
- Vectors Applications: practical uses of vectors in describing physical quantities, gradients, and positions.
- Vectors Basic Terms: fundamental concepts that define vectors and their components, including basic properties and representations in space.
- Vectors Geometric Interpretations: geometric meaning and visualization of vectors through angles, directions, and spatial relationships.
- Vectors Orthogonality: concepts related to perpendicular vectors and methods to create orthogonal/orthonormal vector sets.
- Vectors Transformations: operations that change vectors while preserving certain properties, including linear transformations and their matrix representations.

Vectors
Vector
Definition:
A mathematical object that has both magnitude ($\|\mathbf{v}\|$) and direction in space.
$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$ or $\mathbf{v} = [v_1 \; v_2 \; \cdots \; v_n]$
Each element $v_i$ represents displacement along the corresponding axis
2D vector: $\mathbf{v} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$ represents 2 units along the x-axis and 3 along the y-axis
3D vector: $\mathbf{v} = \begin{bmatrix} 1 \\ -2 \\ 4 \end{bmatrix}$ represents displacements along the x, y, and z axes
Denoted by an arrow overhead ($\vec{v}$) or bold ($\mathbf{v}$)
Vector Magnitude (Norm)
Definition:
Length of a vector, denoted $|\mathbf{v}|$ or $\|\mathbf{v}\|$
For an n-dimensional vector:
$\|\mathbf{v}\| = \sqrt{\sum_{i=1}^{n} v_i^2}$
2D vector: $\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2}$
3D vector: $\|\mathbf{v}\| = \sqrt{v_1^2 + v_2^2 + v_3^2}$
Always non-negative: $\|\mathbf{v}\| \geq 0$; equals zero only for the zero vector
For $\mathbf{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$:
$\|\mathbf{v}\| = \sqrt{3^2 + 4^2} = \sqrt{25} = 5$
Unit Vector
Definition:
Vector with magnitude 1: $\|\mathbf{u}\| = 1$
Obtained by normalizing a vector:
$\mathbf{u} = \dfrac{\mathbf{v}}{\|\mathbf{v}\|}$
Standard unit vectors in $\mathbb{R}^3$:
$\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$
Normalizing $\mathbf{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$:
$\mathbf{u} = \frac{1}{5}\begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0.8 \end{bmatrix}$
Normalization preserves a vector's direction while scaling its magnitude to 1
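A quick numerical check of the magnitude and normalization formulas above; a minimal sketch using NumPy (`np.linalg.norm` computes the Euclidean norm):

```python
import numpy as np

v = np.array([3.0, 4.0])
magnitude = np.linalg.norm(v)           # sqrt(3^2 + 4^2) = 5.0
u = v / magnitude                       # unit vector [0.6, 0.8]
print(magnitude, u, np.linalg.norm(u))  # 5.0 [0.6 0.8] 1.0
```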
Zero Vector (Null Vector)
Definition:
A vector whose components are all zero, denoted $\mathbf{0}$ or $\vec{0}$
Column vector form:
$\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$
Row vector form:
$[0 \; 0 \; \cdots \; 0]$
Key property: $\mathbf{0} + \mathbf{v} = \mathbf{v}$ for any vector $\mathbf{v}$
Column Vector
Definition:
An $n \times 1$ matrix (a vertical array of numbers):
$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$
Examples:
$\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \quad \begin{bmatrix} x \\ y \\ z \end{bmatrix}$
Can be transposed to get a row vector:
$\mathbf{v}^T = [v_1 \; v_2 \; \cdots \; v_n]$
Row Vector
Definition:
A $1 \times n$ matrix (a horizontal array of numbers):
$\mathbf{v} = [v_1 \; v_2 \; \cdots \; v_n]$
Examples:
$[2 \; -1 \; 4], \quad [x \; y \; z]$
Can be transposed to get a column vector:
$\mathbf{v}^T = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$
Linear Combination
Definition:
A vector $\mathbf{v}$ is a linear combination of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ if it can be written as $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n$ for some scalars $c_1, c_2, \ldots, c_n$
The set of all possible linear combinations forms:
- A line if there is one nonzero vector
- A plane if there are two linearly independent vectors
- A space if there are three linearly independent vectors
Vector $\mathbf{v} = \begin{bmatrix} 5 \\ 7 \end{bmatrix}$ as a linear combination:
$\mathbf{v} = 2\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 3\begin{bmatrix} 1 \\ 1 \end{bmatrix}$
Vector Projection
Definition:
The orthogonal projection of vector $\mathbf{v}$ onto vector $\mathbf{u}$ is the component of $\mathbf{v}$ parallel to $\mathbf{u}$, given by: $\operatorname{proj}_{\mathbf{u}}\mathbf{v} = \dfrac{\mathbf{v} \cdot \mathbf{u}}{\|\mathbf{u}\|^2}\,\mathbf{u}$
For vectors $\mathbf{v}$ and $\mathbf{u}$:
$\operatorname{proj}_{\mathbf{u}}\mathbf{v} = \dfrac{\mathbf{v} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\,\mathbf{u} = (\mathbf{v} \cdot \hat{\mathbf{u}})\,\hat{\mathbf{u}}$
The projection decomposes $\mathbf{v}$ into:
- Parallel component: $\operatorname{proj}_{\mathbf{u}}\mathbf{v}$ (along $\mathbf{u}$)
- Perpendicular component: $\mathbf{v} - \operatorname{proj}_{\mathbf{u}}\mathbf{v}$
For $\mathbf{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ onto $\mathbf{u} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$:
$\operatorname{proj}_{\mathbf{u}}\mathbf{v} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}$
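A short NumPy sketch of the projection formula; `project` is an illustrative helper, not a library function:

```python
import numpy as np

def project(v, u):
    """Orthogonal projection of v onto u: (v.u / u.u) u."""
    return (np.dot(v, u) / np.dot(u, u)) * u

v = np.array([3.0, 4.0])
u = np.array([1.0, 0.0])
p = project(v, u)
print(p)      # [3. 0.] -- the parallel component
print(v - p)  # [0. 4.] -- the perpendicular component, orthogonal to u
```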
Linearly Independent Vectors
Definition:
A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly independent if the equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n = \mathbf{0}$ is satisfied only when all $c_i = 0$
- In $\mathbb{R}^2$, two vectors are linearly independent if neither is a scalar multiple of the other
- In $\mathbb{R}^3$, three vectors are linearly independent if none lies in the plane formed by the other two
Linearly independent vectors:
$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
Linearly dependent vectors:
$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$
Linearly Dependent Vectors
Definition:
A set of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is linearly dependent if there exist scalars $c_1, c_2, \ldots, c_n$, not all zero, such that $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n = \mathbf{0}$
- In $\mathbb{R}^2$, two vectors are linearly dependent if one is a scalar multiple of the other
- In $\mathbb{R}^3$, three vectors are linearly dependent if one lies in the plane formed by the other two
$\mathbf{v}_1 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$
Here $2\mathbf{v}_2 = \mathbf{v}_1$, making them linearly dependent, as the rank check below confirms
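One practical test for either property is to stack the vectors as columns and compare the matrix rank with the number of vectors; a NumPy sketch:

```python
import numpy as np

# A set is linearly independent iff the rank equals the number of vectors.
independent = np.column_stack(([1, 0], [0, 1]))
dependent = np.column_stack(([2, 4], [1, 2]))
print(np.linalg.matrix_rank(independent))  # 2 -> independent
print(np.linalg.matrix_rank(dependent))    # 1 -> dependent (v1 = 2 v2)
```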
Vector Space
Definition:
A set $V$ with vectors $\mathbf{u}, \mathbf{v} \in V$ and scalars $c$ is a vector space if it is closed under addition ($\mathbf{u} + \mathbf{v} \in V$) and scalar multiplication ($c\mathbf{v} \in V$), and satisfies the vector space axioms
For all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and scalars $c, d$:
- Commutativity: $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
- Associativity: $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$
- Zero vector: there exists $\mathbf{0}$ such that $\mathbf{v} + \mathbf{0} = \mathbf{v}$
- Additive inverse: there exists $-\mathbf{v}$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$
- Distributivity: $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
Common vector spaces:
- $\mathbb{R}^n$: n-dimensional real vectors
- Matrices of fixed size
- Polynomials of degree $\leq n$
Vector Subspace
Definition:
A subset $W$ of a vector space $V$ is a subspace if it is closed under addition and scalar multiplication: for all $\mathbf{u}, \mathbf{v} \in W$ and scalar $c$, both $\mathbf{u} + \mathbf{v} \in W$ and $c\mathbf{v} \in W$
Any subspace must:
- Contain the zero vector
- Be closed under linear combinations
- Form a vector space itself
Common subspaces of $\mathbb{R}^3$:
- Any plane through the origin
- Any line through the origin
- The zero subspace $\{\mathbf{0}\}$
Span
Definition:
The span of vectors $\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is the set of all their linear combinations: $\operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\} = \{c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n \mid c_i \in \mathbb{R}\}$
Span represents:
- A line through the origin (one vector)
- A plane through the origin (two linearly independent vectors)
- All of $\mathbb{R}^3$ (three linearly independent vectors)
$\operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\} = \mathbb{R}^2$
$\operatorname{span}\left\{\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 2 \\ 4 \end{bmatrix}\right\}$ = a line through the origin
Basis
Definition:
A basis of a vector space $V$ is a linearly independent set of vectors that spans $V$. For any vector $\mathbf{v} \in V$, there exists a unique representation $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_n\mathbf{v}_n$
- Linearly independent and spans the entire space
- Number of vectors = dimension of the space
Standard basis for $\mathbb{R}^3$:
$\left\{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\right\}$
Dimension
Definition:
The dimension of a vector space V is the number of vectors in any basis of V, denoted dim(V)
- Equal to the maximum number of linearly independent vectors
- $\dim(\mathbb{R}^n) = n$
- $\dim(\mathbb{R}^2) = 2$ (plane)
- $\dim(\mathbb{R}^3) = 3$ (space)
- $\dim(\{\mathbf{0}\}) = 0$ (zero space)
- $\dim(\text{line}) = 1$
Orthogonal Vectors
Definition:
Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if their dot product is zero: $\mathbf{u} \cdot \mathbf{v} = 0$
Orthogonal vectors are perpendicular to each other, forming a 90° angle
$\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \begin{bmatrix} 0 \\ 1 \end{bmatrix}$
$\mathbf{u} \cdot \mathbf{v} = 1(0) + 0(1) = 0$
Orthonormal Vectors
Definition:
A set of vectors is orthonormal if they are orthogonal to each other and each has unit length: $\mathbf{u}_i \cdot \mathbf{u}_j = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta
- Orthogonal to each other
- Each vector has magnitude equal to 1
- Form an orthonormal basis if they span the space
Standard basis vectors are orthonormal:
$\hat{i} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \hat{j} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \hat{k} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$
Vectors Basic Terms
Components
Definition:
The individual elements of a vector, e.g., (v1, v2, ..., vn).
Vector Operations
Vector Addition
Definition:
For vectors $\mathbf{u}, \mathbf{v}$ in $\mathbb{R}^n$: $(\mathbf{u} + \mathbf{v})_i = u_i + v_i$
$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$
$\mathbf{u} + (\mathbf{v} + \mathbf{w}) = (\mathbf{u} + \mathbf{v}) + \mathbf{w}$
$\mathbf{u} + \mathbf{0} = \mathbf{u}$
$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = \begin{bmatrix} 5 \\ 7 \\ 9 \end{bmatrix}$
Scalar Multiplication
Definition:
For scalar $c$ and vector $\mathbf{v}$: $(c\mathbf{v})_i = cv_i$
$c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$
$(c + d)\mathbf{v} = c\mathbf{v} + d\mathbf{v}$
$(cd)\mathbf{v} = c(d\mathbf{v})$
$3\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix} = \begin{bmatrix} 6 \\ -3 \\ 12 \end{bmatrix}$
Dot Product
Definition:
For vectors $\mathbf{u}, \mathbf{v}$ in $\mathbb{R}^n$: $\mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^{n} u_i v_i$
$\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$
$\mathbf{u} \cdot \mathbf{u} = \|\mathbf{u}\|^2$
$(c\mathbf{u}) \cdot \mathbf{v} = c(\mathbf{u} \cdot \mathbf{v})$
$\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\|\|\mathbf{v}\|\cos\theta$, where $\theta$ is the angle between the vectors
$\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1(4) + 2(5) + 3(6) = 32$
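A NumPy sketch of the worked example, also recovering the angle from $\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\|\|\mathbf{v}\|\cos\theta$:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
dot = np.dot(u, v)  # 1*4 + 2*5 + 3*6 = 32
# Recover the angle between u and v
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
print(dot, np.degrees(np.arccos(cos_theta)))  # 32.0, ~12.9 degrees
```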
Cross Product
Definition:
For $\mathbf{u}, \mathbf{v}$ in $\mathbb{R}^3$: $\mathbf{u} \times \mathbf{v} = \|\mathbf{u}\|\|\mathbf{v}\|\sin\theta\,\hat{\mathbf{n}}$, where $\hat{\mathbf{n}}$ is the unit vector perpendicular to both
$\begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} \times \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} u_2 v_3 - u_3 v_2 \\ u_3 v_1 - u_1 v_3 \\ u_1 v_2 - u_2 v_1 \end{bmatrix}$
$\mathbf{u} \times \mathbf{v} = -(\mathbf{v} \times \mathbf{u})$
$\mathbf{u} \times \mathbf{u} = \mathbf{0}$
$\|\mathbf{u} \times \mathbf{v}\| = \|\mathbf{u}\|\|\mathbf{v}\|\sin\theta$
Applications:
- Normal vectors
- Torque calculation
- Area of a parallelogram: $\|\mathbf{u} \times \mathbf{v}\|$
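A NumPy sketch checking the cross-product properties on the standard basis vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
n = np.cross(u, v)
print(n)                           # [0. 0. 1.] -- normal to the plane of u and v
print(np.linalg.norm(n))           # 1.0 = area of the parallelogram spanned by u, v
print(np.dot(n, u), np.dot(n, v))  # 0.0 0.0 -- orthogonal to both inputs
```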
Vectors Orthogonality
Gram-Schmidt Process
Definition:
Produces an orthonormal basis $\{\mathbf{u}_1, \ldots, \mathbf{u}_n\}$ from linearly independent vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$
$\mathbf{u}_1 = \dfrac{\mathbf{v}_1}{\|\mathbf{v}_1\|}, \qquad \mathbf{u}_k = \dfrac{\mathbf{v}_k - \sum_{i=1}^{k-1} \operatorname{proj}_{\mathbf{u}_i}(\mathbf{v}_k)}{\left\|\mathbf{v}_k - \sum_{i=1}^{k-1} \operatorname{proj}_{\mathbf{u}_i}(\mathbf{v}_k)\right\|}$
For $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_2 = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$:
$\mathbf{u}_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \qquad \mathbf{u}_2 = \frac{1}{\sqrt{6}}\begin{bmatrix} 1 \\ -1 \\ 2 \end{bmatrix}$
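A compact sketch of the classical process in NumPy; `gram_schmidt` is an illustrative helper that reproduces the worked example above:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        # Subtract the projections of v onto the already-built u_i
        w = v - sum(np.dot(v, u) * u for u in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

u1, u2 = gram_schmidt([np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])])
print(u1)              # [0.707  0.707  0.   ] = (1/sqrt(2)) [1, 1, 0]
print(u2)              # [0.408 -0.408  0.816] = (1/sqrt(6)) [1, -1, 2]
print(np.dot(u1, u2))  # ~0, as required
```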
Vectors Geometric Interpretations
Direction Cosines
Definition:
For vector $\mathbf{v}$, cosines with the axes: $\cos\alpha = \dfrac{v_x}{\|\mathbf{v}\|}, \quad \cos\beta = \dfrac{v_y}{\|\mathbf{v}\|}, \quad \cos\gamma = \dfrac{v_z}{\|\mathbf{v}\|}$
$\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1$
Vector $\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$ has direction cosines:
$\cos\alpha = \cos\beta = \cos\gamma = \dfrac{1}{\sqrt{3}}$
Vectors Transformations
Linear Transformation
Definition:
Function $T: V \to W$ where $T(a\mathbf{u} + b\mathbf{v}) = aT(\mathbf{u}) + bT(\mathbf{v})$
Preserves linear combinations
Can be represented by matrix multiplication
$T(\mathbf{0}) = \mathbf{0}$
For a transformation $T: \mathbb{R}^n \to \mathbb{R}^m$:
$T(\mathbf{x}) = A\mathbf{x}$
where $A$ is an $m \times n$ matrix
Common examples:
- Rotation
- Scaling
- Projection
- Reflection
Matrix Representation
Definition:
For transformation $T$ with basis $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$: $[T]_B = [T(\mathbf{v}_1) \; \cdots \; T(\mathbf{v}_n)]$
$[T(\mathbf{v})]_B = [T]_B [\mathbf{v}]_B$
$[T]_C = P^{-1}[T]_B P$ for change of basis $P$
Rotation by $\theta$ in $\mathbb{R}^2$:
$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
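A NumPy sketch of a linear map applied via its matrix; `rotation_matrix` is an illustrative helper building the rotation matrix above:

```python
import numpy as np

def rotation_matrix(theta):
    """Matrix of 'rotate by theta' in R^2; columns are the images of the basis vectors."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

R = rotation_matrix(np.pi / 2)
print(R @ np.array([1.0, 0.0]))  # ~[0, 1]: e1 rotated 90 degrees onto e2
```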
Vectors Applications
Gradient Vector
Definition:
For a scalar function $f(x_1, \ldots, x_n)$: $\nabla f = \left(\dfrac{\partial f}{\partial x_1}, \ldots, \dfrac{\partial f}{\partial x_n}\right)$
Points in the direction of steepest increase
Perpendicular to level curves/surfaces
$\nabla(fg) = f\nabla g + g\nabla f$
For $f(x, y) = x^2 + xy + y^2$:
$\nabla f = (2x + y, \; x + 2y)$
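A sketch verifying the example gradient against a central finite-difference approximation (the helper names are illustrative):

```python
import numpy as np

def f(x, y):
    return x**2 + x*y + y**2

def grad_f(x, y):
    return np.array([2*x + y, x + 2*y])  # analytic gradient from the example

# Finite-difference check at (1, 2): both should be close to (4, 5)
h = 1e-6
x, y = 1.0, 2.0
approx = np.array([(f(x + h, y) - f(x - h, y)) / (2 * h),
                   (f(x, y + h) - f(x, y - h)) / (2 * h)])
print(grad_f(x, y), approx)
```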
Position Vector
Definition:
Vector $\mathbf{r} = (x, y, z)$ from the origin to the point $P(x, y, z)$
Length gives the distance from the origin: $\|\mathbf{r}\|$
Direction gives the orientation in space
Velocity: $\mathbf{v} = \dfrac{d\mathbf{r}}{dt}$
Acceleration: $\mathbf{a} = \dfrac{d^2\mathbf{r}}{dt^2}$
Point $P(3, 4, 5)$ has position vector:
$\mathbf{r} = 3\hat{i} + 4\hat{j} + 5\hat{k} = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}$
Matrices Basic Terms
Matrix
Definition:
A rectangular array of elements $a_{ij}$ arranged in $m$ rows and $n$ columns
$A = [a_{ij}]_{m \times n} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$
Size: $m \times n$
Elements: $a_{ij}$, where $i$ is the row and $j$ is the column
Row Matrix
Definition:
A matrix of size $1 \times n$ (a single row)
$A = [a_1 \; a_2 \; \cdots \; a_n]$
Used for:
- Row vector representation
- Linear combinations
Column Matrix
Definition:
A matrix of size $m \times 1$ (a single column)
$A = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix}$
Used for:
- Vector representation
- Solutions to linear systems
Square Matrix
Definition:
A matrix with an equal number of rows and columns, often associated with special properties like determinants and eigenvalues.
A square matrix of size $n \times n$ with arbitrary elements:
$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$
Zero Matrix
Definition:
A matrix whose elements are all equal to zero. Also called a null matrix.
A zero matrix can have any dimensions $m \times n$:
$O = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$
A square zero matrix has an equal number of rows and columns:
$O_n = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$
Main Diagonal
Definition:
In a square matrix, the main diagonal (or principal diagonal, or leading diagonal) consists of elements where row index equals column index.
For an $n \times n$ matrix $A = [a_{ij}]$, the main diagonal contains the elements where $i = j$:
$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$
Here $a_{11}, a_{22}, \ldots, a_{nn}$ form the main diagonal.
In a $2 \times 2$ matrix, elements $a_{11}$ and $a_{22}$ form the main diagonal:
$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$
In a $3 \times 3$ matrix, elements $a_{11}$, $a_{22}$, and $a_{33}$ form the main diagonal:
$\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
Special Matrices
Triangular Matrix
Definition:
A square matrix where all the elements either above or below the main diagonal are zero.
$2 \times 2$ examples:
Upper triangular: $\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix}$
Lower triangular: $\begin{bmatrix} 1 & 0 \\ 4 & 3 \end{bmatrix}$
$3 \times 3$ examples:
Upper triangular: $\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$
Lower triangular: $\begin{bmatrix} 1 & 0 & 0 \\ 4 & 5 & 0 \\ 7 & 8 & 9 \end{bmatrix}$
Upper triangular matrix:
$U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}$
Lower triangular matrix:
$L = \begin{bmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{bmatrix}$
Upper Triangular Matrix
Definition:
A square matrix with zeros below the main diagonal. All elements $a_{ij} = 0$ where $i > j$.
General form of an upper triangular matrix:
$U = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix}$
where the diagonal elements $u_{ii}$ can be any real numbers.
$2 \times 2$ example:
$\begin{bmatrix} 4 & 2 \\ 0 & 3 \end{bmatrix}$
$3 \times 3$ example:
$\begin{bmatrix} 1 & 5 & 3 \\ 0 & 4 & 2 \\ 0 & 0 & 6 \end{bmatrix}$
Lower Triangular Matrix
Definition:
A square matrix with zeros above the main diagonal. All elements $a_{ij} = 0$ where $i < j$.
General form of a lower triangular matrix:
$L = \begin{bmatrix} l_{11} & 0 & 0 & \cdots & 0 \\ l_{21} & l_{22} & 0 & \cdots & 0 \\ l_{31} & l_{32} & l_{33} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn} \end{bmatrix}$
where the diagonal elements $l_{ii}$ can be any real numbers.
$2 \times 2$ example:
$\begin{bmatrix} 3 & 0 \\ 2 & 4 \end{bmatrix}$
$3 \times 3$ example:
$\begin{bmatrix} 1 & 0 & 0 \\ 4 & 2 & 0 \\ 7 & 5 & 3 \end{bmatrix}$
Identity Matrix
Definition:
A square matrix with 1s on the main diagonal and 0s elsewhere.
A square matrix of size $n \times n$ with 1s on the main diagonal and 0s elsewhere:
$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$
Identity matrix of size $2 \times 2$:
$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
Identity matrix of size $3 \times 3$:
$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
Identity matrix of size $4 \times 4$:
$I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$
Anti-symmetric (Skew-symmetric) Matrix
Definition:
A square matrix that equals the negative of its transpose: $A = -A^T$. All diagonal elements must be zero.
General form:
$A = \begin{bmatrix} 0 & a_{12} & a_{13} & \cdots & a_{1n} \\ -a_{12} & 0 & a_{23} & \cdots & a_{2n} \\ -a_{13} & -a_{23} & 0 & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{1n} & -a_{2n} & -a_{3n} & \cdots & 0 \end{bmatrix}$
where $a_{ij} = -a_{ji}$ for all $i, j$.
$2 \times 2$ example:
$\begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}$
$3 \times 3$ example:
$\begin{bmatrix} 0 & 3 & 1 \\ -3 & 0 & 2 \\ -1 & -2 & 0 \end{bmatrix}$
Diagonal Matrix
Definition:
A square matrix whose elements off the main diagonal are all zero.
A diagonal matrix of size $n \times n$ contains arbitrary values on the main diagonal:
$D_n = \begin{bmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{bmatrix}$
Symmetric Matrix
Definition:
A matrix equal to its transpose: $A = A^T$.
A symmetric matrix's elements mirror across the main diagonal:
$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{12} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{13} & a_{23} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \cdots & a_{nn} \end{bmatrix}$
where $a_{12} = a_{21}, a_{13} = a_{31}, \ldots, a_{ij} = a_{ji}$
Elementary Matrix
Definition:
A matrix representing one elementary row operation applied to the identity matrix
$E = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ swaps rows 1 and 2
$E = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ multiplies row 1 by 2
$E = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ adds 3 times row 1 to row 2
Orthogonal Matrix
Definition:
A square matrix $A$ where $A^T = A^{-1}$; equivalently $AA^T = A^TA = I$
$\det(A) = \pm 1$
$A^TA = AA^T = I$
$(AB)^T(AB) = I$ for orthogonal $A, B$
Rotation matrices:
$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
Reflection matrices:
$\begin{bmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{bmatrix}$
Preserve lengths and angles
Used in rotations and reflections
Important in orthogonal transformations
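A NumPy sketch checking the defining property $Q^TQ = I$, the determinant, and length preservation for a rotation matrix:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))    # True: Q^T Q = I
print(np.isclose(np.linalg.det(Q), 1.0))  # True: rotations have det +1
v = np.array([3.0, 4.0])
print(np.linalg.norm(Q @ v))              # 5.0 -- lengths are preserved
```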
Scalar Matrix
Definition:
A square matrix in which all elements are zero except those on the main diagonal, which are all equal to the same constant.
A scalar matrix is a diagonal matrix where all diagonal entries are equal:
$\lambda I = \begin{bmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda \end{bmatrix}$
where $\lambda$ is any real number.
A $2 \times 2$ scalar matrix example:
$\begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}$
A $3 \times 3$ scalar matrix example:
$\begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}$
Singular Matrix
Definition:
A square matrix with $\det(A) = 0$
$\operatorname{rank}(A) < n$ for an $n \times n$ matrix
The system $A\mathbf{x} = \mathbf{b}$ has no unique solution
Non-invertible: $A^{-1}$ does not exist
$\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$ has dependent rows/columns
Positive Definite Matrix
Definition:
A symmetric matrix $A$ where $\mathbf{x}^T A \mathbf{x} > 0$ for all nonzero $\mathbf{x}$
$\lambda_i > 0$ for all eigenvalues
$A^{-1}$ exists and is positive definite
$\mathbf{x}^T A \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$
$\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$ with eigenvalues 1, 3
$\det(A_k) > 0$ for all leading principal minors $A_k$
A Cholesky decomposition exists: $A = LL^T$
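A NumPy sketch of the eigenvalue and Cholesky tests on the example matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
print(np.linalg.eigvalsh(A))    # [1. 3.] -- all positive, so A is positive definite
L = np.linalg.cholesky(A)       # succeeds only for positive definite matrices
print(np.allclose(L @ L.T, A))  # True: A = L L^T
```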
Block Matrix
Definition:
A matrix partitioned into submatrices $A_{ij}$
$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$
where each $A_{ij}$ is a matrix
Block multiplication: $(AB)_{ij} = \sum_k A_{ik} B_{kj}$
$\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} P & Q \\ R & S \end{bmatrix}$ (if it exists)
Sparse Matrix
Definition:
A matrix with mostly zero elements, typically $O(n)$ nonzero elements
Storage schemes keep only the nonzero elements with their indices:
- Compressed Row Storage (CRS)
- Compressed Column Storage (CCS)
$\begin{bmatrix} 1 & 0 & 2 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ 5 & 0 & 0 & 0 \end{bmatrix}$
Applications:
- Network adjacency matrices
- Finite element analysis
- Large system solving
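A sketch of Compressed Row Storage on the example matrix, assuming SciPy is available:

```python
import numpy as np
from scipy.sparse import csr_matrix

dense = np.array([[1, 0, 2, 0],
                  [0, 0, 3, 0],
                  [0, 0, 0, 4],
                  [5, 0, 0, 0]])
A = csr_matrix(dense)  # stores only the 5 nonzeros plus index arrays
print(A.nnz)           # 5
print(A.data)          # nonzero values, row by row: [1 2 3 4 5]
print(A.indices)       # their column indices: [0 2 2 3 0]
```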
Matrix Operations
Transposition
Definition:
An operation that flips a matrix over its main diagonal, switching rows and columns: $(A^T)_{ij} = A_{ji}$
For a matrix $A$ of size $m \times n$, its transpose $A^T$ has size $n \times m$:
$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix} \to A^T = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \\ a_{13} & a_{23} \end{bmatrix}$
$(A^T)^T = A, \quad (AB)^T = B^T A^T, \quad (A + B)^T = A^T + B^T$
Matrix Addition
Definition:
For matrices $A, B$ of the same size: $(A + B)_{ij} = A_{ij} + B_{ij}$
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$
Commutative: $A + B = B + A$. Associative: $(A + B) + C = A + (B + C)$
Scalar Addition (Matrix)
Definition:
For scalar $c$ and matrix $A$: $(c + A)_{ij} = c + A_{ij}$
$2 + \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 5 & 6 \end{bmatrix}$
Commutative: $c + A = A + c$
Scalar Multiplication (Matrix)
Definition:
For scalar $c$ and matrix $A$: $(cA)_{ij} = cA_{ij}$
$2\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 6 & 8 \end{bmatrix}$
Associative: $a(bA) = (ab)A$. Distributive: $a(A + B) = aA + aB$ and $(a + b)A = aA + bA$
Matrix Multiplication
Definition:
For matrices $A_{m \times n}, B_{n \times p}$: $(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 1(5) + 2(7) & 1(6) + 2(8) \\ 3(5) + 4(7) & 3(6) + 4(8) \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$
Not commutative: $AB \neq BA$ in general. Associative: $(AB)C = A(BC)$. Distributive: $A(B + C) = AB + AC$
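A NumPy sketch reproducing the worked product and illustrating non-commutativity:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A @ B)                         # [[19 22] [43 50]] -- matches the worked example
print(np.array_equal(A @ B, B @ A))  # False: matrix multiplication is not commutative
```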
Matrix Properties
Determinant
Definition:
For a square matrix $A$, denoted $\det(A)$ or $|A|$
$\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$
$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$
For square matrices: $\det(AB) = \det(A)\det(B)$, $\det(A^T) = \det(A)$, and $A$ is invertible $\iff \det(A) \neq 0$
Inverse Matrix
Definition:
For a square matrix $A$, its inverse $A^{-1}$ satisfies $AA^{-1} = A^{-1}A = I$
For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, if $\det(A) \neq 0$:
$A^{-1} = \dfrac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$
$(A^{-1})^{-1} = A, \quad (AB)^{-1} = B^{-1}A^{-1}, \quad \det(A^{-1}) = \dfrac{1}{\det(A)}$
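A NumPy sketch of the $2 \times 2$ inverse formula:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A_inv = np.linalg.inv(A)  # equals 1/(ad - bc) [[d, -b], [-c, a]] for the 2x2 case
print(A_inv)              # [[-2.   1. ] [ 1.5 -0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^{-1} = I
```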
Rank
Definition:
The number of linearly independent rows (or columns), denoted $\operatorname{rank}(A)$
$\operatorname{rank}(A) \leq \min(m, n)$ for $A_{m \times n}$, $\operatorname{rank}(A) = \operatorname{rank}(A^T)$, $\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A), \operatorname{rank}(B))$
Full rank: the matrix has the maximum possible rank, $\operatorname{rank}(A) = \min(m, n)$
Trace
Definition:
The sum of the main diagonal elements: $\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$
$\operatorname{tr}\begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} = 2 + 3 = 5$
$\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B), \quad \operatorname{tr}(AB) = \operatorname{tr}(BA), \quad \operatorname{tr}(cA) = c\operatorname{tr}(A)$
Matrix Size
Definition:
A matrix with $m$ rows and $n$ columns is denoted $A_{m \times n}$ or $A \in \mathbb{R}^{m \times n}$
Square matrix: $m = n$
Rectangular matrix: $m \neq n$
Column vector: $n = 1$
Row vector: $m = 1$
For multiplication $AB$: $A_{m \times n} B_{n \times p} = C_{m \times p}$
Eigenvalues
Definition:
A scalar $\lambda$ satisfying $A\mathbf{v} = \lambda\mathbf{v}$ for some nonzero vector $\mathbf{v}$ (an eigenvector)
Found by solving the characteristic equation:
$\det(A - \lambda I) = 0$
$\operatorname{tr}(A) = \sum \lambda_i, \quad \det(A) = \prod \lambda_i, \quad \lambda(A^{-1}) = \dfrac{1}{\lambda(A)}$ if $A$ is invertible
For $A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$:
$\det\begin{bmatrix} 3 - \lambda & 1 \\ 0 & 2 - \lambda \end{bmatrix} = 0 \implies \lambda = 2, 3$
Eigenvectors
Definition:
A nonzero vector $\mathbf{v}$ satisfying $A\mathbf{v} = \lambda\mathbf{v}$ for eigenvalue $\lambda$
$\mathbf{v}_1, \mathbf{v}_2$ with distinct $\lambda_1, \lambda_2$ are linearly independent
$k\mathbf{v}$ is also an eigenvector if $\mathbf{v}$ is an eigenvector (for $k \neq 0$)
For $A = \begin{bmatrix} 3 & 1 \\ 0 & 2 \end{bmatrix}$ with $\lambda = 3$:
$\begin{bmatrix} 0 & 1 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \implies \mathbf{v} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$
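A NumPy sketch computing eigenpairs of the example matrix and verifying $A\mathbf{v} = \lambda\mathbf{v}$:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues))  # [2. 3.]
# Columns of `eigenvectors` are the eigenvectors; check A v = lambda v for each pair
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True, True
```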
Matrix Transformations
Echelon Form
Definition:
A matrix where each row's leading nonzero entry (pivot) is to the right of the pivots in the rows above
$\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$
- The leading entry in any row is to the right of the leading entries in rows above
- All zero rows are at the bottom
- Each leading entry has zeros below it
Reduced Row Echelon Form
Definition:
Row echelon form where each pivot is 1 and is the only nonzero entry in its column
$\begin{bmatrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \end{bmatrix}$
- Leading 1s (pivots) are the only nonzero entries in their columns
- Each pivot is 1
- Unique for each matrix
- Used in solving systems of equations
Adjoint
Definition:
For a matrix $A$, the adjoint $\operatorname{adj}(A)$ is the transpose of the cofactor matrix
For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$:
$\operatorname{adj}(A) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$
$A\operatorname{adj}(A) = \operatorname{adj}(A)A = \det(A)I, \quad A^{-1} = \dfrac{1}{\det(A)}\operatorname{adj}(A)$ when $\det(A) \neq 0$
LU Decomposition
Definition:
Matrix $A = LU$ where $L$ is lower triangular and $U$ is upper triangular
$\begin{bmatrix} 2 & 1 \\ 8 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 4 & 1 \end{bmatrix}\begin{bmatrix} 2 & 1 \\ 0 & 3 \end{bmatrix}$
Uses:
- Solve $A\mathbf{x} = \mathbf{b}$ efficiently
- Compute the determinant as the product of the diagonal entries
- Factorize once, solve for multiple $\mathbf{b}$
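A sketch using SciPy's LU routine, assuming SciPy is available (it pivots for stability and also returns a permutation $P$, so $A = PLU$):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[2.0, 1.0],
              [8.0, 7.0]])
P, L, U = lu(A)                   # L is unit lower triangular, U is upper triangular
print(np.allclose(P @ L @ U, A))  # True
print(np.prod(np.diag(U)))        # -6.0: equals det(A) = 6 up to the permutation sign
```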
QR Decomposition
Definition:
Matrix $A = QR$ where $Q$ is orthogonal ($Q^TQ = I$) and $R$ is upper triangular
$Q^TQ = QQ^T = I$
$R_{ii} \geq 0$ for the standard QR
Uses:
- Solve least squares problems
- Compute eigenvalues
- Solve systems of equations
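A NumPy sketch of QR and its use for least squares (the data in `b` is illustrative):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: columns of Q are orthonormal
print(np.allclose(Q @ R, A))            # True: A = QR
# Least squares for overdetermined A x ~ b: solve R x = Q^T b
b = np.array([2.0, 1.0, 1.0])
x = np.linalg.solve(R, Q.T @ b)
print(x)  # [1. 1.]
```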
Diagonalization
Definition:
Matrix $A = PDP^{-1}$ where $D$ is a diagonal matrix of eigenvalues
Matrix $A$ is diagonalizable if:
- $n$ linearly independent eigenvectors exist
- Geometric multiplicity equals algebraic multiplicity
$D = \begin{bmatrix} \lambda_1 & 0 & \cdots \\ 0 & \lambda_2 & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$
where the columns of $P$ are the eigenvectors
Uses:
- Compute $A^n$ easily: $A^n = PD^nP^{-1}$
- Solve systems of differential equations
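A NumPy sketch of diagonalization and the $A^n = PD^nP^{-1}$ shortcut, using the eigenvalue example above:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigenvalues, P = np.linalg.eig(A)                # columns of P are eigenvectors
D = np.diag(eigenvalues)
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True: A = P D P^{-1}
# A^5 without repeated multiplication: A^n = P D^n P^{-1}
A5 = P @ np.diag(eigenvalues**5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```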
Matrix Applications
Augmented Matrix
Definition:
The matrix $[A \mid \mathbf{b}]$ representing the system $A\mathbf{x} = \mathbf{b}$
The system $x + 2y = 5$, $3x - y = 1$ becomes:
$\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & -1 & 1 \end{array}\right]$
Uses:
- Solve systems using row operations
- Find the inverse matrix
- Gaussian elimination
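A NumPy sketch solving the example system behind the augmented matrix:

```python
import numpy as np

# x + 2y = 5, 3x - y = 1
A = np.array([[1.0, 2.0],
              [3.0, -1.0]])
b = np.array([5.0, 1.0])
print(np.linalg.solve(A, b))  # [1. 2.] -> x = 1, y = 2
```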