Explore probability formulas with explanations and examples.
Basic Probability Rules

Simple Probability: $P(A) = \frac{\text{Number of favorable outcomes}}{\text{Total number of possible outcomes}}$
Probability Range of an Event: $0 \leq P(A) \leq 1$
Complement Rule: $P(A') + P(A) = 1$

Conditional Probability

Basic Formula: $P(A \mid B) = \frac{P(A \cap B)}{P(B)}$
Bayes' Theorem: $P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}$

Combining Two Events

Probability of Both Events Occurring (Multiplication Rule, independent events): $P(A \cap B) = P(A) \times P(B)$
Probability of Either Event Occurring (Addition Rule): $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
Probability of Neither Event Occurring (independent events): $P(\neg A \cap \neg B) = P(\neg A) \times P(\neg B)$
Probability of Exactly One Event Occurring: $P(\text{exactly one of } A \text{ or } B) = P(A \cap \neg B) + P(\neg A \cap B)$
General Formula for Multiple Independent Events: $P(A \cap B \cap C) = P(A) \times P(B) \times P(C)$

Disjoint (Mutually Exclusive) Events

Probability of Both Disjoint Events Occurring: $P(A \cap B) = 0$
Probability of Either Disjoint Event Occurring (Addition Rule): $P(A \cup B) = P(A) + P(B)$
Probability of Neither Disjoint Event Occurring: $P(\neg A \cap \neg B) = 1 - P(A) - P(B)$
Conditional Probability for Disjoint Events: $P(A \mid B) = 0$ and $P(B \mid A) = 0$
Generalization to Multiple Disjoint Events: $P(A \cup B \cup C \cup \ldots) = P(A) + P(B) + P(C) + \ldots$

Binomial Distribution

Probability Mass Function (PMF): $P(X = k) = \binom{n}{k} p^{k} (1 - p)^{n - k}$
Cumulative Distribution Function (CDF): $P(X \leq k) = \sum_{i=0}^{k} \binom{n}{i} p^{i} (1 - p)^{n - i}$
Mean (Expected Value): $\mu = E[X] = np$
Variance: $\sigma^2 = \operatorname{Var}(X) = np(1 - p)$
Standard Deviation: $\sigma = \sqrt{np(1 - p)}$
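These rules are easiest to internalize with numbers. Below is a minimal Python sketch (assuming `scipy` is installed) that applies Bayes' theorem to a hypothetical screening example and checks the binomial formulas against `scipy.stats`; the prevalence, sensitivity, and trial counts are illustrative values chosen for the example, not taken from this reference.

```python
from math import comb, isclose
from scipy.stats import binom

# Bayes' theorem with illustrative values:
# prevalence P(D) = 0.01, sensitivity P(+|D) = 0.95, false-positive rate P(+|not D) = 0.05
p_d, p_pos_d, p_pos_not_d = 0.01, 0.95, 0.05
p_pos = p_pos_d * p_d + p_pos_not_d * (1 - p_d)   # law of total probability
p_d_pos = p_pos_d * p_d / p_pos                   # Bayes' theorem
print(f"P(D | +) = {p_d_pos:.4f}")                # roughly 0.16

# Binomial formulas for n = 10 trials, p = 0.3, k = 4 successes, checked against scipy.stats
n, p, k = 10, 0.3, 4
assert isclose(comb(n, k) * p**k * (1 - p)**(n - k), binom.pmf(k, n, p))
assert isclose(sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1)), binom.cdf(k, n, p))
assert isclose(n * p, binom.mean(n, p)) and isclose(n * p * (1 - p), binom.var(n, p))
print("binomial PMF, CDF, mean and variance agree with scipy.stats")
```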
Poisson Distribution

Probability Mass Function (PMF): $P(X = k) = \frac{e^{-\lambda} \lambda^{k}}{k!}$
Cumulative Distribution Function (CDF): $P(X \leq k) = e^{-\lambda} \sum_{i=0}^{k} \frac{\lambda^{i}}{i!}$
Mean (Expected Value): $\mu = E[X] = \lambda$
Variance: $\sigma^2 = \operatorname{Var}(X) = \lambda$
Standard Deviation: $\sigma = \sqrt{\lambda}$

Geometric Distribution (number of trials until the first success)

Probability Mass Function (PMF): $P(X = k) = (1 - p)^{k - 1} p$
Cumulative Distribution Function (CDF): $P(X \leq k) = 1 - (1 - p)^{k}$
Mean (Expected Value): $\mu = E[X] = \frac{1}{p}$
Variance: $\sigma^2 = \operatorname{Var}(X) = \frac{1 - p}{p^{2}}$
Standard Deviation: $\sigma = \sqrt{\frac{1 - p}{p^{2}}}$

Negative Binomial Distribution (number of trials until the $r$-th success)

Probability Mass Function (PMF): $P(X = k) = \binom{k - 1}{r - 1} p^{r} (1 - p)^{k - r}$
Cumulative Distribution Function (CDF): $P(X \leq k) = I_{p}(r, k - r + 1)$, where $I_{p}$ is the regularized incomplete beta function
Mean (Expected Value): $\mu = E[X] = \frac{r}{p}$
Variance: $\sigma^2 = \operatorname{Var}(X) = \frac{r(1 - p)}{p^{2}}$
Standard Deviation: $\sigma = \sqrt{\frac{r(1 - p)}{p^{2}}}$

Hypergeometric Distribution ($N$ items, $K$ successes, $n$ draws without replacement)

Probability Mass Function (PMF): $P(X = k) = \frac{\binom{K}{k} \binom{N - K}{n - k}}{\binom{N}{n}}$
Mean (Expected Value): $\mu = E[X] = n \left( \frac{K}{N} \right)$
Variance: $\sigma^2 = \operatorname{Var}(X) = n \left( \frac{K}{N} \right) \left( \frac{N - K}{N} \right) \left( \frac{N - n}{N - 1} \right)$
Standard Deviation: $\sigma = \sqrt{n \left( \frac{K}{N} \right) \left( \frac{N - K}{N} \right) \left( \frac{N - n}{N - 1} \right)}$
Probability of At Least $k$ Successes: $P(X \geq k) = 1 - \sum_{i=0}^{k - 1} P(X = i)$
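As a quick sanity check of the Poisson, geometric, negative binomial, and hypergeometric formulas above, the following sketch evaluates each PMF by hand and compares it with `scipy.stats` (again assuming `scipy` is available; all parameter values are illustrative). One parameterization difference to keep in mind: `scipy.stats.nbinom` counts failures before the $r$-th success, whereas the formula above counts total trials, so the sketch shifts by $r$.

```python
from math import comb, exp, factorial, isclose
from scipy.stats import poisson, geom, nbinom, hypergeom

# Poisson PMF for lambda = 2.5, k = 3
lam, k = 2.5, 3
assert isclose(exp(-lam) * lam**k / factorial(k), poisson.pmf(k, lam))

# Geometric (trials until the first success), p = 0.2, k = 5
p, k = 0.2, 5
assert isclose((1 - p)**(k - 1) * p, geom.pmf(k, p))
assert isclose(1 - (1 - p)**k, geom.cdf(k, p))

# Negative binomial (trials until the r-th success), r = 3, p = 0.4, k = 7 trials.
# scipy's nbinom counts failures before the r-th success, so k trials map to k - r failures.
r, p, k = 3, 0.4, 7
assert isclose(comb(k - 1, r - 1) * p**r * (1 - p)**(k - r), nbinom.pmf(k - r, r, p))
assert isclose(r / p, nbinom.mean(r, p) + r)   # E[trials] = E[failures] + r

# Hypergeometric: N = 50 items, K = 12 successes, n = 10 draws, k = 3 successes observed
N, K, n, k = 50, 12, 10, 3
assert isclose(comb(K, k) * comb(N - K, n - k) / comb(N, n), hypergeom.pmf(k, N, K, n))
print("Poisson, geometric, negative binomial and hypergeometric formulas agree with scipy.stats")
```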
Multinomial Distribution

Probability Mass Function (PMF): $P(X_1 = x_1, \dots, X_k = x_k) = \frac{n!}{x_1! \, x_2! \cdots x_k!} \, p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k}$
Mean (Expected Value) of Each Outcome: $E[X_i] = n p_i$
Variance of Each Outcome: $\operatorname{Var}(X_i) = n p_i (1 - p_i)$
Covariance Between Outcomes: $\operatorname{Cov}(X_i, X_j) = -n p_i p_j$
Correlation Coefficient Between Outcomes: $\rho_{ij} = \frac{\operatorname{Cov}(X_i, X_j)}{\sqrt{\operatorname{Var}(X_i) \operatorname{Var}(X_j)}} = \frac{-p_i p_j}{\sqrt{p_i (1 - p_i) \, p_j (1 - p_j)}}$

Discrete Uniform Distribution (on $\{a, a + 1, \dots, b\}$)

Probability Mass Function (PMF): $P(X = k) = \frac{1}{b - a + 1}$
Mean (Expected Value): $\mu = E[X] = \frac{a + b}{2}$
Variance: $\sigma^2 = \operatorname{Var}(X) = \frac{(b - a + 1)^2 - 1}{12}$
Standard Deviation: $\sigma = \sqrt{\frac{(b - a + 1)^2 - 1}{12}}$
Cumulative Distribution Function (CDF): $P(X \leq k) = \frac{k - a + 1}{b - a + 1}$ for $k = a, a + 1, \dots, b$

Negative Hypergeometric Distribution (number of draws until the $r$-th success; $N$ items, $K$ successes)

Probability Mass Function (PMF): $P(X = k) = \frac{\binom{k - 1}{r - 1} \binom{N - k}{K - r}}{\binom{N}{K}}$
Mean (Expected Value): $\mu = E[X] = \frac{r(N + 1)}{K + 1}$
Variance: $\sigma^2 = \frac{r(N + 1)(N - K)(K + 1 - r)}{(K + 1)^{2}(K + 2)}$
Standard Deviation: $\sigma = \sqrt{\frac{r(N + 1)(N - K)(K + 1 - r)}{(K + 1)^{2}(K + 2)}}$
Cumulative Distribution Function (CDF): $P(X \leq k) = 1 - \sum_{i=0}^{r - 1} \frac{\binom{K}{i} \binom{N - K}{k - i}}{\binom{N}{k}}$ (the probability of drawing at least $r$ successes in the first $k$ draws)

Logarithmic (Log-Series) Distribution

Probability Mass Function (PMF): $P(X = k) = -\frac{1}{\ln(1 - p)} \frac{p^{k}}{k}$
Mean (Expected Value): $\mu = E[X] = \frac{-p}{(1 - p) \ln(1 - p)}$
Variance: $\sigma^2 = \operatorname{Var}(X) = \frac{-p \, (p + \ln(1 - p))}{(1 - p)^{2} [\ln(1 - p)]^{2}}$
Standard Deviation: $\sigma = \sqrt{\operatorname{Var}(X)}$
Probability Generating Function: $G_X(s) = \frac{\ln(1 - ps)}{\ln(1 - p)}$
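The remaining families can be checked the same way. The sketch below compares the multinomial PMF, the discrete uniform mean and variance, and the logarithmic (log-series) PMF with `scipy.stats`; the die-roll counts and parameter values are illustrative, and `scipy.stats.randint` excludes its upper bound, so the interval $\{a, \dots, b\}$ becomes `randint(a, b + 1)`.

```python
from math import factorial, isclose, log, prod
from scipy.stats import multinomial, randint, logser

# Multinomial: n = 6 rolls of a fair die, observed counts x = (1, 2, 0, 1, 1, 1)
n, probs, x = 6, [1/6] * 6, [1, 2, 0, 1, 1, 1]
coef = factorial(n) // prod(factorial(xi) for xi in x)
pmf_manual = coef * prod(pi**xi for pi, xi in zip(probs, x))
assert isclose(pmf_manual, multinomial.pmf(x, n, probs))

# Discrete uniform on {a, ..., b}; scipy's randint excludes its upper bound, hence b + 1
a, b = 1, 6
u = randint(a, b + 1)
assert isclose((a + b) / 2, u.mean())
assert isclose(((b - a + 1)**2 - 1) / 12, u.var())

# Logarithmic (log-series) distribution, p = 0.6, k = 2
p, k = 0.6, 2
assert isclose(-1 / log(1 - p) * p**k / k, logser.pmf(k, p))
print("multinomial, discrete uniform and logarithmic formulas agree with scipy.stats")
```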