
Mathematical Expectation

  • Understand the concept of expectation in probability theory.
  • Compute the expectation of discrete and continuous random variables.
  • Learn the properties of expectation, including linearity.
  • Explore the expectation of functions of random variables.
  • Visualize expectation using probability distributions.

Definition of Expectation

The expected value (or mean) of a random variable is a measure of its central tendency.

  • For a discrete random variable \(X\) with probability mass function (PMF) \( P(X = x) \), expectation is defined as: \[ E[X] = \sum x P(X = x) \]
  • For a continuous random variable \(X\) with probability density function (PDF) \( f(x) \), expectation is given by: \[ E[X] = \int_{-\infty}^{\infty} x f(x) dx \]
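As a sketch, both definitions can be checked numerically in Python. The fair-die PMF and the exponential density with \( \lambda = 0.5 \) below are illustrative choices (they reappear in the examples and exercises later); the continuous integral is approximated with a simple midpoint rule.

```python
import math

# Discrete case: E[X] = sum over x of x * P(X = x).
# Illustrative PMF: a fair six-sided die.
pmf = {x: 1 / 6 for x in range(1, 7)}
e_discrete = sum(x * p for x, p in pmf.items())
print(e_discrete)  # 3.5

# Continuous case: E[X] = integral of x * f(x) dx.
# Illustrative density: exponential, f(x) = lam * exp(-lam * x), lam = 0.5.
# Midpoint-rule integration on [0, 60]; the tail beyond is negligible.
lam = 0.5
n, hi = 200_000, 60.0
dx = hi / n
e_continuous = sum(
    (i + 0.5) * dx * lam * math.exp(-lam * (i + 0.5) * dx) * dx
    for i in range(n)
)
print(e_continuous)  # close to 1 / lam = 2
```

The numerical integral agrees with the closed form \( E[X] = 1/\lambda \) for the exponential distribution.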

Properties of Expectation

Expectation has several useful properties:

  • Linearity: \( E[aX + bY] = aE[X] + bE[Y] \). Expectation is linear.
  • Expectation of a constant: \( E[c] = c \). The expectation of a constant is the constant itself.
  • Expectation of an indicator function: \( E[I_A] = P(A) \). For an indicator function \( I_A \) of an event \( A \), the expectation equals the probability of that event.
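All three properties can be verified on a small discrete example. The joint PMF below is an illustrative assumption chosen so the sums are easy to follow; the checks themselves hold for any valid distribution.

```python
# Check the three properties on a small joint PMF P(X = x, Y = y).
# The joint distribution below is an illustrative assumption.
joint = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}
a, b = 3.0, -2.0

# Linearity: E[aX + bY] = a E[X] + b E[Y].
E_X = sum(x * p for (x, y), p in joint.items())
E_Y = sum(y * p for (x, y), p in joint.items())
E_lin = sum((a * x + b * y) * p for (x, y), p in joint.items())
assert abs(E_lin - (a * E_X + b * E_Y)) < 1e-9

# E[c] = c: a constant random variable puts all its mass on c,
# so the sum collapses to c times the total probability (which is 1).
c = 7.0
assert abs(sum(c * p for p in joint.values()) - c) < 1e-9

# E[I_A] = P(A) for the event A = {X = 1}.
E_indicator = sum((1 if x == 1 else 0) * p for (x, y), p in joint.items())
P_A = sum(p for (x, y), p in joint.items() if x == 1)
assert abs(E_indicator - P_A) < 1e-9

print(E_X, E_Y, E_lin)
```

Here \( E[X] = 0.7 \), \( E[Y] = 0.6 \), so linearity gives \( E[3X - 2Y] = 3(0.7) - 2(0.6) = 0.9 \).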

Proof

By definition (discrete case), expectation is:

\[ E[X] = \sum_x x \, P(X = x) \]

For two random variables \(X\) and \(Y\) with joint PMF \( P(X = x, Y = y) \), and constants \(a, b\):

\[ E[aX + bY] = \sum_x \sum_y (ax + by) \, P(X = x, Y = y) \]

Splitting the sum and marginalizing out the other variable, using \( \sum_y P(X = x, Y = y) = P(X = x) \) and \( \sum_x P(X = x, Y = y) = P(Y = y) \):

\[ E[aX + bY] = a \sum_x x \, P(X = x) + b \sum_y y \, P(Y = y) = aE[X] + bE[Y] \]

Expectation of Functions of a Random Variable

For a function \( g(X) \) of a random variable \( X \), expectation is:

\[ E[g(X)] = \sum_x g(x) \, P(X = x) \quad \text{(Discrete)} \] \[ E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x) \, dx \quad \text{(Continuous)} \]

Special cases:

  • \( E[X^2] \) is used to compute variance.
  • \( E[e^X] \) is important in financial models.
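This rule (sometimes called the law of the unconscious statistician) says we never need the distribution of \( g(X) \) itself, only the PMF of \( X \). A minimal sketch, again using a fair die as the illustrative distribution:

```python
import math

# E[g(X)] = sum over x of g(x) * P(X = x), for a fair die (illustrative).
pmf = {x: 1 / 6 for x in range(1, 7)}

# g(x) = x^2: E[X^2] = (1 + 4 + 9 + 16 + 25 + 36) / 6 = 91 / 6.
E_X2 = sum(x**2 * p for x, p in pmf.items())

# g(x) = e^x: E[e^X], the moment generating function evaluated at t = 1.
E_expX = sum(math.exp(x) * p for x, p in pmf.items())

print(E_X2, E_expX)
```

Note that \( E[X^2] = 91/6 \approx 15.17 \), which is not \( (E[X])^2 = 12.25 \); the gap between the two is exactly the variance, as the next section shows.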

Expectation and Variance

The variance measures how much \( X \) deviates from its expected value:

\[ Var(X) = E\left[(X - E[X])^2\right] = E[X^2] - (E[X])^2 \]

Higher moments such as skewness and kurtosis measure asymmetry and tail behavior.
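The two variance formulas above always agree; a quick sketch checks this on a Bernoulli variable (the choice \( p = 0.3 \) is illustrative):

```python
# Two equivalent variance formulas, checked on Bernoulli(p).
# p = 0.3 is an illustrative choice; Var(X) = p(1 - p) = 0.21.
p = 0.3
pmf = {0: 1 - p, 1: p}

mu = sum(x * q for x, q in pmf.items())                    # E[X] = p
var_def = sum((x - mu) ** 2 * q for x, q in pmf.items())   # E[(X - E[X])^2]
var_short = sum(x**2 * q for x, q in pmf.items()) - mu**2  # E[X^2] - (E[X])^2

assert abs(var_def - var_short) < 1e-12
print(var_def)
```

The shortcut form \( E[X^2] - (E[X])^2 \) is usually easier to compute by hand, which is why it appears in the worked example below.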

Visualization of Expectation
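One way to picture expectation is as the balance point of a distribution: the sample mean of repeated draws settles at \( E[X] \) by the law of large numbers. A plot-free Monte Carlo sketch, using a fair die as the illustrative example (the seed is fixed only for reproducibility):

```python
import random

# Monte Carlo sketch: the sample mean of fair-die rolls
# converges to E[X] = 3.5 (law of large numbers).
random.seed(42)  # fixed seed, for reproducibility only
rolls = [random.randint(1, 6) for _ in range(100_000)]
sample_mean = sum(rolls) / len(rolls)
print(sample_mean)  # close to 3.5
```

Plotting a histogram of `rolls` with a vertical line at `sample_mean` would show the mean sitting at the distribution's center of mass.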

Examples

Example 1: A fair die is rolled. Compute \( E[X] \) and \( Var(X) \).

\[ E[X] = \sum_x x \, P(X = x) = \frac{1+2+3+4+5+6}{6} = 3.5 \] \[ Var(X) = \frac{1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2}{6} - (3.5)^2 = \frac{91}{6} - 12.25 \approx 2.92 \]

Exercises

  • Question 1: Compute \( E[X] \) for a Bernoulli random variable \( X \) with \( P(X=1) = p \).
  • Question 2: If \( X \) follows an exponential distribution with \( \lambda = 0.5 \), compute \( E[X] \).
  • Question 3: Compute \( Var(X) \) for a Poisson random variable with \( \lambda = 4 \).
  • Question 4: If \( E[X] = 5 \) and \( E[Y] = 2 \), compute \( E[3X - 2Y] \).
  • Question 5: A uniform random variable \( X \) is defined over \( [0, 10] \). Compute \( E[X] \).
  • Answer 1: \( E[X] = p \).
  • Answer 2: \( E[X] = \frac{1}{\lambda} = 2 \).
  • Answer 3: \( Var(X) = \lambda = 4 \).
  • Answer 4: \( E[3X - 2Y] = 3(5) - 2(2) = 11 \).
  • Answer 5: \( E[X] = \frac{a + b}{2} = \frac{0 + 10}{2} = 5 \).
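The answers above can be spot-checked numerically. This sketch verifies each one; where a question leaves a parameter unspecified (the Bernoulli \( p \) in Question 1), an illustrative value is substituted, and the exponential and Poisson answers are checked by direct summation/integration rather than by quoting the closed forms.

```python
import math

# Q1: Bernoulli(p): E[X] = p (p = 0.4 is an illustrative value).
p = 0.4
assert abs(((1 - p) * 0 + p * 1) - p) < 1e-12

# Q2: Exponential(lam = 0.5): E[X] = 1 / lam = 2, via midpoint integration.
lam, n, hi = 0.5, 200_000, 60.0
dx = hi / n
e_exp = sum(
    (i + 0.5) * dx * lam * math.exp(-lam * (i + 0.5) * dx) * dx
    for i in range(n)
)
assert abs(e_exp - 2.0) < 1e-3

# Q3: Poisson(lam = 4): Var(X) = lam, summing the PMF up to k = 100
# (the truncated tail mass is negligible).
lam = 4.0
pois = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(101)]
mu = sum(k * q for k, q in enumerate(pois))
var = sum(k**2 * q for k, q in enumerate(pois)) - mu**2
assert abs(var - 4.0) < 1e-6

# Q4: linearity: E[3X - 2Y] = 3 * 5 - 2 * 2 = 11.
assert 3 * 5 - 2 * 2 == 11

# Q5: Uniform[0, 10]: E[X] = (0 + 10) / 2 = 5.
assert (0 + 10) / 2 == 5.0

print("all answers check out")
```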

