Probability Theory on Coin Toss Space

Outline

  • Finite probability spaces

  • Random variables, distributions, and expectations

  • Conditional expectations

  • Martingales

  • Markov processes

Finite Probability Spaces

Definition

A finite probability space is used to model a situation in which a random experiment with finitely many possible outcomes is conducted. Here the experiment is tossing a coin a finite number of times. For example, if we toss the coin twice, the set of all possible outcomes is

\Omega = \{HH,HT,TH,TT\}

Suppose that on each toss the probability of a head is $p$ and the probability of a tail is $q = 1 - p$. We assume the tosses are independent, so the probability of each individual element is the product of its toss probabilities: ${\mathbb P}(HH) = p^2$, ${\mathbb P}(HT) = {\mathbb P}(TH) = pq$, and ${\mathbb P}(TT) = q^2$. The subsets of $\Omega$ are called events. For example, the event "the first toss is a head" is $\{HH, HT\}$. The probability of an event is obtained by summing the probabilities of the elements in the event, i.e.,
 

{\mathbb P}(\text{First toss is a head})= {\mathbb P}(HT)+ {\mathbb P}(HH)
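To make this concrete, here is a minimal Python sketch of the two-toss space; the value of `p` and the chosen event are illustrative assumptions, not part of the definitions above.

```python
# Two independent coin tosses with P(H) = p and P(T) = q = 1 - p.
# The value p = 0.4 is an arbitrary illustrative choice.
p = 0.4
q = 1 - p

# Each outcome's probability is the product of its toss probabilities.
P = {"HH": p * p, "HT": p * q, "TH": q * p, "TT": q * q}

# The event "the first toss is a head" is the subset {HH, HT}.
event = {"HH", "HT"}
prob = sum(P[outcome] for outcome in event)
print(prob)  # p*p + p*q = p(p + q) = p
```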

Definition

A finite probability space consists of a sample space $\Omega$ and a probability measure $\mathbb P$. The sample space $\Omega$ is a nonempty finite set and the probability measure $\mathbb P$ is a function that assigns to each element $\omega$ of $\Omega$ a number ${\mathbb P}(\omega)$ in $[0,1]$ so that

\displaystyle\sum_{\omega \in \Omega} {\mathbb P} (\omega)=1

An event $A$ is a subset of $\Omega$ and the probability of $A$ is defined by

{\mathbb P}(A) =\displaystyle\sum_{\omega \in A} {\mathbb P} (\omega)
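As a quick sanity check of this definition, the following sketch (with `n` and `p` as arbitrary illustrative choices) enumerates the sample space of n independent tosses, verifies that the probabilities sum to 1, and computes the probability of an event by summation.

```python
from itertools import product

# Arbitrary illustrative choices for the number of tosses and P(H).
n, p = 3, 0.4
q = 1 - p

# Sample space: all H/T strings of length n.
Omega = ["".join(t) for t in product("HT", repeat=n)]

# Independence: P(omega) = p^{#heads} * q^{#tails}.
P = {w: p ** w.count("H") * q ** w.count("T") for w in Omega}

# The probabilities must sum to 1 over the whole sample space.
assert abs(sum(P.values()) - 1.0) < 1e-12

# P(A) by summation, for the event A = "at least two heads".
A = [w for w in Omega if w.count("H") >= 2]
print(sum(P[w] for w in A))
```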

Random Variables, Distributions, and Expectations

Definition Let $(\Omega,{\mathbb P})$ be a finite probability space. A random variable is a real-valued function defined on $\Omega$.

Definition Let $X$ be a random variable defined on a finite probability space $(\Omega,{\mathbb P})$. The expectation (or expected value) of $X$ is defined by

{\mathbb E}[X] = \sum_{\omega\in\Omega} X(\omega)\, {\mathbb P}(\omega)

The variance of $X$ is defined by

\text{Var}[X] = {\mathbb E}[(X-{\mathbb E}[X])^2]
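Both definitions can be evaluated by direct summation over $\Omega$. The sketch below is an illustration in which $X$ is taken (as an assumption for the example) to be the number of heads in two independent tosses.

```python
p = 0.4
q = 1 - p
P = {"HH": p * p, "HT": p * q, "TH": q * p, "TT": q * q}

# X = number of heads: a real-valued function on Omega.
X = {w: w.count("H") for w in P}

# E[X] = sum of X(omega) * P(omega) over Omega.
EX = sum(X[w] * P[w] for w in P)

# Var[X] = E[(X - E[X])^2].
VarX = sum((X[w] - EX) ** 2 * P[w] for w in P)

print(EX, VarX)  # equals 2p and 2pq for two independent tosses
```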

Properties: For random variables $X$, $Y$ and real constants $c_1$, $c_2$, the expectation and variance satisfy

{\mathbb E}[c_1 X+ c_2Y] = c_1{\mathbb E}[X]+ c_2{\mathbb E}[Y]
\text{Var}[ X] = {\mathbb E}[X^2]- {\mathbb E}[X]^2
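The second identity follows from the first by expanding the square in the definition of the variance and using linearity:

\text{Var}[X] = {\mathbb E}\big[X^2 - 2X\,{\mathbb E}[X] + {\mathbb E}[X]^2\big] = {\mathbb E}[X^2] - 2\,{\mathbb E}[X]^2 + {\mathbb E}[X]^2 = {\mathbb E}[X^2] - {\mathbb E}[X]^2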

Theorem (Jensen's inequality): Let $X$ be a random variable on a finite probability space and let $\phi$ be a convex function. Then

{\mathbb E}[\phi(X)]\geq \phi({\mathbb E}[X])
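For example, taking the convex function $\phi(x) = x^2$ gives ${\mathbb E}[X^2] \geq {\mathbb E}[X]^2$, which is precisely the statement that $\text{Var}[X] \geq 0$.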

By Abdellah Chkifa