1. Geometric random variable
\begin{equation*}
P(X=k)=(1-p)^{k-1} p, \quad k=1,2,3, \cdots \tag{1}
\end{equation*}
Mean of $X$: $E(X)=\frac{1}{p}$
Memoryless Property (or Markovian property)
\begin{equation*}
P(X=m+n \mid X>m)=P(X=n), \quad m, n \geq 1 \tag{2}
\end{equation*}
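The memoryless property can be checked numerically from the pmf (1), using $P(X>m)=(1-p)^{m}$. A minimal Python sketch ($p=0.3$ and the choices of $m, n$ are arbitrary illustration values):

```python
# Check P(X = m + n | X > m) = P(X = n) for a geometric r.v. (pmf (1)).
p = 0.3  # arbitrary success probability, for illustration only

def pmf(k, p):
    """P(X = k) = (1 - p)^(k-1) * p, k = 1, 2, 3, ..."""
    return (1 - p) ** (k - 1) * p

def survival(m, p):
    """P(X > m) = (1 - p)^m."""
    return (1 - p) ** m

m, n = 4, 3
lhs = pmf(m + n, p) / survival(m, p)   # P(X = m + n | X > m)
rhs = pmf(n, p)                        # P(X = n)
print(abs(lhs - rhs) < 1e-12)          # True: the geometric r.v. is memoryless
```

Algebraically, the conditional probability $(1-p)^{m+n-1}p / (1-p)^{m}$ collapses to $(1-p)^{n-1}p$, which is exactly $P(X=n)$.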
2. Exponential Random Variable
The cumulative distribution function (cdf)
\begin{equation*}
F_{X}(x)=P(X \leq x)= \begin{cases}1-e^{-\lambda x}, & 0 \leq x<\infty \\ 0, & x<0\end{cases} \tag{3}
\end{equation*}
\begin{equation*}
P(X>x)=1-P(X \leq x)=e^{-\lambda x}, \quad x \geq 0 .
\end{equation*}
The probability density function (pdf)
\begin{equation*}
f_{X}(x)= \begin{cases}\lambda e^{-\lambda x}, & x>0 \\ 0, & x \leq 0\end{cases} \tag{4}
\end{equation*}
Mean of $X$: $E(X)=\frac{1}{\lambda}$
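A quick Monte Carlo sketch checking $E(X)=1/\lambda$ and $P(X>x)=e^{-\lambda x}$ (the rate $\lambda=2$ and threshold $x=0.7$ are arbitrary illustration values):

```python
import math
import random

# Sample EXP(lam) and compare the sample mean with 1/lam and the empirical
# survival frequency with exp(-lam * x); lam and x are illustrative.
random.seed(42)
lam = 2.0
samples = [random.expovariate(lam) for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(abs(mean - 1 / lam) < 0.02)  # True: sample mean is close to 1/lambda

x = 0.7
empirical = sum(s > x for s in samples) / len(samples)
print(abs(empirical - math.exp(-lam * x)) < 0.01)  # True: matches the cdf tail
```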
3. Stochastic Processes
A stochastic process is a collection of random variables $\{X(t): t \in T\}$. Note that $X(t)$ is a r.v. for each $t \in T$.
$T$ is called the index set of the process, and each $t \in T$ is called an index. Very often, the index $t$ is interpreted as time.
If $T$ is a countable set, then the stochastic process is called a discrete time process.
If $T$ is an interval of $\mathbb{R}$, then the stochastic process is called a continuous time process.
The set of all values that $X(t)$ may assume is called the state space, denoted by $S$.
$S$ can be a discrete state space (countable). In this case, the stochastic process is called a chain. E.g., a machine has only 2 states, working or failed.
$S$ can be a continuous state space. E.g., $X(t)$ represents the distance covered by a robot arm in time $t$.
There are 4 different types of stochastic processes:
discrete time, discrete state space processes (discrete time chain)
discrete time, continuous state space processes
continuous time, discrete state space processes (continuous time chain)
continuous time, continuous state space processes
4. Poisson Process
Poisson random variable
\begin{equation*}
P(X(t)=k)=\frac{e^{-\lambda t}(\lambda t)^{k}}{k!}, \quad k=0,1,2, \cdots \tag{5}
\end{equation*}
A collection of Poisson random variables $\{X(t): t \geq 0\}$ is a Poisson process. $\lambda$ is a parameter and is called the rate of the Poisson process.
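A small sketch evaluating the pmf (5) and checking that it sums to 1 with mean $\lambda t$ ($\lambda$ and $t$ are arbitrary illustration values):

```python
import math

# Evaluate the Poisson pmf (5); its values over k = 0, 1, 2, ... should sum
# to 1, and the mean of X(t) should equal lam * t (truncation at k = 100 is
# negligible for lam * t = 3).
def poisson_pmf(k, lam, t):
    return math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)

lam, t = 1.5, 2.0
probs = [poisson_pmf(k, lam, t) for k in range(100)]
print(abs(sum(probs) - 1) < 1e-12)                                    # True
print(abs(sum(k * p for k, p in enumerate(probs)) - lam * t) < 1e-9)  # True
```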
Superposition of Poisson Processes
The superposition of $n$ independent Poisson processes with rates $\lambda_{1}, \ldots, \lambda_{n}$ is a Poisson process with rate
\begin{equation*}
\lambda=\sum_{i=1}^{n} \lambda_{i}
\end{equation*}
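Superposition can be checked by simulation: merging two independent Poisson streams and counting events in $[0, T]$ should give a mean count of $(\lambda_{1}+\lambda_{2}) T$. A sketch (the rates and $T$ are arbitrary illustration values):

```python
import random

# Merge two independent Poisson processes (rates lam1, lam2) and check that
# the mean event count in [0, T] matches (lam1 + lam2) * T.
random.seed(7)
lam1, lam2, T = 2.0, 3.0, 1.0

def arrivals(lam, T):
    """Arrival times in [0, T] generated from EXP(lam) inter-arrival gaps."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(lam)
        if t > T:
            return times
        times.append(t)

counts = [len(arrivals(lam1, T)) + len(arrivals(lam2, T)) for _ in range(20_000)]
mean = sum(counts) / len(counts)
print(abs(mean - (lam1 + lam2) * T) < 0.1)  # True: merged rate is lam1 + lam2
```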
Decomposition of a Poisson Process
If each event of a Poisson process with rate $\lambda$ is independently classified as type $i$ with probability $p_{i}$, where $\sum_{i=1}^{n} p_{i}=1$, then the type-$i$ events form independent Poisson processes with rates $\lambda_{i}=p_{i} \lambda$.
5. Discrete Time Markov Chain (DTMC)
Markov property
\begin{equation*}
P\left(X_{n}=j \mid X_{n-1}=i, X_{n-2}=i_{2}, X_{n-3}=i_{3}, \cdots, X_{0}=i_{n}\right)=P\left(X_{n}=j \mid X_{n-1}=i\right) \tag{6}
\end{equation*}
One-Step Transition Probability
\begin{equation*}
p_{i j} \equiv p_{i j}(1)=P\left(X_{n}=j \mid X_{n-1}=i\right), \quad n \geq 1 \tag{7}
\end{equation*}
Transition Probability Matrix (TPM)
\begin{align*}
P=\left[p_{i j}\right] & =\left[\begin{array}{cccc}
p_{00} & p_{01} & p_{02} & \cdots \\
p_{10} & p_{11} & p_{12} & \cdots \\
p_{20} & p_{21} & p_{22} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{array}\right] \tag{8}\\
0 \leq p_{i j} & \leq 1, \quad i, j \in N \\
\sum_{j} p_{i j} & =1, \quad i \in N \tag{9}
\end{align*}
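A sketch that checks the two TPM conditions, i.e. every entry lies in $[0,1]$ and every row sums to 1 (the 3-state matrix is an arbitrary example):

```python
# A 3-state transition probability matrix with arbitrary example values;
# check conditions (8)-(9): entries in [0, 1] and rows summing to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def is_stochastic(P, tol=1e-12):
    """True iff P is a valid (row-)stochastic matrix."""
    return all(
        all(0 <= p <= 1 for p in row) and abs(sum(row) - 1) < tol
        for row in P
    )

print(is_stochastic(P))  # True
```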
Sojourn Time
Given a state $i$ of a DTMC $\left\{X_{n} \in S: n \in N\right\}$, the sojourn time $T_{i}$ of the state $i$ is the discrete r.v. that gives the no. of time steps for which the DTMC resides in state $i$ before transiting to a different state.
\begin{align*}
P\left(T_{i}>n\right) & =P\left(X_{n}=i \mid X_{n-1}=i\right) P\left(X_{n-1}=i \mid X_{n-2}=i\right) \cdots P\left(X_{2}=i \mid X_{1}=i\right) P\left(X_{1}=i \mid X_{0}=i\right) \\
& =p_{i i}^{n}
\end{align*}
\begin{align*}
P\left(T_{i}=n\right) & =\sum_{k=n}^{\infty} P\left(T_{i}=k\right)-\sum_{k=n+1}^{\infty} P\left(T_{i}=k\right) \\
& =P\left(T_{i}>n-1\right)-P\left(T_{i}>n\right) \\
& =p_{i i}^{n-1}-p_{i i}^{n}=p_{i i}^{n-1}\left(1-p_{i i}\right) .
\end{align*}
Thus, $T_{i}$ is a geometric r.v. with success probability $\left(1-p_{i i}\right)$.
\begin{equation*}
E\left(T_{i}\right)=\frac{1}{1-p_{i i}}
\end{equation*}
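The geometric sojourn time can be checked by simulation ($p_{ii}=0.8$ is an arbitrary illustration value, for which $E(T_{i})=1/(1-p_{ii})=5$):

```python
import random

# Simulate the sojourn time T_i of a DTMC state with self-loop probability
# p_ii and compare the sample mean with 1 / (1 - p_ii).
random.seed(0)
p_ii = 0.8

def sojourn():
    """Number of steps spent in state i before leaving (always at least 1)."""
    n = 1
    while random.random() < p_ii:   # remain in state i with probability p_ii
        n += 1
    return n

samples = [sojourn() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(abs(mean - 1 / (1 - p_ii)) < 0.1)  # True: E(T_i) = 1/(1 - p_ii) = 5
```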
6. Chapman-Kolmogorov Equation (C-K eqn)
\begin{equation*}
p_{i j}(m+n)=\sum_{k \in S} p_{i k}(m) p_{k j}(n) \quad \Longrightarrow \quad P(m+n)=P(m) P(n) .
\end{equation*}
Let $P(n)=\left[p_{i j}(n)\right]$ be the matrix of $n$-step transition probabilities. Note that $P(0)=I$.
\begin{equation*}
P(n)=P(n-1) \cdot P, \quad n \geq 1
\end{equation*}
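The C-K equation can be verified for a small chain by comparing $P(m+n)$ with $P(m) P(n)$, both built from repeated multiplication by $P$ (the 2-state TPM is an arbitrary example):

```python
# Verify the Chapman-Kolmogorov equation P(m + n) = P(m) P(n) for a DTMC.
def matmul(A, B):
    """Plain matrix product of two square matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matpow(P, n):
    """n-step TPM P(n) = P^n, starting from P(0) = I."""
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = matmul(R, P)
    return R

P = [[0.9, 0.1],
     [0.4, 0.6]]

m, n = 2, 3
lhs = matpow(P, m + n)                    # P(m + n)
rhs = matmul(matpow(P, m), matpow(P, n))  # P(m) P(n)
ok = all(abs(a - b) < 1e-12 for ra, rb in zip(lhs, rhs) for a, b in zip(ra, rb))
print(ok)  # True
```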
State Probabilities
\begin{equation*}
p_{j}(n)=P\left(X_{n}=j\right), \quad n=0,1,2, \cdots, \quad j=0,1,2, \cdots
\end{equation*}
\begin{align*}
p_{j}(n) & =P\left(X_{n}=j\right) \\
& =\sum_{i \in S} P\left(X_{n}=j, X_{0}=i\right) \\
& =\sum_{i \in S} P\left(X_{n}=j \mid X_{0}=i\right) P\left(X_{0}=i\right) \\
& =\sum_{i \in S} p_{i}(0) p_{i j}(n), \quad j=0,1,2, \cdots
\end{align*}
\begin{equation*}
\pi(n)=\left[\begin{array}{llll}
p_{0}(n) & p_{1}(n) & p_{2}(n) & \cdots
\end{array}\right]
\end{equation*}
\begin{equation*}
\pi(n)=\pi(0) P(n)=\pi(0) P^{n}, \quad n=0,1,2, \cdots
\end{equation*}
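A sketch propagating the state-probability vector via $\pi(n)=\pi(0) P^{n}$ (the 2-state TPM and initial vector are arbitrary illustration values):

```python
# Propagate pi(n) = pi(0) P^n by repeated vector-matrix multiplication.
def vecmat(v, P):
    """Row vector times matrix: (v P)_j = sum_i v_i * p_ij."""
    return [sum(v[i] * P[i][j] for i in range(len(v))) for j in range(len(P[0]))]

P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [1.0, 0.0]          # pi(0): the chain starts in state 0
for _ in range(3):       # three steps give pi(3) = pi(0) P^3
    pi = vecmat(pi, P)

print(abs(sum(pi) - 1) < 1e-12)  # True: pi(n) remains a probability vector
print(pi)                        # pi(3) = [0.825, 0.175] up to rounding
```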
7. Continuous Time Markov Chain (CTMC)
Sojourn Times in States
$T_{i}$ is an exponential r.v., i.e., $T_{i}=\operatorname{EXP}\left(\lambda_{i}\right)$:
\begin{equation*}
P\left(T_{i}>x\right)=e^{-\lambda_{i} x}, \quad x \geq 0
\end{equation*}
If $\lambda_{i}=0$, the state $i$ is an absorbing state.
If $\lambda_{i}=\infty$, the state $i$ is an instantaneous state.
If $\lambda_{i} \in(0, \infty)$, the state $i$ is a stable state.
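The three cases can be written as a small helper following the definitions above (a sketch; the rate values passed in are arbitrary):

```python
import math

# Classify a CTMC state by its sojourn rate lambda_i: lambda_i = 0 means the
# chain never leaves (T_i = infinity), lambda_i = infinity means it leaves
# immediately (T_i = 0), and 0 < lambda_i < infinity gives E(T_i) = 1/lambda_i.
def classify(lam):
    if lam == 0:
        return "absorbing"
    if math.isinf(lam):
        return "instantaneous"
    return "stable"

print(classify(0))         # absorbing
print(classify(math.inf))  # instantaneous
print(classify(2.5))       # stable
```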
8. Chapman-Kolmogorov Equation (C-K eqn)
Let $H(s, t)=\left[p_{i j}(s, t)\right]$ be the matrix of transition probabilities from time $s$ to time $t$. Then
\begin{equation*}
H(s, t)=H(s, u) H(u, t), \quad 0 \leq s \leq u \leq t
\end{equation*}
9. Kolmogorov Differential Equation
\begin{equation*}
\frac{\partial H(s, t)}{\partial t}=H(s, t) Q(t), \quad 0 \leq s \leq t
\end{equation*}
\begin{equation*}
\frac{\partial H(s, t)}{\partial s}=-Q(s) H(s, t), \quad 0 \leq s \leq t
\end{equation*}
\begin{equation*}
q_{i i}(t)=\lim _{h \rightarrow 0} \frac{p_{i i}(t, t+h)-1}{h}
\end{equation*}
\begin{equation*}
q_{i j}(t)=\lim _{h \rightarrow 0} \frac{p_{i j}(t, t+h)}{h}, \quad i \neq j .
\end{equation*}
\begin{equation*}
\sum_{j} q_{i j}(t)=0, \quad \text { for all } i
\end{equation*}
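For a time-homogeneous CTMC, $Q(t)=Q$ is constant and the forward equation $d H / d t=H Q$ can be integrated numerically. A sketch for a 2-state chain with rates $a$ ($0 \rightarrow 1$) and $b$ ($1 \rightarrow 0$), comparing the Euler solution with the known closed form $p_{00}(t)=\frac{b}{a+b}+\frac{a}{a+b} e^{-(a+b) t}$ (the values of $a$, $b$ and the step size are arbitrary assumptions):

```python
import math

# Integrate dH/dt = H Q forward in time with Euler steps for a 2-state CTMC
# generator Q, and compare p_00(t) against its closed-form solution.
a, b = 1.0, 2.0
Q = [[-a, a],
     [b, -b]]                           # each row of Q sums to 0

print(all(abs(sum(row)) < 1e-12 for row in Q))  # True

H = [[1.0, 0.0], [0.0, 1.0]]            # H(0) = I
t, steps = 1.0, 10_000
dt = t / steps
for _ in range(steps):                  # Euler step: H <- H + dt * (H Q)
    HQ = [[sum(H[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    H = [[H[i][j] + dt * HQ[i][j] for j in range(2)] for i in range(2)]

exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
print(abs(H[0][0] - exact) < 1e-3)      # True: Euler tracks the closed form
```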