Ergodic Stochastic Process Lecture 13


An ergodic process is a stationary random process whose statistical characteristics, such as the mean and variance, can be estimated from the time averages of its sample functions. Ergodicity simplifies the analysis of a stochastic process: its statistical characteristics can be deduced from a single realisation instead of from the whole ensemble.

A weakly stationary stochastic process is said to be ergodic if it satisfies the following conditions:

i. The time average is equal to the ensemble mean:

μ_X = E(μ_T)

where the ensemble mean

μ_X = E[X(t)] = ∫ x f(x) dx

is constant in t, and the time average over one realisation is

μ_T = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t) dt

Let X(t) be a weakly stationary stochastic process with mean μ_X = μ, and let x_i(t) (i = 1, 2, ..., n) be its sample functions (realisations). For fixed t, X(t) is a random variable with mean E[X(t)] = μ_X. The time mean μ_T computed from a single realisation is also a random variable, and its expected value equals the ensemble mean:

E(μ_T) = μ_X

ii. The variance of the time mean is zero.

lim_{T→∞} Var(μ_T) = 0


iii. The time autocorrelation is equal to the ensemble autocorrelation, and the variance of the time autocorrelation tends to zero.

Because an ergodic process has statistical properties that do not change over time, it is easier to analyse and forecast: a single sufficiently long realisation is enough to estimate its statistics.
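The two conditions above can be checked numerically. The sketch below uses a simple illustrative process X(t) = μ + Z(t), with Z(t) i.i.d. standard normal (an ergodic process); the process, the value of μ, and all sample sizes are assumptions for the demonstration, not from the lecture. It compares the time average of one long realisation with the ensemble average across many realisations, and shows the variance of the time mean shrinking as the window T grows.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 5.0  # assumed true ensemble mean of the illustrative process X(t) = mu + Z(t)

# Condition (i): the time average of one long realisation is close to the ensemble mean
x = mu + rng.standard_normal(100_000)   # a single realisation observed for T = 100000 steps
time_avg = x.mean()

# Ensemble average at one fixed time point, across many independent realisations
ensemble_avg = (mu + rng.standard_normal(2_000)).mean()

# Condition (ii): Var(mu_T) shrinks toward zero as the window T grows
variances = []
for T in (100, 1_000, 10_000):
    paths = mu + rng.standard_normal((2_000, T))   # 2000 realisations of length T
    variances.append(paths.mean(axis=1).var())     # variance of the time mean, roughly 1/T
```

Both `time_avg` and `ensemble_avg` land near 5.0, and `variances` decreases roughly like 1/T, as condition (ii) requires.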

MCQs

1. Based on the nature of the state space S and the parameter space T, stochastic processes are divided into how many types?

a) 2   b) 3   c) 4   d) 6

2. The blood pressure of a patient at time t is an example of a

a) Discrete Parameter Discrete Process
b) Continuous Parameter Continuous Process
c) Continuous Parameter Discrete Process
d) Discrete Parameter Continuous Process

3. If the joint distribution of X(t1), X(t2), ..., X(tn) is equal to the joint distribution of X(t1+τ), X(t2+τ), ..., X(tn+τ), the stochastic process is said to be:

a) Stationary   b) Strong Stationary   c) Weak Stationary   d) Ergodic

4. A stochastic process that is stationary for first- and second-order distribution functions is called:

a) Stationary   b) Strong Stationary   c) Weak Stationary   d) Ergodic

5. The mean in a strictly stationary stochastic process is:

a) Time invariant   b) Constant   c) Variable   d) Both (a) & (b)

6. A stochastic process whose mean and variance are constant, with stationarity holding for the first and second moments only, is called:

a) Strong Stationary   b) Weak Stationary   c) Non-Stationary   d) Ergodic

7. In a weakly stationary stochastic process, if the time average is equal to the ensemble average and the time correlation is equal to the ensemble correlation, it is called:

a) Non-Stationary   b) Weak Stationary   c) Ergodic   d) None of them

8. In ergodicity

Question

Let Z(t) be a sequence of independent normal variables with mean zero and unit variance, and let a, b, and c be constants. Is the following process stationary? If it is, specify its mean and autocovariance.

X(t) = a + bZ(t) + cZ(t-2)

Solution: Z(t) is a standard normal variable, so

E[Z(t)] = 0

Var[Z(t)] = σ^2 = 1

E[X(t)] = E[a + bZ(t) + cZ(t-2)]

E[X(t)] = E[a] + b E[Z(t)] + c E[Z(t-2)]
E[X(t)] = a

The mean is time invariant.
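As a quick numerical sanity check of E[X(t)] = a, the sketch below simulates X(t) with illustrative (assumed) values for a, b, and c and confirms that the sample mean is close to a.

```python
import numpy as np

rng = np.random.default_rng(42)
a, b, c = 2.0, 3.0, 1.5   # assumed constants for the demonstration
n = 500_000

# Z(t) and Z(t-2) are independent standard normal variables
x = a + b * rng.standard_normal(n) + c * rng.standard_normal(n)

sample_mean = x.mean()    # close to a = 2.0, regardless of t
```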

Cov[X(t), X(t+τ)] = Cov[{a + bZ(t) + cZ(t-2)}, {a + bZ(t+τ) + cZ(t-2+τ)}]

The constant a contributes nothing to the covariance, so expanding term by term leaves only the random parts:

Cov[X(t), X(t+τ)] = b^2 Cov[Z(t), Z(t+τ)] + bc Cov[Z(t), Z(t-2+τ)] + bc Cov[Z(t-2), Z(t+τ)] + c^2 Cov[Z(t-2), Z(t-2+τ)] --- (1)

We know that, for any times s and u,

Cov[Z(s), Z(u)] = E[{Z(s) - E(Z(s))} {Z(u) - E(Z(u))}] = E[Z(s) Z(u)]

since E[Z(t)] = 0. Because the Z's are independent, this expectation equals Var(Z) = σ^2 when s = u and 0 otherwise. Applying this to each term of equation (1):

Cov[Z(t), Z(t+τ)] = σ^2 at τ = 0, and 0 otherwise

Cov[Z(t-2), Z(t-2+τ)] = σ^2 at τ = 0, and 0 otherwise

Cov[Z(t-2), Z(t+τ)] = σ^2 at τ = -2, and 0 otherwise

Cov[Z(t), Z(t-2+τ)] = σ^2 at τ = 2, and 0 otherwise

Equation (1) therefore becomes

Cov[X(t), X(t+τ)] = (b^2 + c^2) σ^2 at τ = 0
Cov[X(t), X(t+τ)] = bc σ^2 at τ = ±2
Cov[X(t), X(t+τ)] = 0 otherwise

with σ^2 = 1 here. The covariance depends only on the time lag τ, not on t, and the mean is constant, so

X(t) = a + bZ(t) + cZ(t-2) is weakly (wide-sense) stationary.
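The autocovariance of X(t) = a + bZ(t) + cZ(t-2) can also be estimated by simulation. The sketch below computes the sample autocovariance at a few lags with illustrative (assumed) constants; since σ^2 = 1, the estimates at lags 0 and 2 should be near b^2 + c^2 and bc respectively, and near 0 at all other lags.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = 2.0, 3.0, 1.5   # assumed constants; Var(Z) = sigma^2 = 1
n = 1_000_000

z = rng.standard_normal(n + 2)
x = a + b * z[2:] + c * z[:-2]        # X(t) = a + b Z(t) + c Z(t-2)

def autocov(series, lag):
    """Sample autocovariance of a 1-D series at a non-negative lag."""
    s = series - series.mean()
    return float(np.mean(s[:len(s) - lag or None] * s[lag:]))

gamma = {lag: autocov(x, lag) for lag in (0, 1, 2, 3)}
# theory: gamma[0] ~ b**2 + c**2 = 11.25, gamma[2] ~ b*c = 4.5, gamma[1] and gamma[3] ~ 0
```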
