## Archive for **November 22nd, 2011**

## Markov Chain for historical volatility

In a previous post we used a Markov chain to study the behavior of historical volatility and discovered the 3/2 rule for the ups and downs of a random variable.

Now let’s construct a more complicated model, with the following volatility changes as the states of the chain:

- Less than -25%: denote it as -100.
- From -25% to -15%: -20.
- From -15% to -5%: -10.
- From -5% to 5%: 0.
- From 5% to 15%: 10.
- From 15% to 25%: 20.
- More than 25%: 100.
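The binning above is easy to express in code. Here is a minimal sketch (the function name `vol_state` and the half-open boundary convention are my assumptions, not from the original post) that maps a volatility change in percent to one of the seven state labels:

```python
def vol_state(change_pct):
    """Map a volatility change (in percent) to one of the seven
    chain states defined above. Boundary convention is an assumption:
    each bin includes its lower edge."""
    if change_pct < -25:
        return -100
    elif change_pct < -15:
        return -20
    elif change_pct < -5:
        return -10
    elif change_pct < 5:
        return 0
    elif change_pct < 15:
        return 10
    elif change_pct < 25:
        return 20
    else:
        return 100
```

For example, `vol_state(-30)` gives `-100` and `vol_state(7)` gives `10`.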

## Markov chain for Geometric Brownian Motion parameters

A Markov chain is a discrete-time random process with the Markov property. Its components are states and the transition probabilities between them. The Markov property states that the probability of the next state depends only on the current state.

So a Markov chain is fully specified by its set of states and the transition probabilities between those states.
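In practice the transition probabilities are estimated from an observed state sequence by counting transitions and normalizing each row. A minimal sketch (the function name `estimate_transitions` is my own; the original post does not show code):

```python
from collections import defaultdict, Counter

def estimate_transitions(sequence):
    """Estimate transition probabilities P(t | s) from an observed
    state sequence by counting consecutive pairs."""
    counts = defaultdict(Counter)
    for s, t in zip(sequence, sequence[1:]):
        counts[s][t] += 1
    # Normalize each row so probabilities out of a state sum to 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        probs[s] = {t: c / total for t, c in row.items()}
    return probs

seq = ["up", "up", "down", "up", "down", "down", "up", "up"]
print(estimate_transitions(seq))
```

Here the chain "forgets" everything except the current state: each row of the result is the empirical distribution of the next state given the current one.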

## Simple chain for drift

Let’s assume that estimating the drift parameter leads to one of the following two states:

- Positive, i.e. the drift is greater than or equal to zero
- Negative, i.e. the drift is less than zero

So one could construct a Markov chain for these two states, as shown below.
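Such a two-state chain can be sketched as a transition table plus a simulation loop. The probabilities below are purely illustrative placeholders, not values estimated in the post:

```python
import random

# Hypothetical two-state drift chain: "pos" (drift >= 0), "neg" (drift < 0).
# These transition probabilities are illustrative, not estimated from data.
P = {
    "pos": {"pos": 0.7, "neg": 0.3},
    "neg": {"pos": 0.4, "neg": 0.6},
}

def simulate(chain, start, steps, seed=42):
    """Generate a sample path of the chain by repeatedly drawing
    the next state from the current state's transition row."""
    rng = random.Random(seed)
    state, path = start, [start]
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in chain[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
        path.append(state)
    return path
```

For a two-state chain like this, the long-run fraction of time spent in each state converges to the stationary distribution; with the placeholder numbers above that is 0.4/(0.3 + 0.4) ≈ 0.57 for the positive-drift state.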