# FxPaul

Math in finance or vice versa

## Maximum Likelihood Estimation of Stochastic Process Parameters

Maximum likelihood estimation (MLE) is a method for estimating the parameters of a statistical model. The basic idea is to write down the joint probability density of the observations and maximize it with respect to the model's parameters. Put differently, we are looking for the most probable explanation of the observed data.

### Problem setup

Assume that we’re given a one-dimensional stochastic process:
$dS_t = \mu dt + \sigma dW_t$
where $\mu$ and $\sigma$ are functions of the state $S_t$, time $t$, and a parameter vector $\theta$ to be estimated.

We observe this process by measuring $S_i = S(t_i)$, where $i = 1..N$. For the sake of simplicity, assume that the observations are equidistant in time, i.e. $\Delta t = t_i - t_{i-1} = const$.

So, let’s estimate the parameters.
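To have something concrete to estimate from, here is a minimal sketch of generating such equidistant observations via an Euler–Maruyama discretization, under the simplifying assumption of constant $\mu$ and $\sigma$ (arithmetic Brownian motion); the class and method names are hypothetical:

```java
import java.util.Random;

public class SimulateObservations {
    // Euler-Maruyama discretization of dS = mu dt + sigma dW on an
    // equidistant grid t_i = i * dt; returns the observations S_1..S_N.
    // Assumes constant drift mu and volatility sigma.
    static double[] simulate(double s0, double mu, double sigma,
                             double dt, int n, long seed) {
        Random rng = new Random(seed);
        double[] s = new double[n];
        double current = s0;
        for (int i = 0; i < n; i++) {
            current += mu * dt + sigma * Math.sqrt(dt) * rng.nextGaussian();
            s[i] = current;
        }
        return s;
    }

    public static void main(String[] args) {
        // e.g. one year of daily observations of a drifting process
        double[] path = simulate(100.0, 0.05, 0.2, 1.0 / 252, 252, 42L);
        System.out.printf("S_N = %.4f%n", path[path.length - 1]);
    }
}
```

For constant coefficients this scheme reproduces the exact Gaussian transitions; for state-dependent $\mu$ and $\sigma$ it is only a first-order approximation in $\Delta t$.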

### Maximum Likelihood

As the method requires, one should construct the joint probability density of the observations. In general it factorizes into a chain of conditional densities, each observation conditioned on the entire history: $P\left(S_i \mid S_{i-1}, \dots, S_1, \theta \right)$. But the conditioning collapses if the process satisfies the Markov property, in other words if it is memoryless: each observation then depends only on the immediately preceding one.

Therefore, the joint probability density of the observations (conditional on the first one) is:
$\rho(S_2, \dots, S_N \mid S_1, \theta) = \prod_{i=2}^N {P\left(S_i \mid S_{i-1}, \theta \right)}$
where $P\left(S_i \mid S_{i-1}, \theta \right)$ is the probability density that the process starting at $S_{i-1}$ with parameters $\theta$ ends up at $S_i$ after time $\Delta t$.

Let’s apply a common trick and take the logarithm of the likelihood:
$\ln \rho(S_2, \dots, S_N \mid S_1, \theta) = \sum_{i=2}^N {\ln P\left(S_i \mid S_{i-1}, \theta \right)}$
This is better for computational reasons: a product of many small probabilities quickly underflows floating-point arithmetic, while a sum of logarithms does not, and since the logarithm is monotone the maximizing $\theta$ is the same.
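For the constant-coefficient process above the transitions are Gaussian, $S_i - S_{i-1} \sim N(\mu \Delta t, \sigma^2 \Delta t)$, so the log-likelihood and even its maximizers are available in closed form. A sketch (hypothetical class name, assuming constant $\mu$ and $\sigma$):

```java
public class AbmLikelihood {
    // Gaussian log-likelihood of the increments dS_i = S_i - S_{i-1}
    // under dS = mu dt + sigma dW:  dS_i ~ N(mu*dt, sigma^2*dt).
    static double logLikelihood(double[] s, double mu, double sigma, double dt) {
        double ll = 0.0;
        double var = sigma * sigma * dt;
        for (int i = 1; i < s.length; i++) {
            double z = (s[i] - s[i - 1]) - mu * dt;
            ll += -0.5 * Math.log(2 * Math.PI * var) - z * z / (2 * var);
        }
        return ll;
    }

    // Closed-form maximizers for this model: the sample mean and
    // variance of the increments, rescaled by dt.
    static double[] mle(double[] s, double dt) {
        int n = s.length - 1;
        double mean = 0.0;
        for (int i = 1; i < s.length; i++) mean += s[i] - s[i - 1];
        mean /= n;
        double var = 0.0;
        for (int i = 1; i < s.length; i++) {
            double d = (s[i] - s[i - 1]) - mean;
            var += d * d;
        }
        var /= n;
        return new double[] { mean / dt, Math.sqrt(var / dt) };
    }
}
```

For state-dependent $\mu$ and $\sigma$ no such closed form exists in general, which is exactly why the techniques of the next sections are needed; one then feeds `logLikelihood` to a numerical optimizer over $\theta$.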

### Conditional probability

The conditional density is hard to compute in closed form for most interesting cases. But a general approach exists for Itô processes. There is a differential equation describing the time evolution of the probability density function of a stochastic process: the Fokker–Planck equation, also known as the Kolmogorov forward equation. The equation is widely used in statistical physics and has been studied in great depth.

The Fokker–Planck equation for the transition density of the process above is:
$\frac{\partial f(s,t)}{\partial t} = - \frac{\partial}{\partial s} \left[ \mu(s,t,\theta)\, f(s,t) \right] + \frac{1}{2} \frac{\partial^2}{\partial s^2} \left[ \sigma^2(s,t,\theta)\, f(s,t) \right]$
with the initial condition
$f(s, 0) = \delta (s - S_{i-1})$
Thus the conditional density is the solution of the equation evaluated at the next observation:
$P\left(S_i \mid S_{i-1}, \theta \right) = f(S_i, \Delta t)$

Even when no closed-form solution exists, the equation can usually be solved numerically (for example by finite differences), so it is possible to provide a quite efficient implementation of the MLE algorithm.
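As one possible sketch of such a numerical solution, the following explicit finite-difference scheme evolves the density from a discrete delta at $S_{i-1}$ over the horizon $\Delta t$. It assumes constant $\mu$ and $\sigma$ and absorbing far boundaries; the class name is hypothetical:

```java
public class FokkerPlanckFd {
    // Explicit finite-difference solution of the Fokker-Planck equation
    //   df/dt = -mu df/ds + 0.5 sigma^2 d2f/ds2   (constant coefficients),
    // started from a delta at s0 and run for a time horizon T.
    // Returns the density on the grid s_j = sMin + j*ds, j = 0..m-1.
    static double[] solve(double s0, double mu, double sigma,
                          double sMin, double ds, int m, double T) {
        double[] f = new double[m];
        int j0 = (int) Math.round((s0 - sMin) / ds);
        f[j0] = 1.0 / ds;                       // discrete delta function

        // The explicit scheme is stable for dtIn <= ds^2 / sigma^2;
        // take a quarter of that bound to be safe.
        double dtIn = 0.25 * ds * ds / (sigma * sigma);
        int steps = (int) Math.ceil(T / dtIn);
        dtIn = T / steps;

        double[] g = new double[m];
        for (int k = 0; k < steps; k++) {
            for (int j = 1; j < m - 1; j++) {
                double conv = -mu * (f[j + 1] - f[j - 1]) / (2 * ds);
                double diff = 0.5 * sigma * sigma
                        * (f[j + 1] - 2 * f[j] + f[j - 1]) / (ds * ds);
                g[j] = f[j] + dtIn * (conv + diff);
            }
            g[0] = 0.0;                          // absorbing far boundaries,
            g[m - 1] = 0.0;                      // placed many std devs away
            double[] tmp = f; f = g; g = tmp;
        }
        return f;
    }
}
```

Reading the returned grid at (an interpolated) $S_i$ then gives $P(S_i \mid S_{i-1}, \theta)$. For state-dependent coefficients the same scheme applies with $\mu(s_j, t, \theta)$ and $\sigma(s_j, t, \theta)$ evaluated per grid node, though an implicit scheme would be a more robust choice in production.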

### Monte Carlo approach to conditional probability

Another way to obtain the conditional densities $P\left(S_i \mid S_{i-1}, \theta \right)$ is Monte Carlo sampling: simulate many paths of the process starting at $S_{i-1}$ over the interval $\Delta t$ and estimate the density of the terminal values, for example with a histogram or a kernel estimator. It is quite a brute-force algorithm, but it is useful for complicated stochastic processes where the Fokker–Planck equation is hard to handle.
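A minimal sketch of this idea, assuming constant coefficients, a simple Euler scheme, and a Gaussian kernel for smoothing the terminal values (the class name and the bandwidth choice are illustrative, not from any particular library):

```java
import java.util.Random;

public class McTransitionDensity {
    // Monte Carlo estimate of the transition density P(S_i | S_{i-1}):
    // simulate many Euler-Maruyama paths from sPrev over [0, dt] and
    // smooth the terminal values with a Gaussian kernel evaluated at sNext.
    static double estimate(double sPrev, double sNext, double mu, double sigma,
                           double dt, int subSteps, int paths,
                           double bandwidth, long seed) {
        Random rng = new Random(seed);
        double h = dt / subSteps;
        double sum = 0.0;
        for (int p = 0; p < paths; p++) {
            double s = sPrev;
            for (int k = 0; k < subSteps; k++) {
                s += mu * h + sigma * Math.sqrt(h) * rng.nextGaussian();
            }
            // standard normal kernel: phi((sNext - s) / bandwidth)
            double z = (sNext - s) / bandwidth;
            sum += Math.exp(-0.5 * z * z) / Math.sqrt(2 * Math.PI);
        }
        return sum / (paths * bandwidth);
    }
}
```

The kernel bandwidth trades bias against variance: too small and the estimate is noisy, too large and the density is over-smoothed, which in turn biases the likelihood. Note also that the whole simulation must be repeated for every transition and every candidate $\theta$ the optimizer tries, which is why this approach is expensive.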

The method is available in Java as part of a Monte Carlo library.

Written by fxpaul

November 2, 2011 at 08:00

