1. Brownian Motion

1.1 Overview

This entry describes the fundamental properties and key characteristics of Brownian motion. I present its defining axioms, derive its moment structure, and establish the covariance formula that governs its dependence across time.

Brownian motion is the central driving noise in stochastic calculus. Rather than being defined by a formula, it is characterized by a small set of structural properties that completely determine its behavior. These properties encode randomness, independence, and scaling in time.



1.2 Definition of Brownian motion

A stochastic process \((W_t)_{t \ge 0}\) is called a standard Brownian motion if it satisfies:

  1. \(W_0 = 0\) almost surely
  2. The paths \(t \mapsto W_t(\omega)\) are almost surely continuous
  3. The process has independent increments
  4. For all \(0 \le s < t\), the increment \(W_t - W_s\) is normally distributed with mean \(0\) and variance \(t - s\)

The last property can be written as

\[ \begin{equation} W_t - W_s \sim \mathcal{N}(0, t - s). \label{eq:brownian_increment_distribution} \end{equation} \]
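The four defining properties translate directly into a simulation recipe: fix a grid, draw independent \(\mathcal{N}(0, \Delta t)\) increments, and accumulate them. A minimal sketch (the horizon \(T = 1\) and step count \(n = 1000\) are illustrative choices, not part of the definition):

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1.0, 1000                # assumed horizon and number of steps
dt = T / n
t = np.linspace(0.0, T, n + 1)

# Properties 3-4: increments over consecutive subintervals are
# independent and each distributed N(0, dt)
increments = rng.normal(0.0, np.sqrt(dt), size=n)

# Property 1: W_0 = 0; the sampled path is the running sum of increments.
# Property 2 (continuity) holds in the limit dt -> 0.
W = np.concatenate(([0.0], np.cumsum(increments)))
```

Plotting `W` against `t` produces the familiar jagged Brownian path.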

1.3 Mean of Brownian motion

Setting \(s = 0\) in \(\eqref{eq:brownian_increment_distribution}\), we obtain

\[ \begin{equation} W_t \sim \mathcal{N}(0, t). \label{eq:brownian_marginal_distribution} \end{equation} \]

Thus, the law of Brownian motion at a fixed time is completely determined by the elapsed time \(t\). The expected value of \(W_t\) follows directly from its Gaussian distribution. Using \(\eqref{eq:brownian_marginal_distribution}\),

\[ \begin{equation} \mathbb{E}[W_t] = 0. \label{eq:brownian_mean} \end{equation} \]

This reflects the absence of drift: Brownian motion fluctuates symmetrically around zero.
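The zero-mean property is easy to confirm by Monte Carlo; since \(W_t \sim \mathcal{N}(0, t)\), we can sample it directly. A minimal check, assuming an illustrative time point \(t = 2\):

```python
import numpy as np

rng = np.random.default_rng(1)

t, N = 2.0, 200_000                              # illustrative choices
samples = rng.normal(0.0, np.sqrt(t), size=N)    # draws of W_t ~ N(0, t)

empirical_mean = samples.mean()   # should be close to E[W_t] = 0
```

The Monte Carlo error is of order \(\sqrt{t/N}\), so the empirical mean lands within a few thousandths of zero.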


1.3.1 Variance expressed via second moments

The variance of \(W_t\) is defined as

\[ \begin{equation} \mathrm{Var}(W_t) = \mathbb{E}\left[(W_t - \mathbb{E}[W_t])^2\right]. \label{eq:variance_definition} \end{equation} \]
Expanding the square and using linearity of expectation,

\[ \mathrm{Var}(W_t) = \mathbb{E}\left[W_t^2 - 2 W_t \, \mathbb{E}[W_t] + \mathbb{E}[W_t]^2\right]. \]

Since Brownian motion has zero mean \(\mathbb{E}[W_t] = 0\), the variance simplifies to

\[ \begin{equation} \mathrm{Var}(W_t) = \mathbb{E}\left[W_t^2\right]. \label{eq:variance_second_moment} \end{equation} \]

1.3.2 Explicit computation using the Gaussian density

Let \(f_{W_t}\) denote the density of \(W_t\). By \(\eqref{eq:brownian_marginal_distribution}\), \(W_t\) is Gaussian with mean zero and variance \(t\), so

\[ \begin{equation} f_{W_t}(x) = \frac{1}{\sqrt{2\pi t}} \exp\left(-\frac{x^2}{2t}\right). \label{eq:brownian_density} \end{equation} \]

The second moment is therefore

\[ \begin{equation} \mathbb{E}[W_t^2] = \int_{-\infty}^{\infty} x^2 f_{W_t}(x)\,dx. \label{eq:second_moment_integral} \end{equation} \]

Substituting \(\eqref{eq:brownian_density}\),

\[ \begin{equation} \mathbb{E}[W_t^2] = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{\infty} x^2 \exp\left(-\frac{x^2}{2t}\right)\,dx. \label{eq:second_moment_expanded} \end{equation} \]

1.3.3 Reduction to the standard Gaussian moment

Perform the change of variables

\[ \begin{equation} u = \frac{x}{\sqrt{t}}, \qquad dx = \sqrt{t}\,du. \label{eq:variance_change_of_variables} \end{equation} \]

Then \(\eqref{eq:second_moment_expanded}\) becomes

\[ \begin{equation} \mathbb{E}[W_t^2] = t \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} u^2 e^{-u^2/2}\,du. \label{eq:variance_reduced_integral} \end{equation} \]

The remaining integral is the second moment of a standard normal random variable and equals \(1\). Hence,

\[ \begin{equation} \mathbb{E}[W_t^2] = t. \label{eq:variance_result} \end{equation} \]

Combining with \(\eqref{eq:variance_second_moment}\), we conclude

\[ \begin{equation} \mathrm{Var}(W_t) = t. \label{eq:variance_final} \end{equation} \]
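The result \(\mathrm{Var}(W_t) = t\) can also be checked numerically rather than by integration. A minimal sketch, with an assumed time point \(t = 3\):

```python
import numpy as np

rng = np.random.default_rng(2)

t, N = 3.0, 500_000                              # illustrative choices
samples = rng.normal(0.0, np.sqrt(t), size=N)    # draws of W_t ~ N(0, t)

empirical_var = samples.var()    # should be close to Var(W_t) = t = 3
```

The sample variance concentrates around \(t\), with fluctuations of order \(\sqrt{2t^2/N}\).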

1.4 Covariance structure of Brownian motion

We now derive the covariance between two times \(s\) and \(t\).

Assume without loss of generality that \(0 \le s \le t\).


1.4.1 Definition of covariance

The covariance is defined by

\[ \begin{equation} \mathrm{Cov}(W_s, W_t) = \mathbb{E}[W_s W_t] - \mathbb{E}[W_s]\mathbb{E}[W_t]. \label{eq:covariance_definition} \end{equation} \]

Since Brownian motion has zero mean, this simplifies to

\[ \begin{equation} \mathrm{Cov}(W_s, W_t) = \mathbb{E}[W_s W_t]. \label{eq:covariance_simplified} \end{equation} \]

1.4.2 Decomposition using independent increments

Write \(W_t\) as

\[ \begin{equation} W_t = W_s + (W_t - W_s). \label{eq:brownian_decomposition} \end{equation} \]

Substituting into \(\eqref{eq:covariance_simplified}\),

\[ \begin{equation} \mathbb{E}[W_s W_t] = \mathbb{E}\left[ W_s^2 + W_s (W_t - W_s) \right]. \label{eq:covariance_expansion} \end{equation} \]

1.4.3 Vanishing of the mixed term

The increment \(W_t - W_s\) is independent of \(\mathcal{F}_s\) and has zero mean.
Since \(W_s\) is \(\mathcal{F}_s\)-measurable,

\[ \begin{equation} \mathbb{E}\left[W_s (W_t - W_s)\right] = \mathbb{E}\left[ W_s \mathbb{E}[W_t - W_s \mid \mathcal{F}_s] \right] = 0. \label{eq:mixed_term_zero} \end{equation} \]

1.4.4 Final covariance computation

Using \(\eqref{eq:mixed_term_zero}\),

\[ \begin{equation} \mathbb{E}[W_s W_t] = \mathbb{E}[W_s^2]. \label{eq:covariance_reduced} \end{equation} \]

From \(\eqref{eq:variance_final}\), we know that

\[ \begin{equation} \mathbb{E}[W_s^2] = s. \label{eq:variance_at_s} \end{equation} \]

Therefore,

\[ \begin{equation} \mathrm{Cov}(W_s, W_t) = s. \label{eq:covariance_result_ordered} \end{equation} \]

Since covariance is symmetric, this can be written compactly as

\[ \begin{equation} \mathrm{Cov}(W_s, W_t) = \min(s,t). \label{eq:covariance_final} \end{equation} \]
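The covariance formula can be verified by simulating the pair \((W_s, W_t)\) jointly: draw \(W_s \sim \mathcal{N}(0, s)\) and add an independent increment of variance \(t - s\), exactly as in the decomposition above. A minimal sketch with assumed times \(s = 0.5\), \(t = 2\):

```python
import numpy as np

rng = np.random.default_rng(3)

s, t, N = 0.5, 2.0, 500_000                           # illustrative choices
W_s = rng.normal(0.0, np.sqrt(s), size=N)             # W_s ~ N(0, s)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), size=N)   # independent increment

# Both means are zero, so Cov(W_s, W_t) = E[W_s W_t]
empirical_cov = np.mean(W_s * W_t)   # should be close to min(s, t) = 0.5
```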

1.5 Expected Values of Brownian Motion Differences

A common task in stochastic calculus is computing expectations involving differences of Brownian motion at multiple time points. This section systematically derives these expected values, building on the independent increments property.

1.5.1 Single Increment: \(\mathbb{E}[W_t - W_s]\)

For any \(0 \le s < t\), the increment \(W_t - W_s\) is normally distributed:

\[ \begin{equation} W_t - W_s \sim \mathcal{N}(0, t-s). \label{eq:single_increment_dist} \end{equation} \]

Therefore, the expected value is immediately:

\[ \begin{equation} \mathbb{E}[W_t - W_s] = 0. \label{eq:single_increment_expectation} \end{equation} \]
Why This Matters

This zero-mean property is fundamental: Brownian motion has no drift. Given the process value at time \(s\), the expected change over any future interval is zero. This makes Brownian motion a martingale.

1.5.2 Product of Two Increments: \(\mathbb{E}[(W_t - W_s)(W_u - W_v)]\)

Consider two time intervals \([s,t]\) and \([v,u]\). The expected value of the product of increments depends on whether the intervals overlap.

1.5.2.1 Case 1: Disjoint Intervals (\(t \le v\) or \(u \le s\))

When the intervals are disjoint, the increments are independent. For example, if \(t \le v\):

\[ \begin{equation} \mathbb{E}[(W_t - W_s)(W_u - W_v)] = \mathbb{E}[W_t - W_s] \cdot \mathbb{E}[W_u - W_v] = 0 \cdot 0 = 0. \label{eq:disjoint_increments} \end{equation} \]
Independent Increments Property

When intervals \([s,t]\) and \([v,u]\) don't overlap, the corresponding increments are independent random variables. For independent zero-mean variables, the expectation of their product equals the product of their expectations (both zero).

1.5.2.2 Case 2: Overlapping Intervals

When intervals overlap, we must decompose the increments carefully. Assume \(s < v < t < u\). Then:

\[ W_t - W_s = (W_v - W_s) + (W_t - W_v) \]
\[ W_u - W_v = (W_t - W_v) + (W_u - W_t) \]

Expanding the product:

\[ \begin{align} \mathbb{E}[(W_t - W_s)(W_u - W_v)] &= \mathbb{E}\left[(W_v - W_s + W_t - W_v)(W_t - W_v + W_u - W_t)\right] \nonumber \\ &= \mathbb{E}\left[(W_v - W_s)(W_t - W_v)\right] \nonumber \\ &\quad + \mathbb{E}\left[(W_v - W_s)(W_u - W_t)\right] \nonumber \\ &\quad + \mathbb{E}\left[(W_t - W_v)^2\right] \nonumber \\ &\quad + \mathbb{E}\left[(W_t - W_v)(W_u - W_t)\right]. \label{eq:overlapping_expansion} \end{align} \]

Using independence of non-overlapping increments:

  • \(\mathbb{E}[(W_v - W_s)(W_t - W_v)] = 0\)
  • \(\mathbb{E}[(W_v - W_s)(W_u - W_t)] = 0\)
  • \(\mathbb{E}[(W_t - W_v)(W_u - W_t)] = 0\)

Only the variance term survives:

\[ \begin{equation} \mathbb{E}[(W_t - W_s)(W_u - W_v)] = \mathbb{E}[(W_t - W_v)^2] = t - v. \label{eq:overlapping_result} \end{equation} \]
General Pattern

For overlapping intervals, the expected product equals the length of the overlap:

\[ \mathbb{E}[(W_t - W_s)(W_u - W_v)] = \max(0, \min(t,u) - \max(s,v)) \]

This generalizes both the disjoint case (overlap = 0) and nested cases.
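The general pattern can be checked by building the four path values from independent increments, as in the decomposition above. A minimal sketch, with assumed times \(s = 0.2 < v = 0.5 < t = 1.0 < u = 1.4\) (so the overlap is \(t - v = 0.5\)):

```python
import numpy as np

rng = np.random.default_rng(4)

s, v, t, u = 0.2, 0.5, 1.0, 1.4    # s < v < t < u: overlapping intervals
N = 500_000

# Build the four path values from independent Gaussian increments
dW1 = rng.normal(0.0, np.sqrt(s),     size=N)   # W_s - W_0
dW2 = rng.normal(0.0, np.sqrt(v - s), size=N)   # W_v - W_s
dW3 = rng.normal(0.0, np.sqrt(t - v), size=N)   # W_t - W_v
dW4 = rng.normal(0.0, np.sqrt(u - t), size=N)   # W_u - W_t

W_s, W_v = dW1, dW1 + dW2
W_t = W_v + dW3
W_u = W_t + dW4

empirical = np.mean((W_t - W_s) * (W_u - W_v))
overlap = max(0.0, min(t, u) - max(s, v))       # = t - v = 0.5
```

`empirical` lands close to `overlap`, confirming that only the shared interval \([v, t]\) contributes.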

1.5.3 Special Case: \(\mathbb{E}[W_s W_t]\) Revisited

Taking both intervals to start at \(0\), i.e. \([0, s]\) and \([0, t]\), the general pattern recovers the covariance formula we already derived:

\[ \begin{equation} \mathbb{E}[W_s W_t] = \mathbb{E}[(W_s - W_0)(W_t - W_0)] = \min(s,t). \label{eq:product_expectation} \end{equation} \]

This confirms our earlier covariance result \(\eqref{eq:covariance_final}\).

1.5.4 Three-Point Differences: \(\mathbb{E}[(W_t - W_s)(W_s - W_r)]\)

For ordered times \(0 \le r < s < t\), consider the product of consecutive increments:

\[ \begin{equation} \mathbb{E}[(W_t - W_s)(W_s - W_r)]. \label{eq:consecutive_increments} \end{equation} \]

These increments correspond to disjoint intervals \([r,s]\) and \([s,t]\), so they are independent:

\[ \begin{equation} \mathbb{E}[(W_t - W_s)(W_s - W_r)] = \mathbb{E}[W_t - W_s] \cdot \mathbb{E}[W_s - W_r] = 0 \cdot 0 = 0. \label{eq:consecutive_zero} \end{equation} \]
Common Mistake

Don't confuse this with \(\mathbb{E}[W_t W_s]\). While \(W_t\) and \(W_s\) are correlated (both contain \(W_s\)), the increments \(W_t - W_s\) and \(W_s - W_r\) are independent because they occur over disjoint time intervals.

1.5.5 Squared Difference: \(\mathbb{E}[(W_t - W_s)^2]\)

For \(0 \le s < t\):

\[ \begin{align} \mathbb{E}[(W_t - W_s)^2] &= \mathrm{Var}(W_t - W_s) + \left(\mathbb{E}[W_t - W_s]\right)^2 \nonumber \\ &= (t-s) + 0^2 \nonumber \\ &= t - s. \label{eq:squared_difference} \end{align} \]

This is simply the variance of the increment, since the mean is zero.

Connection to Quadratic Variation

The fact that \(\mathbb{E}[(W_t - W_s)^2] = t - s\) is the foundation for the quadratic variation of Brownian motion. Summing over partitions leads to the result that the quadratic variation of Brownian motion over \([0,t]\) equals \(t\).
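The identity \(\mathbb{E}[(W_t - W_s)^2] = t - s\) is immediate to check by sampling the increment directly. A minimal sketch with assumed times \(s = 0.3\), \(t = 1.1\):

```python
import numpy as np

rng = np.random.default_rng(5)

s, t, N = 0.3, 1.1, 400_000                      # illustrative choices
incr = rng.normal(0.0, np.sqrt(t - s), size=N)   # W_t - W_s ~ N(0, t - s)

empirical = np.mean(incr**2)   # should be close to t - s = 0.8
```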


1.6 Quadratic Variation of Brownian Motion

Quadratic variation measures how "rough" or "wiggly" a path is. For smooth functions such as polynomials, the quadratic variation is zero. Brownian motion, by contrast, is nowhere differentiable, and its quadratic variation is non-zero and fundamental to stochastic calculus.

1.6.1 Definition via Partition

For a partition of \([0,t]\):

\[ \begin{equation} 0 = t_0 < t_1 < t_2 < \cdots < t_n = t \label{eq:partition_qv} \end{equation} \]

the quadratic variation is defined as the limit of squared increments:

\[ \begin{equation} [W]_t := \lim_{n \to \infty} \sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2 \label{eq:quadratic_variation_definition} \end{equation} \]

where the limit is taken as the mesh size \(\max_i (t_{i+1} - t_i) \to 0\), and convergence is in probability.

Why Quadratic Variation?

Why focus on squared increments? Because:

  • First variation (sum of absolute increments) is infinite for Brownian motion
  • Quadratic variation (sum of squared increments) converges to a finite, non-random limit
  • Quadratic variation is what matters in Itô calculus — it replaces the role of classical derivatives
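The defining limit can be watched numerically: on a uniform partition of \([0, 1]\), the sum of squared increments tightens around \(T = 1\) as the mesh shrinks. A minimal sketch with illustrative partition sizes:

```python
import numpy as np

rng = np.random.default_rng(6)

T = 1.0
qv_sums = []
for n in (100, 10_000, 1_000_000):    # finer and finer uniform partitions
    dt = T / n
    # increments over each subinterval: independent N(0, dt)
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    qv_sums.append(np.sum(dW**2))     # sum of squared increments
# qv_sums should approach T = 1.0 as the mesh shrinks
```

The fluctuation around \(T\) has standard deviation \(\sqrt{2/n}\) on a uniform grid, so the finest partition pins the sum to within a few thousandths of \(1\).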

1.6.2 Deriving the Result

1.6.2.1 Compute Expected Value of Squared Increments

For a single partition interval \([t_i, t_{i+1}]\), we know from earlier that:

\[ \begin{equation} \mathbb{E}\left[\left(W(t_{i+1}) - W(t_i)\right)^2\right] = t_{i+1} - t_i =: \Delta t_i. \label{eq:qv_single_increments} \end{equation} \]

1.6.2.2 Sum Over All Increments

The expected value of the sum of squared increments is:

\[ \begin{equation} \mathbb{E}\left[\sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\right] = \sum_{i=0}^{n-1} \mathbb{E}\left[\left(W(t_{i+1}) - W(t_i)\right)^2\right] = \sum_{i=0}^{n-1} \Delta t_i. \label{eq:qv_sum_increments} \end{equation} \]

1.6.2.3 Evaluate the Sum

The sum telescopes to give the total elapsed time:

\[ \begin{equation} \sum_{i=0}^{n-1} \Delta t_i = \sum_{i=0}^{n-1} (t_{i+1} - t_i) = t_n - t_0 = t - 0 = t. \label{eq:qv_telescoping} \end{equation} \]

Therefore:

\[ \begin{equation} \mathbb{E}\left[\sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\right] = t. \label{eq:qv_expectation} \end{equation} \]

1.6.2.4 Show Convergence in Probability

Now we need to show that the actual sum (not just its expectation) converges. Because the increments are independent, the variance of the sum is the sum of the variances:

\[ \begin{equation} \mathrm{Var}\left[\sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\right] = \sum_{i=0}^{n-1} \mathrm{Var}\left[\left(W(t_{i+1}) - W(t_i)\right)^2\right]. \label{eq:qv_variance} \end{equation} \]
Computing Variance of Squared Gaussian

For a zero-mean normal variable \(X \sim \mathcal{N}(0, \sigma^2)\):

\[\mathrm{Var}(X^2) = \mathbb{E}[X^4] - (\mathbb{E}[X^2])^2 = 3\sigma^4 - \sigma^4 = 2\sigma^4\]

For our increment \(W(t_{i+1}) - W(t_i) \sim \mathcal{N}(0, \Delta t_i)\):

\[\mathrm{Var}\left[\left(W(t_{i+1}) - W(t_i)\right)^2\right] = 2(\Delta t_i)^2\]

Each term contributes \(2(\Delta t_i)^2\):

\[ \begin{equation} \mathrm{Var}\left[\sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\right] = \sum_{i=0}^{n-1} 2(\Delta t_i)^2 = 2\sum_{i=0}^{n-1} (\Delta t_i)^2. \label{eq:qv_variance_expanded} \end{equation} \]

As the mesh size \(\delta_n := \max_i \Delta t_i \to 0\), we have \(\sum_{i=0}^{n-1} (\Delta t_i)^2 \le \delta_n \sum_{i=0}^{n-1} \Delta t_i = \delta_n \, t \to 0\). Therefore:

\[ \begin{equation} \mathrm{Var}\left[\sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\right] \to 0. \label{eq:qv_variance_limit} \end{equation} \]
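The variance formula \(2\sum_i (\Delta t_i)^2\) can itself be verified by simulating many independent paths on a fixed coarse grid and measuring the spread of the quadratic-variation sums. A minimal sketch, with an assumed uniform partition of \(n = 50\) steps:

```python
import numpy as np

rng = np.random.default_rng(7)

T, n, M = 1.0, 50, 200_000     # coarse uniform partition, M sampled paths
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(M, n))

X = np.sum(dW**2, axis=1)      # one quadratic-variation sum per path

empirical_var = X.var()
predicted_var = 2 * n * dt**2  # 2 * sum_i (dt)^2 on a uniform grid
```

With these values `predicted_var` is \(2 \cdot 50 \cdot 0.02^2 = 0.04\), and the empirical variance agrees closely; the mean of `X` sits near \(T = 1\) as derived above.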

1.6.2.5 Conclude

By Chebyshev's inequality, if the variance goes to zero, the sum converges in probability to its expected value:

Chebyshev's Inequality and Convergence in Probability

Chebyshev's Inequality states that for any random variable \(X\) and any \(\epsilon > 0\):

\[P(|X - \mathbb{E}[X]| \geq \epsilon) \leq \frac{\mathrm{Var}(X)}{\epsilon^2}\]

Applying to our quadratic variation sum:

Let \(X_n = \sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2\). We have:

  • \(\mathbb{E}[X_n] = t\) (from \(\ref{eq:qv_expectation}\))
  • \(\mathrm{Var}[X_n] \to 0\) as \(n \to \infty\) (from \(\ref{eq:qv_variance_limit}\))

By Chebyshev's inequality:

\[P(|X_n - t| \geq \epsilon) \leq \frac{\mathrm{Var}(X_n)}{\epsilon^2}\]

Taking the limit: As \(n \to \infty\), the variance \(\mathrm{Var}(X_n) \to 0\), so:

\[\lim_{n \to \infty} P(|X_n - t| \geq \epsilon) \leq \lim_{n \to \infty} \frac{\mathrm{Var}(X_n)}{\epsilon^2} = 0\]

This holds for any fixed \(\epsilon > 0\). This is exactly the definition of convergence in probability:

\[X_n \xrightarrow{p} t\]

which is precisely equation \(\eqref{eq:qv_convergence}\).

\[ \begin{equation} \sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2 \xrightarrow{p} t. \label{eq:qv_convergence} \end{equation} \]

Therefore, the quadratic variation of Brownian motion over \([0,t]\) is:

\[ \begin{equation} [W]_t := \lim_{n \to \infty} \sum_{i=0}^{n-1} \left(W(t_{i+1}) - W(t_i)\right)^2 = t \quad \text{(in probability)}. \label{eq:brownian_quadratic_variation} \end{equation} \]

1.6.3 Key Insight: Randomness Cancels

This result is counterintuitive: even though Brownian motion is highly random, the sum of squared increments produces a deterministic result \([W]_t = t\).

The randomness in individual increments cancels out when squared and summed:

  • Some increments are large, some small
  • Some are positive, some negative
  • But their squares sum, in the limit, to exactly \(t\)

Why This Matters for Stochastic Calculus

The quadratic variation \([W]_t = t\) is the foundation of Itô's lemma. In ordinary calculus:

\[d(W^2) = 2W \, dW\]

But in stochastic calculus, the quadratic variation contributes an extra term:

\[ d(W^2) = 2W \, dW + (dW)^2 = 2W \, dW + dt \]

The term \(dt\) comes directly from the fact that \((dW)^2 \approx dt\): more precisely, the quadratic variation of \(W\) accumulates at rate \(dt\).
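The heuristic \(d(W^2) = 2W\,dW + dt\) has an exact discrete counterpart: telescoping \(W_T^2 = \sum_i (W_{t_{i+1}}^2 - W_{t_i}^2)\) gives \(2\sum_i W_{t_i}\,\Delta W_i + \sum_i (\Delta W_i)^2\), and the last sum concentrates near \(T\). A minimal sketch on an assumed uniform grid over \([0, 1]\):

```python
import numpy as np

rng = np.random.default_rng(8)

T, n = 1.0, 1_000_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# Left-endpoint (Ito-style) Riemann sum approximating the integral of W dW
ito_integral = np.sum(W[:-1] * dW)

lhs = W[-1]**2                 # W_T^2
rhs = 2 * ito_integral + T     # 2 * int_0^T W dW + [W]_T, with [W]_T = T
```

The left-endpoint evaluation is essential: using the right endpoint or midpoint changes the limit, which is precisely why the choice of evaluation point matters in stochastic integration.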


  1. See the nice and straightforward post at Open Source Quant about moments.