1. Derivation of the Gaussian Integral
If you've ever worked with normal distributions, you've probably encountered the Gaussian integral. Understanding how it's derived is super useful, especially when you're dealing with things like Jensen's inequality 1 or validating statistical models. In this post, we'll walk through the derivation step by step, starting with our goal: computing

\[ \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi} \]
Once we nail this down, we'll extend it to the general form with variance \(\sigma^2\).
1.1 Defining the integral without variance
Let's start simple by assuming unit variance (i.e., \(\sigma^2 = 1\)). This makes the integral easier to work with initially:

\[ I = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx \]
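Before diving into the derivation, it's easy to sanity-check the claimed value numerically. Here's a minimal sketch using a midpoint Riemann sum (the helper name and the \(\pm 8\) truncation are my own choices; the tails beyond that contribute on the order of \(e^{-32}\), which is negligible):

```python
import math

def gauss_integral(lo=-8.0, hi=8.0, n=200_000):
    """Midpoint Riemann sum of exp(-x^2/2) over [lo, hi]."""
    h = (hi - lo) / n
    return h * sum(math.exp(-((lo + (i + 0.5) * h) ** 2) / 2) for i in range(n))

# Compare the numerical value against sqrt(2*pi)
print(gauss_integral(), math.sqrt(2 * math.pi))
```

The two printed values agree to many decimal places, which is reassuring before we prove the result properly.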
1.2 Squaring the integral
Here's the clever trick: we can't solve \(I\) directly using standard integration techniques, since \(e^{-x^2/2}\) has no elementary antiderivative. So instead, we'll square it! By introducing a second independent variable \(y\), we get

\[ I^2 = \left(\int_{-\infty}^{\infty} e^{-x^2/2}\,dx\right) \left(\int_{-\infty}^{\infty} e^{-y^2/2}\,dy\right) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy \]
This transformation is legit thanks to Fubini's theorem: our integrand is non-negative and integrable, so we're good to go.
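We can also check numerically that the squared integral really equals \(2\pi\). A minimal sketch with a 2D midpoint sum on \([-6, 6]^2\) (the grid size and function name are illustrative choices):

```python
import math

def squared_integral(lo=-6.0, hi=6.0, n=300):
    """2D midpoint Riemann sum of exp(-(x^2 + y^2)/2) over [lo, hi]^2."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        for j in range(n):
            y = lo + (j + 0.5) * h
            total += math.exp(-(x * x + y * y) / 2)
    return total * h * h

# Compare the double integral against 2*pi
print(squared_integral(), 2 * math.pi)
```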
1.3 Converting to polar coordinates
After squaring the integral, notice that the exponent \(x^2 + y^2\) is exactly the squared radial distance \(r^2\) in polar coordinates. So let's make that substitution:

\[ x = r\cos\theta, \qquad y = r\sin\theta \]
Then the area element picks up the Jacobian factor \(r\):

\[ dx\,dy = r\,dr\,d\theta \]
Since we're covering the entire \(\mathbb{R}^2\) plane, our new bounds in polar coordinates are:

\[ 0 \le r < \infty, \qquad 0 \le \theta < 2\pi \]
Putting it all together, our double integral becomes:

\[ I^2 = \int_0^{2\pi} \int_0^{\infty} e^{-r^2/2}\, r\,dr\,d\theta \]
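As a sanity check of the polar form: the \(\theta\) integral contributes a plain factor of \(2\pi\), and the radial factor can be summed numerically (the helper name and the cutoff at \(r = 10\) are my own choices):

```python
import math

def polar_value(r_max=10.0, n=100_000):
    """2*pi (the trivial theta integral) times a midpoint sum
    of r * exp(-r^2/2) on [0, r_max]."""
    h = r_max / n
    radial = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        radial += r * math.exp(-r * r / 2)
    return 2 * math.pi * radial * h

# This should match the Cartesian double integral, i.e. 2*pi
print(polar_value(), 2 * math.pi)
```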
1.3.1 Evaluating the radial integral
Now let's tackle the radial part, where we need to compute:

\[ \int_0^{\infty} e^{-r^2/2}\, r\,dr \]
A quick u-substitution does the trick:

\[ u = \frac{r^2}{2}, \qquad du = r\,dr \]
And voilà:

\[ \int_0^{\infty} e^{-r^2/2}\, r\,dr = \int_0^{\infty} e^{-u}\,du = 1 \]
1.3.1.1 Why does the integral equal 1?
Unlike the Gaussian itself, the function \(e^{-u}\) does have an elementary antiderivative:

\[ \int e^{-u}\,du = -e^{-u} + C \]
Evaluating at the bounds,

\[ \left[-e^{-u}\right]_0^{\infty} = \left(\lim_{u \to \infty} -e^{-u}\right) - \left(-e^{0}\right) = 0 + 1 = 1 \]
This result also has a probabilistic interpretation: it is the total mass of an exponential distribution with rate \(1\).
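Numerically, the partial mass \(\int_0^R e^{-u}\,du = 1 - e^{-R}\) approaches 1 as \(R\) grows. A quick sketch (the helper name and grid are illustrative):

```python
import math

def exp_mass(upper, n=100_000):
    """Midpoint Riemann sum of exp(-u) on [0, upper]."""
    h = upper / n
    return h * sum(math.exp(-(i + 0.5) * h) for i in range(n))

# Compare against the analytic value 1 - e^{-R} for growing R
for R in (1, 5, 20):
    print(R, exp_mass(R), 1 - math.exp(-R))
```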
1.3.2 Evaluating the angular integral
The angular part is straightforward:

\[ \int_0^{2\pi} d\theta = 2\pi \]
Multiplying these together gives us:

\[ I^2 = 1 \cdot 2\pi = 2\pi \quad \Longrightarrow \quad I = \sqrt{2\pi} \]
We take the positive root since our integrand is always positive. And there we have it, the famous \(\sqrt{2\pi}\)!
1.4 Generalizing to arbitrary variance \(\sigma^2\)
What if our variance isn't 1? No problem! Let's tackle the more general case:

\[ \int_{-\infty}^{\infty} e^{-x^2/(2\sigma^2)}\,dx \]
We'll use a simple substitution:

\[ u = \frac{x}{\sigma}, \qquad dx = \sigma\,du \]
Substituting this in:

\[ \int_{-\infty}^{\infty} e^{-u^2/2}\,\sigma\,du = \sigma \int_{-\infty}^{\infty} e^{-u^2/2}\,du = \sigma\sqrt{2\pi} \]
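A quick numerical check of the \(\sigma\sqrt{2\pi}\) scaling for a few values of \(\sigma\) (truncating the integral at \(\pm 10\sigma\); the helper name is my own):

```python
import math

def gauss_sigma(sigma, n=200_000):
    """Midpoint Riemann sum of exp(-x^2 / (2 sigma^2)) over [-10s, 10s]."""
    lo, hi = -10 * sigma, 10 * sigma
    h = (hi - lo) / n
    return h * sum(
        math.exp(-((lo + (i + 0.5) * h) ** 2) / (2 * sigma * sigma))
        for i in range(n)
    )

# Compare against sigma * sqrt(2*pi) for several variances
for s in (0.5, 1.0, 3.0):
    print(s, gauss_sigma(s), s * math.sqrt(2 * math.pi))
```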
1.5 What about a shift in the mean?
Finally, what happens if we shift the distribution by some mean \(\mu\)? Turns out, it doesn't change the result at all!

\[ \int_{-\infty}^{\infty} e^{-(x-\mu)^2/(2\sigma^2)}\,dx = \sigma\sqrt{2\pi} \]
This makes intuitive sense — we're just sliding the entire bell curve over, but we're still integrating over the whole real line. The substitution \(u = x - \mu\) makes this rigorous, leaving our integration limits unchanged.
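The shift invariance is easy to confirm numerically by centring the integration window on \(\mu\) (a sketch with illustrative names and parameter values):

```python
import math

def shifted_gauss(mu, sigma=1.0, n=200_000, half_width=10.0):
    """Midpoint Riemann sum of exp(-(x - mu)^2 / (2 sigma^2)),
    integrated over mu +/- half_width * sigma."""
    lo = mu - half_width * sigma
    hi = mu + half_width * sigma
    h = (hi - lo) / n
    return h * sum(
        math.exp(-((lo + (i + 0.5) * h - mu) ** 2) / (2 * sigma * sigma))
        for i in range(n)
    )

# The result is the same value regardless of the shift
for mu in (-3.0, 0.0, 7.5):
    print(mu, shifted_gauss(mu))
```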
1.6 What about making the result equal to 1?
This leads to a fundamental property of the normal distribution. Dividing the shifted integrand by the integral's value \(\sigma\sqrt{2\pi}\) normalizes it to 1, which gives us precisely the probability density function (PDF) of the normal distribution:

\[ f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2/(2\sigma^2)}, \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1 \]

This normalization condition ensures that the density integrates to unity over the entire real line, which is a requirement for any valid probability distribution. That is exactly why the otherwise mysterious \(\frac{1}{\sigma\sqrt{2\pi}}\) factor appears in front of the normal density.
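As a final numerical check, the normalized density integrates to 1 for any \(\mu\) and \(\sigma\) (a sketch; the helper name and the sample values \(\mu = 2\), \(\sigma = 0.7\) are illustrative):

```python
import math

def normal_pdf_mass(mu, sigma, n=200_000):
    """Midpoint Riemann sum of the normal density over mu +/- 10 sigma."""
    norm = 1 / (sigma * math.sqrt(2 * math.pi))
    lo, hi = mu - 10 * sigma, mu + 10 * sigma
    h = (hi - lo) / n
    return h * sum(
        norm * math.exp(-((lo + (i + 0.5) * h - mu) ** 2) / (2 * sigma * sigma))
        for i in range(n)
    )

# Total probability mass of N(2, 0.7^2): should be very close to 1
print(normal_pdf_mass(2.0, 0.7))
```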
1. For instance, see chapter 9.3.1 in Rebonato (2018) 2, where he assesses the convexity adjustment in the special case of Gaussian random variables. ↩
2. Riccardo Rebonato. Bond Pricing and Yield Curve Modelling: A Structural Approach. Cambridge University Press, 2018. doi:10.1017/9781316694169. ↩