Limit of a t distribution

Published: June 27, 2021

I like to write out proofs when I need them but have a hard time tracking them down. I did this before with my proof of Poincaré’s lemma. Here is one about the limit of the \(t\) distribution as its degrees of freedom become infinite.

The t distribution

The \(t\) distribution, with \(\nu\) degrees of freedom, is a probability distribution with a density function given by:

\[f_t(x; \nu) = \frac{\Gamma(\frac{\nu + 1}{2})}{\sqrt{\nu \pi} \Gamma(\frac{\nu}{2})} \left( 1 + \frac{x^2}{\nu} \right) ^ {-(\nu + 1) / 2}\]

where \(\Gamma( \cdot )\) is the gamma function. From this definition, we can see that the \(t\) distribution is symmetric about \(x = 0\), with a maximum density of \(\frac{\Gamma(\frac{\nu + 1}{2})}{\sqrt{\nu \pi} \Gamma(\frac{\nu}{2})}\) at \(x = 0\). If we recall the limit definition of the exponential,

\[\lim_{\nu \rightarrow \infty} \left( 1 + \frac{x}{\nu} \right) ^ \nu = e^x\]

we can also see that the \(t\) distribution looks something like an exponential distribution in \(x^2\): the kernel \(\left( 1 + \frac{x^2}{\nu} \right)^{-(\nu + 1)/2}\) resembles \(e^{-x^2 / 2}\) for large \(\nu\). Distributions with kernels like this have many relationships to the normal distribution.
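As a quick sanity check on the formula above (not part of any proof), here is a small Python sketch that evaluates the density directly and compares it against `scipy.stats.t`; the helper name `t_pdf` and the grid of test points are just illustrative choices.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t as t_dist

def t_pdf(x, nu):
    """Density of the t distribution with nu degrees of freedom,
    computed directly from the formula above (in log space for stability)."""
    log_norm = gammaln((nu + 1) / 2) - 0.5 * np.log(nu * np.pi) - gammaln(nu / 2)
    log_kernel = -(nu + 1) / 2 * np.log1p(x ** 2 / nu)
    return np.exp(log_norm + log_kernel)

x = np.linspace(-5, 5, 101)
for nu in (1, 5, 30):
    assert np.allclose(t_pdf(x, nu), t_dist.pdf(x, df=nu))
```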

Two facts about the \(t\) distribution are often stated and rarely cited or proved.

  1. The \(t\) distribution is like a normal distribution but with thicker tails.
  2. The \(t\) distribution becomes a normal distribution with infinite degrees of freedom.

Their proofs aren’t difficult, but I think they are worth writing out.

Proof of thicker tails

Consider the ratio of the \(t\) density to the standard normal density, \(f_n(x) = \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2}\):

\[\begin{align} \frac{f_t(x; \nu)}{f_n(x)} &= \frac{ \frac{\Gamma \left( \frac{\nu + 1}{2} \right)}{\sqrt{\nu \pi} \Gamma \left( \frac{\nu}{2} \right)} \left( 1 + \frac{x^2}{\nu} \right)^{-(\nu + 1)/2} }{ \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} } \\ &= \sqrt{\frac{2}{\nu}} \frac{\Gamma \left( \frac{\nu + 1}{2} \right)}{\Gamma \left( \frac{\nu}{2} \right)} \sqrt{\frac{e^{x^2}}{ \left( 1 + \frac{x^2}{\nu} \right)^{\nu + 1} } } \end{align}\]

For fixed \(\nu\), the numerator \(e^{x^2}\) grows exponentially in \(x^2\), while the denominator \(\left( 1 + \frac{x^2}{\nu} \right)^{\nu + 1}\) is only a polynomial in \(x\) of degree \(2(\nu + 1)\), so the expression under the square root diverges as \(x \rightarrow \pm \infty\). Hence

\[\begin{align} \lim_{x \rightarrow \pm \infty} \frac{f_t(x; \nu)}{f_n(x)} = \infty \end{align}\]

The normal distribution decreases to 0 faster than the \(t\) distribution does, and so the \(t\) distribution has thicker tails \(\square\).
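To see this numerically (a sketch, not part of the proof), one can evaluate the log of the density ratio at a few increasingly extreme points; \(\nu = 5\) and the particular \(x\) values below are arbitrary choices.

```python
from scipy.stats import norm, t as t_dist

# log f_t(x; nu) - log f_n(x) should grow without bound as |x| grows.
# Working in log space keeps the normal tail from underflowing to zero.
nu = 5
for x in (2.0, 5.0, 10.0, 20.0):
    log_ratio = t_dist.logpdf(x, df=nu) - norm.logpdf(x)
    print(f"x = {x:5.1f}   log(f_t / f_n) = {log_ratio:8.2f}")
```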

Proof of normality with infinite degrees of freedom

The proof here is also simple and based on limits. We need to be a little more careful technically, since we’ll be taking limits of products of factors that each depend on \(\nu\). But once we break the density into factors and show that each limit exists and is finite, the product rule for limits makes everything okay. First, the kernel:

\[\begin{align} \lim_{\nu \rightarrow \infty} \left(1 + \frac{x^2}{\nu} \right)^{-(\nu + 1) / 2} &= \sqrt{ \lim_{\nu \rightarrow \infty} \left( 1 + \frac{x^2}{\nu} \right )^{-1} \cdot \lim_{\nu \rightarrow \infty} \left( 1 + \frac{x^2}{\nu} \right )^{-\nu} } \\ &= \sqrt{ \lim_{\nu \rightarrow \infty} \left( 1 + \frac{x^2}{\nu} \right )^{-\nu} } \\ &= \sqrt{ e^{-x^2} } \\ &= e^{- x^2 / 2} \end{align}\]
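A minimal numerical check of this kernel limit (the point \(x = 1.7\) and the values of \(\nu\) below are arbitrary):

```python
import numpy as np

# (1 + x^2/nu)^(-(nu + 1)/2) should approach exp(-x^2/2) as nu grows.
x = 1.7
target = np.exp(-x ** 2 / 2)
for nu in (10, 100, 1_000, 10_000):
    kernel = (1 + x ** 2 / nu) ** (-(nu + 1) / 2)
    print(f"nu = {nu:6d}   kernel = {kernel:.6f}   target = {target:.6f}")
```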

For the normalizing constant, we can make use of Stirling’s approximation for the \(\Gamma\) function, \(\Gamma(z) \sim \sqrt{\frac{2 \pi}{z}} \left( \frac{z}{e} \right)^z\) as \(z \rightarrow \infty\).

\[\begin{align} \lim_{\nu \rightarrow \infty} \frac{\Gamma \left( \frac{\nu + 1}{2} \right)}{\sqrt{\nu \pi} \Gamma \left( \frac{\nu}{2} \right)} &= \lim_{\nu \rightarrow \infty} \frac{ \sqrt{ \frac{4 \pi}{\nu + 1} } \left( \frac{\nu + 1}{2e} \right)^{(\nu + 1)/2} }{ \sqrt{\nu \pi} \sqrt{ \frac{4 \pi}{\nu} } \left( \frac{\nu}{2e} \right)^{\nu/2} } \\ &= \sqrt{ \lim_{\nu \rightarrow \infty} \frac{1}{\pi (\nu + 1)} \left( \frac{\nu + 1}{2e} \right)^{\nu + 1} \left( \frac{2e}{\nu} \right)^\nu } \\ &= \sqrt{ \frac{1}{2 \pi e} \lim_{\nu \rightarrow \infty} \left( 1 + \frac{1}{\nu} \right)^{\nu} } \\ &= \frac{1}{ \sqrt{2 \pi}} \end{align}\]
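Again, a small check (not part of the proof) that the normalizing constant approaches \(1 / \sqrt{2 \pi} \approx 0.39894\):

```python
import numpy as np
from scipy.special import gammaln

# Gamma((nu+1)/2) / (sqrt(nu*pi) * Gamma(nu/2)) should tend to 1/sqrt(2*pi).
for nu in (10, 100, 1_000, 10_000):
    log_c = gammaln((nu + 1) / 2) - 0.5 * np.log(nu * np.pi) - gammaln(nu / 2)
    print(f"nu = {nu:6d}   constant = {np.exp(log_c):.6f}")
print(f"limit: 1/sqrt(2*pi) = {1 / np.sqrt(2 * np.pi):.6f}")
```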

Since these two limits are defined and finite, by the product rule we have:

\[\begin{align} \lim_{\nu \rightarrow \infty} f_t(x; \nu) &= \left( \lim_{\nu \rightarrow \infty} \frac{\Gamma \left( \frac{\nu + 1}{2} \right)}{\sqrt{\nu \pi} \Gamma \left( \frac{\nu}{2} \right)} \right) \cdot \left( \lim_{\nu \rightarrow \infty} \left(1 + \frac{x^2}{\nu} \right)^{-(\nu + 1) / 2} \right) \\ &= \frac{1}{\sqrt{2 \pi}} e^{-x^2 / 2} \end{align}\]

Thus, \(t \rightarrow \mathcal{N}(0, 1)\), the standard normal distribution, as \(\nu \rightarrow \infty\) \(\square\).
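Putting the two limits together, the whole density converges as well. A last sketch compares `scipy.stats.t` with a large (arbitrarily chosen) number of degrees of freedom against the standard normal:

```python
import numpy as np
from scipy.stats import norm, t as t_dist

# With very many degrees of freedom the t density is numerically
# indistinguishable from the standard normal density.
x = np.linspace(-6, 6, 1001)
max_abs_diff = np.max(np.abs(t_dist.pdf(x, df=1_000_000) - norm.pdf(x)))
print(max_abs_diff)  # a very small number
```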

Conclusions

I’ve come across some slightly faulty proofs on StackExchange and other websites that used hand-wavy arguments about distributions or sequences of values, tried to use Slutsky’s theorem without properly showing independence or convergence, or were simply more complicated than I felt they needed to be. The proofs here are simpler, based only on limits that any high schooler could understand. They may not elegantly show some underlying principle about why the \(t\) distribution behaves this way, but they get the job done.