When to use t-distribution versus normal distribution quantiles in constructing a confidence interval for the mean

To construct a confidence interval for the mean, we often use quantiles of the distribution of the standardized sample mean. Here, I include a list of cases where I would use quantiles of the t-distribution versus quantiles of the normal distribution for that purpose.

Note: the text below translates directly into an answer to when to use a t-test versus a z-test when testing a hypothesis about the mean parameter.


Standardized sample mean

Consider $X_1, \ldots, X_n$ – a sequence of i.i.d. random variables with mean $E(X_i) = \mu$ and variance $\text{var}(X_i) = \sigma^2$. To construct confidence intervals for the parameter $\mu$, we often use the standardized sample mean,

$$ \begin{aligned} \frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}}, \end{aligned} $$

or its version where $S_n$ – a consistent estimator of the true standard deviation $\sigma$ – is used, $\frac{\overline{X}_n - \mu}{S_n/\sqrt{n}}$; the latter is common in practice, as we typically do not know $\sigma$ and must estimate it from the data. Knowing the distribution of the standardized sample mean allows us to construct a confidence interval for the mean parameter $\mu$.
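
As a quick numerical illustration, here is a minimal Python sketch (using numpy; the sample, $\mu$, and $\sigma$ values are made-up assumptions) computing both versions of the standardized sample mean:

```python
import numpy as np

# Toy sample; the mean (5.0) and standard deviation (2.0) are assumed for illustration only.
rng = np.random.default_rng(123)
x = rng.normal(loc=5.0, scale=2.0, size=30)

mu = 5.0                                   # mean used in the standardization
sigma = 2.0                                # standard deviation, treated as known
s_n = np.std(x, ddof=1)                    # sample standard deviation S_n, a consistent estimator of sigma
n = len(x)

z_version = (np.mean(x) - mu) / (sigma / np.sqrt(n))  # standardized sample mean with known sigma
t_version = (np.mean(x) - mu) / (s_n / np.sqrt(n))    # standardized sample mean with sigma estimated by S_n
print(z_version, t_version)
```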

Example 1: constructing a confidence interval for $\mu$ with $z$-quantiles

Assume $X_1, \ldots, X_n$ are i.i.d. $\sim N(\mu,\sigma^2)$ and $\sigma$ is known. Then we have an exact distributional result for the standardized sample mean,

$$ \begin{aligned} \frac{\overline{X}_n - \mu}{\sigma/\sqrt{n}} \sim N(0,1). \end{aligned} $$

Let us denote by $z_{1-\frac{\alpha}{2}}$ the $\left(1-\frac{\alpha}{2}\right)$-quantile of the standard normal distribution $N(0,1)$. Since $N(0,1)$ is symmetric around $0$, we have $z_{\frac{\alpha}{2}} = -z_{1-\frac{\alpha}{2}}$ and we can write

$$ \begin{aligned} 1-\alpha = P\left(-z_{1-\frac{\alpha}{2}} \leq \frac{\overline{X}_n-\mu}{\sigma/\sqrt{n}} \leq z_{1-\frac{\alpha}{2}} \right) = P\left(\bar{X}_n-z_{1-\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \leq \mu \leq \bar{X}_n+z_{1-\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}} \right), \end{aligned} $$

which yields that $\left[ \bar{X}_n-z_{1-\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}},\; \bar{X}_n+z_{1-\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}\right]$ is a $(1-\alpha)$-confidence interval for the mean parameter $\mu$.
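
A minimal Python sketch of this construction (the sample, $\sigma$, and $\alpha$ below are illustrative assumptions; `scipy.stats.norm.ppf` supplies the $z_{1-\frac{\alpha}{2}}$ quantile):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2023)
sigma = 2.0                                    # known standard deviation
x = rng.normal(loc=5.0, scale=sigma, size=25)  # toy normal sample

alpha = 0.05
z = norm.ppf(1 - alpha / 2)                    # z_{1 - alpha/2} quantile of N(0, 1), ~1.96
half_width = z * sigma / np.sqrt(len(x))

ci_lower = np.mean(x) - half_width
ci_upper = np.mean(x) + half_width
print(ci_lower, ci_upper)                      # 95% CI for mu based on normal quantiles
```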

Example 2: constructing a confidence interval for $\mu$ with $t$-quantiles

Assume $X_1, \ldots, X_n$ are i.i.d. $\sim N(\mu,\sigma^2)$ and $\sigma$ is unknown. We use $S_n$ – a consistent sample estimator of the true standard deviation – to approximate $\sigma$, and we have an exact distributional result for the standardized sample mean,

$$ \begin{aligned} \frac{\overline{X}_n - \mu}{S_n/\sqrt{n}} \sim t_{n-1}. \end{aligned} $$

Let us denote by $t_{n-1,1-\frac{\alpha}{2}}$ the $\left(1-\frac{\alpha}{2}\right)$-quantile of the $t$-distribution with $n-1$ degrees of freedom. Since the $t$-distribution is symmetric around $0$, we have $t_{n-1,\frac{\alpha}{2}} = -t_{n-1,1-\frac{\alpha}{2}}$ and we can write

$$ \begin{aligned} 1-\alpha =P\left(-t_{n-1,1-\frac{\alpha}{2}} \leq \frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \leq t_{n-1,1-\frac{\alpha}{2}}\right) = P\left(\bar{X}_n-t_{n-1, 1-\frac{\alpha}{2}}\frac{S_n}{\sqrt{n}} \leq \mu \leq \bar{X}_n+t_{n-1, 1-\frac{\alpha}{2}} \frac{S_n}{\sqrt{n}} \right), \end{aligned} $$

which yields that $\left[ \bar{X}_n-t_{n-1, 1-\frac{\alpha}{2}} \frac{S_n}{\sqrt{n}},\; \bar{X}_n+t_{n-1, 1-\frac{\alpha}{2}} \frac{S_n}{\sqrt{n}}\right]$ is a $(1-\alpha)$-confidence interval for the mean parameter $\mu$.
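
And the analogous sketch for the $t$-based interval (same illustrative setup, but with $\sigma$ treated as unknown and estimated by $S_n$; `scipy.stats.t.ppf` supplies the $t_{n-1,1-\frac{\alpha}{2}}$ quantile):

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(2023)
x = rng.normal(loc=5.0, scale=2.0, size=25)    # toy normal sample; sigma treated as unknown

alpha = 0.05
n = len(x)
s_n = np.std(x, ddof=1)                        # sample standard deviation S_n
t_q = t.ppf(1 - alpha / 2, df=n - 1)           # t_{n-1, 1 - alpha/2} quantile

half_width = t_q * s_n / np.sqrt(n)
ci_lower = np.mean(x) - half_width
ci_upper = np.mean(x) + half_width
print(ci_lower, ci_upper)                      # 95% CI for mu based on t quantiles
```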

Cases

In many cases, whether to use quantiles of the Student's $t$-distribution versus the standard normal distribution is based on:

  • the distribution of $X_1, \ldots, X_n$,
  • whether $\sigma$ is known or not (and needs to be estimated, e.g. with $S_n$),
  • the sample size $n$.

Note: the cases below translate directly into an answer to when to use a $t$-test versus a $z$-test when testing a hypothesis about the mean parameter $\mu$, i.e. a test of $H_0: \mu = \mu_0$ versus $H_1: \mu < \mu_0$, $H_1: \mu \neq \mu_0$, or $H_1: \mu > \mu_0$.
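
For the testing analogue, a minimal Python sketch (the sample and $\mu_0$ are made up; the $t$-test uses `scipy.stats.ttest_1samp`, while the $z$-test statistic and p-value are computed by hand from normal quantiles, assuming a known $\sigma$):

```python
import numpy as np
from scipy.stats import norm, ttest_1samp

rng = np.random.default_rng(7)
x = rng.normal(loc=5.3, scale=2.0, size=20)    # toy sample
mu0 = 5.0                                      # hypothesized mean under H_0

# t-test: sigma unknown, t_{n-1} reference distribution (two-sided by default)
t_res = ttest_1samp(x, popmean=mu0)

# z-test: sigma assumed known (here sigma = 2.0), N(0, 1) reference distribution
sigma = 2.0
z_stat = (np.mean(x) - mu0) / (sigma / np.sqrt(len(x)))
z_pval = 2 * norm.sf(abs(z_stat))              # two-sided p-value

print(t_res.statistic, t_res.pvalue)
print(z_stat, z_pval)
```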

Case 1: observations from normal distribution, $\sigma$ known, any $n$

  • Observations $X_1, \ldots, X_n$ are from a normal $N(\mu, \sigma^2)$ distribution.
  • $\sigma$ known.
  • Any sample size $n$.

$\Rightarrow$ We have the exact result that $\frac{\bar{X}_{n}-\mu}{\sigma / \sqrt{n}} \sim N(0,1)$, and hence we use quantiles of the normal distribution in constructing the CI.

Case 2: observations from normal distribution, $\sigma$ unknown, small $n$

  • Observations $X_1, \ldots, X_n$ are from a normal $N(\mu, \sigma^2)$ distribution.
  • $\sigma$ unknown.
  • Small sample size ($n \leq 50$).

$\Rightarrow$ We use $S_{n}$ to approximate $\sigma$. We have the exact result that $\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \sim t_{n-1}$, and hence we use quantiles of the $t$-distribution with $n-1$ degrees of freedom in constructing the CI.

Case 3: observations from normal distribution, $\sigma$ unknown, large $n$

  • Observations $X_1, \ldots, X_n$ are from a normal $N(\mu, \sigma^2)$ distribution.
  • $\sigma$ unknown.
  • Moderate to large sample size ($n > 50$).

$\Rightarrow$ We use $S_{n}$ to approximate $\sigma$. Because $n$ is large enough, the asymptotics of Slutsky's theorem "kick in": we can replace $\sigma$ with $S_n$ – a consistent estimator of the true population standard deviation – and write that $\frac{\bar{X}_{n}-\mu}{S_n / \sqrt{n}}$ approximately follows $N(0,1)$. Because $n$ is large enough, we assume the normal approximation is good enough to use quantiles of the normal distribution in constructing the CI.

$\Rightarrow$ Another way to think about this case is that, as in Case 2, we have the exact result $\frac{\bar{X}_{n}-\mu}{S_{n} / \sqrt{n}} \sim t_{n-1}$, and with large $n$, the quantiles of the $t$-distribution with $n-1$ degrees of freedom are almost equivalent to the quantiles of the normal distribution.
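
A quick numerical check of this near-equivalence (a Python sketch; the sample sizes and $\alpha$ are arbitrary choices):

```python
from scipy.stats import norm, t

alpha = 0.05
z = norm.ppf(1 - alpha / 2)                     # ~1.96 for alpha = 0.05
for n in (10, 30, 50, 100, 1000):
    t_q = t.ppf(1 - alpha / 2, df=n - 1)        # t_{n-1, 1 - alpha/2}
    print(f"n = {n:4d}: t quantile = {t_q:.4f}, z quantile = {z:.4f}")
```

For $n$ around 50 the difference is already only about 0.05, and it shrinks further as $n$ grows.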

Case 4: observations from any distribution, $\sigma$ known, small $n$

  • Observations $X_1, \ldots, X_n$ are from a (non-normal) distribution with mean $E(X_i) = \mu$ and variance $\text{var}(X_i) = \sigma^2$ (for normally distributed $X_i$'s, see Cases 1-3).
  • $\sigma$ known.
  • Small sample size ($n \leq 50$).

$\Rightarrow$ Use the CLT to get that the standardized sample mean $\frac{\bar{X}_{n}-\mu}{\sigma / \sqrt{n}}$ approximately follows $N(0,1)$. Since this relies on the CLT approximation and the sample size is small, in practice we typically use quantiles of the $t$-distribution with $n-1$ degrees of freedom to get a more conservative (wider) CI; see the sketch after the note below.

  • Note: when the distribution of $X_1, \ldots, X_n$ is very skewed (e.g. Poisson), it may not be plausible that the CLT has already "kicked in", and other techniques may be needed.
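
To see what "more conservative" means here, a minimal Python sketch comparing the two half-widths for a small skewed sample with known $\sigma$ (the exponential data and its parameters are illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(42)
n = 20
x = rng.exponential(scale=2.0, size=n)   # skewed, non-normal toy sample

sigma = 2.0                              # known sd of the exponential(scale=2.0) distribution
alpha = 0.05
z_half = norm.ppf(1 - alpha / 2) * sigma / np.sqrt(n)
t_half = t.ppf(1 - alpha / 2, df=n - 1) * sigma / np.sqrt(n)

print("z-based half-width:", z_half)     # narrower
print("t-based half-width:", t_half)     # wider, i.e. more conservative
```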

Case 5: observations from any distribution, $\sigma$ known, large $n$

  • Observations $X_1, \ldots, X_n$ are from a (non-normal) distribution with mean $E(X_i) = \mu$ and variance $\text{var}(X_i) = \sigma^2$ (for normally distributed $X_i$'s, see Cases 1-3).
  • $\sigma$ known.
  • Moderate to large sample size ($n > 50$).

$\Rightarrow$ Use the CLT to get that the standardized sample mean $\frac{\bar{X}_{n}-\mu}{\sigma/ \sqrt{n}}$ approximately follows $N(0,1)$. Since we have a moderate to large sample size, we assume that the CLT "kicks in" and the approximation is good enough to use quantiles of the normal distribution.

Case 6: observations from any distribution, $\sigma$ unknown, small $n$

  • Observations $X_1, \ldots, X_n$ are from a (non-normal) distribution with mean $E(X_i) = \mu$ and variance $\text{var}(X_i) = \sigma^2$ (for normally distributed $X_i$'s, see Cases 1-3).
  • $\sigma$ unknown.
  • Small sample size ($n \leq 50$).

$\Rightarrow$ Use the CLT to get that the standardized sample mean is approximately normal, and use Slutsky's theorem to replace $\sigma$ with $S_n$ – a consistent estimator of the true population standard deviation. We then have that $\frac{\bar{X}_{n}-\mu}{S_n/ \sqrt{n}}$ approximately follows $N(0,1)$. Since this relies on both the CLT and Slutsky approximations and the sample size is small, in practice we typically use quantiles of the $t$-distribution with $n-1$ degrees of freedom to get a more conservative (wider) CI; see the simulation sketch after the notes below.

  • Note: two approximations are happening here!
  • Note: when the distribution of $X_1, \ldots, X_n$ is very skewed (e.g. Poisson), it may not be plausible that the CLT has already "kicked in", and other techniques may be needed.
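
A small simulation sketch of this case (Python; the exponential data, $n$, and the number of replications are illustrative assumptions) comparing the empirical coverage of the $z$-based and $t$-based intervals:

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
n, alpha, n_sim = 15, 0.05, 20_000
true_mu = 2.0                                  # mean of the exponential(scale=2.0) distribution

z_q = norm.ppf(1 - alpha / 2)
t_q = t.ppf(1 - alpha / 2, df=n - 1)

cover_z = cover_t = 0
for _ in range(n_sim):
    x = rng.exponential(scale=2.0, size=n)     # skewed, non-normal sample; sigma treated as unknown
    xbar, s_n = np.mean(x), np.std(x, ddof=1)
    cover_z += abs(xbar - true_mu) <= z_q * s_n / np.sqrt(n)
    cover_t += abs(xbar - true_mu) <= t_q * s_n / np.sqrt(n)

print("empirical coverage, z quantiles:", cover_z / n_sim)
print("empirical coverage, t quantiles:", cover_t / n_sim)
```

Under this setup both intervals tend to undercover the nominal 95% level somewhat because of the skewness, with the $t$-based interval typically a bit closer to it.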

Case 7: observations from any distribution, $\sigma$ unknown, large $n$

  • Observations $X_1, \ldots, X_n$ are from a (non-normal) distribution with mean $E(X_i) = \mu$ and variance $\text{var}(X_i) = \sigma^2$ (for normally distributed $X_i$'s, see Cases 1-3).
  • $\sigma$ unknown.
  • Moderate to large sample size ($n > 50$).

$\Rightarrow$ Use the CLT to get that the standardized sample mean is approximately normal, and use Slutsky's theorem to replace $\sigma$ with $S_n$ – a consistent estimator of the true population standard deviation. We then have that $\frac{\bar{X}_{n}-\mu}{S_n/ \sqrt{n}}$ approximately follows $N(0,1)$. Since we have a moderate to large sample size, we assume that both the CLT and Slutsky asymptotics "kick in" and the approximation is good enough to use quantiles of the normal distribution.

Disclaimer

The views, thoughts, and opinions expressed in the text belong solely to the author, and not necessarily to the author’s employer, organization, committee or other group or individual.

