Unbiased Estimators: Conceptual Overview Video



Central to most forms of data analysis is the fact that the sample mean (ȳ) and the sample variance (s², computed with n − 1 in the denominator) are unbiased estimators of their corresponding population parameters, μ and σ².

An unbiased estimator is a statistic whose expected value (E) equals the true population parameter. An expected value is a type of mean: it is the mean of a statistic rather than the mean of a set of individual observations. Furthermore, it is the mean of an infinite number of instances of that statistic, or all possible instances, or at least a very, very large number of them. The final proviso is that the sample size must be held constant across these instances. This means that if we drew an infinite number of samples of n observations from a given population, calculated ȳ and s² for each sample, and then computed the mean of all of those sample means and the mean of all of those sample variances, those two means would equal the true μ and σ² of the population from which the samples were drawn.
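The thought experiment above can be approximated by simulation. The sketch below (with an assumed normal population and arbitrary choices of μ, σ, n, and the number of replications) draws many samples of a fixed size n, averages the sample means and the (n − 1)-denominator sample variances across replications, and shows that these averages land close to the true μ and σ².

```python
import random
import statistics

# Assumed setup: a normal population with known parameters. The specific
# values of mu, sigma, n, and reps are illustrative choices, not from the text.
random.seed(42)
mu, sigma = 10.0, 3.0     # true population mean and standard deviation
n = 5                     # fixed sample size, held constant across instances
reps = 200_000            # stand-in for "a very, very large number of instances"

mean_of_means = 0.0
mean_of_vars = 0.0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean_of_means += statistics.mean(sample)
    mean_of_vars += statistics.variance(sample)  # (n - 1) in the denominator

mean_of_means /= reps
mean_of_vars /= reps

print(round(mean_of_means, 2))  # close to mu = 10
print(round(mean_of_vars, 2))   # close to sigma^2 = 9
```

With only 200,000 replications rather than an infinite number, the averages will not match μ and σ² exactly, but they should agree to within a small simulation error.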

  • Recognize the difference between a population and a sample
  • Understand the difference between population parameters and sample statistics
  • Understand how a statistic can be an “unbiased” estimator of a parameter
  • Demonstrate an in-depth understanding of the effect of sample size on estimates of parameters
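To connect the last objective to the (n − 1) denominator: the n-denominator variance understates σ² by the factor (n − 1)/n, so its bias is large for small samples and shrinks as n grows. The sketch below (again with an assumed normal population and illustrative parameter values) compares the two denominators at two sample sizes.

```python
import random
import statistics

# Assumed setup: illustrative parameter values, not from the text.
random.seed(0)
mu, sigma = 0.0, 2.0   # true sigma^2 = 4
reps = 100_000

results = {}
for n in (3, 30):
    avg_biased = 0.0
    avg_unbiased = 0.0
    for _ in range(reps):
        sample = [random.gauss(mu, sigma) for _ in range(n)]
        avg_biased += statistics.pvariance(sample)   # divides by n
        avg_unbiased += statistics.variance(sample)  # divides by n - 1
    results[n] = (avg_biased / reps, avg_unbiased / reps)
    # E[n-denominator variance] = sigma^2 * (n - 1) / n, so the
    # understatement is pronounced at n = 3 and modest at n = 30.
    print(n, round(results[n][0], 2), round(results[n][1], 2))
```

At n = 3 the n-denominator average should come out near 4 × 2/3 ≈ 2.67, while at n = 30 it should be near 4 × 29/30 ≈ 3.87; the (n − 1)-denominator average stays near 4 at both sample sizes.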