Videos: Random error – the motivation for inference [7 min, 6 min]

Random error in estimates is the problem that formal methods of inference are designed to address. Confidence intervals, such as those obtained from bootstrap resampling, are one solution.
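As a sketch of the bootstrap idea mentioned above (not taken from the videos; the sample values and settings here are made up for illustration), we can resample our data with replacement many times and use the spread of the resampled means to form a percentile confidence interval:

```python
import random

random.seed(1)

# A hypothetical sample of 50 observations from some population.
sample = [random.gauss(10, 3) for _ in range(50)]

def bootstrap_ci(data, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    means = []
    for _ in range(n_boot):
        # Resample the data with replacement, same size as the original sample.
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(sample)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The interval tells us how far random sampling error could plausibly have taken our estimate from the population mean, which is exactly the "How wrong could I be?" question.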

In these two videos, we focus on the errors we make when we use data from a sample to tell us something about a whole population. We ask: “How wrong could I be?” We also contrast the effects of systematic biases with those of random errors as we take more and more observations.

(See also the Review Questions following these two videos.)

[Illustrated transcript (pdf) for Part I]

[Illustrated transcript (pdf) for Part II]

After you’ve watched these videos, you should be able to answer these questions:

Part 1 video

  • Do the problems caused by bad measurement systems and biased selection mechanisms go away when we get huge amounts of data?
  • Do the problems caused by confounding go away when we get huge amounts of data?
  • Do the problems caused by random error go away when we get huge amounts of data?
  • What is a sampling error?
  • Do we ever know how big the actual sampling error we incurred was? What do we try to do about that?
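The contrast the first three questions point at can be sketched in a small simulation (not from the video; the true mean, bias, and sample sizes are invented for illustration). A systematic measurement bias stays in the estimate no matter how much data we collect, while the random part of the error shrinks:

```python
import random

random.seed(42)
true_mean = 10.0
bias = 2.0  # systematic measurement bias added to every observation

for n in (100, 10_000, 1_000_000):
    # Each observation = truth + random noise + constant bias.
    biased = [random.gauss(true_mean, 3) + bias for _ in range(n)]
    est = sum(biased) / n
    print(f"n={n:>9,}: estimate={est:.3f}, error vs truth={est - true_mean:+.3f}")
```

As n grows, the estimate settles down, but it settles on the wrong value (truth + bias): huge amounts of data cure random error, not bad measurement or biased selection.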

Part 2 video

  • What effect does sample size have on sampling error?
  • For what two reasons are non-random selection mechanisms worse than random selection mechanisms?
  • What were the 5 “take home messages” from this video?
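The effect of sample size on sampling error can be checked empirically (a sketch, not from the video; the population is an invented standard normal). Repeatedly drawing samples of size n and measuring the spread of the sample means shows the standard error shrinking like 1/√n:

```python
import random
import statistics

random.seed(7)

def sampling_error_sd(n, reps=2000):
    """Empirical SD of the sample mean across many repeated samples of size n."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

for n in (25, 100, 400):
    # For a standard normal population, theory says SD of the mean = 1/sqrt(n).
    print(f"n={n:>3}: SD of sample mean ~ {sampling_error_sd(n):.3f} "
          f"(theory: {1 / n ** 0.5:.3f})")
```

Quadrupling the sample size only halves the typical sampling error, which is why sample size has diminishing returns.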

If you couldn’t answer a question, you might find it helpful to look at the illustrated transcripts (linked under each video).