Foundations of inference
Among the key concepts in statistics is drawing conclusions about a population using information in a sample; the process is called statistical inference. Using both computational methods and well-developed mathematical theory, we can understand how one dataset differs from another, even when the two datasets have been collected under identical settings. In this part, we walk through the key concepts and terms that will be applied more explicitly in later chapters.
- Chapter 11 Hypothesis testing with randomization describes randomization, which involves repeatedly permuting observations to represent scenarios in which there is no association between two variables of interest (see the first sketch after this list).
- Chapter 12 Confidence intervals with bootstrapping describes bootstrapping, which involves repeatedly sampling (with replacement) from the observed data in order to produce many samples that are similar to, but different from, the original data (see the second sketch after this list).
- Chapter 13 Inference with mathematical models introduces the Central Limit Theorem, a mathematical result that approximates the dataset-to-dataset variability seen through randomization and bootstrapping (a standard statement of the theorem follows the sketches below).
- Chapter 14 Decision errors presents a structure for describing when and how errors can happen within statistical inference.
- Chapter 15 Applications: Foundations includes an application on the malaria vaccine case study, where the topics from this part of the book are fully developed.
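To give a first feel for the randomization idea from Chapter 11, here is a minimal sketch of a permutation test in Python. The data, group sizes, and seed are all hypothetical, and the method is developed properly in the chapter itself; this is only meant to show the shuffle-and-recompute loop.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical data: binary outcomes (1 = improved) for two groups.
treatment = np.array([1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
control = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])

observed_diff = treatment.mean() - control.mean()

# Randomization: repeatedly shuffle the group labels, simulating a
# world in which group membership is unrelated to the outcome.
combined = np.concatenate([treatment, control])
n_treat = len(treatment)

diffs = []
for _ in range(10_000):
    rng.shuffle(combined)
    diffs.append(combined[:n_treat].mean() - combined[n_treat:].mean())

# Fraction of shuffled datasets with a difference at least as large
# as the observed one (a one-sided p-value).
p_value = np.mean(np.array(diffs) >= observed_diff)
print(f"observed difference: {observed_diff:.2f}, p-value: {p_value:.3f}")
```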
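Similarly, here is a minimal sketch of the bootstrapping idea from Chapter 12, again with made-up data. The key move is resampling the observed data with replacement, each time producing a dataset of the same size as the original.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical sample: 25 observed measurements.
sample = rng.normal(loc=100, scale=15, size=25)

# Bootstrapping: resample with replacement many times; each
# resample is the same size as the original sample.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10_000)
])

# A 95% bootstrap percentile interval for the population mean.
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap interval: ({lower:.1f}, {upper:.1f})")
```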
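Finally, as a preview of the mathematical approach in Chapter 13, one standard statement of the Central Limit Theorem for a sample mean (the notation here is a common convention, introduced fully in that chapter): for a random sample of size $n$ from a population with mean $\mu$ and standard deviation $\sigma$, the sample mean $\bar{x}$ is approximately normally distributed,

$$\bar{x} \sim N\left(\mu,\ \frac{\sigma}{\sqrt{n}}\right),$$

where the second argument, $\sigma/\sqrt{n}$, is the standard error of $\bar{x}$: the mathematical description of the same dataset-to-dataset variability that randomization and bootstrapping measure by simulation.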
Although computational and mathematical methods are often both appropriate (and give similar results), your study of both approaches should convince you that (1) there is almost never a single "correct" approach, and (2) there are different ways to quantify the variability seen from dataset to dataset.