# 2 Summarizing and visualizing data

This chapter focuses on the mechanics and construction of summary statistics and graphs. We use statistical software for generating the summaries and graphs presented in this chapter and book. However, since this might be your first exposure to these concepts, we take our time in this chapter to detail how to create them. Mastery of the content presented in this chapter will be crucial for understanding the methods and techniques introduced in the rest of the book.

## 2.1 Exploring numerical data

In this section we will explore techniques for summarizing numerical variables. For example, consider the loan_amount variable from the loan50 data set, which represents the loan size for each of 50 loans in the data set. This variable is numerical since we can sensibly discuss the numerical difference of the size of two loans. On the other hand, area codes and zip codes are not numerical, but rather they are categorical variables.

Throughout this section and the next, we will apply numerical methods using the loan50 and county data sets, which were introduced in Section 1.2. If you’d like to review the variables from either data set, see Tables 1.4 and 1.6.

The county data can be found in the usdata package and the loan50 data can be found in the openintro package.

### 2.1.1 Scatterplots for paired data

A scatterplot provides a case-by-case view of data for two numerical variables. In Figure 1.2, a scatterplot was used to examine the homeownership rate against the fraction of housing units that were part of multi-unit properties (e.g., apartments) in the county data set. Another scatterplot is shown in Figure 2.1, comparing the total income of a borrower (total_income) and the amount they borrowed (loan_amount) for the loan50 data set. In any scatterplot, each point represents a single case. Since there are 50 cases in loan50, there are 50 points in Figure 2.1.

Looking at Figure 2.1, we see that there are many borrowers with income below $100,000 on the left side of the graph, while there are a handful of borrowers with income above $250,000.

Figure 2.2 shows a plot of median household income against the poverty rate for 3142 counties. What can be said about the relationship between these variables?

The relationship is evidently nonlinear, as highlighted by the dashed line. This is different from previous scatterplots we have seen, which indicate very little, if any, curvature in the trend.

What do scatterplots reveal about the data, and how are they useful?

Describe two variables that would have a horseshoe-shaped association in a scatterplot ($$\cap$$ or $$\frown$$).

### 2.1.2 Dot plots and the mean

Sometimes we are interested in the distribution of a single variable. In these cases, a dot plot provides the most basic of displays. A dot plot is a one-variable scatterplot; an example using the interest rate of 50 loans is shown in Figure 2.3.

The mean, often called the average, is a common way to measure the center of a distribution of data. To compute the mean interest rate, we add up all the interest rates and divide by the number of observations.

The sample mean is often labelled $$\bar{x}.$$ The letter $$x$$ is being used as a generic placeholder for the variable of interest and the bar over the $$x$$ communicates we’re looking at the average interest rate, which for these 50 loans is 11.57%. It’s useful to think of the mean as the balancing point of the distribution, and it’s shown as a triangle in Figure 2.3.

Mean. The sample mean can be calculated as the sum of the observed values divided by the number of observations:

$\bar{x} = \frac{x_1 + x_2 + \cdots + x_n}{n}$
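The formula above can be sketched in a few lines of code. Python is used here purely for illustration (it is not necessarily the software used for the book's analyses), and the four rates are the example observations quoted in this section rather than the full sample.

```python
# Sample mean: add up the observations and divide by n.
# The four rates below are example observations quoted in the text
# (x_1, x_2, x_3, and x_50); a real analysis would use all 50 values.
rates = [10.9, 9.92, 26.3, 6.08]

x_bar = sum(rates) / len(rates)   # (x_1 + x_2 + ... + x_n) / n
print(round(x_bar, 2))            # mean of these four observations only
```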

Examine the equation for the mean. What does $$x_1$$ correspond to? And $$x_2$$? Can you infer a general meaning to what $$x_i$$ might represent?

What was $$n$$ in this sample of loans?

The loan50 data set represents a sample from a larger population of loans made through Lending Club. We could compute a mean for the entire population in the same way as the sample mean. However, the population mean has a special label: $$\mu.$$ The symbol $$\mu$$ is the Greek letter mu and represents the average of all observations in the population. Sometimes a subscript, such as $$_x,$$ is used to represent which variable the population mean refers to, e.g., $$\mu_x.$$ Oftentimes it is too expensive to measure the population mean precisely, so we estimate $$\mu$$ using the sample mean, $$\bar{x}.$$

The Greek letter $$\mu$$ is pronounced mu.

Although we cannot calculate the average interest rate across all loans in the population, we can estimate the population value using the sample data. Based on the sample of 50 loans, what would be a reasonable estimate of $$\mu_x,$$ the mean interest rate for all loans in the full data set?

The sample mean, 11.57%, provides a rough estimate of $$\mu_x.$$ While it is not perfect, this is our single best guess, called a point estimate, of the average interest rate of all the loans in the population under study. In Chapter 5 and beyond, we will develop tools to characterize the accuracy of point estimates, like the sample mean. As you might have guessed, point estimates based on larger samples tend to be more accurate than those based on smaller samples.

The mean is useful because it allows us to rescale or standardize a metric into something more easily interpretable and comparable. Suppose we would like to understand if a new drug is more effective at treating asthma attacks than the standard drug. A trial of 1500 adults is set up, where 500 receive the new drug, and 1000 receive a standard drug in the control group:

|                      | New drug | Standard drug |
|----------------------|----------|---------------|
| Number of patients   | 500      | 1000          |
| Total asthma attacks | 200      | 300           |

Comparing the raw counts of 200 to 300 asthma attacks would make it appear that the new drug is better, but this is an artifact of the imbalanced group sizes. Instead, we should look at the average number of asthma attacks per patient in each group:

• New drug: $$200 / 500 = 0.4$$ asthma attacks per patient
• Standard drug: $$300 / 1000 = 0.3$$ asthma attacks per patient

The standard drug has a lower average number of asthma attacks per patient than the new drug in the treatment group.

Provide another example where the mean is useful for making comparisons.

Emilio opened a food truck last year where he sells burritos, and his business has stabilized over the last 3 months. Over that 3-month period, he has made $11,000 while working 625 hours. Emilio's average hourly earnings provide a useful statistic for evaluating whether his venture is, at least from a financial perspective, worth it:

$\frac{\$11{,}000}{625\text{ hours}} = \$17.60\text{ per hour}$

By knowing his average hourly wage, Emilio has put his earnings into a standard unit that is easier to compare with many other jobs that he might consider.

Suppose we want to compute the average income per person in the US. To do so, we might first think to take the mean of the per capita incomes across the 3,142 counties in the county data set. What would be a better approach?

The county data set is special in that each county actually represents many individual people. If we were to simply average across the income variable, we would be treating counties with 5,000 and 5,000,000 residents equally in the calculations. Instead, we should compute the total income for each county, add up all the counties' totals, and then divide by the number of people in all the counties. If we completed these steps with the county data, we would find that the per capita income for the US is $30,861. Had we computed the simple mean of per capita income across counties, the result would have been just $26,093!

This example used what is called a weighted mean. For more information on this topic, see the online supplement regarding weighted means.

### 2.1.3 Histograms and shape

Dot plots show the exact value for each observation. They are useful for small data sets but can become hard to read with larger samples. Rather than showing the value of each observation, we prefer to think of the value as belonging to a bin.
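To make the idea of a bin concrete, here is a small Python sketch (illustrative only; the ten values are a hypothetical subset, not the full loan50 sample). Note one difference from the text's convention: this sketch would put a value landing exactly on a boundary into the upper bin rather than the lower one.

```python
# Count how many observations fall into each 2.5-point-wide bin,
# starting at 5.0 (hypothetical values, not the full loan50 data).
rates = [5.31, 6.08, 7.35, 7.96, 9.43, 10.42, 12.62, 17.09, 21.45, 26.30]

bins = {}
for r in rates:
    lower = 5.0 + 2.5 * int((r - 5.0) // 2.5)   # left edge of r's bin
    bins[(lower, lower + 2.5)] = bins.get((lower, lower + 2.5), 0) + 1

for (lo, hi), count in sorted(bins.items()):
    print(f"{lo}% - {hi}%: {count}")
```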
For example, in the loan50 data set, we created a table of counts for the number of loans with interest rates between 5.0% and 7.5%, then the number of loans with rates between 7.5% and 10.0%, and so on. Observations that fall on the boundary of a bin (e.g., 10.00%) are allocated to the lower bin. The tabulation is shown in Table 2.1, and the binned counts are plotted as bars in Figure 2.4 into what is called a histogram. Note that the histogram resembles a more heavily binned version of the stacked dot plot shown in Figure 2.3.

Table 2.1: Counts for the binned interest_rate data.

| Interest rate | 5%–7.5% | 7.5%–10% | 10%–12.5% | 12.5%–15% | 15%–17.5% | 17.5%–20% | 20%–22.5% | 22.5%–25% | 25%–27.5% |
|---------------|---------|----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| Count         | 11      | 15       | 8         | 4         | 5         | 4         | 1         | 1         | 1         |

Histograms provide a view of the data density. Higher bars represent where the data are relatively more common. For instance, there are many more loans with rates between 5% and 10% than loans with rates between 20% and 25% in the data set. The bars make it easy to see how the density of the data changes relative to the interest rate.

Histograms are especially convenient for understanding the shape of the data distribution. Figure 2.4 suggests that most loans have rates under 15%, while only a handful of loans have rates above 20%. When data trail off to the right in this way and have a longer right tail, the shape is said to be right skewed. Data sets with the reverse characteristic – a long, thinner tail to the left – are said to be left skewed. We also say that such a distribution has a long left tail. Data sets that show roughly equal trailing off in both directions are called symmetric.

When data trail off in one direction, the distribution has a long tail. If a distribution has a long left tail, it is left skewed. If a distribution has a long right tail, it is right skewed.
Besides the mean (since it was labelled), what can you see in the dot plot in Figure 2.3 that you cannot see in the histogram in Figure 2.4?

In addition to looking at whether a distribution is skewed or symmetric, histograms can be used to identify modes. A mode is represented by a prominent peak in the distribution. There is only one prominent peak in the histogram of interest_rate.

A definition of mode sometimes taught in math classes is the value with the most occurrences in the data set. However, for many real-world data sets, it is common to have no two observations with the same value, making this definition impractical in data analysis.

Figure 2.5 shows histograms that have one, two, or three prominent peaks. Such distributions are called unimodal, bimodal, and multimodal, respectively. Any distribution with more than two prominent peaks is called multimodal. Notice that there was one prominent peak in the unimodal distribution with a second less prominent peak that was not counted, since it only differs from its neighboring bins by a few observations.

Figure 2.4 reveals only one prominent mode in the interest rate. Is the distribution unimodal, bimodal, or multimodal?

Height measurements of young students and adult teachers at a K-3 elementary school were taken. How many modes would you expect in this height data set?

Looking for modes isn't about finding a clear and correct answer about the number of modes in a distribution, which is why prominent is not rigorously defined in this book. The most important part of this examination is to better understand your data.

### 2.1.4 Variance and standard deviation

The mean was introduced as a method to describe the center of a data set, and variability in the data is also important. Here, we introduce two measures of variability: the variance and the standard deviation. Both of these are very useful in data analysis, even though their formulas are a bit tedious to calculate by hand.
The standard deviation is the easier of the two to comprehend, and it roughly describes how far away the typical observation is from the mean. We call the distance of an observation from its mean its deviation. Below are the deviations for the $$1^{st},$$ $$2^{nd},$$ $$3^{rd},$$ and $$50^{th}$$ observations in the interest_rate variable:

$x_1 - \bar{x} = 10.9 - 11.57 = -0.67$

$x_2 - \bar{x} = 9.92 - 11.57 = -1.65$

$x_3 - \bar{x} = 26.3 - 11.57 = 14.73$

$\vdots$

$x_{50} - \bar{x} = 6.08 - 11.57 = -5.49$

If we square these deviations and then take an average, the result is equal to the sample variance, denoted by $$s^2$$:

\begin{align} s^2 &= \frac{(-0.67)^2 + (-1.65)^2 + (14.73)^2 + \cdots + (-5.49)^2}{50 - 1} \\ &= \frac{0.45 + 2.72 + \cdots + 30.14}{49} = 25.52 \end{align}

We divide by $$n - 1,$$ rather than dividing by $$n,$$ when computing a sample's variance; there's some mathematical nuance here, but the end result is that doing this makes the statistic slightly more reliable and useful.

Notice that squaring the deviations does two things. First, it makes large values relatively much larger. Second, it gets rid of any negative signs.

The standard deviation is defined as the square root of the variance:

$s = \sqrt{25.52} = 5.05$

While often omitted, a subscript of $$_x$$ may be added to the variance and standard deviation, i.e., $$s_x^2$$ and $$s_x^{},$$ if it is useful as a reminder that these are the variance and standard deviation of the observations represented by $$x_1,$$ $$x_2,$$ …, $$x_n.$$

Variance and standard deviation. The variance is the average squared distance from the mean. The standard deviation is the square root of the variance. The standard deviation is useful when considering how far the data are distributed from the mean.

The standard deviation represents the typical deviation of observations from the mean. Often about 68% of the data will be within one standard deviation of the mean and about 95% will be within two standard deviations.
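The variance and standard deviation calculation can be sketched directly from the definition. This Python sketch (illustrative only) uses the four example observations quoted above rather than all 50 rates, so its output differs from the full-sample values of 25.52 and 5.05.

```python
import statistics

# Sample variance: average the squared deviations, dividing by n - 1;
# the standard deviation is its square root.
def sample_var_sd(xs):
    n = len(xs)
    x_bar = sum(xs) / n
    var = sum((x - x_bar) ** 2 for x in xs) / (n - 1)
    return var, var ** 0.5

xs = [10.9, 9.92, 26.3, 6.08]   # the four example observations above
var, sd = sample_var_sd(xs)
print(round(var, 2), round(sd, 2))
```

Dividing by `n - 1` matches the sample-variance convention used in the text (and in Python's statistics module).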
These percentages are not strict rules, however. Like the mean, the population values for variance and standard deviation have special symbols: $$\sigma^2$$ for the variance and $$\sigma$$ for the standard deviation. The Greek letter $$\sigma$$ is pronounced sigma.

A good description of the shape of a distribution should include modality and whether the distribution is symmetric or skewed to one side. Using Figure 2.7 as an example, explain why such a description is important.

Describe the distribution of the interest_rate variable using the histogram in Figure 2.4. The description should incorporate the center, variability, and shape of the distribution, and it should also be placed in context. Also note any especially unusual cases.

The distribution of interest rates is unimodal and skewed to the high end. Many of the rates fall near the mean at 11.57%, and most fall within one standard deviation (5.05%) of the mean. There are a few exceptionally large interest rates in the sample that are above 20%.

In practice, the variance and standard deviation are sometimes used as a means to an end, where the "end" is being able to accurately estimate the uncertainty associated with a sample statistic. For example, in Chapter 5 the standard deviation is used in calculations that help us understand how much a sample mean varies from one sample to the next.

### 2.1.5 Box plots, quartiles, and the median

A box plot summarizes a data set using five statistics while also identifying unusual observations. Figure 2.8 provides a dot plot alongside a box plot of the interest_rate variable from the loan50 data set.

The dark line inside the box represents the median, which splits the data in half: 50% of the data fall below this value and 50% fall above it. Since in the loan50 data set there are 50 observations (an even number), the median is defined as the average of the two observations closest to the $$50^{th}$$ percentile.
Table 2.2 shows all interest rates, arranged in ascending order. We can see that the $$25^{th}$$ and the $$26^{th}$$ values are both 9.93, which corresponds to the dark line in the box plot in Figure 2.8.

Table 2.2: Interest rates from the loan50 data set, arranged in ascending order (read across each row).

| Obs.  |       |       |       |       |       |       |       |       |       |       |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 1–10  | 5.31  | 5.31  | 5.32  | 6.08  | 6.08  | 6.08  | 6.71  | 6.71  | 7.34  | 7.35  |
| 11–20 | 7.35  | 7.96  | 7.96  | 7.96  | 7.97  | 9.43  | 9.43  | 9.44  | 9.44  | 9.44  |
| 21–30 | 9.92  | 9.92  | 9.92  | 9.92  | 9.93  | 9.93  | 10.42 | 10.42 | 10.90 | 10.90 |
| 31–40 | 10.91 | 10.91 | 10.91 | 11.98 | 12.62 | 12.62 | 12.62 | 14.08 | 15.04 | 16.02 |
| 41–50 | 17.09 | 17.09 | 17.09 | 18.06 | 18.45 | 19.42 | 20.00 | 21.45 | 24.85 | 26.30 |

When there are an odd number of observations, there will be exactly one observation that splits the data into two halves, and in such a case that observation is the median (no average needed).

Median: the number in the middle. If the data are ordered from smallest to largest, the median is the observation right in the middle. If there are an even number of observations, there will be two values in the middle, and the median is taken as their average.

The second step in building a box plot is drawing a rectangle to represent the middle 50% of the data. The length of the box is called the interquartile range, or IQR for short. It, like the standard deviation, is a measure of variability in data. The more variable the data, the larger the standard deviation and IQR tend to be. The two boundaries of the box are called the first quartile (the $$25^{th}$$ percentile, i.e., 25% of the data fall below this value) and the third quartile (the $$75^{th}$$ percentile, i.e., 75% of the data fall below this value), and these are often labelled $$Q_1$$ and $$Q_3,$$ respectively.

Interquartile range (IQR). The interquartile range is the length of the box in a box plot. It is computed as $$IQR = Q_3 - Q_1,$$ where $$Q_1$$ and $$Q_3$$ are the $$25^{th}$$ and $$75^{th}$$ percentiles, respectively.
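The "number in the middle" rule for the median can be sketched as follows (Python, illustrative only):

```python
# Median: sort the data; with an odd number of observations take the
# middle value, with an even number average the two middle values.
def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(median([1, 3, 5, 7]))     # even n: (3 + 5) / 2 = 4.0
print(median([1, 3, 5, 7, 9]))  # odd n: the middle value, 5
```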
The $$\alpha^{th}$$ percentile is a number with $$\alpha\%$$ of the observations below it and $$(100-\alpha)\%$$ of the observations above it. For example, the $$90^{th}$$ percentile of SAT scores is the value of the SAT score with 90% of students below that value and 10% of students above that value.

What percent of the data fall between $$Q_1$$ and the median? What percent is between the median and $$Q_3$$?

Extending out from the box, the whiskers attempt to capture the data outside of the box. The whiskers of a box plot reach to the minimum and the maximum values in the data, unless there are points that are considered unusually high or unusually low, which are identified as potential outliers by the box plot. These are labelled with a dot on the box plot. The purpose of labelling the outlying points – instead of extending the whiskers to the minimum and maximum observed values – is to help identify any observations that appear to be unusually distant from the rest of the data. There are a variety of formulas for determining whether a particular data point is considered an outlier, and different statistical software use different formulas. A commonly used formula is that any observation beyond $$1.5\times IQR$$ away from the first or the third quartile is considered an outlier. In a sense, the box is like the body of the box plot and the whiskers are like its arms trying to reach the rest of the data, up to the outliers.

Outliers are extreme. An outlier is an observation that appears extreme relative to the rest of the data.

Examining data for outliers serves many useful purposes, including

• identifying strong skew in the distribution,
• identifying possible data collection or data entry errors, and
• providing insight into interesting properties of the data.

Keep in mind, however, that some data sets have a naturally long skew and outlying points do not represent any sort of problem in the data set.
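The $$1.5\times IQR$$ rule can be sketched as follows. This Python sketch is illustrative only; as noted above, quartile conventions differ across statistical software, and here we use the "inclusive" method from Python's statistics module.

```python
import statistics

# Flag observations beyond 1.5 x IQR from the first or third quartile.
def outliers(xs):
    q1, _, q3 = statistics.quantiles(xs, n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in xs if x < lo or x > hi]

print(outliers([2, 3, 4, 5, 6, 7, 50]))  # the 50 is flagged
```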
Using the box plot in Figure 2.8, estimate the values of $$Q_1,$$ $$Q_3,$$ and the IQR for interest_rate in the loan50 data set.

### 2.1.6 Robust statistics

How are the sample statistics of the interest_rate data set affected by the observation, 26.3%? What would have happened if this loan had instead been only 15%? What would happen to these summary statistics if the observation at 26.3% had been even larger, say 35%? The three conjectured scenarios are plotted alongside the original data in Figure 2.9, and sample statistics are computed under each scenario in Table 2.3.

Table 2.3: A comparison of how the median, IQR, mean, and standard deviation change as the value of an extreme observation from the original interest data changes. The median and IQR are robust; the mean and SD are not.

| Scenario          | Median | IQR  | Mean | SD   |
|-------------------|--------|------|------|------|
| Original data     | 9.93   | 5.75 | 11.6 | 5.05 |
| Move 26.3% to 15% | 9.93   | 5.75 | 11.3 | 4.61 |
| Move 26.3% to 35% | 9.93   | 5.75 | 11.7 | 5.68 |

Which is more affected by extreme observations, the mean or median? Is the standard deviation or IQR more affected by extreme observations?

The median and IQR are called robust statistics because extreme observations have little effect on their values: moving the most extreme value generally has little influence on these statistics. On the other hand, the mean and standard deviation are more heavily influenced by changes in extreme observations, which can be important in some situations.

The median and IQR did not change under the three scenarios in Table 2.3. Why might this be the case?

The median and IQR are only sensitive to numbers near $$Q_1,$$ the median, and $$Q_3.$$ Since values in these regions are stable in the three data sets, the median and IQR estimates are also stable.

The distribution of loan amounts in the loan50 data set is right skewed, with a few large loans lingering out into the right tail.
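The robustness contrast in Table 2.3 can be demonstrated with a quick numerical experiment (Python, illustrative only; toy data rather than the actual loan50 rates):

```python
import statistics

# Move the most extreme value and watch which statistic reacts.
base  = [6, 8, 9, 10, 11, 12, 26]
moved = [6, 8, 9, 10, 11, 12, 40]   # same data, extreme value moved

print(statistics.median(base), statistics.median(moved))  # both 10
print(round(statistics.mean(base), 2), round(statistics.mean(moved), 2))
```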
If you wanted to understand the typical loan size, should you be more interested in the mean or median?

### 2.1.7 Transforming data

When data are very strongly skewed, we sometimes transform them so they are easier to model. Consider the histogram of county populations shown on the left in Figure 2.10, which shows extreme skew. What characteristics of the plot keep it from being useful?

Nearly all of the data fall into the left-most bin, and the extreme skew obscures many of the potentially interesting details at the low values.

There are some standard transformations that may be useful for strongly right skewed data where much of the data is positive but clustered near zero. A transformation is a rescaling of the data using a function. For instance, a plot of the logarithm (base 10) of county populations results in the new histogram on the right in Figure 2.10. The transformed data is symmetric, and any potential outliers appear much less extreme than in the original data set. By reining in the outliers and extreme skew, transformations often make it easier to build statistical models for the data.

Transformations can also be applied to one or both variables in a scatterplot. A scatterplot of the population change from 2010 to 2017 against the population in 2010 is shown in Figure 2.11. In this first scatterplot, it's hard to decipher any interesting patterns because the population variable is so strongly skewed (left plot). However, if we apply a log$$_{10}$$ transformation to the population variable, as shown in Figure 2.11, a positive association between the variables is revealed (right plot). In fact, we may be interested in fitting a trend line to the data when we explore methods around fitting regression lines in Chapter 3.

Transformations other than the logarithm can be useful, too.
For instance, the square root $$(\sqrt{\text{original observation}})$$ and inverse $$\bigg ( \frac{1}{\text{original observation}} \bigg )$$ are commonly used by data scientists. Common goals in transforming data are to see the data structure differently, reduce skew, assist in modeling, or straighten a nonlinear relationship in a scatterplot.

### 2.1.8 Mapping data

The county data set offers many numerical variables that we could plot using dot plots, scatterplots, or box plots, but these plots can miss the geographic nature of the data. When we encounter geographic data, we should create an intensity map, where colors are used to show higher and lower values of a variable. Figures 2.12 and 2.13 show intensity maps for poverty rate in percent (poverty), unemployment rate in percent (unemployment_rate), homeownership rate in percent (homeownership), and median household income in $1000s (median_hh_income). The color key indicates which colors correspond to which values. The intensity maps are not generally very helpful for getting precise values in any given county, but they are very helpful for seeing geographic trends and generating interesting research questions or hypotheses.

What interesting features are evident in the poverty and unemployment rate intensity maps?

Poverty rates are evidently higher in a few locations. Notably, the deep south shows higher poverty rates, as does much of Arizona and New Mexico. High poverty rates are evident in the Mississippi flood plains a little north of New Orleans and also in a large section of Kentucky.

The unemployment rate follows similar trends, and we can see correspondence between the two variables. In fact, it makes sense for higher rates of unemployment to be closely related to poverty rates. One observation stands out when comparing the two maps: the poverty rate is much higher than the unemployment rate, meaning that while many people may be working, they are not making enough to break out of poverty.

What interesting features are evident in the median household income intensity map in Figure 2.13?

### 2.1.9 Exercises

1. Mammal life spans. Data were collected on life spans (in years) and gestation lengths (in days) for 62 mammals. A scatterplot of life span versus length of gestation is shown below.

1. What type of an association is apparent between life span and length of gestation?

2. What type of an association would you expect to see if the axes of the plot were reversed, i.e. if we plotted length of gestation versus life span?

3. Are life span and length of gestation independent? Explain your reasoning.

2. Associations. Indicate which of the plots show (a) a positive association, (b) a negative association, or (c) no association. Also determine if the positive and negative associations are linear or nonlinear. Each part may refer to more than one plot.

3. Reproducing bacteria. Suppose that there is only sufficient space and nutrients to support one million bacterial cells in a petri dish. You place a few bacterial cells in this petri dish, allow them to reproduce freely, and record the number of bacterial cells in the dish over time. Sketch a plot representing the relationship between number of bacterial cells and time.

4. Office productivity. Office productivity is relatively low when the employees feel no stress about their work or job security. However, high levels of stress can also lead to reduced employee productivity. Sketch a plot to represent the relationship between stress and productivity.

5. Parameters and statistics. Identify which value represents the sample mean and which value represents the claimed population mean.

1. American households spent an average of about $52 in 2007 on Halloween merchandise such as costumes, decorations and candy. To see if this number had changed, researchers conducted a new survey in 2008 before industry numbers were reported. The survey included 1,500 households and found that average Halloween spending was $58 per household.

2. The average GPA of students in 2001 at a private university was 3.37. A survey on a sample of 203 students from this university yielded an average GPA of 3.59 a decade later.

6. Sleeping in college. A recent article in a college newspaper stated that college students get an average of 5.5 hrs of sleep each night. A student who was skeptical about this value decided to conduct a survey by randomly sampling 25 students. On average, the sampled students slept 6.25 hours per night. Identify which value represents the sample mean and which value represents the claimed population mean.

7. Days off at a mining plant. Workers at a particular mining site receive an average of 35 days paid vacation, which is lower than the national average. The manager of this plant is under pressure from a local union to increase the amount of paid time off. However, he does not want to give more days off to the workers because that would be costly. Instead he decides he should fire 10 employees in such a way as to raise the average number of days off that are reported by his employees. In order to achieve this goal, should he fire employees who have the most days off, the fewest days off, or about the average number of days off?

8. Medians and IQRs. For each part, compare distributions A and B based on their medians and IQRs. You do not need to calculate these statistics; simply state how the medians and IQRs compare. Make sure to explain your reasoning.

1. A: 3, 5, 6, 7, 9; B: 3, 5, 6, 7, 20

2. A: 3, 5, 6, 7, 9; B: 3, 5, 7, 8, 9

3. A: 1, 2, 3, 4, 5; B: 6, 7, 8, 9, 10

4. A: 0, 10, 50, 60, 100; B: 0, 100, 500, 600, 1000

9. Means and SDs. For each part, compare distributions A and B based on their means and standard deviations. You do not need to calculate these statistics; simply state how the means and the standard deviations compare. Make sure to explain your reasoning. Hint: It may be useful to sketch dot plots of the distributions.

1. A: 3, 5, 5, 5, 8, 11, 11, 11, 13; B: 3, 5, 5, 5, 8, 11, 11, 11, 20

2. A: -20, 0, 0, 0, 15, 25, 30, 30; B: -40, 0, 0, 0, 15, 25, 30, 30

3. A: 0, 2, 4, 6, 8, 10; B: 20, 22, 24, 26, 28, 30

4. A: 100, 200, 300, 400, 500; B: 0, 50, 300, 550, 600

10. Histograms and box plots. Describe the distribution in the histograms below and match them to the box plots.

11. Air quality. Daily air quality is measured by the air quality index (AQI) reported by the Environmental Protection Agency. This index reports the pollution level and what associated health effects might be a concern. The index is calculated for five major air pollutants regulated by the Clean Air Act and takes values from 0 to 300, where a higher value indicates lower air quality. AQI was reported for a sample of 91 days in 2011 in Durham, NC. The relative frequency histogram below shows the distribution of the AQI values on these days.

1. Estimate the median AQI value of this sample.

2. Would you expect the mean AQI value of this sample to be higher or lower than the median? Explain your reasoning.

3. Estimate Q1, Q3, and IQR for the distribution.

4. Would any of the days in this sample be considered to have an unusually low or high AQI? Explain your reasoning.

12. Median vs. mean. Estimate the median for the 400 observations shown in the histogram, and note whether you expect the mean to be higher or lower than the median.

13. Histograms vs. box plots. Compare the two plots below. What characteristics of the distribution are apparent in the histogram and not in the box plot? What characteristics are apparent in the box plot but not in the histogram?

14. Facebook friends. Facebook data indicate that 50% of Facebook users have 100 or more friends, and that the average friend count of users is 190. What do these findings suggest about the shape of the distribution of number of friends of Facebook users?

15. Distributions and appropriate statistics. For each of the following, state whether you expect the distribution to be symmetric, right skewed, or left skewed. Also specify whether the mean or median would best represent a typical observation in the data, and whether the variability of observations would be best represented using the standard deviation or IQR. Explain your reasoning.

1. Number of pets per household.

2. Distance to work, i.e. number of miles between work and home.

16. Distributions and appropriate statistics. For each of the following, state whether you expect the distribution to be symmetric, right skewed, or left skewed. Also specify whether the mean or median would best represent a typical observation in the data, and whether the variability of observations would be best represented using the standard deviation or IQR. Explain your reasoning.

1. Housing prices in a country where 25% of the houses cost below $350,000, 50% of the houses cost below $450,000, 75% of the houses cost below $1,000,000, and there are a meaningful number of houses that cost more than $6,000,000.

2. Housing prices in a country where 25% of the houses cost below $300,000, 50% of the houses cost below $600,000, 75% of the houses cost below $900,000, and very few houses cost more than $1,200,000.

3. Number of alcoholic drinks consumed by college students in a given week. Assume that most of these students don’t drink since they are under 21 years old, and only a few drink excessively.

4. Annual salaries of the employees at a Fortune 500 company where only a few high level executives earn much higher salaries than all the other employees.

17. Income at the coffee shop. The first histogram below shows the distribution of the yearly incomes of 40 patrons at a college coffee shop. Suppose two new people walk into the coffee shop: one making $225,000 and the other $250,000. The second histogram shows the new income distribution. Summary statistics are also provided, rounded to the nearest whole number.

         n    Min       Q1        Median    Mean      Max        SD
Before   40   $60,679   $60,818   $65,238   $65,089   $69,885    $2,122
After    42   $60,679   $60,838   $65,352   $73,299   $250,000   $37,321

1. Would the mean or the median best represent what we might think of as a typical income for the 42 patrons at this coffee shop? What does this say about the robustness of the two measures?

2. Would the standard deviation or the IQR best represent the amount of variability in the incomes of the 42 patrons at this coffee shop? What does this say about the robustness of the two measures?

18. Midrange. The midrange of a distribution is defined as the average of the maximum and the minimum of that distribution. Is this statistic robust to outliers and extreme skew? Explain your reasoning.

19. Commute times. The US census collects data on the time it takes Americans to commute to work, among many other variables. The histogram below shows the distribution of average commute times in 3,142 US counties in 2017. Also shown below is a spatial intensity map of the same data.50

1. Describe the numerical distribution and comment on whether or not a log transformation may be advisable for these data.

2. Describe the spatial distribution of commuting times using the map.

20. Hispanic population. The US census collects data on race and ethnicity of Americans, among many other variables. The histogram below shows the distribution of the percentage of the population that is Hispanic in 3,142 counties in the US in 2010. Also shown is a histogram of logs of these values.51

1. Describe the numerical distribution and comment on why we might want to use log-transformed values in analyzing or modeling these data.

2. What features of the distribution of the Hispanic population in US counties are apparent in the map but not in the histogram? What features are apparent in the histogram but not the map?

3. Is one visualization more appropriate or helpful than the other? Explain your reasoning.

## 2.2 Exploring categorical data

In this section, we will introduce tables and other basic tools for categorical data that are used throughout this book. The loan50 data set represents a sample from a larger loan data set called loans. This larger data set contains information on 10,000 loans made through Lending Club. We will examine the relationship between homeownership, which for the loans data can take a value of rent, mortgage (owns but has a mortgage), or own, and application_type, which indicates whether the loan application was made with a partner or whether it was an individual application.

The data can be found in the openintro package: loans_full_schema.

### 2.2.1 Contingency tables and bar plots

Table 2.4 summarizes two variables: application_type and homeownership. A table that summarizes data for two categorical variables in this way is called a contingency table. Each value in the table represents the number of times a particular combination of variable outcomes occurred. For example, the value 3496 corresponds to the number of loans in the data set where the borrower rents their home and the application type was by an individual. Row and column totals are also included. The row totals provide the total counts across each row and the column totals down each column. We can also create a table that shows only the overall percentages or proportions for each combination of categories, or we can create a table for a single variable, such as the one shown in Table 2.5 for the homeownership variable.

Table 2.4: A contingency table for application type and homeownership.
homeownership
application_type mortgage own rent Total
joint 950 183 362 1495
individual 3839 1170 3496 8505
Total 4789 1353 3858 10000
Table 2.5: A table summarizing the frequencies for each value of the homeownership variable: mortgage, own, and rent.
homeownership Count
mortgage 4789
own 1353
rent 3858
Total 10000
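The row, column, and grand totals in Tables 2.4 and 2.5 are nothing more than sums over the cells of the contingency table. The following is a minimal Python sketch (the book's own analysis uses R) of how those margins are built from the cell counts:

```python
# Cell counts from Table 2.4: (application_type, homeownership) -> count.
counts = {
    ("joint", "mortgage"): 950, ("joint", "own"): 183, ("joint", "rent"): 362,
    ("individual", "mortgage"): 3839, ("individual", "own"): 1170,
    ("individual", "rent"): 3496,
}

def margins(counts):
    """Sum the cell counts across each row and down each column."""
    row_totals, col_totals = {}, {}
    for (row, col), n in counts.items():
        row_totals[row] = row_totals.get(row, 0) + n
        col_totals[col] = col_totals.get(col, 0) + n
    return row_totals, col_totals

row_totals, col_totals = margins(counts)
print(row_totals)                # {'joint': 1495, 'individual': 8505}
print(col_totals)                # {'mortgage': 4789, 'own': 1353, 'rent': 3858}
print(sum(row_totals.values()))  # 10000
```

The column totals reproduce the one-variable summary in Table 2.5 exactly, which is one way to see that a contingency table contains the single-variable tables as its margins.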

A bar plot is a common way to display a single categorical variable. The left panel of Figure 2.14 shows a bar plot for the homeownership variable. In the right panel, the counts are converted into proportions, showing the proportion of observations that are in each level.

### 2.2.2 Using a bar plot with two variables

We can display the distributions of two categorical variables on a bar plot concurrently. Such plots are generally useful for visualizing the relationship between two categorical variables. Figure 2.15 shows three such plots that visualize the relationship between the homeownership and application_type variables. Plot A in Figure 2.15 is a stacked bar plot. This plot most clearly displays that loan applicants most commonly live in mortgaged homes. It is difficult to say, based on this plot alone, how application types vary across the levels of homeownership. Plot B is a dodged bar plot. This plot most clearly displays that within each level of homeownership, individual applications are more common than joint applications. Finally, Plot C is a standardized bar plot (also known as a filled bar plot). This plot most clearly displays that joint applications are most common among applicants who live in mortgaged homes, compared to renters and owners. This type of visualization is helpful for understanding the fraction of individual or joint loan applications for borrowers in each level of homeownership. Additionally, since the proportions of joint and individual loans vary across the groups, we can conclude that the two variables are associated for this sample.

Examine the three bar plots in Figure 2.15. When is the stacked, dodged, or standardized bar plot the most useful?

The stacked bar plot is most useful when it’s reasonable to assign one variable as the explanatory variable (here homeownership) and the other variable as the response (here application_type), since we are effectively grouping by one variable first and then breaking it down by the others.

Dodged bar plots are more agnostic in their display about which variable, if any, represents the explanatory variable and which the response. It is also easy to discern the number of cases in each of the six different group combinations. However, one downside is that this display tends to require more horizontal space; the narrowness of Plot B compared to the other two in Figure 2.15 makes the plot feel a bit cramped. Additionally, when two groups are of very different sizes, as we see in the own group relative to either of the other two groups, it is difficult to discern whether there is an association between the variables.

The standardized stacked bar plot is helpful if the primary variable in the stacked bar plot is relatively imbalanced, e.g., the own category has only about a third of the observations in the mortgage category, making the simple stacked bar plot less useful for checking for an association. The major downside of the standardized version is that we lose all sense of how many cases each of the bars represents.

### 2.2.3 Mosaic plots

A mosaic plot is a visualization technique suitable for contingency tables that resembles a standardized stacked bar plot with the benefit that we still see the relative group sizes of the primary variable as well.

To get started in creating our first mosaic plot, we’ll break a square into columns for each category of the homeownership variable, with the result shown in Plot A of Figure 2.16. Each column represents a level of homeownership, and the column widths correspond to the proportion of loans in each of those categories. For instance, there are fewer loans where the borrower is an owner than where the borrower has a mortgage. In general, mosaic plots use box areas to represent the number of cases in each category.

Plot B in Figure 2.16 displays the relationship between homeownership and application type. Each column is split proportionally to the number of loans from individual and joint borrowers. For example, the second column represents loans where the borrower has a mortgage, and it was divided into individual loans (upper) and joint loans (lower). As another example, the bottom segment of the third column represents loans where the borrower owns their home and applied jointly, while the upper segment of this column represents borrowers who are homeowners and filed individually. We can again use this plot to see that the homeownership and application_type variables are associated, since some columns are divided in different vertical locations than others, which was the same technique used for checking an association in the standardized stacked bar plot.
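The geometry of a mosaic plot follows directly from the contingency table. The sketch below (in Python; the book's figures are made in R) derives the column widths and the vertical split heights for the joint segments from the counts in Table 2.4:

```python
# Column widths and vertical splits for a mosaic plot like Figure 2.16,
# computed from the counts in Table 2.4.
col_counts = {"mortgage": 4789, "own": 1353, "rent": 3858}
joint = {"mortgage": 950, "own": 183, "rent": 362}
total = sum(col_counts.values())  # 10,000 loans in all

for level, n in col_counts.items():
    width = n / total                # horizontal share of the square (column width)
    joint_height = joint[level] / n  # vertical share of the column given to joint loans
    print(f"{level}: width {width:.3f}, joint segment height {joint_height:.3f}")
```

Note that the joint segment heights (0.198 for mortgage, 0.135 for own, 0.094 for rent) are exactly the joint row of the column proportions in Table 2.7, which is why a mosaic plot can be read as a standardized stacked bar plot with informative column widths.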

In Figure 2.16, we chose to first split by the homeowner status of the borrower. However, we could have instead first split by the application type, as in Figure 2.17. Like with the bar plots, it’s common to use the explanatory variable to represent the first split in a mosaic plot, and then for the response to break up each level of the explanatory variable, if these labels are reasonable to attach to the variables under consideration.

### 2.2.4 Row and column proportions

In the previous sections we inspected visualizations of two categorical variables in bar plots and mosaic plots. However, we have not discussed how the proportions shown in the bar and mosaic plots are calculated. In this section we will investigate the fractional breakdown of one variable across the levels of another, and we can modify our contingency table to provide such a view. Table 2.6 shows row proportions for Table 2.4, which are computed as the counts divided by their row totals. The value 3496 at the intersection of individual and rent is replaced by $$3496 / 8505 = 0.411,$$ i.e., 3496 divided by its row total, 8505. So what does 0.411 represent? It corresponds to the proportion of individual applicants who rent.

Table 2.6: A contingency table with row proportions for the application type and homeownership variables.
homeownership
application_type rent mortgage own Total
joint 0.242 0.635 0.122 1
individual 0.411 0.451 0.138 1

A contingency table of the column proportions is computed in a similar way, where each value is computed as the count divided by the corresponding column total. Table 2.7 shows such a table, and here the value 0.906 indicates that 90.6% of renters applied as individuals for the loan. This rate is higher compared to loans from people with mortgages (80.2%) or who own their home (86.5%). Because these rates vary between the three levels of homeownership (rent, mortgage, own), this provides evidence that the application_type and homeownership variables may be associated.

Table 2.7: A contingency table with column proportions for the application type and homeownership variables.
homeownership
application_type rent mortgage own
joint 0.094 0.198 0.135
individual 0.906 0.802 0.865
Total 1.000 1.000 1.000
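Both kinds of proportions are the same counts rescaled by different totals. A small Python sketch (the book itself uses R) reproducing the highlighted values from Tables 2.6 and 2.7:

```python
# Cell counts from Table 2.4, organized by row (application type).
counts = {
    "joint":      {"mortgage": 950,  "own": 183,  "rent": 362},
    "individual": {"mortgage": 3839, "own": 1170, "rent": 3496},
}

# Row proportions (Table 2.6): divide each count by its row total.
row_props = {
    app: {h: n / sum(row.values()) for h, n in row.items()}
    for app, row in counts.items()
}

# Column proportions (Table 2.7): divide each count by its column total.
col_totals = {h: sum(counts[app][h] for app in counts) for h in counts["joint"]}
col_props = {
    app: {h: counts[app][h] / col_totals[h] for h in col_totals}
    for app in counts
}

print(round(row_props["individual"]["rent"], 3))  # 3496 / 8505 = 0.411
print(round(col_props["individual"]["rent"], 3))  # 3496 / 3858 = 0.906
```

The same cell, 3496, yields 0.411 or 0.906 depending on which total it is divided by, which is why the two tables answer different questions.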

We could also have checked for an association between application_type and homeownership in Table 2.6 using row proportions. When comparing these row proportions, we would look down columns to see if the fraction of loans where the borrower rents, has a mortgage, or owns varied across the two application types.

1. What does 0.451 represent in Table 2.6?

2. What does 0.802 represent in Table 2.7?52

1. What does 0.122 represent in Table 2.6?

2. What does 0.135 represent in Table 2.7?53

Data scientists use statistics to filter spam from incoming email messages. By noting specific characteristics of an email, a data scientist may be able to classify some emails as spam or not spam with high accuracy. One such characteristic is whether the email contains no numbers, small numbers, or big numbers. Another characteristic is the email format, which indicates whether or not an email has any HTML content, such as bolded text. We’ll focus on email format and spam status using the email data set; these variables are summarized in a contingency table in Table 2.8. Which would be more helpful to someone hoping to classify email as spam or regular email for this table: row or column proportions?

A data scientist would be interested in how the proportion of spam changes within each email format. This corresponds to column proportions: the proportion of spam in plain text emails and the proportion of spam in HTML emails.

If we generate the column proportions, we can see that a higher fraction of plain text emails are spam ($$209/1195 = 17.5\%$$) than HTML emails ($$158/2726 = 5.8\%$$). This information on its own is insufficient to classify an email as spam or not spam, as over 80% of plain text emails are not spam. Yet, when we carefully combine this information with many other characteristics, we stand a reasonable chance of being able to classify some emails as spam or not spam with confidence. This example points out that row and column proportions are not equivalent. Before settling on one form for a table, it is important to consider each to ensure that the most useful table is constructed. However, sometimes it simply isn’t clear which, if either, is more useful.

Table 2.8: A contingency table for spam and format
spam HTML text Total
not spam 2568 986 3554
spam 158 209 367
Total 2726 1195 3921
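The spam rates quoted above follow directly from the column totals in Table 2.8. A quick arithmetic check, sketched in Python:

```python
# Column proportions from Table 2.8: spam counts divided by format totals.
spam_html, total_html = 158, 2726
spam_text, total_text = 209, 1195

rate_text = spam_text / total_text  # fraction of plain text emails that are spam
rate_html = spam_html / total_html  # fraction of HTML emails that are spam

print(round(rate_text, 3))  # 0.175, i.e., 17.5%
print(round(rate_html, 3))  # 0.058, i.e., 5.8%
```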

Look back to Table 2.6 and Table 2.7. Are there any obvious scenarios where one might be more useful than the other?

None that we think are obvious! What is distinct about the loan data versus the email example is that the two loan variables don’t have a clear explanatory-response relationship that we might hypothesize. Usually it is most useful to “condition” on the explanatory variable. For instance, in the email example, the email format was seen as a possible explanatory variable of whether the message was spam, so we would find it more interesting to compute the relative frequencies (proportions) for each email format.

### 2.2.5 Pie charts

A pie chart is shown in Figure 2.18 alongside a bar plot representing the same information. Pie charts can be useful for giving a high-level overview to show how a set of cases break down. However, it is also difficult to decipher details in a pie chart. For example, it’s not immediately obvious that there are more loans where the borrower has a mortgage than loans where the borrower rents when looking at the pie chart, while this detail is very obvious in the bar plot. While pie charts can be useful, we prefer bar plots for their ease in comparing groups.

### 2.2.6 Comparing numerical data across groups

Some of the more interesting investigations can be considered by examining numerical data across groups. In this section we will expand on a few methods we’ve already seen to make plots for numerical data from multiple groups on the same graph as well as introduce a few new methods for comparing numerical data across groups.

We will revisit the county dataset and compare the median household income for counties that gained population from 2010 to 2017 versus counties that had no gain. While we might like to make a causal connection between income and population growth, remember that these are observational data and so such an interpretation would be, at best, half-baked.

We have data on 3142 counties in the United States. We are missing 2017 population data for 3 of them, and of the remaining 3139 counties, the population increased from 2010 to 2017 in 1541 and did not increase in the remaining 1598. Table 2.9 shows a sample of 5 observations from each group.

Table 2.9: The median household income from a random sample of 5 counties with population gain between 2010 to 2017 and another random sample of 5 counties with no population gain.
State County Change in population, % Gain / No gain Median household income
Arizona Navajo County 1.75 gain 38798
Louisiana Jefferson Davis Parish 0.57 gain 40744
Texas Victoria County 2.25 gain 55740
Virginia Shenandoah County 1.68 gain 53934
Wisconsin Clark County 0.36 gain 49131
Alabama Marengo County -3.63 no gain 32255
Iowa O’Brien County -1.73 no gain 56314
Kansas Geary County -7.95 no gain 46096
West Virginia Ohio County -3.86 no gain 45777
West Virginia Preston County -0.01 no gain 46673

Color can be used to split histograms for numerical variables by levels of a categorical variable. An example of this is shown in Plot A of Figure 2.19. The side-by-side box plot is another traditional tool for comparing across groups. An example is shown in Plot B of Figure 2.19, where there are two box plots, one for each group, placed into one plotting window and drawn on the same scale.

Use the plots in Figure 2.19 to compare the incomes for counties across the two groups. What do you notice about the approximate center of each group? What do you notice about the variability between groups? Is the shape relatively consistent between groups? How many prominent modes are there for each group?54

What components of each plot in Figure 2.19 do you find most useful?55

Another useful visualization for comparing numerical data across groups is a ridge plot, which combines density plots for various groups drawn on the same scale in a single plotting window. Figure 2.20 displays a ridge plot for the distribution of median household income in counties, split by whether there was a population gain or not.

What components of the ridge plot in Figure 2.20 do you find most useful compared to those in Figure 2.19?56

One last visualization technique we’ll highlight for comparing numerical data across groups is faceting. In this technique we split (facet) the graphical display of the data across plotting windows based on groups. Plot A in Figure 2.21 displays the same information as Plot A in Figure 2.19; however, here the distributions of median household income for counties with and without population gain are faceted across two plotting windows. We preserve the same scale on the x and y axes for easier comparison. An advantage of this approach is that it extends to splitting the data across levels of two categorical variables, which allows for displaying relationships between three variables. In Plot B in Figure 2.21 we have now split the data into four groups using the pop_change and metro variables:

• top left represents counties that are not in a metropolitan area with population gain,
• top right represents counties that are in a metropolitan area with population gain,
• bottom left represents counties that are not in a metropolitan area without population gain, and finally
• bottom right represents counties that are in a metropolitan area without population gain.

We can continue building on this visualization to add one more variable, median_edu, which is the median education level in the county. In Figure 2.22, we represent median education level using color, where yellow (dotted line) represents counties where the median education level is a Bachelor’s degree, green (dashed line) is some college degree, and blue (solid line) is high school diploma.

Based on Figure 2.22, what can you say about how median household income in counties vary depending on population gain/no gain, metropolitan area/not, and median degree?57

### 2.2.7 Exercises

1. Antibiotic use in children. The bar plot and the pie chart below show the distribution of pre-existing medical conditions of children involved in a study on the optimal duration of antibiotic use in treatment of tracheitis, which is an upper respiratory infection.58

1. What features are apparent in the bar plot but not in the pie chart?

2. What features are apparent in the pie chart but not in the bar plot?

3. Which graph would you prefer to use for displaying these categorical data?

2. Views on immigration. 910 randomly sampled registered voters from Tampa, FL were asked if they thought workers who have illegally entered the US should be (i) allowed to keep their jobs and apply for US citizenship, (ii) allowed to keep their jobs as temporary guest workers but not allowed to apply for US citizenship, or (iii) lose their jobs and have to leave the country. The results of the survey by political ideology are shown below.59

Response                Conservative   Liberal   Moderate   Total
Apply for citizenship   57             101       120        278
Guest worker            121            28        113        262
Leave the country       179            45        126        350
Not sure                15             1         4          20
Total                   372            175       363        910

1. What percent of these Tampa, FL voters identify themselves as conservatives?

2. What percent of these Tampa, FL voters are in favor of the citizenship option?

3. What percent of these Tampa, FL voters identify themselves as conservatives and are in favor of the citizenship option?

4. What percent of these Tampa, FL voters who identify themselves as conservatives are also in favor of the citizenship option? What percent of moderates share this view? What percent of liberals share this view?

5. Do political ideology and views on immigration appear to be independent? Explain your reasoning.

6. What other variables might explain the potential relationship between these two variables.

3. Black Lives Matter. A Washington Post-Schar School poll conducted in the United States in June 2020, among a random national sample of 1,006 adults, asked respondents whether they support or oppose protests following George Floyd’s killing that have taken place in cities across the US. The survey also collected information on the age of the respondents. (n.d.) The results are summarized in the stacked bar plot below.

1. Based on the stacked bar plot, do views on the protests and age appear to be independent? Explain your reasoning.

2. What other variables might explain the potential relationship between these two variables?

4. Raise taxes. A random sample of registered voters nationally were asked whether they think it’s better to raise taxes on the rich or raise taxes on the poor. The survey also collected information on the political party affiliation of the respondents. (n.d.)

1. Based on the stacked bar plot shown above, do views on raising taxes and political affiliation appear to be independent? Explain your reasoning.

2. What other variables might explain the potential relationship between these two variables?

## 2.3 Effective data visualization

### 2.3.1 Keep it simple

We discussed earlier that pie charts do not tend to be useful when the categorical variable being displayed has many levels. In addition, there is little value added by coloring each pie slice differently, and definitely no value added by making the pie chart three dimensional. A simple bar plot can communicate the same information in a simpler, easier-to-digest way.


### 2.3.2 Use color to draw attention

Avoid adding color just to add color; instead, use it to draw attention. This doesn’t mean you shouldn’t think about how visually pleasing your visualization is, and if you’re adding color to make it visually pleasing without drawing attention to a particular feature, that might still be fine. But you should be critical of default coloring and explicitly decide whether to include color and how. Also note that not everyone sees color the same way; it’s often useful to pair color with one more feature (e.g., pattern) so that you can refer to the features you’re drawing attention to in multiple ways.

### 2.3.4 Order matters

In September 2019, a YouGov survey asked 1,639 GB adults the following question60:

In hindsight, do you think Britain was right/wrong to vote to leave the EU?

• Right to leave
• Wrong to leave
• Don’t know

Alphabetical order is rarely ideal, so sometimes it’s better to order bars by frequency.

We can improve this further by cleaning up axis labels.

There may also be inherent ordering to levels of your categorical variables. If so, the visualization should respect that.

### 2.3.5 Put long categories on the y-axis

And clean up axis labels.

### 2.3.6 Pick a purpose

Segmented bar plots can be hard to read. Use faceting, avoid redundancy, and use informative labels (note the shortlink to the survey).

### 2.3.7 Select meaningful colors

Rainbow colors are not always the right choice. The viridis scale works well with ordinal data.

### 2.3.8 Exercises

Exercises for this section will be available in the 1st edition of this book, which will be available in Summer 2021. In the meantime, OpenIntro::Introduction to Statistics with Randomization and Simulation and OpenIntro::Statistics, both of which are available for free, have many exercises you can use alongside this book.

## 2.4 Case study: malaria vaccine

Suppose your professor splits the students in class into two groups: students on the left and students on the right. If $$\hat{p}_{_L}$$ and $$\hat{p}_{_R}$$ represent the proportion of students who own an Apple product on the left and right, respectively, would you be surprised if $$\hat{p}_{_L}$$ did not exactly equal $$\hat{p}_{_R}$$? While the proportions would probably be close to each other, it would be unusual for them to be exactly the same. We would probably observe a small difference due to chance.

If we don’t think the side of the room a person sits on in class is related to whether the person owns an Apple product, what assumption are we making about the relationship between these two variables?

### 2.4.1 Variability within data

We consider a study on a new malaria vaccine called PfSPZ. In this study, volunteer patients were randomized into one of two experiment groups: 14 patients received an experimental vaccine and 6 patients received a placebo vaccine. Nineteen weeks later, all 20 patients were exposed to a drug-sensitive malaria strain; the motivation for using a drug-sensitive strain here is ethical, allowing any infections to be treated effectively. The results are summarized in Table 2.10, where 9 of the 14 treatment patients remained free of signs of infection while all 6 patients in the control group showed some baseline signs of infection.

Table 2.10: Summary results for the malaria vaccine experiment.
treatment infection no infection Total
placebo 6 0 6
vaccine 5 9 14
Total 11 9 20

Is this an observational study or an experiment? What implications does the study type have on what can be inferred from the results?61

In this study, a smaller proportion of patients who received the vaccine showed signs of an infection (35.7% versus 100%). However, the sample is very small, and it is unclear whether the difference provides convincing evidence that the vaccine is effective.

Statisticians and data scientists are sometimes called upon to evaluate the strength of evidence. When looking at the rates of infection for patients in the two groups in this study, what comes to mind as we try to determine whether the data show convincing evidence of a real difference?

The observed infection rates (35.7% for the treatment group versus 100% for the control group) suggest the vaccine may be effective. However, we cannot be sure if the observed difference represents the vaccine’s efficacy or is just from random chance. Generally there is a little bit of fluctuation in sample data, and we wouldn’t expect the sample proportions to be exactly equal, even if the truth was that the infection rates were independent of getting the vaccine. Additionally, with such small samples, perhaps it’s common to observe such large differences when we randomly split a group due to chance alone!

This example is a reminder that the observed outcomes in the data sample may not perfectly reflect the true relationships between variables since there is random noise. While the observed difference in rates of infection is large, the sample size for the study is small, making it unclear if this observed difference represents efficacy of the vaccine or whether it is simply due to chance. We label these two competing claims, $$H_0$$ and $$H_A,$$ which are spoken as “H-nought” and “H-A”:

• $$H_0$$: Independence model. The treatment and infection variables are independent. They have no relationship, and the observed difference between the proportion of patients who developed an infection in the two groups, 64.3%, was due to chance.

• $$H_A$$: Alternative model. The variables are not independent. The difference in infection rates of 64.3% was not due to chance. Here (because an experiment was done), if the difference in infection rate is not due to chance, it was the vaccine that affected the rate of infection.

What would it mean if the independence model, which says the vaccine had no influence on the rate of infection, is true? It would mean 11 patients were going to develop an infection no matter which group they were randomized into, and 9 patients would not develop an infection no matter which group they were randomized into. That is, if the vaccine did not affect the rate of infection, the difference in the infection rates was due to chance alone in how the patients were randomized.

Now consider the alternative model: infection rates were influenced by whether a patient received the vaccine or not. If this was true, and especially if this influence was substantial, we would expect to see some difference in the infection rates of patients in the groups.

We choose between these two competing claims by assessing if the data conflict so much with $$H_0$$ that the independence model cannot be deemed reasonable. If this is the case, and the data support $$H_A,$$ then we will reject the notion of independence and conclude the vaccine was effective.

### 2.4.2 Simulating the study

We’re going to implement a simulation under the setting where we pretend we know that the malaria vaccine being tested does not work. Ultimately, we want to understand if the large difference we observed in the data is common in these simulations that represent independence. If it is common, then maybe the difference we observed was purely due to chance. If it is very uncommon, then the possibility that the vaccine was helpful seems more plausible.

Table 2.10 shows that 11 patients developed infections and 9 did not. For our simulation, we will suppose the infections were independent of the vaccine and we were able to rewind back to when the researchers randomized the patients in the study. If we happened to randomize the patients differently, we may get a different result in this hypothetical world where the vaccine doesn’t influence the infection. Let’s complete another randomization using a simulation.

In this simulation, we take 20 notecards to represent the 20 patients, where we write down “infection” on 11 cards and “no infection” on 9 cards. In this hypothetical world, we believe each patient that got an infection was going to get it regardless of which group they were in, so let’s see what happens if we randomly assign the patients to the treatment and control groups again. We thoroughly shuffle the notecards and deal 14 into one pile and 6 into another. Finally, we tabulate the results, which are shown in Table 2.11.

Table 2.11: Simulation results, where any difference in infection rates is purely due to chance. Columns show the two treatment groups.

|              | placebo | vaccine | Total |
|--------------|---------|---------|-------|
| infection    | 4       | 7       | 11    |
| no infection | 2       | 7       | 9     |
| Total        | 6       | 14      | 20    |

How does this compare to the observed 64.3% difference in the actual data?[^The difference in this simulation is $$\frac{4}{6} - \frac{7}{14} = 0.167,$$ which is much smaller than the observed difference of 0.643.]
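For readers who prefer code to cards, one shuffle can be mimicked on a computer. The sketch below is ours, written in Python for illustration (the book's companion materials use R), and the variable names are our own:

```python
import random

# 20 notecards: "infection" written on 11 of them, "no infection" on 9
cards = ["infection"] * 11 + ["no infection"] * 9

rng = random.Random(1)  # seeded only so this illustration is reproducible
rng.shuffle(cards)

# Deal 14 cards to the vaccine (treatment) pile and the remaining 6 to placebo
vaccine = cards[:14]
placebo = cards[14:]

# One simulated difference in infection rates: placebo rate minus vaccine rate
diff = placebo.count("infection") / 6 - vaccine.count("infection") / 14
print(round(diff, 3))
```

Each run with a different seed deals the cards differently and yields one simulated difference, just as the dealing in Table 2.11 yielded 0.167.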

### 2.4.3 Checking for independence

We computed one possible difference under the independence model in the previous Guided Practice, which represents one difference due to chance. While we physically dealt out notecards to represent the patients in this first simulation, it is more efficient to perform the simulation using a computer.

Repeating the simulation on a computer, we get another difference due to chance: $$\frac{2}{6} - \frac{9}{14} = -0.310$$

And another: $$\frac{3}{6} - \frac{8}{14} = -0.071$$

And so on until we repeat the simulation enough times to create a distribution of differences from chance alone.
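A loop makes this repetition painless. The following Python sketch is ours, not the book's (the book's materials use R); it builds a distribution of 100 simulated differences and counts how often chance alone produces a difference at least as large as the observed 0.643:

```python
import random

def simulated_difference(rng):
    """One reshuffle under the independence model: placebo rate minus vaccine rate."""
    cards = ["infection"] * 11 + ["no infection"] * 9
    rng.shuffle(cards)
    vaccine, placebo = cards[:14], cards[14:]
    return placebo.count("infection") / 6 - vaccine.count("infection") / 14

rng = random.Random(2)  # seed chosen arbitrarily for reproducibility
diffs = [simulated_difference(rng) for _ in range(100)]

# Proportion of simulated differences at least as large as the observed 0.643
extreme = sum(d >= 0.643 for d in diffs) / len(diffs)
print(extreme)
```

The exact proportion varies from run to run; in the particular set of 100 simulations shown in Figure 2.24, it came out near 2%.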

Figure 2.24 shows a stacked dot plot of the differences found from 100 simulations, where each dot represents a simulated difference between the infection rates (control rate minus treatment rate).

Note that the distribution of these simulated differences is centered around 0. We simulated these differences assuming that the independence model was true, and under this condition, we expect the difference to be near zero with some random fluctuation, where near is pretty generous in this case since the sample sizes are so small in this study.

How often would you observe a difference of at least 64.3% (0.643) according to Figure 2.24? Often, sometimes, rarely, or never?

It appears that a difference of at least 64.3% due to chance alone would only happen about 2% of the time according to Figure 2.24. Such a low probability indicates a rare event.

The difference of 64.3% being a rare event suggests two possible interpretations of the results of the study:

• $$H_0$$: Independence model. The vaccine has no effect on infection rate, and we just happened to observe a difference that would only occur on a rare occasion.

• $$H_A$$: Alternative model. The vaccine has an effect on infection rate, and the difference we observed was actually due to the vaccine being effective at combatting malaria, which explains the large difference of 64.3%.

Based on the simulations, we have two options. (1) We conclude that the study results do not provide strong evidence against the independence model. That is, we do not have sufficiently strong evidence to conclude the vaccine had an effect in this clinical setting. (2) We conclude the evidence is sufficiently strong to reject $$H_0$$ and assert that the vaccine was useful. When we conduct formal studies, usually we reject the notion that we just happened to observe a rare event.[^This reasoning does not generally extend to anecdotal observations. Each of us observes incredibly rare events every day, events we could not possibly hope to predict. However, in the non-rigorous setting of anecdotal evidence, almost anything may appear to be a rare event, so the idea of looking for rare events in day-to-day activities is treacherous. For example, we might look at the lottery: there was only a 1 in 292 million chance that the Powerball numbers for the largest jackpot in history (January 13th, 2016) would be (04, 08, 19, 27, 34) with a Powerball of (10), but nonetheless those numbers came up! However, no matter what numbers had turned up, they would have had the same incredibly rare odds. That is, any set of numbers we could have observed would ultimately be incredibly rare. This type of situation is typical of our daily lives: each possible event in itself seems incredibly rare, but if we consider every alternative, those outcomes are also incredibly rare. We should be cautious not to misinterpret such anecdotal evidence.] So in the vaccine case, we reject the independence model in favor of the alternative. That is, we are concluding the data provide strong evidence that the vaccine provides some protection against malaria in this clinical setting.

One field of statistics, statistical inference, is built on evaluating whether such differences are due to chance. In statistical inference, data scientists evaluate which model is most reasonable given the data. Errors do occur, just like rare events, and we might choose the wrong model. While we do not always choose correctly, statistical inference gives us tools to control and evaluate how often decision errors occur. Before diving in to the formal introduction to statistical inference in Chapter 5, we spend the next two chapters constructing linear and generalized linear models.

### 2.4.4 Exercises

1. Side effects of Avandia. Rosiglitazone is the active ingredient in the controversial type 2 diabetes medicine Avandia and has been linked to an increased risk of serious cardiovascular problems such as stroke, heart failure, and death. A common alternative treatment is Pioglitazone, the active ingredient in a diabetes medicine called Actos. In a nationwide retrospective observational study of 227,571 Medicare beneficiaries aged 65 years or older, it was found that 2,593 of the 67,593 patients using Rosiglitazone and 5,386 of the 159,978 using Pioglitazone had serious cardiovascular problems. These data are summarized in the contingency table below.

| treatment     | No      | Yes   | Total   |
|---------------|---------|-------|---------|
| Pioglitazone  | 154,592 | 5,386 | 159,978 |
| Rosiglitazone | 65,000  | 2,593 | 67,593  |
| Total         | 219,592 | 7,979 | 227,571 |

1. Determine if each of the following statements is true or false. If false, explain why. Be careful: The reasoning may be wrong even if the statement’s conclusion is correct. In such cases, the statement should be considered false.

1. Since more patients on pioglitazone had cardiovascular problems (5,386 vs. 2,593), we can conclude that the rate of cardiovascular problems for those on a pioglitazone treatment is higher.

2. The data suggest that diabetic patients who are taking rosiglitazone are more likely to have cardiovascular problems since the rate of incidence was (2,593 / 67,593 = 0.038) 3.8% for patients on this treatment, while it was only (5,386 / 159,978 = 0.034) 3.4% for patients on pioglitazone.

3. The fact that the rate of incidence is higher for the rosiglitazone group proves that Rosiglitazone causes serious cardiovascular problems.

4. Based on the information provided so far, we cannot tell if the difference between the rates of incidences is due to a relationship between the two variables or due to chance.

2. What proportion of all patients had cardiovascular problems?

3. If the type of treatment and having cardiovascular problems were independent, about how many patients in the Rosiglitazone group would we expect to have had cardiovascular problems?

4. We can investigate the relationship between outcome and treatment in this study using a randomization technique. While in reality we would carry out the simulations required for randomization using statistical software, suppose we were to simulate using index cards instead. In order to simulate from the independence model, which states that the outcomes were independent of the treatment, we write whether or not each patient had a cardiovascular problem on cards, shuffle all the cards together, and then deal them into two groups of size 67,593 and 159,978. We repeat this simulation 100 times, and each time record the difference between the proportions of cards that say “Yes” in the Rosiglitazone and Pioglitazone groups. Use the histogram of these differences in proportions to answer the following questions.

1. What are the claims being tested?

2. Compared to the expected count calculated under the independence assumption, which would provide more support for the alternative hypothesis: a higher or lower proportion of patients with cardiovascular problems in the Rosiglitazone group?

3. What do the simulation results suggest about the relationship between taking Rosiglitazone and having cardiovascular problems in diabetic patients?

2. Heart transplants. The Stanford University Heart Transplant Study was conducted to determine whether an experimental heart transplant program increased lifespan. Each patient entering the program was designated an official heart transplant candidate, meaning that he was gravely ill and would most likely benefit from a new heart. Some patients got a transplant and some did not. The variable transplant indicates which group the patients were in; patients in the treatment group got a transplant and those in the control group did not. Of the 34 patients in the control group, 30 died. Of the 69 people in the treatment group, 45 died. Another variable called survived was used to indicate whether or not the patient was alive at the end of the study.

1. Based on the stacked bar plot, is survival independent of whether or not the patient got a transplant? Explain your reasoning.

2. What do the box plots below suggest about the efficacy (effectiveness) of the heart transplant treatment?

3. What proportion of patients in the treatment group and what proportion of patients in the control group died?

4. One approach for investigating whether or not the treatment is effective is to use a randomization technique.

1. What are the claims being tested?

2. The paragraph below describes the setup for such an approach, if we were to do it without using statistical software. Fill in the blanks with a number or phrase, whichever is appropriate.

We write alive on $$\rule{2cm}{0.5pt}$$ cards representing patients who were alive at the end of the study, and deceased on $$\rule{2cm}{0.5pt}$$ cards representing patients who were not. Then, we shuffle these cards and split them into two groups: one group of size $$\rule{2cm}{0.5pt}$$ representing treatment, and another group of size $$\rule{2cm}{0.5pt}$$ representing control. We calculate the difference between the proportion of cards in the treatment and control groups (treatment - control) and record this value. We repeat this 100 times to build a distribution centered at $$\rule{2cm}{0.5pt}$$. Lastly, we calculate the fraction of simulations where the simulated differences in proportions are $$\rule{2cm}{0.5pt}$$. If this fraction is low, we conclude that it is unlikely to have observed such an outcome by chance and that the null hypothesis should be rejected in favor of the alternative.

3. What do the simulation results shown below suggest about the effectiveness of the transplant program?

## 2.5 Chapter review

### 2.5.1 Terms

We introduced the following terms in the chapter. If you’re not sure what some of these terms mean, we recommend you go back in the text and review their definitions. We are purposefully presenting them in alphabetical order, instead of in order of appearance, so they will be a little more challenging to locate. However, you should be able to easily spot them as bolded text.

average, bimodal, box plot, column totals, contingency table, data density, deviation, distribution, dodged bar plot, dot plot, faceted plot, filled bar plot, first quartile, histogram, intensity map, interquartile range, IQR, left skewed, mean, median, mosaic plot, multimodal, nonlinear, outlier, percentile, point estimate, random noise, ridge plot, right skewed, robust statistics, row totals, scatterplot, side-by-side box plot, stacked bar plot, standard deviation, standardized bar plot, symmetric, tail, third quartile, transformation, unimodal, variability, variance, weighted mean, whiskers

### 2.5.2 Chapter exercises

1. Make-up exam. In a class of 25 students, 24 of them took an exam in class and 1 student took a make-up exam the following day. The professor graded the first batch of 24 exams and found an average score of 74 points with a standard deviation of 8.9 points. The student who took the make-up the following day scored 64 points on the exam.

1. Does the new student’s score increase or decrease the average score?

2. What is the new average?

3. Does the new student’s score increase or decrease the standard deviation of the scores?

2. Infant mortality. The infant mortality rate is defined as the number of infant deaths per 1,000 live births. This rate is often used as an indicator of the level of health in a country. The relative frequency histogram below shows the distribution of estimated infant death rates for 224 countries for which such data were available in 2014.

1. Estimate Q1, the median, and Q3 from the histogram.

2. Would you expect the mean of this data set to be smaller or larger than the median? Explain your reasoning.

3. TV watchers. College students in a statistics class were asked how many hours of television they watch per week, including online streaming services. This sample yielded an average of 8.28 hours, with a standard deviation of 7.18 hours. Is the distribution of number of hours students watch television weekly symmetric? If not, what shape would you expect this distribution to have? Explain your reasoning.

4. A new statistic. The statistic $$\frac{\bar{x}}{\text{median}}$$ can be used as a measure of skewness. Suppose we have a distribution where all observations are greater than 0, $$x_i > 0$$. What is the expected shape of the distribution under the following conditions? Explain your reasoning.

1. $$\frac{\bar{x}}{\text{median}} = 1$$

2. $$\frac{\bar{x}}{\text{median}} < 1$$

3. $$\frac{\bar{x}}{\text{median}} > 1$$

5. Oscar winners. The first Oscar awards for best actor and best actress were given out in 1929. The histograms below show the age distribution for all of the best actor and best actress winners from 1929 to 2019. Summary statistics for these distributions are also provided. Compare the distributions of ages of best actor and actress winners.

|              | Mean | SD   | n  |
|--------------|------|------|----|
| Best actor   | 43.8 | 8.8  | 92 |
| Best actress | 36.2 | 11.9 | 92 |

6. Exam scores. The average on a history exam (scored out of 100 points) was 85, with a standard deviation of 15. Is the distribution of the scores on this exam symmetric? If not, what shape would you expect this distribution to have? Explain your reasoning.

7. Stats scores. The final exam scores of twenty introductory statistics students, arranged in ascending order, are as follows: 57, 66, 69, 71, 72, 73, 74, 77, 78, 78, 79, 79, 81, 81, 82, 83, 83, 88, 89, 94. Suppose students who score above the 75th percentile on the final exam get an A in the class. How many students will get an A in this class?

8. Marathon winners. The histogram and box plots below show the distribution of finishing times for male and female winners of the New York Marathon between 1970 and 1999.

1. What features of the distribution are apparent in the histogram and not the box plot? What features are apparent in the box plot but not in the histogram?

2. What may be the reason for the bimodal distribution? Explain.

3. Compare the distribution of marathon times for men and women based on the box plot shown below.

4. The time series plot shown below is another way to look at these data. Describe what is visible in this plot but not in the others.

### 2.5.3 Interactive R tutorials

Navigate the concepts you’ve learned in this chapter in R using the following self-paced tutorials. All you need is your browser to get started!

You can also access the full list of tutorials supporting this book here.

### 2.5.4 R labs

Further apply the concepts you’ve learned in this chapter in R with computational labs that walk you through a data analysis case study.