Chapter 4 Multivariable and logistic models

This chapter is currently under construction.

The principles of simple linear regression lay the foundation for more sophisticated regression models used in a wide range of challenging settings. In this chapter, we explore multiple regression, which introduces the possibility of more than one predictor in a linear model, and logistic regression, a technique for predicting categorical outcomes with two levels.

4.1 Regression with multiple predictors

Multiple regression extends simple two-variable regression to the case that still has one response but many predictors (denoted \(x_1\), \(x_2\), \(x_3\), …). The method is motivated by scenarios where many variables may be simultaneously connected to an output.

We will consider data about loans from the peer-to-peer lender, Lending Club, a data set we first encountered in Chapter 1. The loan data includes terms of the loan as well as information about the borrower. The outcome variable we would like to better understand is the interest rate assigned to the loan. For instance, all other characteristics held constant, does it matter how much debt someone already has? Does it matter if their income has been verified? Multiple regression will help us answer these and other questions.

The data set includes results from 10,000 loans, and we’ll be looking at a subset of the available variables, some of which are new compared to those we saw in earlier chapters. The first six observations in the data set are shown in Table 4.1, and descriptions for each variable are shown in Table 4.2. Notice that the past bankruptcy variable (bankruptcy) is an indicator variable, which takes the value 1 if the borrower had a past bankruptcy in their record and 0 if not. Using an indicator variable in place of a category name allows for these variables to be directly used in regression. Two of the other variables are categorical (verified_income and issue_month), each of which can take one of a few different non-numerical values; we’ll discuss how these are handled in the model in Section 4.1.1.

The data can be found in the openintro package: loans_full_schema. Based on this dataset, we have created two new variables: credit_util, which is calculated as the total credit utilized divided by the total credit limit, and bankruptcy, which converts the number of bankruptcies into an indicator variable (0 for no bankruptcies and 1 for at least one bankruptcy). We will refer to this modified dataset as loans.
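
The modified data set can be rebuilt from loans_full_schema with a few lines of R. The sketch below is one way to do it, not the authors’ exact code; the raw column names used on the right-hand sides (total_credit_utilized, total_credit_limit, public_record_bankrupt) are assumptions about the source data, and the remaining variables in Table 4.2 (such as credit_checks) are assumed to already be present under the names used there.

```r
# A sketch of building the modified `loans` data set from loans_full_schema.
# The raw column names total_credit_utilized, total_credit_limit, and
# public_record_bankrupt are assumptions about the source data.
library(openintro)
library(dplyr)

loans <- loans_full_schema %>%
  mutate(
    # fraction of available credit the borrower is currently using
    credit_util = total_credit_utilized / total_credit_limit,
    # indicator: 1 if the borrower has at least one past bankruptcy, 0 otherwise
    bankruptcy = if_else(public_record_bankrupt >= 1, 1, 0)
  )
```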

Table 4.1: First six rows from the loans_full_schema data set.
interest_rate verified_income debt_to_income credit_util bankruptcy term credit_checks issue_month
14.07 Verified 18.01 0.548 0 60 6 Mar-2018
12.61 Not Verified 5.04 0.150 1 36 1 Feb-2018
17.09 Source Verified 21.15 0.661 0 36 4 Feb-2018
6.72 Not Verified 10.16 0.197 0 36 0 Jan-2018
14.07 Verified 57.96 0.755 0 36 7 Mar-2018
6.72 Not Verified 6.46 0.093 0 36 6 Jan-2018
Table 4.2: Variables and their descriptions for the loans data set.
variable description
interest_rate Interest rate on the loan, in an annual percentage.
verified_income Categorical variable describing whether the borrower’s income source and amount have been verified, with levels Verified, Source Verified, and Not Verified.
debt_to_income Debt-to-income ratio, which is the percentage of total debt of the borrower divided by their total income.
credit_util Of all the credit available to the borrower, what fraction are they utilizing. For example, the credit utilization on a credit card would be the card’s balance divided by the card’s credit limit.
bankruptcy An indicator variable for whether the borrower has a past bankruptcy in their record. This variable takes a value of 1 if the answer is yes and 0 if the answer is no.
term The length of the loan, in months.
issue_month The month and year the loan was issued, which for these loans is always during the first quarter of 2018.
credit_checks Number of credit checks in the last 12 months. For example, when filing an application for a credit card, it is common for the company receiving the application to run a credit check.

4.1.1 Indicator and categorical predictors

Let’s start by fitting a linear regression model for interest rate with a single predictor indicating whether or not a person has a bankruptcy in their record:

\[\widehat{\texttt{interest_rate}} = 12.33 + 0.74 \times \texttt{bankruptcy}\]

Results of this model are shown in Table 4.3.

Table 4.3: Summary of a linear model for predicting interest rate based on whether the borrower has a bankruptcy in their record. The degrees of freedom for this model are 9998.
term estimate std.error statistic p.value
(Intercept) 12.338 0.053 231.49 <0.0001
bankruptcy1 0.737 0.153 4.82 <0.0001
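
A fit like the one in Table 4.3 can be reproduced with a minimal sketch in R, assuming the loans data frame built earlier:

```r
# One-predictor model for interest rate; factor() treats bankruptcy as a
# categorical predictor, so the coefficient row is labeled like Table 4.3.
fit_bankruptcy <- lm(interest_rate ~ factor(bankruptcy), data = loans)
summary(fit_bankruptcy)
```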

Example 4.1 Interpret the coefficient for the past bankruptcy variable in the model. Is this coefficient significantly different from 0?


The variable takes one of two values: 1 when the borrower has a bankruptcy in their history and 0 otherwise. A slope of 0.74 means that the model predicts a 0.74% higher interest rate for those borrowers with a bankruptcy in their record. (See Section 3.2.6 for a review of the interpretation for two-level categorical predictor variables.) Examining the regression output in Table 4.3, we can see that the p-value for the bankruptcy coefficient is very close to zero, indicating there is strong evidence the coefficient is different from zero when using this simple one-predictor model.

Suppose we had fit a model using a 3-level categorical variable, such as verified_income. The output from software is shown in Table 4.4. This regression output provides multiple rows for the variable. Each row represents the relative difference for each level of verified_income. However, we are missing one of the levels: Not Verified. The missing level is called the reference level and it represents the default level that other levels are measured against.

Table 4.4: Summary of a linear model for predicting interest rate based on whether the borrower’s income source and amount has been verified. This predictor has three levels, which results in 2 rows in the regression output.
term estimate std.error statistic p.value
(Intercept) 11.10 0.081 137.2 <0.0001
verified_incomeSource Verified 1.42 0.111 12.8 <0.0001
verified_incomeVerified 3.25 0.130 25.1 <0.0001
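
A sketch of fitting this three-level model in R follows. Assuming the levels of verified_income are ordered so that Not Verified comes first, R uses it as the reference level, which matches Table 4.4.

```r
# Model with the 3-level verified_income predictor; R creates an indicator
# column for every level except the reference level.
fit_verified <- lm(interest_rate ~ verified_income, data = loans)
summary(fit_verified)
```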

Example 4.2 How would we write an equation for this regression model?


The equation for the regression model may be written as a model with two predictors:

\[\widehat{\texttt{interest_rate}} = 11.10 + 1.42 \times \text{verified_income}_{\text{Source Verified}} + 3.25 \times \text{verified_income}_{\text{Verified}}\]

We use the notation \(\text{variable}_{\text{level}}\) to represent indicator variables for when the categorical variable takes a particular value. For example, \(\text{verified_income}_{\text{Source Verified}}\) would take a value of 1 if verified_income was Source Verified for a loan, and it would take a value of 0 otherwise. Likewise, \(\text{verified_income}_{\text{Verified}}\) would take a value of 1 if verified_income took a value of Verified and 0 if it took any other value.

The notation \(\text{variable}_{\text{level}}\) may feel a bit confusing. Let’s figure out how to use the equation for each level of the verified_income variable.

Example 4.3 Using the model for predicting interest rate from income verification type, compute the average interest rate for borrowers whose income source and amount are both unverified.


When verified_income takes a value of Not Verified, then both indicator functions in the equation for the linear model are set to 0:

\[\widehat{\texttt{interest_rate}} = 11.10 + 1.42 \times 0 + 3.25 \times 0 = 11.10\]

The average interest rate for these borrowers is 11.1%. Because the level does not have its own coefficient and it is the reference value, the indicators for the other levels for this variable all drop out.

Example 4.4 Using the model for predicting interest rate from income verification type, compute the average interest rate for borrowers whose income source has been verified but whose income amount has not.


When verified_income takes a value of Source Verified, then the corresponding variable takes a value of 1 while the other (\(\text{verified_income}_{\text{Verified}}\)) is 0:

\[\widehat{\texttt{interest_rate}} = 11.10 + 1.42 \times 1 + 3.25 \times 0 = 12.52\]

The average interest rate for these borrowers is 12.52%.
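
These hand calculations can be checked with predict(), assuming the fit_verified model sketched above; each row of the new data frame sets verified_income to one of its levels.

```r
# Predicted average interest rate for each income-verification level.
new_borrowers <- data.frame(
  verified_income = c("Not Verified", "Source Verified", "Verified")
)
predict(fit_verified, newdata = new_borrowers)
# Expected output (up to rounding): 11.10, 12.52, 14.35
```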

Compute the average interest rate for borrowers whose income source and amount are both verified.¹

Predictors with several categories.

When fitting a regression model with a categorical variable that has \(k\) levels where \(k > 2\), software will provide a coefficient for \(k - 1\) of those levels. The level that does not receive a coefficient is the reference level, and the coefficients listed for the other levels are all considered relative to this reference level.

Interpret the coefficients in the verified_income model.²

The higher interest rate for borrowers who have verified their income source or amount is surprising. Intuitively, we’d think that a loan would look less risky if the borrower’s income has been verified. However, the situation may be more complex, and there may be confounding variables that we didn’t account for. For example, perhaps lenders require borrowers with poor credit to verify their income. That is, verifying income in our data set might be a signal of some concerns about the borrower rather than a reassurance that the borrower will pay back the loan. For this reason, the borrower could be deemed higher risk, resulting in a higher interest rate. (What other confounding variables might explain this counter-intuitive relationship suggested by the model?)

How much larger of an interest rate would we expect for a borrower who has verified their income source and amount versus a borrower whose income source has only been verified?³

4.1.2 Many predictors in a model

The world is complex, and it can be helpful to consider many factors at once in statistical modeling. For example, we might like to use the full context of a borrower to predict the interest rate they receive rather than using a single variable. This is the strategy used in multiple regression. While we remain cautious about making any causal interpretations using multiple regression on observational data, such models are a common first step in gaining insights or providing some evidence of a causal connection.

We want to construct a model that accounts not only for any past bankruptcy or whether the borrower had their income source or amount verified, but simultaneously for all of the variables in the loans data set: verified_income, debt_to_income, credit_util, bankruptcy, term, issue_month, and credit_checks.

\[\begin{align*} \widehat{\texttt{interest_rate}} &= \beta_0 + \beta_1\times \texttt{verified_income}_{\texttt{Source Verified}} + \beta_2\times \texttt{verified_income}_{\texttt{Verified}} \\ &\qquad\ + \beta_3\times \texttt{debt_to_income} \\ &\qquad\ + \beta_4 \times \texttt{credit_util} \\ &\qquad\ + \beta_5 \times \texttt{bankruptcy} \\ &\qquad\ + \beta_6 \times \texttt{term} \\ &\qquad\ + \beta_7 \times \texttt{issue_month}_{\texttt{Jan-2018}} + \beta_8 \times \texttt{issue_month}_{\texttt{Mar-2018}} \\ &\qquad\ + \beta_9 \times \texttt{credit_checks} \end{align*}\]

This equation represents a holistic approach for modeling all of the variables simultaneously. Notice that there are two coefficients for verified_income and also two coefficients for issue_month, since both are 3-level categorical variables.

We estimate the parameters \(\beta_0\), \(\beta_1\), \(\beta_2\), \(\cdots\), \(\beta_9\) in the same way as we did in the case of a single predictor. We select \(b_0\), \(b_1\), \(b_2\), \(\cdots\), \(b_9\) that minimize the sum of the squared residuals:

\[SSE = e_1^2 + e_2^2 + \dots + e_{10000}^2 = \sum_{i=1}^{10000} e_i^2 = \sum_{i=1}^{10000} \left(y_i - \hat{y}_i\right)^2\]

where \(y_i\) and \(\hat{y}_i\) represent the observed interest rates and their estimated values according to the model, respectively. 10,000 residuals are calculated, one for each observation. We typically use a computer to minimize the sum of squares and compute point estimates, as shown in the sample output in Table 4.5. Using this output, we identify the point estimates \(b_i\) of each \(\beta_i\), just as we did in the one-predictor case.
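
In R, a least squares fit like the one summarized in Table 4.5 could be obtained with a call along the lines of the sketch below; the formula assumes the loans data frame carries the variables under the names listed in Table 4.2.

```r
# Full multiple regression model for interest rate.
fit_full <- lm(
  interest_rate ~ verified_income + debt_to_income + credit_util +
    factor(bankruptcy) + term + issue_month + credit_checks,
  data = loans
)
summary(fit_full)           # coefficient table similar to Table 4.5
sum(residuals(fit_full)^2)  # the minimized SSE from the formula above
```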

Table 4.5: Output for the regression model, where interest rate is the outcome and the variables listed are the predictors. The degrees of freedom for this model are 9990.
term estimate std.error statistic p.value
(Intercept) 1.894 0.210 9.008 <0.0001
verified_incomeSource Verified 0.997 0.099 10.056 <0.0001
verified_incomeVerified 2.563 0.117 21.873 <0.0001
debt_to_income 0.022 0.003 7.434 <0.0001
credit_util 4.897 0.162 30.249 <0.0001
bankruptcy1 0.391 0.132 2.957 0.0031
term 0.153 0.004 38.889 <0.0001
credit_checks 0.228 0.018 12.516 <0.0001
issue_monthJan-2018 0.046 0.108 0.421 0.6736
issue_monthMar-2018 -0.042 0.107 -0.391 0.696

Multiple regression model.

A multiple regression model is a linear model with many predictors. In general, we write the model as

\[\hat{y} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k\]

when there are \(k\) predictors. We always estimate the \(\beta_i\) parameters using statistical software.

Example 4.5 Write out the regression model using the point estimates from Table 4.5. How many predictors are there in this model?


The fitted model for the interest rate is given by:

\[\begin{align*} \widehat{\texttt{interest_rate}} &= 1.894 + 0.997 \times \texttt{verified_income}_{\texttt{Source Verified}} \\ &\qquad\ + 2.563 \times \texttt{verified_income}_{\texttt{Verified}} \\ &\qquad\ + 0.022 \times \texttt{debt_to_income} \\ &\qquad\ + 4.897 \times \texttt{credit_util} \\ &\qquad\ + 0.391 \times \texttt{bankruptcy} \\ &\qquad\ + 0.153 \times \texttt{term} \\ &\qquad\ + 0.046 \times \texttt{issue_month}_{\texttt{Jan-2018}} \\ &\qquad\ - 0.042 \times \texttt{issue_month}_{\texttt{Mar-2018}} \\ &\qquad\ + 0.228 \times \texttt{credit_checks} \end{align*}\]

If we count up the number of predictor coefficients, we get the effective number of predictors in the model: \(k = 9\). Notice that each of the two 3-level categorical predictors (verified_income and issue_month) counts as two, one for each of its two non-reference levels shown in the model. In general, a categorical predictor with \(p\) different levels will be represented by \(p - 1\) terms in a multiple regression model.
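
One way to verify this count in R is through the design matrix of the fitted model (assuming the fit_full object sketched earlier): excluding the intercept column, it has one column per predictor term.

```r
# Number of predictor terms k: columns of the design matrix minus the intercept.
ncol(model.matrix(fit_full)) - 1   # should be 9 for the model in Table 4.5
```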

What does \(\beta_4\), the coefficient of the variable \(\texttt{credit_util}\), represent? What is the point estimate of \(\beta_4\)?⁴

Compute the residual of the first observation in Table 4.1 using the full model.⁵

Example 4.6 We estimated a coefficient for bankruptcy in Section 4.1.1 of 0.74 with a standard error of 0.15 when using simple linear regression. Why is there a difference between that estimate and the estimated coefficient of 0.39 for bankruptcy in the multiple regression setting?


If we examined the data carefully, we would see that some predictors are correlated. For instance, when we estimated the connection of the outcome interest_rate and predictor bankruptcy using simple linear regression, we were unable to control for other variables like whether the borrower had her income verified, the borrower’s debt-to-income ratio, and other variables. That original model was constructed in a vacuum and did not consider the full context. When we include all of the variables, underlying and unintentional bias that was missed by these other variables is reduced or eliminated. Of course, bias can still exist from other confounding variables.

The previous example describes a common issue in multiple regression: correlation among predictor variables. We say two predictor variables are collinear (pronounced as co-linear) when they are correlated, and this collinearity complicates model estimation. While it is impossible to prevent collinearity from arising in observational data, experiments are usually designed to prevent predictors from being collinear.
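
A quick, informal check for collinearity is to look at pairwise correlations among the numeric predictors. The sketch below does this for the loans data, assuming the column names from Table 4.2; categorical predictors such as verified_income would need other diagnostics.

```r
# Pairwise correlations among numeric predictors; values far from zero hint
# at collinearity among those predictors.
cor(loans[, c("debt_to_income", "credit_util", "term", "credit_checks")],
    use = "pairwise.complete.obs")
```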

The estimated value of the intercept is 1.894, and one might be tempted to make some interpretation of this coefficient, such as, it is the model’s predicted interest rate when each of the variables take value zero: income source is not verified, the borrower has no debt (debt-to-income and credit utilization are zero), and so on. Is this reasonable? Is there any value gained by making this interpretation?⁶

4.1.3 Adjusted R-squared

We first used \(R^2\) in Section 3.2.5 to determine the amount of variability in the response that was explained by the model: \[\begin{aligned} R^2 = 1 - \frac{\text{variability in residuals}} {\text{variability in the outcome}} = 1 - \frac{Var(e_i)}{Var(y_i)} \end{aligned}\] where \(e_i\) represents the residuals of the model and \(y_i\) the outcomes. This equation remains valid in the multiple regression framework, but a small enhancement can make it even more informative when comparing models.

The variance of the residuals for the full model in Table 4.5 is 18.53, and the variance of the interest rate across all 10,000 loans is 25.01. Calculate \(R^2\) for this model.⁷

This strategy for estimating \(R^2\) is acceptable when there is just a single variable. However, it becomes less helpful when there are many variables. The regular \(R^2\) is a biased estimate of the amount of variability explained by the model when applied to a new sample of data. To get a better estimate, we use the adjusted \(R^2\).

Adjusted R-squared as a tool for model assessment

The adjusted R-squared is computed as \[\begin{aligned} R_{adj}^{2} = 1 - \frac{s_{\text{residuals}}^2 / (n-k-1)} {s_{\text{outcome}}^2 / (n-1)} = 1 - \frac{s_{\text{residuals}}^2}{s_{\text{outcome}}^2} \times \frac{n-1}{n-k-1} \end{aligned}\]

where \(n\) is the number of cases used to fit the model and \(k\) is the number of predictor variables in the model. Remember that a categorical predictor with \(p\) levels will contribute \(p - 1\) to the number of variables in the model.

Because \(k\) is never negative, the adjusted \(R^2\) will be smaller (often just a little smaller) than the unadjusted \(R^2\). The reasoning behind the adjusted \(R^2\) lies in the degrees of freedom associated with each variance, which is equal to \(n - k - 1\) in the multiple regression context. If we were to make predictions for new data using our current model, we would find that the unadjusted \(R^2\) would tend to be slightly overly optimistic, while the adjusted \(R^2\) formula helps correct this bias.
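
Both formulas can be checked numerically with the variances quoted in the surrounding Guided Practice (18.53 for the residuals and 25.01 for the interest rates), with \(n = 10000\) and \(k = 9\):

```r
# R^2 and adjusted R^2 from the variances reported in the text.
var_resid   <- 18.53    # variance of the residuals
var_outcome <- 25.01    # variance of the interest rates
n <- 10000              # number of loans
k <- 9                  # number of predictor terms

1 - var_resid / var_outcome                            # R^2, about 0.2591
1 - (var_resid / var_outcome) * (n - 1) / (n - k - 1)  # adjusted R^2, about 0.2584
```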

There were \(n=10000\) loans in the data set and \(k=9\) predictor variables in the model. Use \(n\), \(k\), and the variances from the earlier Guided Practice to calculate \(R_{adj}^2\) for the interest rate model.⁸

Suppose you added another predictor to the model, but the variance of the errors \(Var(e_i)\) didn’t go down. What would happen to the \(R^2\)? What would happen to the adjusted \(R^2\)?⁹

Adjusted \(R^2\) could have been used in Chapter 3. However, when there is only \(k = 1\) predictor, adjusted \(R^2\) is very close to the regular \(R^2\), so this nuance isn’t typically important when the model has only one predictor.

4.1.4 Exercises

  1. Absenteeism, Part I Researchers interested in the relationship between absenteeism from school and certain demographic characteristics of children collected data from 146 randomly sampled students in rural New South Wales, Australia, in a particular school year. Below are three observations from this data set.

    eth sex lrn days
    1 0 1 1 2
    2 0 1 1 11
    \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
    146 1 0 0 37

    The summary table below shows the results of a linear regression model for predicting the average number of days absent based on ethnic background (eth: 0 - aboriginal, 1 - not aboriginal), sex (sex: 0 - female, 1 - male), and learner status (lrn: 0 - average learner, 1 - slow learner). (Venables and Ripley 2002)

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) 18.93 2.57 7.37 0.0000
    eth -9.11 2.60 -3.51 0.0000
    sex 3.10 2.64 1.18 0.2411
    lrn 2.15 2.65 0.81 0.4177
    1. Write the equation of the regression model.

    2. Interpret each one of the slopes in this context.

    3. Calculate the residual for the first observation in the data set: a student who is aboriginal, male, a slow learner, and missed 2 days of school.

    4. The variance of the residuals is 240.57, and the variance of the number of absent days for all students in the data set is 264.17. Calculate the \(R^2\) and the adjusted \(R^2\). Note that there are 146 observations in the data set.

  2. Baby weights, Part III We considered the variables smoke and parity, one at a time, in modeling birth weights of babies in Exercises \[baby_weights_smoke\] and \[baby_weights_parity\]. A more realistic approach to modeling infant weights is to consider all possibly related variables at once. Other variables of interest include length of pregnancy in days (gestation), mother’s age in years (age), mother’s height in inches (height), and mother’s pregnancy weight in pounds (weight). Below are three observations from this data set.

    bwt gestation parity age height weight smoke
    1 120 284 0 27 62 100 0
    2 113 282 0 33 64 135 0
    \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\) \(\vdots\)
    1236 117 297 0 38 65 129 0

    The summary table below shows the results of a regression model for predicting the average birth weight of babies based on all of the variables included in the data set.

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) -80.41 14.35 -5.60 0.0000
    gestation 0.44 0.03 15.26 0.0000
    parity -3.33 1.13 -2.95 0.0033
    age -0.01 0.09 -0.10 0.9170
    height 1.15 0.21 5.63 0.0000
    weight 0.05 0.03 1.99 0.0471
    smoke -8.40 0.95 -8.81 0.0000
    1. Write the equation of the regression model that includes all of the variables.

    2. Interpret the slopes of gestation and age in this context.

    3. The coefficient for parity is different than in the linear model shown in Exercise \[baby_weights_parity\]. Why might there be a difference?

    4. Calculate the residual for the first observation in the data set.

    5. The variance of the residuals is 249.28, and the variance of the birth weights of all babies in the data set is 332.57. Calculate the \(R^2\) and the adjusted \(R^2\). Note that there are 1,236 observations in the data set.

  3. Baby weights, Part II Exercise \[baby_weights_smoke\] introduces a data set on birth weight of babies. Another variable we consider is parity, which is 1 if the child is the first born, and 0 otherwise. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, from parity.

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) 120.07 0.60 199.94 0.0000
    parity -1.93 1.19 -1.62 0.1052
    1. Write the equation of the regression model.

    2. Interpret the slope in this context, and calculate the predicted birth weight of first borns and others.

    3. Is there a statistically significant relationship between the average birth weight and parity?

  4. Baby weights, Part I The Child Health and Development Studies investigate a range of topics. One study considered all pregnancies between 1960 and 1967 among women in the Kaiser Foundation Health Plan in the San Francisco East Bay area. Here, we study the relationship between smoking and weight of the baby. The variable smoke is coded 1 if the mother is a smoker, and 0 if not. The summary table below shows the results of a linear regression model for predicting the average birth weight of babies, measured in ounces, based on the smoking status of the mother. (n.d.c)

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) 123.05 0.65 189.60 0.0000
    smoke -8.94 1.03 -8.65 0.0000

    The variability within the smokers and non-smokers is about equal, and the distributions are symmetric. With these conditions satisfied, it is reasonable to apply the model. (Note that we don’t need to check linearity since the predictor has only two levels.)

    1. Write the equation of the regression model.

    2. Interpret the slope in this context, and calculate the predicted birth weight of babies born to smoker and non-smoker mothers.

    3. Is there a statistically significant relationship between the average birth weight and smoking?

  5. Cherry trees Timber yield is approximately equal to the volume of a tree; however, this value is difficult to measure without first cutting the tree down. Instead, other variables, such as height and diameter, may be used to predict a tree’s volume and yield. Researchers wanting to understand the relationship between these variables for black cherry trees collected data from 31 such trees in the Allegheny National Forest, Pennsylvania. Height is measured in feet, diameter in inches (at 54 inches above ground), and volume in cubic feet. (Hand 1994)

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) -57.99 8.64 -6.71 0.00
    height 0.34 0.13 2.61 0.01
    diameter 4.71 0.26 17.82 0.00
    1. Calculate a 95% confidence interval for the coefficient of height, and interpret it in the context of the data.

    2. One tree in this sample is 79 feet tall, has a diameter of 11.3 inches, and is 24.2 cubic feet in volume. Determine if the model overestimates or underestimates the volume of this tree, and by how much.

  6. GPA A survey of 55 Duke University students asked about their GPA, the number of hours they study per week, the number of hours they sleep at night, the number of nights they go out, and their gender. Summary output of the regression model is shown below. Note that male is coded as 1.

    Estimate Std. Error t value Pr(\(>\)\(|\)t\(|\))
    (Intercept) 3.45 0.35 9.85 0.00
    studyweek 0.00 0.00 0.27 0.79
    sleepnight 0.01 0.05 0.11 0.91
    outnight 0.05 0.05 1.01 0.32
    gender -0.08 0.12 -0.68 0.50
    1. Calculate a 95% confidence interval for the coefficient of gender in the model, and interpret it in the context of the data.

    2. Would you expect a 95% confidence interval for the slope of the remaining variables to include 0? Explain

4.2 Model selection

The best model is not always the most complicated. Sometimes including variables that are not evidently important can actually reduce the accuracy of predictions. In this section, we discuss model selection strategies, which will help us eliminate variables from the model that are found to be less important. It’s common (and hip, at least in the statistical world) to refer to models that have undergone such variable pruning as parsimonious.

In practice, the model that includes all available explanatory variables is often referred to as the full model. The full model may not be the best model, and if it isn’t, we want to identify a smaller model that is preferable.

Adjusted \(R^2\) describes the strength of a model fit, and it is a useful tool for evaluating which predictors are adding value to the model, where adding value means they are (likely) improving the accuracy in predicting future outcomes.

Let’s consider two models, which are shown in Table 4.6 and Table 4.7. The first table summarizes the full model since it includes all predictors, while the second does not include the issue_month variable.

Table 4.6: The fit for the full regression model, including the adjusted \(R^2\).
term estimate std.error statistic p.value
(Intercept) 1.894 0.210 9.008 <0.0001
verified_incomeSource Verified 0.997 0.099 10.056 <0.0001
verified_incomeVerified 2.563 0.117 21.873 <0.0001
debt_to_income 0.022 0.003 7.434 <0.0001
credit_util 4.897 0.162 30.249 <0.0001
bankruptcy1 0.391 0.132 2.957 0.0031
term 0.153 0.004 38.889 <0.0001
credit_checks 0.228 0.018 12.516 <0.0001
issue_monthJan-2018 0.046 0.108 0.421 0.6736
issue_monthMar-2018 -0.042 0.107 -0.391 0.696
Adjusted \(R^2\) = 0.2597
df = 9964
Table 4.7: The fit for the regression model after dropping the issue_month variable.
term estimate std.error statistic p.value
(Intercept) 1.896 0.198 9.56 <0.0001
verified_incomeSource Verified 0.996 0.099 10.05 <0.0001
verified_incomeVerified 2.561 0.117 21.86 <0.0001
debt_to_income 0.022 0.003 7.44 <0.0001
credit_util 4.896 0.162 30.25 <0.0001
bankruptcy1 0.392 0.132 2.96 0.0031
term 0.153 0.004 38.89 <0.0001
credit_checks 0.228 0.018 12.52 <0.0001
Adjusted \(R^2\) = 0.2598
df = 9966

Example 4.7 Which of the two models is better?


We compare the adjusted \(R^2\) of each model to determine which to choose. Since the second model has a higher \(R^2_{adj}\) compared to the first model, we prefer the second model to the first.

Will the model without issue_month be better than the model with issue_month? We cannot know for sure, but based on the adjusted \(R^2\), this is our best assessment.
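
The two fits in Tables 4.6 and 4.7 can be compared directly in R; a sketch, assuming the fit_full object from Section 4.1.2:

```r
# Drop issue_month and compare adjusted R^2 values (cf. Tables 4.6 and 4.7).
fit_no_month <- update(fit_full, . ~ . - issue_month)
c(full           = summary(fit_full)$adj.r.squared,
  no_issue_month = summary(fit_no_month)$adj.r.squared)
```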

4.2.1 Model selection strategies

Two common strategies for adding or removing variables in a multiple regression model are called backward elimination and forward selection. These techniques are often referred to as model selection strategies, because they add or delete one variable at a time as they “step” through the candidate predictors.

Backward elimination starts with the model that includes all potential predictor variables. Variables are eliminated one-at-a-time from the model until we cannot improve the adjusted \(R^2\). The strategy within each elimination step is to eliminate the variable that leads to the largest improvement in adjusted \(R^2\).

Example 4.8 Results corresponding to the full model for the loans data are shown in Table 4.6. How should we proceed under the backward elimination strategy?


Our baseline adjusted \(R^2\) from the full model is 0.2597, and we need to determine whether dropping a predictor will improve the adjusted \(R^2\). To check, we fit models that each drop a different predictor and record the adjusted \(R^2\) for each; one way to carry out this step is shown in the sketch below. For example, the model that drops issue_month (Table 4.7) has an adjusted \(R^2\) of 0.2598, a slight improvement over the full model, so issue_month is a candidate for removal.
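
A sketch of this drop-one step in R, again assuming fit_full from Section 4.1.2; each candidate term is removed in turn and the adjusted \(R^2\) of the reduced model is recorded.

```r
# One step of backward elimination based on adjusted R^2.
candidates <- c("verified_income", "debt_to_income", "credit_util",
                "factor(bankruptcy)", "term", "issue_month", "credit_checks")

drop_one_adj_r2 <- sapply(candidates, function(v) {
  reduced <- update(fit_full, as.formula(paste(". ~ . -", v)))
  summary(reduced)$adj.r.squared
})

sort(drop_one_adj_r2, decreasing = TRUE)  # largest values are the best candidates to drop
summary(fit_full)$adj.r.squared           # baseline: the full model
```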


4.2.2 Exercises

  1. Absenteeism, Part II Exercise \[absent_from_school_mlr\] considers a model that predicts the number of days absent using three predictors: ethnic background (eth), sex (sex), and learner status (lrn). The table below shows the adjusted R-squared for the model as well as adjusted R-squared values for all models we evaluate in the first step of the backwards elimination process.

    Model Adjusted \(R^2\)
    1 Full model 0.0701
    2 No ethnicity -0.0033
    3 No sex 0.0676
    4 No learner status 0.0723

    Which, if any, variable should be removed from the model first?

  2. Absenteeism, Part III Exercise \[absent_from_school_mlr\] provides regression output for the full model, including all explanatory variables available in the data set, for predicting the number of days absent from school. In this exercise we consider a forward-selection algorithm and add variables to the model one-at-a-time. The table below shows the p-value and adjusted \(R^2\) of each model where we include only the corresponding predictor. Based on this table, which variable should be added to the model first?

    variable ethnicity sex learner status
    p-value 0.0007 0.3142 0.5870
    \(R_{adj}^2\) 0.0714 0.0001 0
  3. Baby weights, Part IV Exercise \[baby_weights_mlr\] considers a model that predicts a newborn’s weight using several predictors (gestation length, parity, age of mother, height of mother, weight of mother, smoking status of mother). The table below shows the adjusted R-squared for the full model as well as adjusted R-squared values for all models we evaluate in the first step of the backwards elimination process.

    Model Adjusted \(R^2\)
    1 Full model 0.2541
    2 No gestation 0.1031
    3 No parity 0.2492
    4 No age 0.2547
    5 No height 0.2311
    6 No weight 0.2536
    7 No smoking status 0.2072

    Which, if any, variable should be removed from the model first?

  4. Baby weights, Part V Exercise \[baby_weights_mlr\] provides regression output for the full model (including all explanatory variables available in the data set) for predicting birth weight of babies. In this exercise we consider a forward-selection algorithm and add variables to the model one-at-a-time. The table below shows the p-value and adjusted \(R^2\) of each model where we include only the corresponding predictor. Based on this table, which variable should be added to the model first?

    variable gestation parity age height weight smoke
    p-value \(2.2 \times 10^{-16}\) 0.1052 0.2375 \(2.97 \times 10^{-12}\) \(8.2 \times 10^{-8}\) \(2.2 \times 10^{-16}\)
    \(R_{adj}^2\) 0.1657 0.0013 0.0003 0.0386 0.0229 0.0569
  5. Movie lovers, Part II Suppose an online media streaming company is interested in building a movie recommendation system. The website maintains data on the movies in their database (genre, length, cast, director, budget, etc.) and additionally collects data from their subscribers (demographic information, previously watched movies, how they rated previously watched movies, etc.). The recommendation system will be deemed successful if subscribers actually watch, and rate highly, the movies recommended to them. Should the company use the adjusted \(R^2\) or the p-value approach in selecting variables for their recommendation system?

4.3 Model diagnostics

4.3.1 Diagnostic plots

4.3.2 Improving model fit

4.3.3 Exercises

  1. Baby weights, Part VI Exercise \[baby_weights_mlr\] presents a regression model for predicting the average birth weight of babies based on length of gestation, parity, height, weight, and smoking status of the mother. Determine if the model assumptions are met using the plots below. If not, describe how to proceed with the analysis.

  2. Movie returns, Part I A FiveThirtyEight.com article reports that "Horror movies get nowhere near as much draw at the box office as the big-time summer blockbusters or action/adventure movies ... but there’s a huge incentive for studios to continue pushing them out. The return-on-investment potential for horror movies is absurd." To investigate how the return-on-investment compares between genres and how this relationship has changed over time, an introductory statistics student fit a model predicting the ratio of gross revenue of movies from genre and release year for 1,070 movies released between 2000 and 2018. Using the plots given below, determine if this regression model is appropriate for these data. (n.d.d)



4.4 Case study: Mario Kart

4.4.1 Data and the full model

4.4.2 Model selection

4.4.3 Checking model conditions

4.4.4 Exercises

4.5 Logistic regression

4.5.1 Resume data

4.5.2 Modeling the probability of an event

4.5.3 Logistic model with many variables

4.5.4 Model diagnostics

4.5.5 Groups of different sizes

4.5.6 Exercises

  1. Challenger disaster, Part I On January 28, 1986, a routine launch was anticipated for the Challenger space shuttle. Seventy-three seconds into the flight, disaster happened: the shuttle broke apart, killing all seven crew members on board. An investigation into the cause of the disaster focused on a critical seal called an O-ring, and it is believed that damage to these O-rings during a shuttle launch may be related to the ambient temperature during the launch. The table below summarizes observational data on O-rings for 23 shuttle missions, where the mission order is based on the temperature at the time of the launch. Temp gives the temperature in Fahrenheit, Damaged represents the number of damaged O-rings, and Undamaged represents the number of O-rings that were not damaged.


    Shuttle Mission 1 2 3 4 5 6 7 8 9 10 11 12

    Temperature 53 57 58 63 66 67 67 67 68 69 70 70
    Damaged 5 1 1 1 0 0 0 0 0 0 1 0
    Undamaged 1 5 5 5 6 6 6 6 6 6 5 6

    Shuttle Mission 13 14 15 16 17 18 19 20 21 22 23

    Temperature 70 70 72 73 75 75 76 76 78 79 81
    Damaged 1 0 0 0 0 1 0 0 0 0 0
    Undamaged 5 6 6 6 6 5 6 6 6 6 6


    1. Each column of the table above represents a different shuttle mission. Examine these data and describe what you observe with respect to the relationship between temperatures and damaged O-rings.

    2. Failures have been coded as 1 for a damaged O-ring and 0 for an undamaged O-ring, and a logistic regression model was fit to these data. A summary of this model is given below. Describe the key components of this summary table in words.

      Estimate Std. Error z value Pr(\(>\)\(|\)z\(|\))
      (Intercept) 11.6630 3.2963 3.54 0.0004
      Temperature -0.2162 0.0532 -4.07 0.0000
    3. Write out the logistic model using the point estimates of the model parameters.

    4. Based on the model, do you think concerns regarding O-rings are justified? Explain.

  2. Challenger disaster, Part II Exercise \[challenger_disaster_model_select\] introduced us to O-rings that were identified as a plausible explanation for the breakup of the Challenger space shuttle 73 seconds into takeoff in 1986. The investigation found that the ambient temperature at the time of the shuttle launch was closely related to the damage of O-rings, which are a critical component of the shuttle. See this earlier exercise if you would like to browse the original data.

    1. The data provided in the previous exercise are shown in the plot. The logistic model fit to these data may be written as \[\begin{aligned} \log\left( \frac{\hat{p}}{1 - \hat{p}} \right) = 11.6630 - 0.2162\times Temperature\end{aligned}\] where \(\hat{p}\) is the model-estimated probability that an O-ring will become damaged. Use the model to calculate the probability that an O-ring will become damaged at each of the following ambient temperatures: 51, 53, and 55 degrees Fahrenheit. The model-estimated probabilities for several additional ambient temperatures are provided below, where subscripts indicate the temperature: \[\begin{aligned} &\hat{p}_{57} = 0.341 && \hat{p}_{59} = 0.251 && \hat{p}_{61} = 0.179 && \hat{p}_{63} = 0.124 \\ &\hat{p}_{65} = 0.084 && \hat{p}_{67} = 0.056 && \hat{p}_{69} = 0.037 && \hat{p}_{71} = 0.024\end{aligned}\]

    2. Add the model-estimated probabilities from part (a) on the plot, then connect these dots using a smooth curve to represent the model-estimated probabilities.

    3. Describe any concerns you may have regarding applying logistic regression in this application, and note any assumptions that are required to accept the model’s validity.

  3. Possum classification, Part I The common brushtail possum of the Australia region is a bit cuter than its distant cousin, the American opossum (see Figure \[brushtail_possum\]). We consider 104 brushtail possums from two regions in Australia, where the possums may be considered a random sample from the population. The first region is Victoria, which is in the eastern half of Australia and traverses the southern coast. The second region consists of New South Wales and Queensland, which make up eastern and northeastern Australia. We use logistic regression to differentiate between possums in these two regions. The outcome variable takes value 1 when a possum is from Victoria and 0 when it is from New South Wales or Queensland. We consider five predictors: sex_male (an indicator for a possum being male), head_length, skull_width, total_length, and tail_length. Each variable is summarized in a histogram. The full logistic regression model and a reduced model after variable selection are summarized in the table.


                   Estimate        SE       Z   Pr($>$$|$Z$|$)      Estimate       SE       Z   Pr($>$$|$Z$|$)
    
     (Intercept)    39.2349   11.5368    3.40           0.0007       33.5095   9.9053    3.38           0.0007
        sex_male    -1.2376    0.6662   -1.86           0.0632       -1.4207   0.6457   -2.20           0.0278
     head_length    -0.1601    0.1386   -1.16           0.2480                                
     skull_width    -0.2012    0.1327   -1.52           0.1294       -0.2787   0.1226   -2.27           0.0231
    total_length     0.6488    0.1531    4.24           0.0000        0.5687   0.1322    4.30           0.0000
     tail_length    -1.8708    0.3741   -5.00           0.0000       -1.8057   0.3599   -5.02           0.0000

    1. Examine each of the predictors. Are there any outliers that are likely to have a very large influence on the logistic regression model?

    2. The summary table for the full model indicates that at least one variable should be eliminated when using the p-value approach for variable selection: head_length. The second component of the table summarizes the reduced model following variable selection. Explain why the remaining estimates change between the two models.

  4. Possum classification, Part II A logistic regression model was proposed for classifying common brushtail possums into their two regions in Exercise \[possum_classification_model_select\]. The outcome variable took value 1 if the possum was from Victoria and 0 otherwise.


                   Estimate       SE       Z   Pr($>$$|$Z$|$)
    
     (Intercept)    33.5095   9.9053    3.38           0.0007
        sex_male    -1.4207   0.6457   -2.20           0.0278
     skull_width    -0.2787   0.1226   -2.27           0.0231
    total_length     0.5687   0.1322    4.30           0.0000
     tail_length    -1.8057   0.3599   -5.02           0.0000

    1. Write out the form of the model. Also identify which of the variables are positively associated when controlling for other variables.

    2. Suppose we see a brushtail possum at a zoo in the US, and a sign says the possum had been captured in the wild in Australia, but it doesn’t say which part of Australia. However, the sign does indicate that the possum is male, its skull is about 63 mm wide, its tail is 37 cm long, and its total length is 83 cm. What is the reduced model’s computed probability that this possum is from Victoria? How confident are you in the model’s accuracy of this probability calculation?

4.6 Chapter review

4.6.1 Terms

We introduced the following terms in the chapter. If you’re not sure what some of these terms mean, we recommend you go back in the text and review their definitions. We are purposefully presenting them in alphabetical order, instead of in order of appearance, so they will be a little more challenging to locate. However you should be able to easily spot them as bolded text.

adjusted R-squared full model parsimonious
backward elimination multiple regression reference level

4.6.2 Chapter exercises

  1. Logistic regression fact checking Determine which of the following statements are true and false. For each statement that is false, explain why it is false.

    1. Suppose we consider the first two observations based on a logistic regression model, where the first variable in observation 1 takes a value of \(x_1 = 6\) and observation 2 has \(x_1 = 4\). Suppose we realized we made an error for these two observations, and the first observation was actually \(x_1 = 7\) (instead of 6) and the second observation actually had \(x_1 = 5\) (instead of 4). Then the predicted probability from the logistic regression model would increase the same amount for each observation after we correct these variables.

    2. When using a logistic regression model, it is impossible for the model to predict a probability that is negative or a probability that is greater than 1.

    3. Because logistic regression predicts probabilities of outcomes, observations used to build a logistic regression model need not be independent.

    4. When fitting logistic regression, we typically complete model selection using adjusted \(R^2\).

  2. Movie returns, Part II The student from Exercise \[movie_returns_altogether\] analyzed return-on-investment (ROI) for movies based on release year and genre of movies. The plots below show the predicted ROI vs. actual ROI for each of the genres separately. Do these figures support the comment in the FiveThirtyEight.com article that states, “The return-on-investment potential for horror movies is absurd.” Note that the x-axis range varies for each plot.

  3. Multiple regression fact checking Determine which of the following statements are true and false. For each statement that is false, explain why it is false.

    1. If predictors are collinear, then removing one variable will have no influence on the point estimate of another variable’s coefficient.

    2. Suppose a numerical variable \(x\) has a coefficient of \(b_1 = 2.5\) in the multiple regression model. Suppose also that the first observation has \(x_1 = 7.2\), the second observation has a value of \(x_1 = 8.2\), and these two observations have the same values for all other predictors. Then the predicted value of the second observation will be 2.5 higher than the prediction of the first observation based on the multiple regression model.

    3. If a regression model’s first variable has a coefficient of \(b_1 = 5.7\), then if we are able to influence the data so that an observation will have its \(x_1\) be 1 larger than it would otherwise, the value \(y_1\) for this observation would increase by 5.7.

    4. Suppose we fit a multiple regression model based on a data set of 472 observations. We also notice that the distribution of the residuals includes some skew but does not include any particularly extreme outliers. Because the residuals are not nearly normal, we should not use this model and require more advanced methods to model these data.

  4. Spam filtering, Part I Spam filters are built on principles similar to those used in logistic regression. We model the probability that each message is spam or not spam using several email variables. We won’t describe what each variable means here for the sake of brevity, but each is either a numerical or indicator variable.

    1. For variable selection, we fit the full model, which includes all variables, and then we also fit each model where we’ve dropped exactly one of the variables. In each of these reduced models, the AIC value for the model is reported below. Based on these results, which variable, if any, should we drop as part of model selection? Explain.

      Variable Dropped AIC
      None Dropped 1863.50
      2023.50
      1863.18
      1871.89
      1879.70
      1885.03
      1865.55
      1879.31
      2008.85
      1904.60
      1862.76
      1958.18
    2. Consider the following model selection stage. Here again we’ve computed the AIC for each leave-one-variable-out model. Based on the results, which variable, if any, should we drop as part of model selection? Explain.

      Variable Dropped AIC
      None Dropped 1862.41
      2019.55
      1871.17
      1877.73
      1884.95
      1864.52
      1878.19
      2007.45
      1902.94
      1957.56
  5. Spam filtering, Part II In Exercise \[spam_filtering_model_sel\], we encountered a data set where we applied logistic regression to aid in spam classification for individual emails. In this exercise, we’ve taken a small set of these variables and fit a formal model with the following output:

    Estimate Std. Error z value Pr(\(>\)\(|\)z\(|\))
    (Intercept) -0.8124 0.0870 -9.34 0.0000
    to_multiple -2.6351 0.3036 -8.68 0.0000
    winner 1.6272 0.3185 5.11 0.0000
    format -1.5881 0.1196 -13.28 0.0000
    re_subj -3.0467 0.3625 -8.40 0.0000
    1. Write down the model using the coefficients from the model fit.

    2. Suppose we have an observation where \(\texttt{to_multiple} = 0\), \(\texttt{winner} = 1\), \(\texttt{format} = 0\), and \(\texttt{re_subj} = 0\). What is the predicted probability that this message is spam?

    3. Put yourself in the shoes of a data scientist working on a spam filter. For a given message, how high must the probability a message is spam be before you think it would be reasonable to put it in a spambox (which the user is unlikely to check)? What tradeoffs might you consider? Any ideas about how you might make your spam-filtering system even better from the perspective of someone using your email service?

4.6.3 Interactive R tutorials

Navigate the concepts you’ve learned in this chapter in R using the following self-paced tutorials. All you need is your browser to get started!

You can also access the full list of tutorials supporting this book here.

4.6.4 R labs

Further apply the concepts you’ve learned in this chapter in R with computational labs that walk you through a data analysis case study.

References

Hand, D. J. 1994. A handbook of small data sets. Chapman & Hall/CRC.

Venables, W. N., and B. D. Ripley. 2002. Modern Applied Statistics with S. Fourth Edition. New York: Springer.

n.d.c.

n.d.d.


  1. When verified_income takes a value of Verified, then the corresponding variable takes a value of 1 while the other is 0: \[11.10 + 1.42 \times 0 + 3.25 \times 1 = 14.35\] The average interest rate for these borrowers is 14.35%.↩︎

  2. Each of the coefficients gives the incremental interest rate for the corresponding level relative to the Not Verified level, which is the reference level. For example, for a borrower whose income source and amount have been verified, the model predicts that they will have a 3.25% higher interest rate than a borrower who has not had their income source or amount verified.↩︎

  3. Relative to the Not Verified category, the Verified category has an interest rate about 3.25% higher, while the Source Verified category is only 1.42% higher. Thus, Verified borrowers will tend to get an interest rate about \(3.25\% - 1.42\% = 1.83\%\) higher than Source Verified borrowers.↩︎

  4. \(\beta_4\) represents the change in interest rate we would expect if someone’s credit utilization went from 0 to 1, all other factors held even. The point estimate is \(b_4 = 4.90\%\).↩︎

  5. To compute the residual, we first need the predicted value, which we compute by plugging values into the equation from earlier. For example, \(\texttt{verified_income}_{\texttt{Source Verified}}\) takes a value of 0, \(\texttt{verified_income}_{\texttt{Verified}}\) takes a value of 1 (since the borrower’s income source and amount were verified), \(\texttt{debt_to_income}\) was 18.01, and so on. This leads to a prediction of \(\widehat{\texttt{interest_rate}}_1 = 18.04\). The observed interest rate was 14.07%, which leads to a residual of \(e_1 = 14.07 - 18.04 = -3.97\).↩︎

  6. Many of the variables do take a value 0 for at least one data point, and for those variables, it is reasonable. However, one variable never takes a value of zero: term, which describes the length of the loan, in months. If term is set to zero, then the loan must be paid back immediately; the borrower must give the money back as soon as she receives it, which means it is not a real loan. Ultimately, the interpretation of the intercept in this setting is not insightful.↩︎

  7. \(R^2 = 1 - \frac{18.53}{25.01} = 0.2591\).↩︎

  8. \(R_{adj}^2 = 1 - \frac{18.53}{25.01}\times \frac{10000-1}{10000-9-1} = 0.2584\). While the difference is very small, it will be important when we fine-tune the model in the next section.↩︎

  9. The unadjusted \(R^2\) would stay the same and the adjusted \(R^2\) would go down.↩︎