(lecture_07)=

Fitting Over & Under

:::{post} Jan 7, 2024
:tags: statistical rethinking, bayesian inference, model fitting
:category: intermediate
:author: Dustin Stansbury
:::

This notebook is part of the PyMC port of the Statistical Rethinking 2023 lecture series by Richard McElreath.

Video - Lecture 07 - Fitting Over & Under

Infinite causes, finite data

  • There are an infinite number of estimators that could explain a given sample.
  • Often there is a tradeoff between simplicity (parsimony) and accuracy

Two simultaneous struggles

  1. Causation: using logic and causal assumptions to design estimators; comparing and contrasting alternative models
  2. Finite Data: how to make estimators work
    • the existence of an estimator is not enough; having an estimator doesn't mean that it's practical or possible to estimate it
    • we need to think about the engineering considerations around estimation

Problems of prediction

Create Toy Brain Volumes Dataset
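A minimal sketch of how such a dataset can be constructed, using the hominin species data from the textbook's R code 7.1; the `brains` DataFrame name and the rescaling choices are mine:

```python
import pandas as pd

# Hominin species, body mass (kg), and brain volume (cc), per R code 7.1
brains = pd.DataFrame({
    "species": ["afarensis", "africanus", "habilis", "boisei",
                "rudolfensis", "ergaster", "sapiens"],
    "mass_kg": [37.0, 35.5, 34.5, 41.5, 55.5, 61.0, 53.5],
    "brain_cc": [438, 452, 612, 521, 752, 871, 1350],
})

# Standardize mass; rescale brain volume to lie in [0, 1]
brains["mass_std"] = (brains.mass_kg - brains.mass_kg.mean()) / brains.mass_kg.std()
brains["brain_scaled"] = brains.brain_cc / brains.brain_cc.max()
```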

Considerations

  • what possible functions describe these points
    • curve fitting & compression
  • what possible functions explain these points
    • this is the goal of causal inference
  • what would happen if we changed one of the points' mass?
    • intervention
  • given a new mass, what's the expected value of the corresponding volume?
    • prediction

Leave-one-out Cross Validation (LOOCV)

Process for determining the ability of a model function to generalize, i.e. to accurately predict out-of-sample data points. A minimal sketch of the procedure follows the steps below.

  1. Drop one data point
  2. Fit the model function's parameters with the data point missing
  3. Predict the value of the dropped point, record the error
  4. Repeat 1-3 for all data points
  5. The LOOCV score is the sum of errors over all the dropped points
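As a minimal numpy sketch (not the lecture's code), using `np.polyfit` as the model-fitting step and squared error as the error measure:

```python
import numpy as np

def loocv_score(x, y, degree):
    """Brute-force leave-one-out CV score for a polynomial of a given degree."""
    errors = []
    for i in range(len(x)):                            # 4. repeat for every point
        keep = np.arange(len(x)) != i                  # 1. drop one data point
        coefs = np.polyfit(x[keep], y[keep], degree)   # 2. fit without that point
        prediction = np.polyval(coefs, x[i])           # 3. predict the dropped point
        errors.append((y[i] - prediction) ** 2)        #    ...and record the error
    return np.sum(errors)                              # 5. sum errors over all points
```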

Comparing Polynomial Models with LOOCV

As we increase the polynomial order

  • The in-sample error reduces
  • The out-of-sample error SKYROCKETS
  • (at least for simple models like these) there's a tradeoff between model simplicity and complexity that is mirrored in the in-sample and out-of-sample error; see the sketch after this list
  • There will be a "sweet spot" in model complexity that is just complex enough to fit the data, but not so complex that it overfits to noise.
  • This is known as the bias-variance tradeoff
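A rough numerical illustration, reusing the `brains` data and `loocv_score` helper from the sketches above (the `insample_score` helper is my own addition):

```python
import numpy as np

def insample_score(x, y, degree):
    """Sum of squared errors on the full (training) sample."""
    coefs = np.polyfit(x, y, degree)
    return np.sum((y - np.polyval(coefs, x)) ** 2)

x = brains.mass_std.values
y = brains.brain_scaled.values

for degree in range(1, 5):
    print(f"degree {degree}: "
          f"in-sample = {insample_score(x, y, degree):.4f}, "
          f"out-of-sample (LOOCV) = {loocv_score(x, y, degree):.4f}")
```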

NOTE this is all applicable to the goal of prediction, not causal inference.

Regularization

  • There will be a tradeoff between model flexibility and accuracy. Having a model that's too flexible can lead to overweighting the contribution of noise.
  • We do not want the estimator to get too excited about rare outlier events; we want it to find regular features.
  • Bayesian modeling uses tighter priors (in terms of variance) in order to be skeptical of unlikely events, tamping down the estimator's tendency to "get excited" about outlier events, and thus reducing the model's flexibility to fit those outliers (see the sketch after this list).
    • Good priors are often tighter than one thinks
  • The goal isn't signal compression (i.e. memorizing all features of the data), it is generalization in predictions.
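A minimal PyMC sketch of what a regularizing prior looks like in practice; the prior widths (10 vs. 0.5) are illustrative choices, and `x` / `y` are the standardized mass and scaled volume from the sketches above:

```python
import pymc as pm

# Same linear model, two priors on the slope: a loose prior that lets the model
# chase noise, and a tight, regularizing prior that is skeptical of large effects.
with pm.Model() as loose_model:
    alpha = pm.Normal("alpha", 0, 10)
    beta = pm.Normal("beta", 0, 10)        # loose: happy to "get excited"
    sigma = pm.Exponential("sigma", 1)
    pm.Normal("obs", alpha + beta * x, sigma, observed=y)

with pm.Model() as regularized_model:
    alpha = pm.Normal("alpha", 0, 0.5)
    beta = pm.Normal("beta", 0, 0.5)       # tight: skeptical of extreme slopes
    sigma = pm.Exponential("sigma", 1)
    pm.Normal("obs", alpha + beta * x, sigma, observed=y)
```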

Cross validation is not regularization: CV can be used to compare models, but not to reduce flexibility (though we could average models)

Regularizing Priors

  • Use science: there's no replacement for domain knowledge
  • Can tune priors using CV
  • Many tasks are a mix of inference and prediction, so weigh both the ability to predict and the ability to make actionable inferences

If we define the out-of-sample penalty (OOSP) as the difference between the out-of-sample and in-sample error, we can see that as model complexity increases, so does the OOSP. We can use this metric to compare models of different form and complexity, as it provides a signal for overfitting.
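A tiny sketch of that calculation, reusing the helpers defined above:

```python
# OOSP: how much worse the fit is on held-out points than on the training points
for degree in range(1, 5):
    oosp = loocv_score(x, y, degree) - insample_score(x, y, degree)
    print(f"degree {degree}: OOSP = {oosp:.4f}")
```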

McElreath goes on to show that the Bayesian cross-validation metrics WAIC and PSIS closely track the OOSP returned by brute-force LOOCV (via the proxy lppd). It would be nice to replicate that chart in Python/PyMC; however, that's a lot of extra coding and model estimation for one plot, so I'm going to skip it for now. That said, the section on Robust Regression shows an example of using the WAIC and PSIS estimates returned by PyMC/ArviZ.

Penalty Prediction & Model (Mis-) Selection

For the simple example above, running cross-validation to obtain the in- and out-of-sample penalty is no big deal. However, for more complex models that may take a long time to train, retraining multiple times can be prohibitive. Luckily, there are approximations to the CV procedure that allow us to obtain similar metrics directly from Bayesian models without having to explicitly run CV. These metrics, demonstrated in the sketch after this list, include:

  • Pareto-smoothed Importance Sampling (PSIS)
  • Widely Applicable Information Criterion (WAIC, also known as the Watanabe-Akaike Information Criterion)
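Both are available through ArviZ once a PyMC model has been sampled with the pointwise log-likelihood stored. The toy model below is only there to make the call pattern concrete; it is not a model from the lecture:

```python
import arviz as az
import numpy as np
import pymc as pm

y_obs = np.random.default_rng(0).normal(0, 1, size=50)

with pm.Model():
    mu = pm.Normal("mu", 0, 1)
    sigma = pm.Exponential("sigma", 1)
    pm.Normal("y", mu, sigma, observed=y_obs)
    # the pointwise log-likelihood is needed for WAIC / PSIS-LOO
    idata = pm.sample(idata_kwargs={"log_likelihood": True})

print(az.waic(idata))  # Widely Applicable Information Criterion
print(az.loo(idata))   # PSIS-LOO cross-validation approximation
```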

When directly addressing causal inference problems, do not use CV penalties for selecting causal models; this can result in selecting a confounded model. Confounds often aid prediction in the absence of intervention by milking all available association signals. However, there are many associations that we do not want to include when addressing causal problems.

Example: Simulated Fungus Growth

The following is a translation of R code 6.13 from McElreath's v2 textbook that's used to simulate fungus growth on plants.

Simulate the plant growth experiment
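A Python translation along these lines; the simulation constants follow R code 6.13, while the variable names and the use of `numpy.random.default_rng` are my own choices:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(71)
n_plants = 100

h0 = rng.normal(10, 2, size=n_plants)              # initial plant heights
treatment = np.repeat([0, 1], n_plants // 2)       # anti-fungal treatment indicator
# treatment lowers the probability of fungus from 0.5 to 0.1
fungus = rng.binomial(1, 0.5 - 0.4 * treatment)
# fungus slows growth; treatment affects final height only through fungus
h1 = h0 + rng.normal(5 - 3 * fungus, 1)

plants = pd.DataFrame(
    {"h0": h0, "h1": h1, "treatment": treatment, "fungus": fungus}
)
```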

If we are focusing on the total causal effect of the treatment, $T$, on final height, $H_1$:

Incorrect adjustment set (stratifying by $F$)

$$
\begin{align*}
H_1 &\sim \mathcal{N}(\mu_i, \sigma) \\
\mu_i &= H_0 \times g_i \\
g_i &= \alpha + \beta_T T_i + \beta_F F_i
\end{align*}
$$
  • incorporates treatment $T$ and fungus, $F$, which is a post-treatment variable (bad for causal inference)
  • $F$ would not have been found using the backdoor criterion
  • However, it provides more accurate predictions than not incorporating $F$ (see the model sketch below)
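A PyMC sketch of this (incorrect) model. The priors follow the textbook's model m6.7 but should be treated as an assumption here; `plants` is the DataFrame from the simulation sketch above:

```python
import pymc as pm

with pm.Model() as fungus_model:
    alpha = pm.LogNormal("alpha", 0, 0.2)      # proportion of growth, centered near 1
    beta_T = pm.Normal("beta_T", 0, 0.5)
    beta_F = pm.Normal("beta_F", 0, 0.5)       # stratifies by the post-treatment variable
    sigma = pm.Exponential("sigma", 1)

    growth = alpha + beta_T * plants.treatment.values + beta_F * plants.fungus.values
    mu = plants.h0.values * growth
    pm.Normal("h1", mu, sigma, observed=plants.h1.values)

    fungus_inference = pm.sample(idata_kwargs={"log_likelihood": True})
```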

Correct adjustment set (not stratifying by $F$)

$$
\begin{align*}
H_1 &\sim \mathcal{N}(\mu_i, \sigma) \\
\mu_i &= H_0 \times g_i \\
g_i &= \alpha + \beta_T T_i
\end{align*}
$$
  • includes only the treatment, $T$
  • less accurate predictions, despite being the correct causal model (see the model sketch below)
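A sketch of the correct model, identical except that the fungus term is dropped:

```python
import pymc as pm

with pm.Model() as treatment_model:
    alpha = pm.LogNormal("alpha", 0, 0.2)
    beta_T = pm.Normal("beta_T", 0, 0.5)       # total causal effect of treatment
    sigma = pm.Exponential("sigma", 1)

    growth = alpha + beta_T * plants.treatment.values
    mu = plants.h0.values * growth
    pm.Normal("h1", mu, sigma, observed=plants.h1.values)

    treatment_inference = pm.sample(idata_kwargs={"log_likelihood": True})
```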

Biased model provides better predictions of the data

We can see that by comparing LOO Cross validation scores, the biased model is ranked higher:

  • biased model weight = 1
  • ELPD (deviance) is lower, indicating better out-of-sample predictions (see the comparison sketch below)
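A sketch of that comparison with `az.compare`, using the two inference objects from the model sketches above:

```python
import arviz as az

comparison = az.compare(
    {"fungus (biased)": fungus_inference, "treatment only": treatment_inference},
    ic="loo",
)
print(comparison)  # the biased model gets the higher rank / model weight
```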

However, the unbiased model recovers the correct causal effect

Plotting the posteriors of each model provides different results. The biased model suggests that there is no, or even a negative, effect of the treatment on plant growth. We know this is not true because, in the simulation, the treatment either has no effect or a positive effect on plant growth (indirectly, by reducing fungus).

Why does the wrong model win at prediction?

  • Fungus is a better predictor of growth than treatment.
    • there is a clear (negative) linear relationship between fungus and growth (left plot below)
      • i.e. the marginals are better separated when separating by fungus
    • splitting by control/treatment provides much less separation
      • i.e. the marginals are highly overlapping when splitting by treatment
    • there are fewer fungus points in the treatment group (see right plot below). This provides less predictive signal in the training set for assessing the predictive association between treatment and growth.

Model Mis-selection

  • NEVER use predictive tools (WAIC, PSIS, CV / OOSP) to choose a causal estimate
  • Most analyses are a mix of prediction and inference.
  • Accurate functional descriptions create simpler, more parsimonious models that are less prone to overfitting, so use science where possible

Outliers and Robust Regression

  • Outliers are more influential on the posterior than "normal" points
    • You can quantify this using CV -- this is related to Importance Sampling
  • Do not drop outliers, they are still signal
    • The model is wrong, not the data
    • e.g. Mixture models & robust regression

Example: Divorce Rate

  • Maine & Idaho both unusual
  • Maine: High divorce rate for average marriage age
  • Idaho: Low divorce rate and less-than-average marriage age

What is the influence of outlier points like Idaho and Maine?

Fit Least Squares Model
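A sketch of the Normal-likelihood regression; `age_std` and `divorce_std` are hypothetical arrays holding the standardized median marriage age and divorce rate for each state, and the priors are typical of this lecture series rather than taken from this notebook:

```python
import pymc as pm

with pm.Model() as gaussian_model:
    alpha = pm.Normal("alpha", 0, 0.2)
    beta_A = pm.Normal("beta_A", 0, 0.5)
    sigma = pm.Exponential("sigma", 1)

    mu = alpha + beta_A * age_std              # age_std: assumed standardized predictor
    pm.Normal("divorce", mu, sigma, observed=divorce_std)

    gaussian_inference = pm.sample(idata_kwargs={"log_likelihood": True})
```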

Here we can see that the two outliers have a large effect on the Normal-likelihood model's posterior. This is because the outliers have very low probability under the Normal distribution, and thus are very "surprising" or "salient".

Mixing Gaussians -- the Student-T

  • Adding Gaussians of differing variances results in a distribution with fatter tails
  • Implicitly captures unobserved heterogeneity in the population
    • Multiple overlapping processes could have different variances (see the sketch after this list).
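A quick numerical illustration; the exponential distribution over standard deviations is just an illustrative choice, not the exact scale mixture that produces a Student-t:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

single = rng.normal(0, 1, size=n_samples)            # one Gaussian, fixed variance
varying_sigmas = rng.exponential(1, size=n_samples)  # heterogeneous spread
mixture = rng.normal(0, varying_sigmas)              # mixture of Gaussians

# The mixture puts far more probability mass in the tails.
print("P(|x| > 4), single Gaussian:", (np.abs(single) > 4).mean())
print("P(|x| > 4), mixture:        ", (np.abs(mixture) > 4).mean())
```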

Robust Linear Regression Using the Student-t Likelihood
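A sketch of the robust model, identical to the Gaussian sketch above except for the likelihood; fixing `nu=2` reflects the "just set the degrees of freedom" point reviewed below:

```python
import pymc as pm

with pm.Model() as robust_model:
    alpha = pm.Normal("alpha", 0, 0.2)
    beta_A = pm.Normal("beta_A", 0, 0.5)
    sigma = pm.Exponential("sigma", 1)

    mu = alpha + beta_A * age_std
    # Student-t likelihood with low, fixed degrees of freedom: thick tails that
    # are "less surprised" by states like Idaho and Maine
    pm.StudentT("divorce", nu=2, mu=mu, sigma=sigma, observed=divorce_std)

    robust_inference = pm.sample(idata_kwargs={"log_likelihood": True})
```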

Here we can see how the outliers pull the posterior closer to zero in the vanilla Gaussian linear regression. The more robust student-t model is less affected by those outliers.

Outliers have less effect on Student-t posterior

We can see that using a likelihood that is more robust to outliers weighs those outliers less, as indicated by Maine and Idaho receiving less extreme importance weights, particularly for PSIS (see the sketch below).
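One way to inspect this with ArviZ is via the pointwise Pareto k diagnostics from PSIS-LOO, applied to the two model sketches above:

```python
import arviz as az

# Pareto k flags overly influential observations; the outlier states stand out
# under the Gaussian model much more than under the Student-t model.
gaussian_loo = az.loo(gaussian_inference, pointwise=True)
robust_loo = az.loo(robust_inference, pointwise=True)

print("max Pareto k (Gaussian): ", float(gaussian_loo.pareto_k.max()))
print("max Pareto k (Student-t):", float(robust_loo.pareto_k.max()))
```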

Review: Robust Regression

  • A mixture of Gaussians / the Student-t implicitly captures unobserved heterogeneity in the population
  • This results in "thick tailed" distributions that are "less surprised" by outliers
  • We "just set" the degrees of freedom parameters in the Student-t. It turns out it is hard to accurately estimate degrees of freedom from the data because outlier values are rare.
  • Should we just use Robust Regression by default for under-theorized domains?
    • Probably a good idea
    • Example: the world becoming more peaceful since the early 1900s.
      • Thick-tailed distributions would suggest that large-scale conflicts like those of the early 1900s are not surprising, but are to be expected from processes like societal conflicts that can compound and "run away" (such processes tend to be long-tailed).

Authors

  • Ported to PyMC by Dustin Stansbury (2024)
  • Based on Statistical Rethinking (2023) lectures by Richard McElreath

:::{include} ../page_footer.md
:::