(lecture_18)=

Missing Data

:::{post} Jan 7, 2024
:tags: statistical rethinking, bayesian inference, missing data
:category: intermediate
:author: Dustin Stansbury
:::

This notebook is part of the PyMC port of the Statistical Rethinking 2023 lecture series by Richard McElreath.

Video - Lecture 18 - Missing Data

Missing Data, Found

  • Observed data is a special case
    • we trick ourselves into believing there is no error
    • no/little error is rare

Most data are missing

  • but there's still information about the missing data: no data is fully missing when we have a generative model
  1. constraints - e.g. heights are never negative, and bounded
  2. relationships to other variables - e.g. weather affects our behavior, our behavior does not affect the weather

Dealing with missing data is a workflow

What to do with missing data?

  • Complete case analysis: drop cases with missing values
    • sometimes the right thing to do
    • depends on the context and causal assumptions
  • Imputation: fill in the missing values with reasonable values based on the present values
    • often necessary and beneficial for accurate estimation

Dog ate my homework: looking at different missing data scenarios

  • Student ability/knowledge state $S$
  • Students would receive score on homework $H$
  • We only get to observe the homework the teacher sees, $H^*$
    • we don't get to observe the real score, because homework may not make it to the teacher
    • e.g. the family dog $D$ randomly eats homework (the missingness mechanism)

Dog usually benign

Simulate random homework eating

  • When we lose outcomes randomly, we obtain a similar linear fit with a similar slope, and little/no bias.
  • Thus dropping the incomplete cases (complete-case analysis) is OK, but we lose efficiency (see the simulation sketch below)
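
As a rough illustration (not the notebook's actual simulation code), here is a minimal NumPy sketch of this scenario; the sample size, slope, and 50% eating rate are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_students = 500

S = rng.normal(size=n_students)          # student ability
H = rng.normal(0.7 * S, 0.5)             # homework score caused by ability

# The dog eats ~50% of homework completely at random
# (missingness is independent of both S and H)
dog_ate = rng.random(n_students) < 0.5
obs = ~dog_ate                           # complete cases: homework that survived

print("slope, all data:       %.2f" % np.polyfit(S, H, 1)[0])
print("slope, complete cases: %.2f" % np.polyfit(S[obs], H[obs], 1)[0])
# Both slopes are close to the true 0.7; we mostly just lose sample size (efficiency).
```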

Dog eats homework based on the cause of the homework score (student ability)

Now the student's ability affects the dog's behavior (e.g. the student studies too much and doesn't feed the dog, so the dog is hungry)

Simulate data where the treatment affects the outcome linearly

When the association between student ability $S$ and homework score $H$ is linear, we can still get fairly similar linear fits from the total and incomplete samples; we just lose precision on the upper end of the $S$ dimension in this simulation

Simulate data where the treatment affects the outcome nonlinearly

  • When the association between student ability $S$ and homework score $H$ is nonlinear, we now get very different linear fits from the total and incomplete samples
    • on nonlinearity: this is why we use generative models with scientifically-motivated functions: functions matter
  • Therefore we need to correctly condition on the cause (see the simulation sketch below)
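
A similarly hedged sketch of this cause-dependent, nonlinear case; the saturating curve and eating probabilities are illustrative assumptions, not the lecture's exact simulation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_students = 500

S = rng.normal(size=n_students)
# Nonlinear, saturating relationship: scores level off for high-ability students
H = rng.normal(1 / (1 + np.exp(-4 * S)), 0.1)

# Missingness now depends on the cause S: high-ability students' dogs go hungry
p_eaten = np.where(S > 0, 0.9, 0.1)
obs = rng.random(n_students) > p_eaten   # surviving (observed) homework

print("slope, all data:       %.2f" % np.polyfit(S, H, 1)[0])
print("slope, complete cases: %.2f" % np.polyfit(S[obs], H[obs], 1)[0])
# The complete cases are dominated by the low-S region of the curve, so a straight
# line fit to them has a noticeably different slope than a fit to all the data.
```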

Dog eats homework conditioned on the state of the homework itself

Simulate some linear data

  • without knowing the causal relationship between the state of the homework and the data loss, and the functional form of how $S$ is associated with $H$, it's difficult to account for this scenario (a sketch of this scenario follows)
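
A sketch of this outcome-dependent case, again with assumed functional forms (not the lecture's exact simulation):

```python
import numpy as np

rng = np.random.default_rng(3)
n_students = 500

S = rng.normal(size=n_students)
H = rng.normal(0.7 * S, 0.5)

# Missingness now depends on the outcome itself: low-scoring homework is tastier
p_eaten = 1 / (1 + np.exp(4 * H))        # high probability of loss when H is low
obs = rng.random(n_students) > p_eaten

print("slope, all data:       %.2f" % np.polyfit(S, H, 1)[0])
print("slope, complete cases: %.2f" % np.polyfit(S[obs], H[obs], 1)[0])
# Selecting on the outcome biases the complete-case slope (here it is attenuated),
# and without a model of the dog's behavior there is no way to correct for it
# from the observed data alone.
```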

Stereotypical cases

  1. Data loss is random and independent of causes
    • "Dog eats homework randomly"
    • Dropping the incomplete cases (complete-case analysis) is OK, but we lose efficiency
  2. Data loss is conditioned on the cause
    • "Dog eats homework based on the student's study habits"
    • Need to correctly condition on cause
  3. Data loss is conditioned on the outcome
    • "Dog eats homeowork based on the score of the homework"
    • This is usually hopeless unless we can model the causal process of how state affects data loss (i.e. the dog's behavior in this simulation)
      • e.g. survival analysis and censored data

Bayesian Imputation

  • Works for Stereotypical cases 1), 2) and subsets of 3) (e.g. survival analysis)
  • Having a joint (Bayesian) model for all variables (observed data and parameters) provides a means for inferring reasonable bounds on the values that are missing

Imputing or Marginalizing

  • Imputation: compute a posterior probability distribution of the missing values
    • usually involves partial pooling
    • we've already done this to some extent with the missing district in the Bangladesh dataset
  • Marginalizing unknowns: average over the distribution of missing values using the posteriors for the other variables
    • no need to calculate posteriors for missing values
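
To make the distinction concrete, marginalizing a single discrete missing value $x_i$ with $K$ possible values just means averaging the likelihood over those values using the model's own probabilities (a general probability identity, not a formula specific to this lecture):

$$p(y_i \mid \theta) = \sum_{k=1}^{K} p(y_i \mid x_i = k, \theta) \, p(x_i = k \mid \theta)$$

No posterior for $x_i$ is ever computed; it has simply been averaged out of the likelihood.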

When to impute? When to marginalize?

  • Sometimes imputation is unnecessary -- e.g. discrete variables, which gradient-based samplers like HMC can't sample directly
    • Marginalization works here
  • It's sometimes easier to impute
    • e.g. when the data loss process is well understood as in censoring in survival analysis
    • marginalization may be difficult for more exotic models

Revisiting Phylogenetic Regression

  • 301 Primate profiles
  • Life history traits
    • Body Mass $M$
    • Brain size $B$
    • Social Group Size $G$
  • Issues
    • Lots of missing data
      • Complete case analysis only includes 151 observations
    • measurement error
    • unobserved confounding

Imputing Primates

  • Key Idea: missing values have probability distributions
    • Express a causal model for each partially-observed variable
    • Replace each missing value with a parameter, and let the model proceed as normal
      • Duality of parameters and data:
        • In Bayes there is no distinction between params and data: missing value is just a parameter, observed value is data
    • e.g. one variable that is available can inform us about reasonable values for a missing variable in the same observation
  • Conceptually weird, technically awkward

Revisiting previous causal model

  • where $M,G,B$ are defined above
  • shared history $H$, which leads to
  • history-based confounds $u$, which we try to capture with a phylogeny

Let's now include how missing values may occur

  • We observe group size $G^*$, which has missing values for some observations/species
  • What influences the cause of missingness $m_G$?
    • often the assumption is that it's totally random, but this is highly unlikely
    • there are many possible ways that missingness could occur

Some hypotheses on what causes missing data...

  • All the colored arrows, and assumptions/hypotheses, are potentially in play
    • anthro-narcissism: humans are more likely to study primates that are like them
    • larger species with larger body mass easier to count: e.g. some species might live in trees and are harder to observe
    • some species have little or no social group; these solitary species are difficult to observe and gather data on
  • Whatever the assumption, the goal is to use causal model to infer the probability distribution of each missing value
  • Uncertainty in each missing value will cascade through the model
    • again, no need to be clever
    • trust the axioms of probability

Modeling missing data for multiple variables simultaneously

$M, G, B$ all have missing values; we observe $M^*, G^*, B^*$

Dealing with missing values in Brain Size $B$

It turns out we already handled this using Gaussian processes in the original Phylogenetic Regression example. By leveraging the local pooling offered by the Gaussian process, we fill in missing brain size $B$ values using information from species that are similar to the species with missing values, but that do have values for $B$.

The equivalent causal sub-graph used to model Brain Size in the "Phylogenetic Regression" section of Lecture 16 - Gaussian Processes

Brain Size $B$ Model

Assuming missingness is totally at random

\begin{align}
\textbf{B} &\sim \text{MVNormal}(\mu_B, \textbf{K}_B) \\
\mu_{B,i} &= \alpha_B + \beta_{GB} G_i + \beta_{MB} M_i \\
\mathbf{K}_B &= \eta_B^2 \exp(-\rho_B D_{jk}) \\
\alpha_B &\sim \text{Normal}(0, 1) \\
\beta_{GB, MB} &\sim \text{Normal}(0, 1) \\
\eta_B^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_B &\sim \text{HalfNormal}(3, 0.25)
\end{align}

We can use a similar approach to simultaneously formulate parallel models for the other variables $G, M$ that may have missing values, à la Full Luxury Bayes.

Group Size $G$ Model

\begin{align}
\textbf{G} &\sim \text{MVNormal}(\mu_G, \textbf{K}_G) \\
\mu_{G,i} &= \alpha_G + \beta_{MG} M_i \\
\mathbf{K}_G &= \eta_G^2 \exp(-\rho_G D_{jk}) \\
\alpha_G &\sim \text{Normal}(0, 1) \\
\beta_{MG} &\sim \text{Normal}(0, 1) \\
\eta_G^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_G &\sim \text{HalfNormal}(3, 0.25)
\end{align}

Body Mass $M$ model

\begin{align}
\textbf{M} &\sim \text{MVNormal}(\mathbf{0}, \textbf{K}_M) \\
\mathbf{K}_M &= \eta_M^2 \exp(-\rho_M D_{jk}) \\
\eta_M^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_M &\sim \text{HalfNormal}(3, 0.25)
\end{align}

Drawing the missing owl 🦉

Building a model containing three simultaneous Gaussian processes has a lot of moving parts. It's better to build up model complexity in small steps.

  1. Ignore cases with missing $B$ values (for now)
    • $B$ is the outcome, so any imputation would just be prediction
  2. Impute $G,M$ ignoring models for each
    • wrong model, correct process
    • allows us to see the consequences of adding complexity to the model
  3. Impute $G$ using a $G$-specific submodel
  4. Impute $B,G,M$ using a submodel for each

1. Ignore cases with missing $B$ values

Filter out observations missing brain volume measurements

Cross-tabulate missingness for $G,M$ on the resulting dataset

  • True: Observed
  • False: Missing
  • Group size $G$ is missing a lot of values (33)
  • Some correlation between the presence/absence of each variable
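
A minimal pandas sketch of these two steps, using a toy stand-in dataframe (the column names `brain`, `group_size`, and `body_mass` are hypothetical, not the notebook's actual columns):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real primate dataframe
primates = pd.DataFrame({
    "species": ["A", "B", "C", "D"],
    "brain": [1.2, np.nan, 0.8, 2.0],
    "group_size": [10.0, 5.0, np.nan, np.nan],
    "body_mass": [3.5, np.nan, 1.1, 7.9],
})

# 1. Drop cases with missing brain size (the outcome)
d = primates.dropna(subset=["brain"])

# 2. Cross-tabulate observed/missing status of group size vs. body mass
print(pd.crosstab(d["group_size"].notna(), d["body_mass"].notna(),
                  rownames=["G observed"], colnames=["M observed"]))
```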

2. Impute $G,M$ naively, ignoring models for each

The idea here is to treat $G, M$ as independent random variables with no causes, e.g. standard Normals (a minimal PyMC sketch follows the list below).

$$M_i, G_i \sim \text{Normal}(0, 1)$$
  • This is not the correct model, but the correct starting place to build up the model gradually
  • Interpretation:
    • When $G_i$ is observed, the distribution is the likelihood for a standardized variable
    • When $G_i$ is missing, the distribution is the prior for the "parameter"
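
In PyMC, this duality can be expressed directly by passing a masked array as the observed data. A minimal sketch, assuming `G_std` is a standardized array with `np.nan` marking missing group sizes (the toy values below are stand-ins):

```python
import numpy as np
import pymc as pm

G_std = np.array([0.1, np.nan, -0.3, np.nan, 1.2])  # toy stand-in for standardized G

with pm.Model() as naive_prior:
    # Masked entries are treated as unknown parameters to be sampled (the prior
    # interpretation); unmasked entries contribute to the likelihood (the data
    # interpretation). PyMC emits an ImputationWarning when it does this.
    G = pm.Normal("G", 0.0, 1.0, observed=np.ma.masked_invalid(G_std))
```

The single `Normal(0, 1)` line therefore acts as a likelihood for the observed species and as a prior for the missing ones.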

Fit the imputation model that ignores the submodels for $G,M$

Load the phylogenetic distance matrix

Fit the naive imputation model

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.
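
For orientation, here is a rough, hedged sketch of what an MVNormal-by-hand version of this naive imputation model might look like. The toy data, variable names (`B_std`, `G_std`, `M_std`, `D`), and the truncated-Normal stand-ins for the $\eta^2, \rho$ priors are illustrative assumptions, not the notebook's actual code:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Toy stand-ins so the sketch runs; the notebook uses the standardized primate
# data and the scaled phylogenetic distance matrix.
rng = np.random.default_rng(0)
N = 15
B_std = rng.normal(size=N)                                         # complete after filtering
G_std = np.where(rng.random(N) < 0.3, np.nan, rng.normal(size=N))  # has missing values
M_std = np.where(rng.random(N) < 0.1, np.nan, rng.normal(size=N))  # has missing values
D = np.abs(rng.normal(size=(N, N)))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)
D = D / D.max()                                                    # scaled distances

with pm.Model() as naive_imputation_model:
    # Naive imputation: G and M get standard-Normal priors with no modeled causes;
    # masked (missing) entries become free parameters, observed entries are data.
    G = pm.Normal("G", 0.0, 1.0, observed=np.ma.masked_invalid(G_std))
    M = pm.Normal("M", 0.0, 1.0, observed=np.ma.masked_invalid(M_std))

    # Linear model for standardized brain size
    alpha_B = pm.Normal("alpha_B", 0.0, 1.0)
    beta_GB = pm.Normal("beta_GB", 0.0, 1.0)
    beta_MB = pm.Normal("beta_MB", 0.0, 1.0)
    mu_B = alpha_B + beta_GB * G + beta_MB * M

    # Phylogenetic covariance K = eta^2 * exp(-rho * D), with rough stand-ins
    # for the lecture's priors on eta^2 and rho
    eta_sq = pm.TruncatedNormal("eta_sq", mu=1.0, sigma=0.25, lower=0.0)
    rho = pm.TruncatedNormal("rho", mu=3.0, sigma=0.25, lower=0.0)
    K_B = eta_sq * pt.exp(-rho * D) + np.eye(N) * 1e-6  # small jitter for stability

    pm.MvNormal("B", mu=mu_B, cov=K_B, observed=B_std)

    # naive_idata = pm.sample()  # sample as usual
```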

Demonstrate the effect of imputation

$M$ Imputation
Prior values for $M$
$G$ Imputation
Prior Values for $G$

Plot posterior imputation

Looking at imputed values

  • Because $M$ is strongly associated with $B$, the imputed $M$ values follow the linear relationship
  • Because the association between $M$ and $G$ isn't modeled, the imputed values don't follow the linear trend between $M$ and $G$
  • We've left some information on the table; this is the result of ignoring the full generative model

Complete case model for comparison

Here we'll fit a complete-cases-only model where there's no need for imputation, but we throw away data.

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.

Compare posteriors for naively-imputed and complete case model

  • we can see that the posterior for the imputed model attenuates the effect of Group size on Brain Size

3. Impute $G$ using a $G$-specific submodel

Add the generative model for Group Size. But, again, we'll do this in baby steps, slowly building complexity

  1. Model that only models effect of body mass on group size $M \rightarrow G$
  2. Model that only includes Social group phylogenetic interactions
  3. Model that combines phylogeny and $M \rightarrow G$

1. Model that only models effect of body mass on group size $M \rightarrow G$

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.
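
As a hedged sketch of the change relative to the naive model (toy data, hypothetical names, and loosely chosen priors as before; not the notebook's actual code), the standard-Normal prior on $G$ is replaced by a regression of $G$ on $M$:

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

# Toy stand-ins, as in the naive-imputation sketch above
rng = np.random.default_rng(0)
N = 15
B_std = rng.normal(size=N)
G_std = np.where(rng.random(N) < 0.3, np.nan, rng.normal(size=N))
M_std = np.where(rng.random(N) < 0.1, np.nan, rng.normal(size=N))
D = np.abs(rng.normal(size=(N, N)))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)
D = D / D.max()

with pm.Model() as m_to_g_model:
    M = pm.Normal("M", 0.0, 1.0, observed=np.ma.masked_invalid(M_std))

    # G submodel: group size is now modeled as caused by body mass (no phylogeny yet),
    # so imputed G values are informed by the species' body mass
    alpha_G = pm.Normal("alpha_G", 0.0, 1.0)
    beta_MG = pm.Normal("beta_MG", 0.0, 1.0)
    sigma_G = pm.Exponential("sigma_G", 1.0)
    G = pm.Normal("G", alpha_G + beta_MG * M, sigma_G,
                  observed=np.ma.masked_invalid(G_std))

    # Brain-size GP likelihood, unchanged from the naive-imputation sketch
    alpha_B = pm.Normal("alpha_B", 0.0, 1.0)
    beta_GB = pm.Normal("beta_GB", 0.0, 1.0)
    beta_MB = pm.Normal("beta_MB", 0.0, 1.0)
    eta_sq = pm.TruncatedNormal("eta_sq", mu=1.0, sigma=0.25, lower=0.0)
    rho = pm.TruncatedNormal("rho", mu=3.0, sigma=0.25, lower=0.0)
    K_B = eta_sq * pt.exp(-rho * D) + np.eye(N) * 1e-6
    pm.MvNormal("B", mu=alpha_B + beta_GB * G + beta_MB * M, cov=K_B, observed=B_std)
```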

Look at $M \rightarrow G$-only imputation
  • By modeling the effect of $M$ on $G$, we're able to capture the linear trend; imputed values lie along this trend
  • However, by ignoring phylogenetic similarity, we aren't able to capture nonlinearities in the data
    • e.g. some species with small group size lying along the lower left of the plot do not follow the linear trend

2. Model that only includes Social group phylogenetic interactions

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.

Look at phylogeny-only imputation

⚠️ For some reason, after trying two different implementations of GPs using PyMC's gp module, as well as an MVNormal likelihood (both give similar results), I'm not able to replicate the imputation for solo primates presented in the lecture.

Comments and suggestions are welcome on what I may be doing incorrectly here

3. Fit model that combines phylogeny and $M \rightarrow G$

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.

  • Purple points move toward the regression line

⚠️ For some reason, after trying two different implementations of GPs using PyMC's gp module, as well as an MVNormal likelihood (both give similar results), I'm not able to replicate the imputation for solo primates presented in the lecture.

Comments and suggestions are welcome on what I may be doing incorrectly here

4. Impute $B,G,M$ using a submodel for each

Below is an implementation that uses PyMC's Gaussian process module.

Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.

Imputation changes little from the previous model that does not directly model the causes of $M$. I suspect this is because there are only two missing $M$ values.

We also get very similar effects to the phylogeny + $M \rightarrow G$ model.

Review: Imputing Primates

  • Key Idea: missing values have probability distributions
  • Think like a graph, not like a regression
  • Blind imputation without modeling relationships among predictors may be risky
    • take advantage of partial pooling
  • Even if imputation doesn't change the results, it's a scientific duty

Authors

  • Ported to PyMC by Dustin Stansbury (2024)
  • Based on Statistical Rethinking (2023) lectures by Richard McElreath

:::{include} ../page_footer.md
:::