Missing Data, Found
- Observed data is a special case
- we trick ourselves into believing there is no error
- no/little error is rare
Most data are missing
- but there's still information about the missing data: with a generative model, no data point is ever fully missing
- constraints - e.g. heights are never negative, and bounded
- relationships to other variables - e.g. weather affects our behavior, our behavior does not affect the weather
Dealing with missing data is a workflow
What to do with missing data?
- Complete case analysis: drop cases with missing values
- sometimes the right thing to do
- depends on the context and causal assumptions
- Imputation: fill the missing values with reasonable values that are based on the present values
- often necessary and beneficial for accurate estimation
Dog ate my homework: looking at different missing data scenarios
- Student ability/knowledge state $S$
- Students would receive score on homework $H$
- We only get to observe the homework the teacher sees $H^*$
- we don't get to observe the real score, because homework may not make it to teacher
- e.g. the family dog $D$ randomly eats homework (missingness mechanism)
Dog usually benign
Simulate random homework eating
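Below is a minimal simulation sketch of this benign scenario (an assumed setup, not the lecture's exact code): homework scores depend on ability, and the dog eats homework completely at random.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
S = rng.normal(size=n)                  # student ability
H = rng.normal(0.7 * S, 1.0)            # homework score caused by ability
dog_ate = rng.random(n) < 0.5           # dog eats homework at random
H_star = np.where(dog_ate, np.nan, H)   # homework the teacher actually sees

observed = ~np.isnan(H_star)
slope_full = np.polyfit(S, H, 1)[0]
slope_cc = np.polyfit(S[observed], H_star[observed], 1)[0]
print(f"full-data slope: {slope_full:.2f}, complete-case slope: {slope_cc:.2f}")
```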
- When losing outcomes randomly, we obtain a similar linear fit with a similar slope, and little/no bias.
- Thus complete case analysis (dropping the incomplete cases) is okay, but we lose efficiency
Dog eats homework based on cause of homework (student ability)
Now the student's ability affects the dog's behavior (e.g. the student studies too much and doesn't feed the dog, so the hungry dog eats the homework)
Simulate data where treatment affects outcome linearly
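A minimal sketch of this scenario (assumed setup: the dog is more likely to eat the homework of above-average-ability students, and $H$ depends linearly on $S$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
S = rng.normal(size=n)                     # student ability
H = rng.normal(0.7 * S, 1.0)               # linear effect of ability on score
dog_ate = (S > 0) & (rng.random(n) < 0.8)  # missingness caused by the cause S
observed = ~dog_ate

slope_full = np.polyfit(S, H, 1)[0]
slope_cc = np.polyfit(S[observed], H[observed], 1)[0]
print(f"full-data slope: {slope_full:.2f}, complete-case slope: {slope_cc:.2f}")
```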
When the association between student ability $S$ and homework score $H$ is linear, we can still get fairly similar linear fits from the total and incomplete samples; we just lose precision on the upper end of the $S$ dimension in this simulation
Simulate data where treatment affects outcome nonlinearly
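A minimal sketch of the nonlinear case (the saturating function below is an assumed stand-in for the lecture's simulation): the dog still targets high-ability students, but now $H$ flattens out at high $S$.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
S = rng.normal(size=n)
H = rng.normal(1 - np.exp(-2 * S), 0.2)    # nonlinear (ceiling) effect of ability
dog_ate = (S > 0) & (rng.random(n) < 0.8)  # missingness still caused by S
observed = ~dog_ate

slope_full = np.polyfit(S, H, 1)[0]
slope_cc = np.polyfit(S[observed], H[observed], 1)[0]
print(f"full-data slope: {slope_full:.2f}, complete-case slope: {slope_cc:.2f}")
```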
- When the association between student ability $S$ and homework score $H$ is nonlinear, we now get very different linear fits from the total and incomplete samples
- on nonlinearity: this is why we use generative models with scientifically-motivated functions: functions matter
- Therefore need to correctly condition on the cause
Dog eats homework conditioned on the state of homework itself
Simulate some linear data
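A minimal sketch of this scenario (assumed setup: the dog preferentially eats low-scoring homework, so missingness depends on $H$ itself):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
S = rng.normal(size=n)
H = rng.normal(0.7 * S, 1.0)
dog_ate = (H < 0) & (rng.random(n) < 0.9)  # missingness caused by the outcome H
observed = ~dog_ate

slope_full = np.polyfit(S, H, 1)[0]
slope_cc = np.polyfit(S[observed], H[observed], 1)[0]
print(f"full-data slope: {slope_full:.2f}, complete-case slope: {slope_cc:.2f}")
```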
- without knowing the causal relationship between the state and the data loss, and the functional forms of how $S$ is associated with $H$, it's difficult to account for this scenario
Stereotypical cases
- Data loss is random and independent of causes
- "Dog eats homework randomly"
- Complete case analysis is okay, but loses efficiency
- Data loss is conditioned on the cause
- "Dog eats homework based on the student's study habits"
- Need to correctly condition on cause
- Data loss is conditioned on the outcome
- "Dog eats homeowork based on the score of the homework"
- This is usually hopeless unless we can model the causal process of how state affects data loss (i.e. the dog's behavior in this simulation)
- e.g. survival analysis and censored data (see the sketch below)
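For the censoring special case, here is a minimal PyMC sketch (an assumed toy example, not from the lecture): values above a known cutoff are recorded only as the cutoff, and modeling that data-loss process with `pm.Censored` still recovers the latent mean.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(5)
latent = rng.normal(1.0, 2.0, size=200)
cutoff = 2.0
y = np.minimum(latent, cutoff)  # the data-loss mechanism: right-censoring at the cutoff

with pm.Model():
    mu = pm.Normal("mu", 0.0, 5.0)
    sigma = pm.HalfNormal("sigma", 2.0)
    # The censored likelihood encodes how the data were lost
    pm.Censored("y", pm.Normal.dist(mu, sigma), lower=None, upper=cutoff, observed=y)
    idata = pm.sample()
```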
Bayesian Imputation
- Works for Stereotypical cases 1), 2) and subsets of 3) (e.g. survival analysis)
- Having a joint (Bayesian) model for all variables (observed data and parameters) provides a means for inferring reasonable bounds on the values that are missing
Imputing or Marginalizing
- Imputation: compute a posterior probability distribution of the missing values
- usually involves partial pooling
- we've already done this to some extent with the missing district in the Bangladesh dataset
- Marginalizing unknowns: average over the distribution of missing values using the posteriors for the other variables (see the expression below)
- no need to calculate posteriors for missing values
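In symbols (a generic statement, not tied to any particular model here), marginalizing a missing value $x_{\text{mis}}$ means averaging the likelihood over its distribution rather than sampling it:

$$
p(y \mid x_{\text{obs}}, \theta) = \int p(y \mid x_{\text{obs}}, x_{\text{mis}}, \theta)\, p(x_{\text{mis}} \mid x_{\text{obs}}, \theta)\, dx_{\text{mis}}
$$

with the integral replaced by a sum for discrete variables; the missing value never appears as an explicit parameter in the posterior.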
When to impute? When to marginalize?
- Sometimes imputation is unnecessary -- e.g. discrete variables
- Marginalization works here
- It's sometimes easier to impute
- e.g. when the data loss process is well understood as in censoring in survival analysis
- marginalization may be difficult for more exotic models
Revisiting Phylogenetic Regression
- 301 Primate profiles
- Life history traits
- Body Mass $M$
- Brain size $B$
- Social Group Size $G$
- Issues
- Lots of missing data
- Complete case analysis only includes 151 observations
- measurement error
- unobserved confounding
Imputing Primates
- Key Idea: missing values have probability distributions
- Express a causal model for each partially-observed variable
- Replace each missing value with a parameter, and let the model proceed as normal
- Duality of parameters and data:
- In Bayes there is no distinction between params and data: missing value is just a parameter, observed value is data
- e.g. an observed variable can inform us of reasonable values for a missing variable in the same observation (see the minimal PyMC illustration after this list)
- Conceptually weird, technically awkward
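As a minimal illustration of the parameter/data duality (an assumed toy example; PyMC performs this automatic imputation when `observed` contains masked values):

```python
import numpy as np
import pymc as pm

x_raw = np.array([1.2, np.nan, 0.3, np.nan, -0.7])
x_obs = np.ma.masked_invalid(x_raw)  # mask the missing entries

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    # Observed entries contribute to the likelihood; masked entries are
    # automatically replaced with parameters drawn from the same distribution.
    pm.Normal("x", mu, sigma, observed=x_obs)
    idata = pm.sample()

# The posterior includes draws for the imputed entries (named something like
# "x_unobserved", depending on the PyMC version).
```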
Revisiting previous causal model
- where $M,G,B$ are defined above
- shared history $H$, which leads to
- history-based confounds $u$, which we try to capture with a phylogeny
Let's now include how missing values may occur
- We observe group size $G^*$, which has missing values for some observations/species
- What influences the cause of missingness $m_G$?
- often the assumption is that it's totally random, but this is highly unlikely
- there are many possible ways that missingness could occur.
Some hypotheses on what causes missing data...
- All the colored arrows, and assumptions/hypotheses, are potentially in play
- anthro-narcissism: humans are more likely to study primates that are like them
- larger species with larger body mass easier to count: e.g. some species might live in trees and are harder to observe
- some species have little or no social group; these solitary species are difficult to observe and gather data on
- Whatever the assumption, the goal is to use causal model to infer the probability distribution of each missing value
- Uncertainty in each missing value will cascade through the model
- again, no need to be clever
- trust the axioms of probability
Modeling missing data for multiple variables simultaneously
$M, G, B$ all have missing values, so we observe $M^*, G^*, B^*$
Dealing with missing values in Brain Size $B$
It turns out we already handled this using Gaussian processes in the original Phylogenetic Regression example. By leveraging the local pooling offered by the Gaussian process, we fill in missing brain size $B$ values using information from species that are similar to those species with missing values, but do have values for $B$.
The equivalent causal sub-graph used to model Brain Size in the "Phylogenetic Regression" section of Lecture 16 - Gaussian Processes
Brain Size $B$ Model
Assuming missingness is totally at random
$$
\begin{align*}
B &\sim \text{MVNormal}(\mu_B, K_B) \\
\mu_{B,i} &= \alpha_B + \beta_{GB} G_i + \beta_{MB} M_i \\
K_B &= \eta_B^2 \exp(-\rho_B D_{jk}) \\
\alpha_B &\sim \text{Normal}(0, 1) \\
\beta_{GB}, \beta_{MB} &\sim \text{Normal}(0, 1) \\
\eta_B^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_B &\sim \text{HalfNormal}(3, 0.25)
\end{align*}
$$

We can use a similar approach to simultaneously formulate parallel models for the other variables $G, M$ that may have missing values, à la Full Luxury Bayes.
$$
\begin{align*}
G &\sim \text{MVNormal}(\mu_G, K_G) \\
\mu_{G,i} &= \alpha_G + \beta_{MG} M_i \\
K_G &= \eta_G^2 \exp(-\rho_G D_{jk}) \\
\alpha_G &\sim \text{Normal}(0, 1) \\
\beta_{MG} &\sim \text{Normal}(0, 1) \\
\eta_G^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_G &\sim \text{HalfNormal}(3, 0.25)
\end{align*}
$$

$$
\begin{align*}
M &\sim \text{MVNormal}(0, K_M) \\
K_M &= \eta_M^2 \exp(-\rho_M D_{jk}) \\
\eta_M^2 &\sim \text{HalfNormal}(1, 0.25) \\
\rho_M &\sim \text{HalfNormal}(3, 0.25)
\end{align*}
$$
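All three submodels share the same exponential-decay covariance over the phylogenetic distance matrix; as a small helper (a sketch with placeholder names), it might be written:

```python
import pytensor.tensor as pt

def phylo_cov(D, eta_sq, rho):
    # K[j, k] = eta^2 * exp(-rho * D[j, k]), applied elementwise to the distance matrix
    return eta_sq * pt.exp(-rho * D)
```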
Drawing the missing owl 🦉

Building a model containing three simultaneous Gaussian processes has a lot of moving parts. It's better to build up model complexity in small steps.
- Ignore cases with missing $B$ values (for now)
- $B$ is the outcome, so any imputation would just be prediction
- Impute $G,M$ ignoring models for each
- wrong model, correct process
- allows us to see the consequences of adding complexity to model
- Impute $G$ using a $G$-specific submodel
- Impute $B,G,M$ using a submodel for each
1. Ignore cases with missing $B$ values
Filter out observations missing brain volume measurements
Cross-tabulate missingness for $G,M$ on the resulting dataset
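A minimal pandas sketch of these two steps, using a toy frame (the column names are placeholders, not the notebook's actual dataset):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "B": [1.0, np.nan, 2.0, 3.0],
    "G": [np.nan, 5.0, 6.0, np.nan],
    "M": [0.5, 0.7, np.nan, 0.9],
})
df_B = df.dropna(subset=["B"])                            # keep only rows with brain size
print(pd.crosstab(df_B["G"].notna(), df_B["M"].notna()))  # joint missingness of G and M
```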
- True: Observed
- False: Missing
- Group size $G$ missing a lot of values (33)
- Some correlation between the presence/absence of each variable
2. Impute $G,M$ naively, ignoring models for each
The idea here is to treat $G, M$ as independent random variables with no causes, e.g. standard Normals.
$$M_i, G_i \sim \text{Normal}(0, 1)$$
- This is not the correct model, but the correct starting place to build up the model gradually
- Interpretation:
- When $G_i$ is observed, the distribution is the likelihood for a standardized variable
- When $G_i$ is missing, the distribution is the prior for the "parameter"
Fit the imputation model that ignores the submodels for $G,M$
Load the phylogenetic distance matrix
Fit the naive imputation model
Below is an implementation that uses PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.
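As a rough sketch of the hand-built-covariance MVNormal approach with the naive imputation priors (variable names, truncation details, and data handling are placeholder assumptions, not the notebook's exact code):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

def naive_imputation_model(B_std, G_std, M_std, D_mat):
    """B_std is complete; G_std/M_std are standardized with np.nan for missing entries;
    D_mat is the scaled phylogenetic distance matrix."""
    G_miss = np.flatnonzero(np.isnan(G_std))
    M_miss = np.flatnonzero(np.isnan(M_std))

    with pm.Model() as model:
        # Naive imputation: each missing G or M entry becomes a Normal(0, 1) parameter.
        # (The Normal(0, 1) term for the observed, standardized entries only adds a
        # constant to the log-probability, so it is omitted.)
        G_impute = pm.Normal("G_impute", 0.0, 1.0, shape=len(G_miss))
        M_impute = pm.Normal("M_impute", 0.0, 1.0, shape=len(M_miss))
        G = pt.set_subtensor(pt.as_tensor_variable(np.nan_to_num(G_std))[G_miss], G_impute)
        M = pt.set_subtensor(pt.as_tensor_variable(np.nan_to_num(M_std))[M_miss], M_impute)

        # Brain-size model with a phylogenetic-distance covariance
        alpha_B = pm.Normal("alpha_B", 0.0, 1.0)
        beta_GB = pm.Normal("beta_GB", 0.0, 1.0)
        beta_MB = pm.Normal("beta_MB", 0.0, 1.0)
        # HalfNormal(1, 0.25) / HalfNormal(3, 0.25) in the text, approximated here
        # with zero-truncated Normals
        eta_sq_B = pm.TruncatedNormal("eta_sq_B", mu=1.0, sigma=0.25, lower=0.01)
        rho_B = pm.TruncatedNormal("rho_B", mu=3.0, sigma=0.25, lower=0.01)

        mu_B = alpha_B + beta_GB * G + beta_MB * M
        K_B = eta_sq_B * pt.exp(-rho_B * D_mat)
        pm.MvNormal("B", mu=mu_B, cov=K_B, observed=B_std)

        idata = pm.sample()
    return model, idata
```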
Demonstrate the effect of imputation
$M$ Imputation
Plot posterior imputation
Looking at imputed values
- Because $M$ is strongly associated with $B$, the imputed $M$ values follow the linear relationship
- Because the association between $M$ and $G$ isn't modeled, imputed values don't follow the linear trend between $M$ and $G$
- We've left some information on the table; this is the result of ignoring the full generative model
Complete case model for comparison
Here we'll fit a complete-cases-only model where there's no need for imputation, but we throw away data.
Below is an implementation that uses PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.
Compare posteriors for naively-imputed and complete case model
- we can see that the posterior for the imputed model attenuates the effect of Group size on Brain Size
3. Impute $G$ using a $G$-specific submodel
Add the generative model for Group Size. But, again, we'll do this in baby steps, slowly building complexity:
- Model that only models effect of body mass on group size $M \rightarrow G$
- Model that only includes Social group phylogenetic interactions
- Model that combines phylogeny and $M \rightarrow G$
1. Model that only models effect of body mass on group size $M \rightarrow G$
Below is an implementation that uses PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from the lecture.
Look at $M \rightarrow G$-only imputation
- By modeling the effect of $M$ on $G$, we're able to capture the linear trend; imputed values lie along this trend
- However, by ignoring phylogenetic similarity, we aren't able to capture nonlinearities in the data
- e.g. some species with small group size lying along the lower left of the plot do not follow the linear trend
2. Model that only includes Social group phylogenetic interactions
Below is an implementation that uses PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from lecture.
Look at phylogeny-only imputation
⚠️ For some reason, after trying two different implementations of GPs using PyMC's gp module, as well as an MVNormal likelihood (both give similar results), I'm not able to replicate the imputation for solitary primates presented in the lecture.
Comments and suggestions are welcome on what I may be doing incorrectly here
3. Fit model that combines phylogeny and $M \rightarrow G$
Below is an implementation of the model using PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from lecture.
- Purple points move toward the regression line
⚠️ For some reason, after trying two different implementations of GPs using PyMC's gp module, as well as an MVNormal likelihood (both give similar results), I'm not able to replicate the imputation for solitary primates presented in the lecture.
Comments and suggestions are welcome on what I may be doing incorrectly here
4. Impute $B,G,M$ using a submodel for each
Below is an implementation that uses PyMC's Gaussian process module.
Below is an alternative implementation that builds the covariance function by hand, and directly models the dataset as a MVNormal with mean being the linear function of $G$ and $M$, and covariance defined by the kernel. I find that these MVNormal implementations track better with the results from lecture.
Imputation changes little from the previous model that does not directly model the causes of $M$. I would think this is because there are only two missing $M$ values.
We also get very similar effects to the phylogeny + $M \rightarrow G$ model.
Review: Imputing Primates
- Key Idea: missing values have probability distributions
- Think like a graph, not like a regression
- Blind imputation without relationships among predictors may be risky
- take advantage of partial pooling
- Even if imputation doesn't change the results, it's our scientific duty
Authors
- Ported to PyMC by Dustin Stansbury (2024)
- Based on Statistical Rethinking (2023) lectures by Richard McElreath
:::{include} ../page_footer.md
:::