(GLM-ordinal-features.ipynb)=
:::{post} Oct 27, 2024
:tags: ordinal features, ordinal regression, generalized linear model, bayesian workflow, r-datasets
:category: intermediate, reference
:author: Jonathan Sedar
:::
Here we use an ordinal exogenous predictor feature within a model:
... this is in contrast to estimating an ordinal endogenous target feature, which we show in another notebook
Disclaimer:

Human action and economics are all about expressing our ordinal preferences between limited options in the real world.
We often encounter real-world situations and datasets where a predictor feature is an ordinal category that records a preference or summarizes a metric value; this is particularly common in insurance and health. For example:
["strongly disagree", "disagree", "agree", "strongly agree"])
this is the approach of the familiar Likert scale["<30", "30 to 60", "60+"]), or a subjective value that's been mapped to a metric scale (e.g. medical health
self-scoring ["0-10%", ..., "90-100%"]) - these are typically a misuse of the metric because the data has been
compressed (losing information), and the reason for the binning and the choices of bin-edges are usually not knownIn all these cases the critical issue is that the categorical values and their ordinal rank doesn't necessarily relate
linearly to the target variable. For example in a 4-value Likert scale (shown above) the relative effect of
"strongly agree" (rank 4) is probably not "agree" (rank 3) plus 1: 3+1 != 4.
Another way to put it is that the metric distance between ordinal categories is not known and can be unequal. For
example, in that 4-value Likert scale (shown above), the difference between "disagree" and "agree" is probably not the
same as that between "agree" and "strongly agree".
These properties can unfortunately encourage modellers to incorporate ordinal features as either a categorical (with unconstrained degrees of freedom, so we lose the ordering / rank information), or as a numeric coefficient (which ignores the unequal spacing and non-linear response). Both are poor choices and have subtly negative effects on model performance.
A final nuance is that we might not see all valid categorical ordinal levels occur in the training dataset.
For example, we might know a range is measured `["c0", "c1", "c2", "c3"]` but only see `["c0", "c1", "c3"]`. This is a
missing data problem which could further encourage the misuse of a numeric coefficient to average or "interpolate" a
value. What we should do instead is incorporate our knowledge of the data domain into the model structure to auto-impute a
coefficient value. This is actually the case in this dataset here (see Section 0)!
Our problem statement is that when faced with such ordinal features we want to:

- preserve the ordering / rank information in the categorical levels
- allow the metric spacing between levels to be unequal (a non-linear response)
- auto-impute coefficient values for levels that are valid in the data domain but unobserved in the training data
This notebook takes the opportunity to:

1. Demonstrate a general method for ordinal predictors using a Dirichlet hyperprior, proposed by Bürkner {cite:p}`burkner2018` and demonstrated in a
pymc-specific example by Austin Rochford {cite:p}`rochford2018`
2. Demonstrate a reasonably complete Bayesian workflow {cite:p}`gelman2020bayesian`, including data curation and grabbing
data from an RDataset

This notebook is a partner to another notebook (TBD) where we estimate an ordinal endogenous target feature.
:::{include} ../extra_installs.md
:::
We use the same health dataset as in {cite:p}`burkner2018`, named `ICFCoreSetCWP.RData`, available in the R package
`ordPens`.
Per the Bürkner paper (Section 4: Case Study), this dataset is from a study** of Chronic Widespread Pain (CWP) wherein
420 patients were self-surveyed on 67 health features (each a subjective ordinal category) and also assigned a
differently generated (and also subjective) measure of physical health. In the dataset these 67 features are named e.g.
`b1602`, `b102`, ... `d450`, `d455`, ... `s770` etc, and the target feature is named `phcs`.
Per the Bürkner paper, we will subselect 2 features `d450`, `d455` (which measure an impairment of patient
walking ability on a scale [0 to 4], ["no problem" to "complete problem"]) and use them to predict `phcs`.
Quite interestingly, for feature `d450` the highest ordinal level value 4 is not seen in the dataset, so we have a
missing data problem which could further encourage the misuse of a numeric coefficient to average or "interpolate" a
value. What we should do instead is incorporate our knowledge of the data domain into the model structure to auto-impute a
coefficient value. This means that our model can make predictions on new data where a `d450=4` value might be seen.
** Just for completeness (but not needed for this notebook), that study is reported in: Gertheiss, J., Hogger, S., Oberhauser, C., & Tutz, G. (2011). Selection of ordinally scaled independent variables with applications to international classification of functioning core sets. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3), 377-395.
NOTE: some boilerplate steps are included but struck through, with an explanatory comment
(e.g. "Not needed in this simple example"). This is to preserve the logical workflow, which is
more generally useful.
Annoyingly, but not surprisingly for an R project, despite being a small, simple table, the dataset is only available in
an obscure R binary format, and tarred, so we'll download, unpack, and store it locally as a normal CSV file.
This uses the rather helpful `pyreadr` package.
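A minimal sketch of that download-and-convert step. The tarball URL and the in-archive path are assumptions (check CRAN for the actual `ordPens` release); only the general `pyreadr` usage is the point here:

```python
import tarfile
import urllib.request
from pathlib import Path

import pyreadr

# assumed source URL and in-archive path - verify against the actual ordPens release
URL = "https://cran.r-project.org/src/contrib/Archive/ordPens/ordPens_0.3-1.tar.gz"
TAR = Path("ordPens.tar.gz")
urllib.request.urlretrieve(URL, TAR)

# unpack just the RData file we need
with tarfile.open(TAR) as tf:
    tf.extract("ordPens/data/ICFCoreSetCWP.RData", path=".")

# pyreadr returns a dict keyed by the R object name(s) stored in the file
rdata = pyreadr.read_r("ordPens/data/ICFCoreSetCWP.RData")
df_import = rdata["ICFCoreSetCWP"]

# store locally as a normal CSV file
df_import.to_csv("icf_core_set_cwp.csv")
```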
Observe:

- We only need the features [`d450`, `d455`] and will press ahead
- ~~Not needed in this simple example~~
- Nothing really needed: we will rename the index when we force the dtypes and set the index
- ~~Not needed in this simple example~~
- Force `d450` to string representation and ordered categorical dtype (it is supplied as an int, which is unhelpful). NOTE: force the categorical levels to include `c4`, which is valid in the data domain but unobserved
- Force `d455` to string representation and ordered categorical dtype (it is supplied as an int, which is unhelpful)

**`phcs`**

Observe:
- `phcs` is a subjectively-scored measure of physical health; see {cite:p}`burkner2018` for details

**`phcs` vs `['d450', 'd455']`**

Observe:
- `phcs` vs `d450`:
  - `c0` is wider and higher: possibly a catch-all category, because heavily observed too
  - `c3` has fewer counts
  - `c4` is not observed: it's missing from the data despite being valid in the data domain
- `phcs` vs `d455`:
  - `c1` and `c2` look very similar
  - `c4` has fewer counts

**Create `dfx` for model input**

IMPORTANT NOTE:
- In this Bayesian workflow we do not need a separate test dataset (nor k-fold cross-validation) to fit parameters. We also do not need a holdout (out-of-sample) dataset to evaluate model performance, because we can use in-sample PPC, LOO-PIT and ELPD evaluations
- So we use the full dataset `df` as our model input
- Usually we would create a holdout set (even though we don't need it) to eyeball the predictive outputs, but here we have a summarized dataset, so this isn't possible nor suitable
- We will create a forecast set to demonstrate how we would use the model and its predictive outputs in Production

NOTE:
- Map each ordinal categorical to an ordinal numeric (int), based on its pre-existing categorical order
- Transform (z-score and scale) the numerics
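For example, a sketch of those two transforms, assuming `df` already holds the ordered categoricals and `phcs` (the notebook's exact scaling may differ):

```python
dfx = df.copy()

# map each ordinal categorical to an ordinal numeric (int) via its pre-existing
# category order; unobserved-but-valid levels (e.g. d450=c4) keep their rank
for ft in ["d450", "d455"]:
    dfx[f"{ft}_num"] = dfx[ft].cat.codes.astype(int)

# z-score the numeric features
for ft in ["d450_num", "d455_num"]:
    dfx[ft] = (dfx[ft] - dfx[ft].mean()) / dfx[ft].std()
```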
**Create the `forecast` set**

NOTE: We depart from the datasets used in {cite:p}`rochford2018` and {cite:p}`burkner2018` to make sure our
forecast dataset contains all valid levels of `d450` and `d455`. Specifically, the observed dataset does not contain
the domain-valid `d450 = 4`, so we will make sure that our forecast set does include it.
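A sketch of such a forecast grid, using the full domain (named `LVLS_D450_D455` per the notebook; the construction details here are assumed):

```python
from itertools import product

import pandas as pd

LVLS_D450_D455 = ["c0", "c1", "c2", "c3", "c4"]

# every valid combination of levels, including the unobserved d450 = c4
dff = pd.DataFrame(
    list(product(LVLS_D450_D455, LVLS_D450_D455)), columns=["d450", "d455"]
)
for ft in ["d450", "d455"]:
    dff[ft] = pd.Categorical(dff[ft], categories=LVLS_D450_D455, ordered=True)
```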
**Convert to `dffx` for model input**
This is a simple linear model where we acknowledge that the categorical features are ordered, but we ignore that the impact of the factor-values on the coefficient might not be equally spaced, and instead just assume equal spacing:
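A minimal PyMC sketch of this specification (the prior families and parameters here are illustrative assumptions, not necessarily the notebook's exact choices):

```python
import pymc as pm

COORDS = dict(beta_nm=["intercept", "d450_num", "d455_num"], oid=dfx.index.values)

with pm.Model(coords=COORDS) as mdla:
    # mutable data containers, so the forecast set can be swapped in later
    d450_num = pm.Data("d450_num", dfx["d450_num"].values, dims="oid")
    d455_num = pm.Data("d455_num", dfx["d455_num"].values, dims="oid")
    phcs = pm.Data("phcs", dfx["phcs"].values, dims="oid")

    # hierarchical scale on the locations of the betas
    beta_sigma = pm.Gamma("beta_sigma", alpha=2.0, beta=0.2)  # assumed prior
    beta = pm.Normal("beta", mu=0.0, sigma=beta_sigma, dims="beta_nm")

    # equal-spacing assumption: the ordinal codes enter as plain numerics
    mu = beta[0] + beta[1] * d450_num + beta[2] * d455_num

    epsilon = pm.Gamma("epsilon", alpha=2.0, beta=0.2)  # assumed prior
    pm.Normal("phcs_hat", mu=mu, sigma=epsilon, observed=phcs, dims="oid")
```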
where:

- `beta` holds an intercept and one coefficient per numeric-coded feature `d450_num`, `d455_num`, with a shared hierarchical scale `beta_sigma`
- the likelihood is a Normal over `phcs` with noise scale `epsilon`

Observe:
- `beta_sigma`, `beta` (levels), `epsilon`: all have reasonable prior ranges, as specified

Observe:
- `ess_bulk` is good, `r_hat` is good
- E-BFMI > 0.3, so apparently reasonable

Observe:
- `phcs_hat` tracks the observed `phcs` moderately well: slightly overdispersed; perhaps a likelihood
with fatter tails would be more appropriate (e.g. StudentT)

Observe:
- The LOO-PIT looks good: again slightly overdispersed, but acceptable for use

Lots of parameters, let's take our time
Observe:
- `beta_sigma`: E ~ 10 indicates the need for high variance in the locations of the betas
- `beta: intercept`: E ~ 32 confirms the bulk of the variance in the betas' locations is simply due to the intercept
offset required to get the z-scored values into the range of `phcs`: no problem
- `beta: d450_num`: E ~ -3, negative, HDI94 does not span 0: a substantial effect with a smooth central distribution;
increasing values of `d450_num` create a reduction in `phcs_hat`
- `beta: d455_num`: E ~ -2, negative, HDI94 does not span 0: a substantial effect with a smooth central distribution;
increasing values of `d455_num` create a smaller reduction in `phcs_hat`
- `epsilon`: E ~ 7 indicates quite a lot of variance still in the data, not yet handled by a modelled feature

**Create PPC forecast on the `forecast` set**

Just for completeness, compare to Figure 3 in the Bürkner paper and Rochford's blogpost. Those plots summarize to a mean, though, which seems unnecessary - let's improve it a little with full sample posteriors.
Swap in the forecast data `dffx` and rebuild
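A sketch of that swap-and-predict step, assuming a posterior trace `ida` from sampling `mdla` (the dummy `phcs` values are only placeholders to resize the container):

```python
import numpy as np

with mdla:
    pm.set_data(
        {
            "d450_num": dffx["d450_num"].values,
            "d455_num": dffx["d455_num"].values,
            "phcs": np.zeros(len(dffx)),  # placeholder: we only want predictions
        },
        coords={"oid": dffx.index.values},
    )
    ppc_f = pm.sample_posterior_predictive(ida, var_names=["phcs_hat"], predictions=True)
```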
Observe:

- This compares to the plots in {cite:p}`rochford2018` and Figure 12 in {cite:p}`burkner2018`
- Note the linear responses to `d450` and to `d455` when treated as numeric values
- `mdla` technically can make predictions for `d450=c4`, which is not seen in the data. However, this
prediction is a purely linear extrapolation and, although helpful in a sense, could be completely and misleadingly
wrong in this model specification

This is an improved linear model where we acknowledge that the categorical features are ordinal and allow the ordinal
values to have non-equal spacing. For example, it might well be that A > B > C, but the spacing is not metric:
instead A >>> B > C. We achieve this using a Dirichlet hyperprior to allocate heterogeneously spaced sections of a
linear coefficient:
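A minimal PyMC sketch of this structure (again, the priors are illustrative assumptions; the key pieces are the Dirichlet simplexes `chi_*` and their monotonic, scaled cumulative sums `nu_*`):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

LVLS_D450_D455 = ["c0", "c1", "c2", "c3", "c4"]
COORDS = dict(
    beta_nm=["intercept", "d450", "d455"],
    d450_nm=LVLS_D450_D455,  # full data domain, including the unobserved c4
    d455_nm=LVLS_D450_D455,
    oid=dfx.index.values,
)

with pm.Model(coords=COORDS) as mdlb:
    d450_idx = pm.Data("d450_idx", dfx["d450"].cat.codes.values.astype(int), dims="oid")
    d455_idx = pm.Data("d455_idx", dfx["d455"].cat.codes.values.astype(int), dims="oid")
    phcs = pm.Data("phcs", dfx["phcs"].values, dims="oid")

    beta_sigma = pm.Gamma("beta_sigma", alpha=2.0, beta=0.2)  # assumed prior
    beta = pm.Normal("beta", mu=0.0, sigma=beta_sigma, dims="beta_nm")

    # the Dirichlet splits the unit interval into per-level sections that need not be
    # equal; the cumulative sum is monotonic, so rank order is preserved while the
    # spacing between levels is learned from the data
    chi_d450 = pm.Dirichlet("chi_d450", a=np.ones(len(LVLS_D450_D455)), dims="d450_nm")
    chi_d455 = pm.Dirichlet("chi_d455", a=np.ones(len(LVLS_D450_D455)), dims="d455_nm")
    nu_d450 = pm.Deterministic("nu_d450", beta[1] * pt.cumsum(chi_d450), dims="d450_nm")
    nu_d455 = pm.Deterministic("nu_d455", beta[2] * pt.cumsum(chi_d455), dims="d455_nm")

    mu = beta[0] + nu_d450[d450_idx] + nu_d455[d455_idx]
    epsilon = pm.Gamma("epsilon", alpha=2.0, beta=0.2)  # assumed prior
    pm.Normal("phcs_hat", mu=mu, sigma=epsilon, observed=phcs, dims="oid")
```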
where:

- the ordinal features (`d450`, `d455`) each have factor value levels $k_{d450}, k_{d455}$; note per Section 0 that these are both in the range [0 - 4], as recorded
by notebook variable `LVLS_D450_D455`
- each feature gets a simplex `chi_*` (via a Dirichlet hyperprior) whose cumulative sum, scaled by `beta_*`, yields the per-level effect `nu_*`
- the likelihood is again a Normal over `phcs` with noise scale `epsilon`

Observe:
- `beta_sigma`, `beta` (levels), `epsilon`: all have reasonable prior ranges, as specified
- `*_d450`:
  - `chi_*`: obeys the simplex constraint of the Dirichlet and spans the range
  - `nu_*`: all reasonable, as specified; note the ordering already present in the prior
- `*_d455`:
  - `chi_*`: obeys the simplex constraint of the Dirichlet and spans the range
  - `nu_*`: all reasonable, as specified; note the ordering already present in the prior
We sample with `target_accept=0.9` to mitigate / avoid the divergences seen at 0.8.
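For example (sampler kwargs assumed):

```python
with mdlb:
    # raise target_accept from the 0.8 default to take smaller, safer steps
    idb = pm.sample(target_accept=0.9, random_seed=42)
```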
Observe:

- `ess_bulk` is a little low, `r_hat` is okay
- E-BFMI > 0.3, so apparently reasonable

Observe:
- `phcs_hat` tracks the observed `phcs` moderately well: slightly overdispersed; perhaps a likelihood
with fatter tails would be more appropriate (e.g. StudentT)

Observe:
- The LOO-PIT looks good: again slightly overdispersed, but acceptable for use
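To make the model comparison concrete, a sketch assuming `InferenceData` objects `ida` and `idb` (each with a log-likelihood group) from sampling the two models:

```python
import arviz as az

# ELPD-LOO comparison with stacking weights
comp = az.compare({"mdla": ida, "mdlb": idb}, ic="loo", method="stacking")
print(comp[["rank", "elpd_loo", "weight"]])
```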
Observe:

- `mdlb` appears to be the winner, taking nearly all the weight with a higher `elpd_loo`

Lots of parameters, let's take our time
Observe:
- `beta_sigma`: E ~ 12 indicates the need for high variance in the locations of the betas
- `beta: intercept`: E ~ 41 confirms the bulk of the variance in the betas' locations is simply due to the intercept
offset required to get the z-scored values into the range of `phcs`: no problem
- `epsilon`: E ~ 7 indicates quite a lot of variance still in the data, not yet handled by a modelled feature
- `beta: d450`: E ~ -12, negative, HDI94 does not span 0: a substantial effect with a smooth central distribution;
increasing values of `d450_idx` create a reduction in `phcs_hat`
- `beta: d455`: E ~ -7, negative, HDI94 does not span 0: a substantial effect with a smooth central distribution;
increasing values of `d455_idx` create a reduction in `phcs_hat`

In general, the bigger coefficient values here (vs `mdla`) suggest more discrimination between the values in the data and
better performance.
Observe:

An interesting pattern:

- `chi_d450`: non-linear response throughout the range
- `nu_d450`: the non-linear effect `beta * chi.cumsum()` is clear; in particular `c0` is far from the trend of `c1, c2, c3`

*Note in particular that the posterior distribution of `chi_d450 = c4` is almost exactly the
same as its prior, because it hasn't been evidenced in the dataset. The constraint of the Dirichlet has in
turn scaled the values for `chi_d450 = c0` to `c3`, the scale of `beta_d450`, and the downstream effects on `nu_d450 = c4`.*

For comparison, you can try the inferior alternative by setting `COORDS['d450_nm'] = list(df['d450'].cat.categories)` in the
model spec in Section 2.1, re-running, and seeing what happens.
Observe:

An interesting pattern:

- `chi_d455`: non-linear response throughout the range
- `nu_d455`: the non-linear effect `beta * chi.cumsum()` is clear; in particular `c2` is almost the same as `c1`

Let's see those levels forest-plotted to make this even more clear.
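For example, a sketch (assuming the `mdlb` trace `idb`):

```python
import arviz as az

# forest plot of the per-level effects nu_*, so levels can be compared side-by-side
az.plot_forest(idb, var_names=["nu_d450", "nu_d455"], combined=True, figsize=(10, 4))
```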
Observe:

Here we see the same patterns in more detail, in particular:

- `nu_d450`:
  - `c0` is an outlier with disproportionately less impact than `c1, c2, c3`
  - `c4` has been auto-imputed and takes the prior value, which has very wide variance around a linear extrapolation
- `nu_d455`: `c1, c2` overlap strongly and so have very similar impact to one another

**Create PPC forecast on the `forecast` set**

Just for completeness, compare to Figure 3 in the Bürkner paper and Rochford's blogpost.
Those plots summarize to a mean, though, which seems unnecessary - let's improve it a little.
Swap in the forecast data `dffx`, rebuild, and sample the PPC
Observe:

- This compares to the plots in {cite:p}`rochford2018` and Figure 12 in {cite:p}`burkner2018`
- Note the non-linear responses to `d450` and to `d455` when treated as ordinal categories
- `d450`: all points for `c0` are higher than in the plot in Section 1.5.2 (also note the overlap of the `d455`: `c1, c2` levels in the shaded points)
- `d455`: all points for `c1, c2` overlap strongly (also note `d455` `c0` outlying)
- `mdlb` can make predictions for `d450=c4`, which is not seen in the data

:::{bibliography}
:filter: docname in docnames
:::
:::{include} ../page_footer.md
:::