To deal with missing data in parametric models, multiple imputation is the gold standard (Schafer & Graham, 2002). With GLMs, the models fitted on each imputed dataset can then be pooled. For non-parametric methods, and specifically for prediction rule ensembles, the jury is still out on how best to deal with missing data. There are several possible approaches:
Listwise deletion. Although this is the default in pre(), it is certainly the least favorable option.
Single imputation. Perform a single imputation and fit a prediction rule ensemble on the imputed dataset. This is likely better than listwise deletion, but still inferior to multiple imputation. The main advantage is that a PRE can be fitted as usual.
Multiple imputation followed by pooling. This approach takes multiple imputed datasets and fits a separate prediction rule ensemble to each of the imputed datasets, which are aggregated to form a final ensemble. The main disadvantage is that this yields an ensemble of ensembles, which will be (much) more complex and thus less interpretable than a single PRE.
Multiple imputation followed by stacking. This approach stacks the multiple imputed datasets and fits a single prediction rule ensemble to this dataset. Rule generation and estimation of the final model can be adjusted to counter the artificial inflation of sample size. The main advantage is that it yields a single PRE, while treating missing data in an optimal manner.
An alternative would be the Missing-In-Attributes (MIA) approach for rule generation, followed by mean imputation for the final model. According to Josse, Prost, Scornet, & Varoquaux (2019), mean imputation and the Missing-In-Attributes approach perform well from a prediction perspective and are computationally inexpensive. This will be implemented in future versions of package pre.
Below, we provide examples for the four approaches described above.
For the examples, we will be predicting wind speeds using the airquality dataset. We focus on predicting the Wind variable, because it does not have missing values, while the variables Ozone and Solar.R do:
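To get a sense of the amount of missingness, we can count the missing values per variable (this quick check is our addition):

sapply(airquality, function(x) sum(is.na(x)))

Ozone contains 37 and Solar.R contains 7 missing values; the remaining variables are complete, leaving 111 complete observations out of 153.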
This option, although not recommended, is the default of function pre():
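The warning and results below were produced by a call like the following (the call itself is shown in the warning; the seed and object name are our assumptions):

library("pre")
set.seed(43)  ## seed is an assumption
airq.ens <- pre(Wind ~ ., data = airquality)
airq.ens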
## Warning in pre(Wind ~ ., data = airquality): Some observations have missing values and have been removed from the data. New sample size is 111.
##
## Final ensemble with cv error within 1se of minimum:
##
## lambda = 0.5076221
## number of terms = 2
## mean cv error (se) = 9.955139 (1.553628)
##
## cv error type : Mean-Squared Error
##
## rule coefficient description
## (Intercept) 9.034190233 1
## rule8 1.743533723 Ozone <= 45
## Ozone -0.006180118 6.75 <= Ozone <= 119
With listwise deletion, only 111 out of 153 observations are retained. We obtain a rather sparse ensemble.
Here we apply single imputation by replacing missing values with the mean:
imp0 <- airquality
imp0$Solar.R[is.na(imp0$Solar.R)] <- mean(imp0$Solar.R, na.rm=TRUE)
imp0$Ozone[is.na(imp0$Ozone)] <- mean(imp0$Ozone, na.rm=TRUE)
set.seed(43)
airq.ens.imp0 <- pre(Wind ~., data = imp0)
airq.ens.imp0
##
## Final ensemble with cv error within 1se of minimum:
##
## lambda = 0.2751573
## number of terms = 5
## mean cv error (se) = 9.757455 (0.7622478)
##
## cv error type : Mean-Squared Error
##
## rule coefficient description
## (Intercept) 10.48717592 1
## rule2 1.18133248 Ozone <= 45
## rule48 -0.56453456 Temp > 72 & Solar.R <= 258
## rule28 -0.51357673 Temp > 73 & Ozone > 22
## Ozone -0.01910646 7 <= Ozone <= 115.6
## rule40 0.01440472 Temp <= 81
We obtain a larger number of rules and a slightly lower cross-validated mean squared error. However, this model cannot really be compared with the listwise-deletion model, because the two are computed over different sets of observations.
We first perform multiple imputation by chained equations, using the predictive mean matching method. We generate 5 imputed datasets:
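A minimal sketch of this step, using function mice from package mice (the seed and the suppression of the imputation log via printFlag are our choices):

library("mice")
set.seed(42)  ## seed is an assumption
imp <- mice(airquality, m = 5, method = "pmm", printFlag = FALSE)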
We create a list with the imputed datasets:
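Function complete from package mice returns such a list when action = "all" is specified; the same call appears in the cross-validation code further below:

imp1 <- complete(imp, action = "all", include = FALSE)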
To deal with the imputed data, we have two options: stacking the imputed datasets or pooling the resulting ensembles. The former is likely most beneficial for retaining interpretability, and an experimental (!) version has therefore been implemented in package pre.
Function mi_pre implements a so-called stacking approach (see also Wood, White, & Royston, 2008), in which the imputed datasets are combined into one large dataset. In addition to adjustments of the sampling procedures, adjustments to the observation weights are made to counter the artificial inflation of the sample size. Function mi_pre takes a list of imputed datasets as input data, because it is assumed that imputation has already been performed.
Although the option to use the fraction of complete data for computing observation weights is provided through argument compl_frac of function mi_pre, users are not advised to use it. For example, Du et al. (2022) write: “An alternative weight specification, proposed in Wan et al. (2015), is $o_i = \frac{f_i}{D}$, where $f_i$ is the number of observed predictors out of the total number of predictors for subject $i$ […] upweighting subjects with less missingness and downweighting subjects with more missingness can, in some sense, be viewed as making the optimization more like complete-case analysis, which might be problematic for Missing at Random (MAR) and Missing not at Random (MNAR) scenarios.”
We fit a rule ensemble to the imputed data using stacking:
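A minimal sketch, assuming the list of imputed datasets created above; the object name airq.ens.sta and the seed are our choices:

set.seed(42)  ## seed is an assumption
airq.ens.sta <- mi_pre(Wind ~ ., data = imp1)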
All S3 methods for objects of class pre are also applicable to the ensembles resulting from application of function mi_pre, e.g.:
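For example, the summary and the non-zero coefficients shown below can be obtained as follows (using the object name assumed above):

summary(airq.ens.sta)
coefs <- coef(airq.ens.sta)
coefs[coefs$coefficient != 0, ]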
##
## Final ensemble with cv error within 1se of minimum:
##
## lambda = 0.3397143
## number of terms = 6
## mean cv error (se) = 9.821413 (1.103229)
##
## cv error type : Mean-Squared Error
## rule coefficient description
## 49 (Intercept) 9.44439871 1
## 6 rule6 1.29750892 Ozone <= 45
## 20 rule23 0.17877849 Temp <= 73
## 2 rule2 0.15142541 Ozone <= 21
## 5 rule5 0.13556989 Ozone <= 47
## 10 rule11 0.03251171 Ozone <= 52
## 50 Ozone -0.01401860 7 <= Ozone <= 118
Computation of partial dependence plots can become computationally prohibitive, especially with multiply imputed data. To reduce the computational load, function mi_mean computes the average imputed dataset, which can then be supplied to the newdata arguments of functions singleplot or pairplot to speed up computation of partial dependence plots:
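A sketch of this approach; the choice of predictors to plot is ours, and pairplot requires package interp, hence the message below:

newdata <- mi_mean(imp1)
singleplot(airq.ens.sta, newdata = newdata, varname = "Ozone")
pairplot(airq.ens.sta, newdata = newdata, varnames = c("Ozone", "Solar.R"))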
## Loading required namespace: interp
To apply pooling, we create a custom function that fits PREs to several datasets contained in a list:
pre.agg <- function(datasets, ...) {
  result <- list()
  ## Fit a separate PRE to each imputed dataset:
  for (i in seq_along(datasets)) {
    result[[i]] <- pre(datasets[[i]], ...)
  }
  result
}
We apply the new function:
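The seed is our assumption; the same call is used in the cross-validation code further below:

set.seed(43)  ## seed is an assumption
airq.agg <- pre.agg(imp1, formula = Wind ~ .)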
Note that we can use the ellipsis (...) to pass arguments to pre() (see ?pre for an overview of the arguments that can be specified).
We now define print, summary, predict and coef methods to extract results from the fitted ensemble. Again, we can use the ellipsis (...) to pass arguments to the print, summary, predict and coef methods of function pre (see, e.g., ?pre:::print.pre for more info):
print.agg <- function(object, ...) {
  result <- list()
  ## Divert printed output to the null device, so results are collected silently
  ## (sink("NULL") would create a file literally named "NULL"):
  sink(nullfile())
  for (i in seq_along(object)) {
    result[[i]] <- print(object[[i]], ...)
  }
  sink()
  print(result)
}
print.agg(airq.agg) ## results suppressed for space considerations
summary.agg <- function(object, ...) {
  for (i in seq_along(object)) summary(object[[i]], ...)
}
summary.agg(airq.agg) ## results suppressed for space considerations
For averaging over predictions, there is only one option for continuous outcomes. For non-continuous outcomes, we can average either over the linear predictor or over the predicted values on the scale of the response. We are not sure which would be more appropriate; the resulting predicted values will not be identical, but they will be highly correlated.
predict.agg <- function(object, newdata, ...) {
  rowMeans(sapply(object, predict, newdata = newdata, ...))
}
agg_preds <- predict.agg(airq.agg, newdata = airquality[1:4, ])
agg_preds
## 1 2 3 4
## 10.42757 10.59272 10.93324 11.00302
Finally, the coef method should return the averaged/aggregated final PRE. That is, it returns:
One averaged intercept;
All rules and linear terms, with their coefficients divided by the number of datasets;
In the presence of identical rules and linear terms, it aggregates those rules and their coefficients into one rule / term, and adds together the scaled coefficients.
Note that linear terms of the same predictor that obtained different winsorizing points across imputed datasets will be retained separately and will not be aggregated. Note also that the labels of rules and variables may overlap between different datasets (e.g., the label rule 12 may appear multiple times in the aggregated ensemble, but each rule 12 will have different conditions).
coef.agg <- function(object, ...) {
  coefs <- coef(object[[1]], ...)
  coefs <- coefs[coefs$coefficient != 0, ]
  for (i in 2:length(object)) {
    coefs_tmp <- coef(object[[i]], ...)
    coefs <- rbind(coefs, coefs_tmp[coefs_tmp$coefficient != 0, ])
  }
  ## Divide coefficients by the number of datasets:
  coefs$coefficient <- coefs$coefficient / length(object)
  ## Identify identical rules:
  duplicates <- which(duplicated(coefs$description))
  for (i in duplicates) {
    first_match <- which(coefs$description == coefs$description[i])[1]
    ## Add the coefficients:
    coefs$coefficient[first_match] <-
      coefs$coefficient[first_match] + coefs$coefficient[i]
  }
  ## Remove duplicates:
  if (length(duplicates) > 0) coefs <- coefs[-duplicates, ]
  ## Check if there are duplicate linear terms left and repeat:
  duplicates <- which(duplicated(coefs$rule))
  for (i in duplicates) {
    first_match <- which(coefs$rule == coefs$rule[i])[1]
    coefs$coefficient[first_match] <-
      coefs$coefficient[first_match] + coefs$coefficient[i]
  }
  if (length(duplicates) > 0) coefs <- coefs[-duplicates, ]
  ## Return results:
  coefs
}
coef.agg(airq.agg)
## rule coefficient description
## 65 (Intercept) 9.433571984 1
## 29 rule32 0.097328032 Temp > 73 & Ozone > 23
## 3 rule3 0.968746033 Ozone <= 45
## 7 rule7 0.040507812 Ozone > 14 & Solar.R <= 238
## 66 Ozone -0.006089757 7 <= Ozone <= 115.6
## 6 rule6 0.074835353 Ozone <= 59
## 25 rule26 -0.067771974 Temp > 63 & Ozone > 14
## 17 rule17 -0.038092316 Temp > 75 & Ozone > 47
## 1 rule1 0.090749273 Ozone <= 21
## 58 rule62 -0.277510244 Ozone > 45 & Solar.R <= 275
## 36 rule39 -0.046938032 Ozone > 14 & Solar.R <= 201
## 40 rule43 -0.035213431 Temp > 72 & Solar.R <= 255
## 51 rule5 0.031821453 Ozone <= 52
## 10 rule10 0.070056367 Temp <= 73
## 19 rule22 0.066962331 Ozone <= 63
## 14 rule15 0.001044073 Ozone <= 45 & Solar.R > 212
We have obtained a final ensemble of 15 terms.
We compare performance using 10-fold cross-validation, evaluating predictive accuracy and the number of selected terms. We evaluate accuracy only for observations without missing values.
k <- 10
set.seed(43)
fold_ids <- sample(1:k, size = nrow(airquality), replace = TRUE)
observed <- c()
for (i in 1:k) {
  ## Separate training and test data
  test <- airquality[fold_ids == i, ]
  test <- test[!is.na(test$Ozone), ]
  test <- test[!is.na(test$Solar.R), ]
  observed <- c(observed, test$Wind)
}
preds <- data.frame(observed)
preds$LWD <- preds$SI <- preds$MI <- preds$observed
nterms <- matrix(nrow = k, ncol = 3)
colnames(nterms) <- c("LWD", "SI", "MI")
row <- 1
for (i in 1:k) {
  if (i > 1) row <- row + nrow(test)
  ## Separate training and test data
  train <- airquality[fold_ids != i, ]
  test <- airquality[fold_ids == i, ]
  test <- test[!is.na(test$Ozone), ]
  test <- test[!is.na(test$Solar.R), ]
  ## Fit and evaluate listwise deletion
  premod <- pre(Wind ~ ., data = train)
  preds$LWD[row:(row + nrow(test) - 1)] <- predict(premod, newdata = test)
  tmp <- print(premod)
  nterms[i, "LWD"] <- nrow(tmp) - 1
  ## Fit and evaluate single imputation
  imp0 <- train
  imp0$Solar.R[is.na(imp0$Solar.R)] <- mean(imp0$Solar.R, na.rm = TRUE)
  imp0$Ozone[is.na(imp0$Ozone)] <- mean(imp0$Ozone, na.rm = TRUE)
  premod.imp0 <- pre(Wind ~ ., data = imp0)
  imp1 <- test
  imp1$Solar.R[is.na(imp1$Solar.R)] <- mean(imp0$Solar.R, na.rm = TRUE)
  imp1$Ozone[is.na(imp1$Ozone)] <- mean(imp0$Ozone, na.rm = TRUE)
  preds$SI[row:(row + nrow(test) - 1)] <- predict(premod.imp0, newdata = imp1)
  tmp <- print(premod.imp0)
  nterms[i, "SI"] <- nrow(tmp) - 1
  ## Perform multiple imputation
  imp <- mice(train, m = 5)
  imp1 <- complete(imp, action = "all", include = FALSE)
  airq.agg <- pre.agg(imp1, formula = Wind ~ .)
  preds$MI[row:(row + nrow(test) - 1)] <- predict.agg(airq.agg, newdata = test)
  nterms[i, "MI"] <- nrow(coef.agg(airq.agg)) - 1
}
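The output below shows, per method, the mean squared error of the predictions, the standard errors of the squared errors, and the variance of the observed outcome as a benchmark. A sketch of how these can be computed (the exact code used to produce the output is ours):

## Mean squared error per method:
sapply(preds, function(x) mean((preds$observed - x)^2))
## Standard errors of the squared errors:
sapply(preds, function(x) sd((preds$observed - x)^2) / sqrt(nrow(preds)))
## Variance of the observed outcome, as a benchmark:
var(preds$observed)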
## observed MI SI LWD
## 0.000000 9.462657 10.087872 9.824913
## observed MI SI LWD
## 0.000000 1.472118 1.538254 1.499660
## [1] 12.65732
Interestingly, we see that all three methods yield similar predictions and accuracy, explaining about 20% of the variance in the response. Multiple imputation performed best, followed by listwise deletion and then single imputation. Taking into account the standard errors, however, these differences are not significant. Also, this simple evaluation on only a single dataset should not be taken too seriously. The better performance of multiple imputation does come at the cost of increased complexity:
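For example, the average number of selected terms per method can be compared as follows (this summary is our suggestion; a graphical comparison, e.g. boxplot(nterms), would work as well):

## Average number of terms selected per fold and method:
colMeans(nterms)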
In line with the findings of Josse et al. (2019), we expect MIA to work better for rules than mean imputation. In future versions of package pre, we plan to implement MIA (for the rules) and combine it with mean imputation (for the linear terms).
In case you obtain different results, note that the results above were obtained using the following setup:
## R version 4.4.1 (2024-06-14)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 24.04.1 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
## LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: Etc/UTC
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] pre_1.0.7 mice_3.16.0 rmarkdown_2.28
##
## loaded via a namespace (and not attached):
## [1] shape_1.4.6.1 xfun_0.48 bslib_0.8.0 lattice_0.22-6
## [5] vctrs_0.6.5 tools_4.4.1 generics_0.1.3 tibble_3.2.1
## [9] fansi_1.0.6 highr_0.11 pan_1.9 pkgconfig_2.0.3
## [13] jomo_2.7-6 Matrix_1.7-0 lifecycle_1.0.4 deldir_2.0-4
## [17] compiler_4.4.1 stringr_1.5.1 MatrixModels_0.5-3 codetools_0.2-20
## [21] htmltools_0.5.8.1 sys_3.4.3 buildtools_1.0.0 sass_0.4.9
## [25] yaml_2.3.10 glmnet_4.1-8 Formula_1.2-5 pillar_1.9.0
## [29] nloptr_2.1.1 jquerylib_0.1.4 tidyr_1.3.1 MASS_7.3-61
## [33] cachem_1.1.0 iterators_1.0.14 plotmo_3.6.4 rpart_4.1.23
## [37] boot_1.3-31 earth_5.3.4 foreach_1.5.2 mitml_0.4-5
## [41] nlme_3.1-166 tidyselect_1.2.1 digest_0.6.37 mvtnorm_1.3-1
## [45] inum_1.0-5 stringi_1.8.4 dplyr_1.1.4 purrr_1.0.2
## [49] maketools_1.3.1 splines_4.4.1 fastmap_1.2.0 grid_4.4.1
## [53] cli_3.6.3 magrittr_2.0.3 survival_3.7-0 utf8_1.2.4
## [57] broom_1.0.7 libcoin_1.0-10 backports_1.5.0 plotrix_3.8-4
## [61] interp_1.1-6 nnet_7.3-19 lme4_1.1-35.5 evaluate_1.0.1
## [65] knitr_1.48 rlang_1.1.4 Rcpp_1.0.13 partykit_1.2-22
## [69] glue_1.8.0 minqa_1.2.8 jsonlite_1.8.9 R6_2.5.1