Comparison using WAIC
P(Model 1 better than Model 2) = Φ(elpd_diff / se_diff)
# Probability that model 1 has higher expected predictive accuracy
# than model 2, given the elpd difference and its standard error
prob_model_better <- function(elpd_diff, se_diff) {
  z <- elpd_diff / se_diff   # standardized difference
  p <- pnorm(z)              # Phi(z), the standard normal CDF
  return(p)
}
elpd_diff <- 5   # elpd difference in favor of model 1
se_diff <- 3     # standard error of the difference
prob_model_better(elpd_diff, se_diff)
Result:
[1] 0.952
So the probability that model 1 predicts better than model 2 is approximately 95%. Note that this is not the probability that model 1 is the true model; it only quantifies whether its expected predictive accuracy is higher.
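As a sketch, elpd_diff and se_diff can be reconstructed from the pointwise elpd values that loo() or waic() report: the difference is summed over observations, and its standard error follows from the standard deviation of the pointwise differences (Vehtari, Gelman and Gabry 2017). The two vectors below are simulated stand-ins, not output from a real model fit:

```r
# Sketch: how elpd_diff and se_diff arise from pointwise elpd values.
# The two vectors are hypothetical; in practice they come from
# loo() or waic() applied to fitted models.
set.seed(1)
n <- 100
elpd_pointwise_1 <- rnorm(n, mean = -1.0, sd = 0.5)  # hypothetical model 1
elpd_pointwise_2 <- rnorm(n, mean = -1.1, sd = 0.5)  # hypothetical model 2

diff_pointwise <- elpd_pointwise_1 - elpd_pointwise_2
elpd_diff <- sum(diff_pointwise)            # total elpd difference
se_diff   <- sqrt(n) * sd(diff_pointwise)   # standard error of the sum
pnorm(elpd_diff / se_diff)                  # P(model 1 predicts better)
```

In practice, loo::loo_compare() returns elpd_diff and se_diff columns directly, so these quantities rarely need to be computed by hand.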
You can also use the function Probability_Best_Model_WAIC() from the HelpersMG package.
How to cite this method:
Model predictive performance was evaluated using leave-one-out cross-validation (PSIS-LOO). Uncertainty in model comparison was assessed using resampling of the pointwise log predictive densities following the approach described by Vehtari, Gelman and Gabry (2017) and Yao et al. (2018).
Vehtari, A., Gelman, A., & Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413-1432. https://doi.org/10.1007/s11222-016-9696-4
Yao, Y., Vehtari, A., Simpson, D., & Gelman, A. (2018). Using stacking to average Bayesian predictive distributions (with discussion). Bayesian Analysis, 13(3), 917-1007. https://doi.org/10.1214/17-BA1091