Articles

Showing articles from March, 2026

Comparison using WAIC

The probability that model 1 has better predictive performance than model 2 can be approximated with the normal distribution function:

P(Model 1 better than Model 2) = Φ(elpd_diff / se_diff)

In R:

```r
# Probability that model 1 predicts better than model 2, given the
# difference in expected log predictive density (elpd_diff) and its
# standard error (se_diff), under a normal approximation
prob_model_better <- function(elpd_diff, se_diff) {
  z <- elpd_diff / se_diff
  pnorm(z)
}

elpd_diff <- 5
se_diff <- 3
prob_model_better(elpd_diff, se_diff)
#> [1] 0.952
```

So there is approximately a 95% probability that model 1 predicts better than model 2. Note that this is not the probability that model 1 is the correct model, only that its predictive accuracy is higher. You can also use the function Probability_Best_Model_WAIC() in the HelpersMG package.

How to cite this method: "Model predictive performance was evaluated using leave-one-out cross-validation (PSIS-LOO). Uncertainty in model comparison was assessed by resampling the pointwise log predictive densities, following the approach described by Vehtari, Gelman and Gabry (2017) and Yao et al. (2018)."

Vehtari, A., Gelman, A., & Gabry, J. (2017). Practical...
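The formula above treats the elpd difference as approximately normally distributed with mean elpd_diff and standard deviation se_diff. As a quick sanity check of that reading (not from the article; a sketch in Python using only the standard library), we can compare the closed-form Φ(elpd_diff / se_diff) against a Monte Carlo estimate that draws the difference from the assumed normal sampling distribution:

```python
import math
import random

def prob_model_better(elpd_diff, se_diff):
    # Phi(elpd_diff / se_diff), the standard normal CDF at the z-score
    z = elpd_diff / se_diff
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Monte Carlo check: draw the elpd difference from Normal(elpd_diff, se_diff)
# and count how often model 1 comes out ahead (difference > 0).
random.seed(1)
draws = [random.gauss(5.0, 3.0) for _ in range(200_000)]
mc = sum(d > 0 for d in draws) / len(draws)

print(round(prob_model_better(5, 3), 3))  # ≈ 0.952, matching pnorm(5/3)
print(round(mc, 3))                       # close to the analytic value
```

The two numbers agree to roughly Monte Carlo error, which is all the closed-form expression claims: it is a normal approximation, not an exact posterior probability.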

When your autocorrelation becomes negative

The negative values appearing at larger lags in the sample autocorrelation function are usually not evidence of true negative dependence. They arise from a finite-sample constraint: once the sample mean has been removed, the estimated autocorrelations must balance around zero, so positive autocorrelation at small lags forces the estimates at larger lags to become slightly negative.
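The balancing constraint can be made exact: with the usual sample ACF estimator, the autocorrelations over all lags 1 to n-1 sum to exactly -1/2, because the mean-centered values sum to zero. A short numerical illustration (not from the article; a sketch in Python, assuming numpy is available) using pure white noise, where every true autocorrelation is zero:

```python
import numpy as np

def sample_acf(x, max_lag):
    """Standard sample autocorrelation:
    r_k = sum((x_t - xbar)(x_{t+k} - xbar)) / sum((x_t - xbar)^2)."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()                 # centering is what creates the constraint
    denom = np.dot(xc, xc)
    return np.array([np.dot(xc[:-k], xc[k:]) / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)                # white noise: true ACF is 0 at every lag
r = sample_acf(x, n - 1)
print(r.sum())                        # exactly -0.5 up to floating-point error
```

Even with no true dependence at all, the estimated autocorrelations are pushed slightly negative on average, and the effect is stronger when genuine positive autocorrelation at small lags uses up the positive side of the budget.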