Here, we only keep the variables that will be used (either as predictor or response) in the analyses.
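A minimal sketch of what that selection could look like, assuming calb_data$clean is the cleaned data frame used in the model calls below (the exact call is not shown here):
# Hypothetical selection: keep only the grouping, predictor, and response
# variables used in the analyses below.
calb_data$clean <- calb_data$clean |>
  dplyr::select(Mouse, Condition, N_CC, Vol_PC_per_cell)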
2 Number of Purkinje cell bodies (per 250×10³ µm³)
2.1 Model fitting & diagnostics
Here, we chose to model N_CC, which is a count, with a Generalized Poisson family (which can handle both over- and under-dispersion). We use a random intercept per Mouse to account for pseudo-replication.
N_CC_mod <- glmmTMB(N_CC ~ Condition + (1 | Mouse), family = genpois("log"), data = calb_data$clean, REML = TRUE)
2.1.1 Residual diagnostics
Checking the model’s quality of fit through the behavior of its residuals:
# Overdispersion test
dispersion ratio = 0.935
Pearson's Chi-Squared = 248.704
p-value = 0.77
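The statistics above are consistent with the output of performance::check_overdispersion(); a hedged sketch of residual diagnostics along these lines (the exact calls are not shown in the report) could be:
# Simulation-based residual checks (DHARMa) and a Pearson-based overdispersion
# test (performance), both computed on the fitted glmmTMB model.
plot(DHARMa::simulateResiduals(N_CC_mod, n = 300))
performance::check_overdispersion(N_CC_mod)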
2.1.2 Predictive checks
Checking the model’s quality of fit by emulating Bayesian Posterior Predictive Checks (PPC): we simulate predictions from the model and plot how accurately they match the observed data, or statistics of the observed data:
N_CC_mod_dharma <- simulateResiduals(N_CC_mod, plot = FALSE, n = 300, seed = getOption("seed"))
N_CC_mod_dharma_t <- t(N_CC_mod_dharma$simulatedResponse)
ppc_plots(N_CC_mod, simulations = N_CC_mod_dharma_t, term = "Condition", is_count = FALSE)
ppc_stat_plots(N_CC_mod, simulations = N_CC_mod_dharma_t, term = "Condition")
2.1.3 Potential outliers
According to the fitted model, the following observations are potential outliers:
However, before fitting our model, we had already removed the data points that we had a biological or theoretical reason to consider outliers.
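For reference, a hedged sketch of how such observations could be flagged from the simulated residuals (not necessarily the exact procedure used here):
# Hypothetical check: observations whose simulation-based residuals fall outside
# the simulated range, looked up in the cleaned data.
calb_data$clean[DHARMa::outliers(N_CC_mod_dharma), ]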
2.2.3 Contrasts
emmeans(N_CC_mod, specs = "Condition", type = "response") |>
  contrast(method = "pairwise", adjust = "none", infer = TRUE) |>
  as.data.frame() |>
  gt()
| contrast | ratio | SE      | df  | lower.CL | upper.CL | null | t.ratio | p.value |
|----------|-------|---------|-----|----------|----------|------|---------|---------|
| N / IH   | 1.049 | 0.04871 | 266 | 0.9569   | 1.149    | 1    | 1.021   | 0.3084  |
- Confidence level used: 0.95
- Intervals are back-transformed from the log scale
- Tests are performed on the log scale
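As a reading aid (not part of the original output), the ratio column is the exponentiated difference of the log-scale estimated marginal means, and the confidence limits are the exponentiated log-scale limits; the same contrast can be inspected directly on the log scale:
# Same pairwise contrast reported on the model (log) scale, where the test is
# carried out; exponentiating its estimate and confidence limits recovers the
# ratio and interval shown above.
emmeans(N_CC_mod, specs = "Condition") |>
  contrast(method = "pairwise", adjust = "none", infer = TRUE)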
make_signif_boxplot(N_CC_mod, "Condition")
3 Purkinje marked volume (10⁻⁴ µm³) per Purkinje cell
3.1 Model fitting & diagnostics
Here, we chose to model Vol_PC_per_cell, which is a strictly positive continuous measure, with a Gamma family. We use a random intercept per Mouse to account for pseudo-replication.
Vol_PC_mod <- glmmTMB(Vol_PC_per_cell ~ Condition + (1 | Mouse), family = Gamma("log"), data = calb_data$clean, REML = TRUE)
3.1.1 Residual diagnostics
Checking the model’s quality of fit through the behavior of its residuals:
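A hedged sketch of a simulation-based residual check for this model (assuming DHARMa, as in the previous section; the exact call is not shown):
# Simulation-based residual plot for the Gamma mixed model.
plot(DHARMa::simulateResiduals(Vol_PC_mod, n = 300))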
3.1.2 Predictive checks
Checking the model’s quality of fit by emulating Bayesian Posterior Predictive Checks (PPC): we simulate predictions from the model and plot how accurately they match the observed data, or statistics of the observed data:
Vol_PC_mod_dharma <- DHARMa::simulateResiduals(Vol_PC_mod, plot = FALSE, n = 300, seed = getOption("seed"))
Vol_PC_mod_dharma_t <- t(Vol_PC_mod_dharma$simulatedResponse)
ppc_plots(Vol_PC_mod, simulations = Vol_PC_mod_dharma_t, term = "Condition")
ppc_stat_plots(Vol_PC_mod, simulations = Vol_PC_mod_dharma_t, term = "Condition")
3.1.3 Potential outliers
According to the fitted model, the following observations are potential outliers:
However, before fitting our model, we had already removed the data points that we had a biological or theoretical reason to consider outliers.
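The same hypothetical check as in the previous section could be applied to this model's simulated residuals:
# Hypothetical check: observations flagged as outliers by the simulated residuals.
calb_data$clean[DHARMa::outliers(Vol_PC_mod_dharma), ]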