Ming Yang Jeremy Tan, Rahul Biswas
The Akaike information criterion (AIC) has been used as a statistical
criterion to compare the appropriateness of different dark energy candidate
models underlying a particular data set. Under suitable conditions, the AIC is
an indirect estimate of the Kullback-Leibler divergence D(T||A) of a candidate
model A with respect to the truth T. Thus, a dark energy model with a smaller
AIC is ranked as a better model, since it has a smaller Kullback-Leibler
discrepancy with T. In this paper, we explore the impact of statistical errors
in estimating the AIC during model comparison. Using a parametric bootstrap
technique, we study the distribution of AIC differences between a set of
candidate models due to different realizations of noise in the data and show
that the shape and spread of this distribution can be quite varied. We also
study the rate of success of the AIC procedure for different values of a
threshold parameter popularly used in the literature. For plausible choices of
true dark energy models, our studies suggest that investigating such
distributions of AIC differences in addition to the threshold is useful in
correctly interpreting comparisons of dark energy models using the AIC
technique.
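The procedure described above can be illustrated with a short sketch. This is not the authors' code; it is a toy parametric bootstrap under stated assumptions: an assumed "true" linear model in place of a dark energy model, a known Gaussian noise level, polynomial least-squares fits as stand-ins for cosmological fits, and AIC = 2k - 2 ln L_max, which for Gaussian errors reduces to chi-squared plus 2k up to a constant that cancels in differences. The candidate models, parameter values, noise level, and the threshold value of 2 are illustrative choices, not those of the paper.

# Toy parametric bootstrap of AIC differences: draw many noise realizations
# from an assumed truth, fit each candidate model to every realization, and
# record the distribution of Delta AIC. All models and numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

x = np.linspace(0.0, 1.0, 50)        # redshift-like grid (assumed)
sigma = 0.1                          # known Gaussian noise level (assumed)
truth = 1.0 + 0.3 * x                # assumed "true" model T

def aic(chi2, k):
    # For Gaussian errors with known sigma, -2 ln L = chi2 + const,
    # so AIC = chi2 + 2k up to an additive constant that cancels in differences.
    return chi2 + 2 * k

def fit_poly_aic(y, degree):
    # Least-squares polynomial fit = maximum likelihood for Gaussian noise.
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    chi2 = np.sum((resid / sigma) ** 2)
    return aic(chi2, k=degree + 1)

n_boot = 2000
delta_aic = np.empty(n_boot)
for i in range(n_boot):
    y = truth + rng.normal(0.0, sigma, size=x.size)   # synthetic realization
    aic_A = fit_poly_aic(y, degree=0)   # candidate A: constant model
    aic_B = fit_poly_aic(y, degree=1)   # candidate B: linear (matches the truth)
    delta_aic[i] = aic_A - aic_B

# Shape and spread of the Delta AIC distribution, and the rate at which the
# better model is preferred for an example threshold |Delta AIC| > 2.
threshold = 2.0
print("mean Delta AIC:", delta_aic.mean())
print("scatter:", delta_aic.std())
print("fraction preferring B above threshold:", np.mean(delta_aic > threshold))

Running the sketch shows how the spread of Delta AIC across noise realizations, not just its mean relative to a threshold, determines how often the comparison picks the right model.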
View original: http://arxiv.org/abs/1105.5745