Michelle Knights, Bruce A. Bassett, Melvin Varughese, Renée Hlozek, Martin Kunz, Mat Smith, James Newling
New supernova surveys such as the Dark Energy Survey, Pan-STARRS and the LSST will produce an unprecedented number of photometric supernova candidates, most with no spectroscopic follow-up. The resulting, inevitable contamination from non-Ia supernovae would bias the cosmological parameters; this bias can be avoided with the BEAMS formalism, allowing the first fully photometric supernova cosmology studies. Here we extend BEAMS to deal with the case in which the supernovae are correlated. Doing this analytically requires evaluating 2^N terms in the posterior, where N is the number of supernova candidates. This `exponential catastrophe' is computationally infeasible even for N of order 100. We circumvent the exponential catastrophe by marginalising over the possible supernova types numerically instead of analytically: we augment the cosmological parameters with N discrete type parameters, tau_i, which we include in our MCMC analysis. We show that this approach handles even large correlations well, without a major increase in computational time, whereas ignoring the correlations can lead to significant biases. We then compare the numerical marginalisation technique with a perturbative expansion of the posterior, based on the insight that future surveys will have exquisite light curves, so that for most objects the probability that a given candidate is a Type Ia will be close to unity or zero. Although this perturbative approach reduces computation of the posterior from a 2^N problem to an N^2 or N^3 one, we show that in general a small number of misclassifications leads to biases, implying that numerical marginalisation is superior.
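To make the `exponential catastrophe' concrete, here is a sketch of the marginalised likelihood in the usual BEAMS notation (illustrative; see the paper for the full expressions). Writing tau = (tau_1, ..., tau_N) for the vector of types, marginalising over types gives

$$ P(D \mid \theta) = \sum_{\tau \in \{\mathrm{Ia},\, \mathrm{nIa}\}^N} P(D \mid \theta, \tau)\, P(\tau), $$

a sum over all 2^N type configurations. Only when the candidates are uncorrelated does the likelihood factorise into a product of N two-term factors,

$$ P(D \mid \theta) = \prod_{i=1}^{N} \Big[ P_i\, P(D_i \mid \theta, \mathrm{Ia}) + (1 - P_i)\, P(D_i \mid \theta, \mathrm{nIa}) \Big], $$

where P_i is the probability that candidate i is a Type Ia. With correlated data no such factorisation exists, which is what motivates sampling the tau_i directly.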
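And here is a minimal numerical sketch of the marginalisation-by-MCMC idea: the N discrete type parameters tau_i are sampled alongside a toy "cosmological" parameter theta in a single Metropolis-Hastings chain, so the 2^N sum over type configurations is never enumerated. Everything below (the one-parameter model, the correlated Gaussian noise, the classifier probabilities p_ia) is an illustrative assumption, not the paper's actual pipeline.

```python
# Toy numerical marginalisation over supernova types (not the authors' code).
# theta plays the role of a cosmological parameter; tau_i in {0, 1} are the
# discrete type parameters (1 = Ia, 0 = non-Ia) sampled within the chain.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data: N candidates with correlated Gaussian errors -----------------
N = 50
true_types = rng.random(N) < 0.8                                  # True = Ia
p_ia = np.clip(true_types + rng.normal(0, 0.05, N), 0.01, 0.99)   # classifier probs
cov = 0.04 * (0.3 * np.ones((N, N)) + 0.7 * np.eye(N))            # correlated noise
cov_inv = np.linalg.inv(cov)
theta_true, offset_nonia = 0.5, 1.0       # Ia mean = theta; non-Ia mean = theta + 1
mu_true = theta_true + offset_nonia * (~true_types)
data = rng.multivariate_normal(mu_true, cov)

def log_post(theta, tau):
    """Joint log-posterior of theta and the type vector tau (flat prior on theta)."""
    mu = theta + offset_nonia * (1 - tau)
    r = data - mu
    log_like = -0.5 * r @ cov_inv @ r                             # correlated Gaussian
    log_prior_tau = np.sum(tau * np.log(p_ia) + (1 - tau) * np.log(1 - p_ia))
    return log_like + log_prior_tau

# --- Metropolis-Hastings over (theta, tau_1..tau_N) --------------------------
theta, tau = 0.0, (p_ia > 0.5).astype(float)
lp = log_post(theta, tau)
samples = []
for step in range(20000):
    # Symmetric proposal: small Gaussian move in theta plus a flip of one
    # randomly chosen tau_i (forward and reverse flips are equally likely).
    theta_new = theta + rng.normal(0, 0.05)
    tau_new = tau.copy()
    i = rng.integers(N)
    tau_new[i] = 1 - tau_new[i]
    lp_new = log_post(theta_new, tau_new)
    if np.log(rng.random()) < lp_new - lp:                        # accept/reject
        theta, tau, lp = theta_new, tau_new, lp_new
    samples.append(theta)

# The theta samples already integrate out all 2^N type configurations:
# no explicit sum over types was ever evaluated.
print(f"posterior mean theta = {np.mean(samples[5000:]):.3f} (true {theta_true})")
```

The single-flip proposal keeps each step cheap (one likelihood evaluation, not 2^N), which is the point of the numerical approach: the cost per step grows only polynomially with N.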
View original:
http://arxiv.org/abs/1205.3493