A. C. Robin, X. Luri, C. Reylé, Y. Isasi, E. Grux, S. Blanco-Cuaresma, F. Arenou, C. Babusiaux, M. Belcheva, R. Drimmel, C. Jordi, A. Krone-Martins, E. Masana, J. C. Mauduit, F. Mignard, N. Mowlavi, B. Rocca-Volmerange, P. Sartoretti, E. Slezak, A. Sozzetti
Context. This study was developed in the framework of the computational simulations carried out in preparation for the ESA Gaia astrometric mission.
Aims. We focus on describing the objects and characteristics that Gaia will potentially observe, without taking instrumental effects (detection efficiency, observing errors) into consideration.
Methods. The theoretical Universe Model prepared for the Gaia simulation has been statistically analyzed at a given time. The ingredients of the model are described, with most attention given to the stellar content, double and multiple stars, and variability.
Results. Errors have not yet been included in this simulation, so we estimate the number of objects and their theoretical photometric, astrometric, and spectroscopic characteristics assuming they are perfectly detected. We show that Gaia will potentially be able to observe 1.1 billion stars (single or part of multiple star systems), of which about 2% are variable stars and 3% have one or two exoplanets. At the extragalactic level, the potential observations comprise several million galaxies, half a million to one million quasars, and about 50,000 supernovae that will occur during the 5-year mission. The simulated catalogue will be made publicly available by the DPAC on the Gaia portal of the ESA web site http://www.rssd.esa.int/gaia/.
View original:
http://arxiv.org/abs/1202.0132
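For a rough sense of scale (just multiplying the quoted fractions by the quoted star count): 2% of 1.1 billion stars is about 22 million variable stars, and 3% is about 33 million stars with one or two simulated exoplanets.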
A couple of questions that came up when chatting about this in the office.
1. How is the variability model actually implemented? I'm especially curious about the stochastic variability models (see the sketch after this list).
2. It sounds like only one time slice was analyzed. Is there a plan to look at multiple slices to get at sensitivity of recovery as a function of variability type?
3. How were the 20,000 hours divided between catalog generation and analysis tasks?
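On question 1: the paper itself is the place to look for the actual prescriptions, but for reference, here is a minimal sketch of one common stochastic variability model, a damped random walk (Ornstein-Uhlenbeck process) of the kind often used for quasar/AGN light curves. This is purely illustrative: the function name, the parameter values, and the choice of process are my assumptions, not necessarily what the Gaia Universe Model implements.

import numpy as np

def damped_random_walk(times, mean_mag, tau, sigma, rng=None):
    # Simulate magnitudes at the given epochs (days) with an
    # Ornstein-Uhlenbeck process: the light curve relaxes towards
    # mean_mag on a timescale tau, with long-term scatter sigma.
    rng = np.random.default_rng() if rng is None else rng
    mags = np.empty(len(times))
    mags[0] = rng.normal(mean_mag, sigma)  # draw from the stationary distribution
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        decay = np.exp(-dt / tau)
        # Exact conditional update of the OU process between epochs,
        # valid for arbitrary (irregular) time steps.
        mean = mean_mag + decay * (mags[i - 1] - mean_mag)
        var = sigma**2 * (1.0 - decay**2)
        mags[i] = rng.normal(mean, np.sqrt(var))
    return mags

# Example: 70 irregularly spaced epochs over a 5-year mission.
rng = np.random.default_rng(42)
epochs = np.sort(rng.uniform(0.0, 5 * 365.25, 70))
light_curve = damped_random_walk(epochs, mean_mag=19.0, tau=200.0, sigma=0.15, rng=rng)

Because the conditional update is exact for any time step, the same routine works whether you evaluate a single snapshot epoch or a full mission's worth of transit times, which also bears on question 2.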