Laura Silva, Fabio Fontanot, Gian Luigi Granato
A serious concern for semi-analytical galaxy formation models aiming to simulate multi-wavelength surveys and to thoroughly explore the model parameter space is the time-consuming numerical solution of the radiative transfer of stellar radiation through dusty media. To overcome this problem, we have implemented an artificial neural network algorithm in the radiative transfer code GRASIL in order to significantly speed up the computation of the infrared SED. The ANN we have implemented is of general use, in that its input neurons are defined as those quantities that effectively determine the shape of the IR SED. Therefore, the ANN can be trained on one model and then applied to others. As a blind test of the algorithm, we applied a net trained on a standard chemical evolution model (CHE_EVO) to a mock catalogue extracted from the SAM MORGANA, comparing galaxy counts and the evolution of the luminosity functions in several near-IR to sub-mm bands, as well as the spectral differences for a large subset of randomly extracted models. The ANN reproduces the full computation to excellent accuracy, with a gain in CPU time of $\sim 2$ orders of magnitude. The only requirement is that the training set reasonably covers the range of values of the input neurons encountered in the application. Indeed, in the sub-mm at high redshift, a tiny fraction of models with some key input neurons outside the range of the trained net cause the ANN to return wrong answers. These are extreme starburst models with high optical depths, preferentially selected by sub-mm observations, and difficult to predict a priori.
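To illustrate the idea, here is a minimal sketch of the approach: a feed-forward network that maps a handful of physical quantities shaping the IR SED onto the SED sampled at fixed wavelengths, so that, once trained, the net replaces the expensive radiative-transfer solution. The feature set, network architecture, and training data below are placeholders, not the actual GRASIL/CHE_EVO setup.

```python
# Sketch of an ANN emulator for the IR SED (assumed setup, not the GRASIL code).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical "input neurons": quantities that determine the IR SED shape
# (e.g. dust optical depth, gas fraction, SFR-weighted age, ...).
n_models, n_features, n_wavelengths = 5000, 6, 50
X = rng.uniform(size=(n_models, n_features))                   # stand-in for the training library
Y = np.tanh(X @ rng.normal(size=(n_features, n_wavelengths)))  # stand-in for the log L_nu grids

# One-off training cost; afterwards each SED is a cheap forward pass.
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X, Y)

# At run time the trained net approximates the full radiative-transfer result.
# Inputs outside the trained range (cf. the extreme starbursts in the abstract)
# should be flagged and passed to the full computation instead.
sed_approx = ann.predict(X[:1])
```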
View original:
http://arxiv.org/abs/1203.6295