The mass of the cluster? Ask the neural network

The Coma cluster consists mainly of well-formed elliptical galaxies and a few young spiral galaxies. The cluster is part of a series of structures belonging to the Coma supercluster, one of the first large-scale structures in the universe to be discovered. Credits: NASA

Using artificial intelligence algorithms based on artificial neural network architectures, a team of physicists led by Matthew Ho of Carnegie Mellon University (USA) has calculated the mass of the Coma cluster: about one and a half million billion solar masses, a value that agrees with previous estimates. The result was published last month in Nature Astronomy.

Artificial intelligence systems based on neural networks make it possible to train algorithms on input datasets, so that they can later analyze new data on their own, without needing any explicit model. Galaxy clusters are complex N-body physical systems, and computing the dynamics of these structures in simulations requires computational times that grow steeply with the complexity of the system. Neural networks make it possible to tackle problems of this kind and to predict the dynamical properties of a cluster without directly solving the differential equations that describe the model, thus reducing computation times.
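To make the idea concrete, here is a minimal sketch with entirely mock data and a toy architecture (not the model used by Ho and colleagues): a small neural network learns to map simple cluster observables, such as line-of-sight velocity dispersion and richness, onto log-mass, and can then be applied to a new cluster with a single forward pass, with no equations of motion being integrated.

```python
# Minimal sketch with mock data and a toy architecture (not the published model):
# train a small neural network to map cluster observables to log10(mass).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Mock "simulation catalog": velocity dispersion [km/s], richness, log10(M/Msun).
n = 5000
log_mass = rng.uniform(13.5, 15.5, n)                        # true log-masses
sigma_v = 10 ** (0.33 * (log_mass - 15.0) + 3.0) * rng.lognormal(0.0, 0.1, n)
richness = 10 ** (0.8 * (log_mass - 14.0) + 1.5) * rng.lognormal(0.0, 0.2, n)

X = torch.tensor(np.column_stack([np.log10(sigma_v), np.log10(richness)]),
                 dtype=torch.float32)
y = torch.tensor(log_mass, dtype=torch.float32).unsqueeze(1)
X_mean, X_std = X.mean(dim=0), X.std(dim=0)
X = (X - X_mean) / X_std                                      # standardize inputs

model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):                                      # simple full-batch training
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Apply to a "new" cluster: no N-body integration, just one forward pass.
new_cluster = torch.tensor([[np.log10(1000.0), np.log10(500.0)]], dtype=torch.float32)
new_cluster = (new_cluster - X_mean) / X_std
print("predicted log10(M/Msun):", model(new_cluster).item())
```

The published analysis uses much richer dynamical information and large mock catalogs, but the workflow is the same in spirit: learn the observable-to-mass mapping from simulations once, then apply it cheaply to observations.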

The Coma cluster is located about 350 million light-years from Earth, in the direction of the constellation of the same name. This cluster, cataloged as Abell 1656, holds a special place in the history of astrophysics: it was studied in the 1930s by the Swiss astronomer Fritz Zwicky, whose analysis led him to predict the existence of dark matter. Applying the virial theorem, Zwicky noticed that it did not seem to hold: the intrinsic velocities of the galaxies, determined from their Doppler shifts, were too high for Abell 1656 to remain in equilibrium. Zwicky therefore hypothesized the existence of invisible dark matter, needed to keep the cluster in dynamical equilibrium and thus explain the orbital speeds of its galaxies.
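In schematic, textbook form (the standard version of the argument, not Zwicky's original numbers), the virial theorem turns a measured velocity dispersion and a cluster radius into an order-of-magnitude mass estimate; here sigma_los is the line-of-sight velocity dispersion and alpha is a geometry-dependent factor of order unity, both generic symbols rather than values from the paper:

```latex
% Virial theorem for a bound, relaxed system: 2T + U = 0.
% Kinetic energy:   T \simeq \tfrac{3}{2}\, M \, \sigma_{\rm los}^{2}
% Potential energy: U \simeq -\alpha \, G M^{2} / R, with \alpha \sim 1.
2T + U = 0
\quad\Longrightarrow\quad
M_{\rm vir} \;\sim\; \frac{3\,\sigma_{\rm los}^{2}\,R}{\alpha\,G}
% With \sigma_{\rm los} \sim 10^{3}\ {\rm km/s} and R of a few Mpc this gives
% M \sim 10^{15}\,M_{\odot}, far more than the mass visible in the galaxies alone.
```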

But let’s go back to artificial intelligence: architectures based on neural networks are also called deep learning systems. These algorithms are already used in many applications, such as face recognition and automatic speech recognition (for example in voice assistants like Alexa and Siri), as well as in bioinformatics, to model the structure of complex proteins, and in climate-science models. Another application of deep learning that has long been discussed in the media is the AlphaGo software, developed by Google’s DeepMind: in March 2016 it managed, for the first time, to beat a human professional at the board game go. On that occasion Lee Sedol, a world champion from South Korea, was beaten four times out of five by the algorithm. To get an idea of the complexity involved: the standard go board is a 19 x 19 grid, and the number of possible configurations has been estimated to be on the order of 10^170, nearly a hundred orders of magnitude greater than the estimated number of atoms in the observable universe, about 10^80.
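As a quick back-of-the-envelope check of that comparison (a naive count of board configurations, not the much harder count of legal positions):

```python
# Naive count of go board configurations: 19x19 points, each empty, black or white.
# This overcounts (not every configuration is a legal position), but gives the scale.
from math import log10

points = 19 * 19                          # 361 intersections
naive_configs_log10 = points * log10(3)   # log10(3**361), roughly 172
atoms_log10 = 80                          # rough estimate for the observable universe

print(f"~10^{naive_configs_log10:.0f} board configurations")
print(f"~{naive_configs_log10 - atoms_log10:.0f} orders of magnitude above the ~10^{atoms_log10} atoms estimate")
```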

Ho and colleagues used deep learning algorithms to predict the mass of the cluster Abell 1656. To train their model they used data taken from simulations of the distribution of matter in the universe: the algorithm thus learned from the characteristics of thousands of simulated galaxy clusters whose masses are known within those models. The researchers then applied the model to a real system, whose mass had already been estimated by other methods, to compare the results and test the reliability of the algorithm. This is one of the first examples of applying deep learning to the study of large-scale structures in the universe.
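That final comparison step is conceptually simple; here is a minimal sketch with placeholder numbers (not the values from the paper), where the agreement is expressed in dex, i.e. factors of ten, as is customary for cluster masses:

```python
# Hypothetical check of a trained estimator against an independent mass estimate.
import numpy as np

M_predicted = 1.4e15   # placeholder: mass predicted by the trained network [Msun]
M_reference = 1.5e15   # placeholder: previous estimate from other methods [Msun]

dex_offset = np.log10(M_predicted / M_reference)   # offset in dex (factors of 10)
print(f"offset: {dex_offset:+.3f} dex "
      f"({(M_predicted / M_reference - 1) * 100:+.1f}% in linear mass)")
```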

Large-scale structure of the universe from the Millennium Simulation Project, produced with an N-body simulation of 10 billion particles. The filaments and nodes of galaxy clusters are separated by vast, nearly empty voids. Credits: Millennium Simulation Project

“Surveys collect petabytes of data from space telescopes. A huge amount,” says Ho. “It is impossible for human beings to analyze them directly by hand. Our team works to build models that can be reliable estimators of quantities such as mass, while at the same time trying to mitigate sources of error. Another important aspect is that the algorithms must be computationally efficient in order to process this huge stream of data. And that’s exactly what we’re trying to do: use machine learning to improve our analyses and make them faster.” To produce their results, Ho’s team used resources from the Pittsburgh Supercomputing Center and datasets from several databases, such as CosmoSim, which collects data from thousands of clusters at different redshifts, and those produced by the Uchuu N-body simulations.

Clusters of galaxies appear as nodes in a vast network of matter that is more or less homogeneously distributed throughout the universe. Large-scale spectroscopic surveys such as those performed by Desi, the Dark Energy Spectroscopic Instrument, have already collected data from millions of galaxies out to about 10 billion light-years away (by the end of 2026 the Desi catalog is expected to contain over 35 million objects). In this way it is possible to measure galaxy redshifts through spectroscopic analyses and reconstruct a three-dimensional map of the universe. Analyzing the large-scale structure of the universe with the observations of Desi and other galaxy surveys will then require comparing the data with the results of simulations of the mass distribution in different cosmological models. Artificial intelligence will help researchers process all these data and produce reliable predictions about the large-scale structure of the universe.
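The basic step behind such a three-dimensional map is turning each measured redshift into a distance, which requires assuming a cosmological model. A minimal sketch, assuming the astropy library and Planck 2018 parameters (the catalog entries below are hypothetical, and real survey pipelines are far more elaborate):

```python
# Convert spectroscopic redshifts into 3D positions for a large-scale-structure map.
# Assumes astropy and the Planck 2018 flat LambdaCDM cosmology.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import Planck18

# Hypothetical catalog entries: sky position (RA, Dec) and measured redshift z.
ra = np.array([194.95, 150.10]) * u.deg
dec = np.array([27.98, 2.20]) * u.deg
z = np.array([0.023, 0.8])

d_c = Planck18.comoving_distance(z)              # distance from redshift
coords = SkyCoord(ra=ra, dec=dec, distance=d_c)  # sky position plus depth
xyz = coords.cartesian.xyz.to(u.Mpc)             # Cartesian comoving coordinates

for i, zi in enumerate(z):
    print(f"z = {zi:.3f}  ->  d_c ≈ {d_c[i]:.0f},  (x, y, z) ≈ "
          f"({xyz[0, i].value:.0f}, {xyz[1, i].value:.0f}, {xyz[2, i].value:.0f}) Mpc")
```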

