Tuning Black Hole Data for LISA to Hear

Title: Hierarchical Bayesian inference on an analytical model of the LISA massive black hole binary population

Authors: Vivienne Langen, Nicola Tamanini, Sylvain Marsat, and Elisa Bortolas

First Author’s Institution: Laboratoire des 2 Infinis – Toulouse (L2IT)

Status: preprint on arXiv

Gravitational waves are all the rage in astronomy. While the LIGO–Virgo–KAGRA (LVK) detectors have been observing stellar-mass black hole mergers for nearly a decade and pulsar timing arrays have observed the cosmic gravitational wave background, perhaps no yet-to-be-realized gravitational wave mission has garnered as much attention as the Laser Interferometer Space Antenna, or LISA. LISA is designed to observe binary black hole mergers in the intermediate-to-heavy mass range (roughly 10⁵ – 10⁸ solar masses, sometimes referred to as Massive Black Holes, or MBHs) that sits between what the LVK detectors and pulsar timing arrays observe.

However, LISA currently has a significant limitation compared to LIGO and pulsar timing – it has yet to collect data. Most astronomers agree this is because LISA isn’t scheduled to launch for another decade [citation needed]. The amount of research that has gone into LISA up to this point is a testament to how valuable LISA’s data and scientific discoveries are expected to be. But this raises a critical question: If LISA isn’t scheduled to fly for another decade, how do we know what it will see?

Luckily, it is common practice in astronomy to analyze tons of mock data long before any actual data are taken (which is a big part of convincing governments to invest billions of dollars in the devices that will collect the real thing). Many teams of researchers have been hard at work creating mock data sets of the objects LISA might see in the universe and testing mock analysis pipelines on them, to see whether the analysis recovers the models used to generate the data. Today’s authors do just that, with notable improvements over prior work.

Since LISA is designed to see a specific set of black hole mergers, the place to start this mock process is to predict the number and rate of those mergers in the universe. A manageable, if less precise, way to do this, and the one the authors take, is to make a few assumptions about how the universe grows and evolves. For example, we believe that most galaxies harbor massive black holes in their centers, and galaxies reside in much larger structures called dark matter halos. Dark matter halos are easier to track in large simulations like Millennium. If two halos merge in such a simulation, it is a reasonably safe assumption that the galaxies within those halos merge, and that the black holes at the centers of those galaxies eventually merge too. Similarly, the mass of a dark matter halo is related to the mass of the galaxy residing inside of it: the larger the halo mass, the larger the galaxy mass. The same is true for the mass of the galaxy and the mass of the black hole at its center. These are called mass scaling relations, and the authors use one to create their mock catalog of MBH mergers from simulated halo mergers.
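To make the idea concrete, here is a minimal sketch of how a single power-law mass scaling relation can map a halo mass onto a black hole mass. The normalization, slope, and pivot mass below are hypothetical illustration values, not the calibrated relation from the paper:

```python
import numpy as np

# Toy power-law scaling relation mapping halo mass to central black hole
# mass. The normalization, slope, and pivot mass are illustrative
# placeholders, not the values calibrated in the paper.
def black_hole_mass(m_halo, norm=1e5, slope=1.5, m_pivot=1e12):
    """Return a black hole mass (solar masses) for a given halo mass."""
    return norm * (m_halo / m_pivot) ** slope

halo_masses = np.logspace(11, 14, 4)   # halos from 1e11 to 1e14 solar masses
print(black_hole_mass(halo_masses))    # more massive halos -> more massive BHs
```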

Though similar approaches have been taken by prior researchers, the authors add three improvements:

  1. They include the possibility that some halos may not have a central massive black hole (parametrized by an occupation fraction). In their model, the authors allow the occupation fraction to depend on halo mass and redshift.
  2. They use a more complex halo–black hole scaling relation called a broken power law, a popular model in astrophysics in which a distribution of objects is divided into two sub-populations governed by two distinct power laws. They also introduce a stochastic element to the scaling relation. In one model, the scaling is deterministic: an increase in halo mass directly determines the increase in black hole mass. In the second, stochastic model, an intrinsic scatter is added to the relation, so more massive halos tend to host more massive black holes, but a more massive halo occasionally hosts a less massive black hole. This mimics the randomness of the actual universe (a minimal sketch of these first two ingredients appears after this list).
  3. Most noteworthy, they introduce a time delay between the halo merger and the black hole merger.
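Here is a minimal sketch of how the first two improvements might look in code: an occupation fraction that depends on halo mass and redshift, a broken power law with two slopes around a break mass, and a lognormal scatter term for the stochastic variant. All functional forms and numbers are illustrative assumptions, not the paper’s calibrated model:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Toy occupation fraction: probability that a halo hosts a central MBH,
#    rising with halo mass and falling with redshift (illustrative form only).
def occupation_fraction(m_halo, z, m_half=1e11, z_scale=10.0):
    return 1.0 / (1.0 + m_half / m_halo) * np.exp(-z / z_scale)

# 2. Broken power law: a shallow slope below the break mass and a steeper
#    one above it (slopes and break mass are placeholders, not the paper's fit).
def broken_power_law_mbh(m_halo, m_break=1e12, a_low=0.8, a_high=1.6, norm=1e5):
    slope = np.where(m_halo < m_break, a_low, a_high)
    return norm * (m_halo / m_break) ** slope

# 3. Stochastic variant: multiply by lognormal scatter so more massive halos
#    *tend* to host more massive black holes, but not always.
def stochastic_mbh(m_halo, sigma_dex=0.3, **kwargs):
    scatter = 10 ** rng.normal(0.0, sigma_dex, size=np.shape(m_halo))
    return broken_power_law_mbh(m_halo, **kwargs) * scatter

halos = np.logspace(10, 14, 5)
hosts = rng.random(halos.size) < occupation_fraction(halos, z=2.0)
print(stochastic_mbh(halos)[hosts])   # BH masses only for occupied halos
```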

The nature of this delay time is one of the big open questions in black hole mergers. Out in the universe, when two halos and their galaxies merge, it takes additional time for their supermassive black holes to sink together, orbit, and finally merge. How long this takes, and the physics behind it, is still a much-studied question, but it is becoming more manageable to include in mock data. The authors’ delay function incorporates dynamical friction, hardening (a process by which the binary’s orbit shrinks), and gravitational wave emission. They run multiple models with and without the scaling-relation stochasticity and the delay time to investigate how these two aspects of the model affect the analysis.
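Schematically, one can picture the total delay as the sum of the timescales of these three successive phases. The scalings and coefficients below are made-up placeholders standing in for the paper’s physically calibrated prescriptions; they only show the structure of such a delay function:

```python
# Toy total delay between a halo merger and the MBH merger, treated as a
# sum of three successive phases. All scalings and coefficients are
# schematic placeholders, not the paper's calibrated prescriptions.
def delay_time_gyr(m_halo, m_bh_total):
    t_dynamical_friction = 1.0 * (m_halo / 1e12) ** 0.5   # BHs sink to the center
    t_hardening = 0.5 * (m_bh_total / 1e6) ** -0.3        # binary orbit shrinks
    t_gw = 0.1 * (m_bh_total / 1e6) ** -1.0               # GW-driven inspiral
    return t_dynamical_friction + t_hardening + t_gw

print(delay_time_gyr(m_halo=1e12, m_bh_total=1e6))  # total delay in Gyr
```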

Table 1 (Table 1 from the paper) – Estimated number of massive black hole mergers LISA will see under different modeling assumptions. In the left column, the delay vs. no-delay models refer to including or excluding a time delay between the halo merger and the black hole merger. In the top row, the deterministic vs. stochastic scaling relations refer to the relationship between halo mass and black hole mass. Fiducial and reduced rates refer to the calculated rate (fiducial) and a pessimistic rate chosen to be one order of magnitude smaller. The authors include the pessimistic rate to test how well LISA recovers the original model parameters even if it observes far fewer sources than predicted.

Figure 2 (Figure 3 from the paper) – The total mass of MBH mergers vs. the redshift at which those mergers occur. Light blue points are mergers from the model with the delay term (note that they sit at lower redshifts on average). The gray color bar represents LISA’s sensitivity to mergers of those masses at those redshifts: the lighter the gray, the more likely LISA will “see” the source. The authors note that their sources have masses and redshifts where LISA will be most able to “see” them.

Once they’ve created their mock data, the authors analyze it using the hierarchical Bayesian methods common in gravitational wave astrophysics to see whether they can recover the population parameters of their initial data-generating model, and they find that they can.
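For readers unfamiliar with the method, here is a minimal sketch of the kind of hierarchical (Poisson-process) log-likelihood commonly used in gravitational wave population inference. The function names, the toy Gaussian population, and the simplified “expected number” term (which in a full analysis would fold in the detector’s selection function) are all illustrative assumptions, not the paper’s implementation:

```python
import numpy as np

# Schematic hierarchical log-likelihood: sum of log population densities
# evaluated at the observed events, minus the expected number of detections.
# dN_dtheta is the population model; n_expected integrates it (here, with
# no selection effects, for simplicity). Names are illustrative.
def log_likelihood(population_params, events, dN_dtheta, n_expected):
    rate_at_events = dN_dtheta(events, population_params)
    return np.sum(np.log(rate_at_events)) - n_expected(population_params)

# Toy example: events characterized by log10 total mass; the population is
# a Gaussian in log-mass with a free mean mu and amplitude amp.
def toy_density(log_m, params):
    mu, amp = params
    return amp * np.exp(-0.5 * ((log_m - mu) / 0.5) ** 2)

def toy_expected(params):
    mu, amp = params
    return amp * np.sqrt(2 * np.pi) * 0.5   # analytic integral of the Gaussian

events = np.array([5.8, 6.1, 6.4])          # log10(M/Msun) of mock detections
print(log_likelihood((6.0, 3.0), events, toy_density, toy_expected))
```

A sampler (e.g., MCMC) would then explore this likelihood over the population parameters, which is how the recovered contours in Figure 3 below are produced.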

Figure 3 (Figure 5 from the paper) – Recovered MBH population parameters for the no-delay model (top) and the delay time model (bottom). Each variable on the x and y axes represents one of the initial population model’s four (top) or five (bottom) parameters. In both panels, red contours represent the deterministic scaling relation and green the stochastic one. The main diagonals show the recovered probability distributions of the individual parameters, and the off-diagonals show 2-D contour plots of the joint distributions of pairs of parameters. The vertical lines along the main diagonal, and the crosshairs with central dots in the off-diagonals, mark the true values used in the model. Constraints are somewhat, but not substantially, worse for the delay time model (bottom) than for the no-delay model (top), and the stochastic model recovers the original population parameters only slightly better than the deterministic model.

One of the main goals of this paper is to serve as a proof of concept: if one assumes a population model for MBHs and a model of what LISA will be able to observe, can a hierarchical Bayesian analysis recover that population model’s parameters? By analyzing various models that vary the delay times, the scaling relations, and LISA’s number of observations, today’s authors demonstrate that recovering population parameters will likely be feasible. Like many studies with mock data, this one makes many assumptions and simplifications; notably, it does not account for LISA measurement uncertainties or additional uncertainties from gravitational wave lensing. Despite these limitations, today’s authors have contributed a newer, more complex mock data analysis as the next step in LISA’s development.

Astrobite edited by Sowkhya Shanbhog

Featured image credit: ESA/Hubble, N. Bartmann

About William Smith

Bill is a graduate student in the Astrophysics program at Vanderbilt University. He studies gravitational wave populations with a focus on how these populations can help inform cosmology as part of the LIGO Scientific Collaboration. Outside of astrophysics, he also enjoys swimming semi-competitively, music and dancing, cooking, and making the academy a better place for people to live and work.
