Authors: Xuheng Ding, Tommaso Treu, Anowar J. Shajib, Dandan Xu et al.
First Author’s Institution: Wuhan University, Wuhan, China
Status: Journal submission to come at the completion of the challenge; open access on the arXiv
It’s not every day that the astronomical community gets an outright competition-style challenge to produce a result. But that’s exactly what today’s paper outlines. The goal? To promote the development of the best method the astronomical community can come up with to measure the local expansion rate of the universe, also known as the Hubble constant, H0.
Using Strong Gravitational Lensing to Measure H0
We’ve discussed some ways to measure H0 in past astrobites. Notably, recent years have seen some tension in measured values of H0, depending on whether the measurement used information from the velocities of receding galaxies (in the low-redshift, recent universe) or from the Cosmic Microwave Background (in the high-redshift, early universe). Today’s paper concerns an independent method that makes use of some handy properties of strongly gravitationally lensed systems. Strong gravitational lensing (or simply “strong lensing”) occurs when a very massive foreground object, such as a galaxy or galaxy cluster, warps and bends the light from a galaxy behind it. This results in gorgeous pictures of multiply-imaged arcing galaxies, like that shown in Fig. 1.
A perhaps lesser-known (or maybe just less publicized) effect of strong lensing is the notion of “time delay.” Time delay refers to the differing arrival times of light that follows different trajectories on the way from the background source galaxy, around the lensing mass, and to our telescopes. We observe this effect as multiple images of the same object at different times. For an image such as Fig. 1, time delays are not measurable because there is not a time-varying component in the source galaxy — there are no clues in this particular image to enlighten us that photons emitted simultaneously arrive at different times. However, the story changes when a time-varying component is introduced — for example, if a supernova occurs in the source galaxy. In fact, astronomers observed such a phenomenon only a few years ago!
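The connection between time delays and H0 can be made explicit with the standard strong-lensing relations (textbook results, not specific to today’s paper). The delay between two images i and j depends on the difference in the Fermat potential Δφ between the two image positions, scaled by the so-called time-delay distance:

```latex
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
\qquad
D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}}
\;\propto\; \frac{1}{H_0},
```

where z_d is the lens redshift and D_d, D_s, and D_ds are angular diameter distances to the lens, to the source, and between lens and source. Since every distance scales inversely with H0, a measured delay plus a model for Δφ pins down H0 directly; this is why the quality of the lens model matters so much.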
It is relatively straightforward to obtain precise time delay measurements for a strongly lensed system, provided the quality of the light curve data is good. Fortunately, good-quality light curve data will not be hard to obtain with the immense quantity of data sure to come from the Large Synoptic Survey Telescope (LSST). The question now becomes whether current methods that use this time delay data to map the gravitational potential of the lensing mass are good enough to produce precision cosmology measurements. Specifically, we are now in an era where a difference in the measurement of H0 of only a few km/s/Mpc is significant, and the attainable precision of the measurement is affected by nuances in the strong lens models. That is, systematic errors are starting to dominate our measurements, which is certainly not desirable. This is exactly the motivation behind the challenge outlined in today’s paper.
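To see why good light curves make the delay measurement tractable, here is a toy sketch (not the paper’s or any team’s actual method): slide one image’s light curve past the other’s and keep the shift that matches best. All names and numbers below are illustrative; real measurements must also cope with irregular sampling, noise, and microlensing, which this sketch ignores.

```python
import numpy as np

def estimate_delay(times, flux_a, flux_b, trial_delays):
    """Return the trial delay that best aligns image B's curve with image A's."""
    best_delay, best_cost = None, np.inf
    for delay in trial_delays:
        # Shift curve B earlier by `delay` and interpolate onto curve A's times.
        shifted = np.interp(times, times - delay, flux_b)
        # Compare only where the shifted curve is actually defined.
        mask = times <= times[-1] - delay
        cost = np.mean((flux_a[mask] - shifted[mask]) ** 2)
        if cost < best_cost:
            best_delay, best_cost = delay, cost
    return best_delay

# Simulated example: image B shows the same variability as image A,
# delayed by 12 days (a made-up number for illustration).
t = np.linspace(0.0, 200.0, 401)                 # 0.5-day sampling
flux_a = np.sin(2 * np.pi * t / 50.0)            # intrinsic source variability
flux_b = np.sin(2 * np.pi * (t - 12.0) / 50.0)   # delayed copy seen in image B

delays = np.arange(0.0, 30.0, 0.5)
print(estimate_delay(t, flux_a, flux_b, delays))  # → 12.0
```

With clean, densely sampled curves the minimum is sharp; the hard part in practice is that the data are neither clean nor densely sampled, which is where LSST’s cadence and depth come in.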
Structure of the Challenge
The challenge was opened to anyone interested on January 8th, 2018, the day today’s paper appeared on the arXiv. To construct the challenge, the authors (aka the “Evil” team) created 50 simulated strong lensing images of active galactic nuclei (AGN). To ensure that subtleties in their simulation software would not significantly affect the simulated images (and therefore the results produced by the challengers), the authors generated simulated images using two independent codes (see Fig. 2). By assuming a particular value of H0, the authors calculated time delay data for each lensing system. These data, along with other necessary information about each system (but not the chosen value of H0!), are provided to challenge participants (aka the “Good” teams) via GitHub, along with instructions on how to submit results. The task for the Good teams is to develop lens modeling software that can recover the value of H0 to sufficient precision (again, H0 is known only to the Evil team!). The challenge is divided into four “rungs,” where each rung gets progressively more realistic and nuanced. The rungs serve to illustrate where potential problems and/or bottlenecks lie in the Good teams’ strong lens modeling software and are thus designed to be completed in the following order:
- Rung 0: A simple training exercise using two lensing systems. Rung 0 will help challengers understand the data format and work out any bugs that might affect future rungs. To that end, all parameters of the lensing systems and all cosmology parameters (including H0) are provided as a check for the Good teams.
- Rung 1: In this first real rung of the challenge, challengers calculate H0 from simulated images that used actual images of galaxies to generate realistic surface brightness distributions for the lensed AGN host galaxies. Additionally, deflections due to line-of-sight mass other than the primary lensing mass are also introduced.
- Rung 2: Rung 2 builds on Rung 1 and tests the ability of the lens modeling software to calculate H0 while accounting for blurring by the atmosphere and telescope optics (described by a point-spread function, or “PSF”).
- Rung 3: As the final rung of the challenge, Rung 3 is the most realistic and nuanced. In addition to the factors in Rungs 1 and 2, Rung 3 uses early-type galaxies from cosmological simulations to generate all of the observables of the system.
Challengers may submit results for any rung independently; that is, the Good teams need not complete Rung 3 in order to submit estimates of H0 for Rungs 1 or 2, and they may submit multiple estimates per rung. When all is said and done, we should have a clear understanding of whether current strong lens modeling techniques are good enough to achieve a measurement of H0 with sub-1% uncertainties, given sufficient strong lensing time delay data. LSST is sure to give us the amount of data we need — the only question that remains is whether our techniques are up to the task.
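The “given sufficient data” caveat hides a simple statistical argument. If each lens independently delivers some fractional precision on H0, averaging N lenses shrinks the statistical uncertainty as 1/√N — provided systematics are under control, which is precisely what the challenge is designed to probe. The 6% per-lens figure below is a hypothetical round number for illustration, not a value from the paper:

```python
import math

sigma_single = 0.06   # assumed (hypothetical) fractional H0 precision per lens
target = 0.01         # the sub-1% goal quoted above

# Number of independent lenses needed, if only statistical errors mattered:
n_lenses = math.ceil((sigma_single / target) ** 2)
print(n_lenses)       # → 36
```

A few dozen well-measured lenses would suffice statistically; the point of the challenge is that an uncorrected systematic error in the lens models would not average away no matter how many lenses LSST provides.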
The deadline to submit results? August 8th, 2018. Let the challenge begin!