What is the Hubble constant? It is simply the constant in the Hubble law, which tells you how fast an object at a given distance should be receding from you because the Universe is expanding (it is; see this post on the Friedmann equations for why!). The law was first observationally established in 1929 by Edwin Hubble, and the constant has been a key parameter in our understanding of the Universe ever since. Indeed, one can roughly estimate the age of the Universe by just taking 1/(Hubble constant): doing so is equivalent to asking how long the Universe would have taken to expand from a single point to its present size if it had always followed the Hubble law.
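As a quick sanity check on the 1/H0 estimate, here is a minimal Python sketch (the function name and the round input value of 70 km/s/Mpc are my own choices for illustration, not from the post):

```python
# Estimate the age of the Universe as the "Hubble time," 1/H0.
# H0 is conventionally quoted in km/s/Mpc, so first convert it to 1/s.

KM_PER_MPC = 3.0857e19        # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16    # seconds in one gigayear (billion years)

def hubble_time_gyr(h0_km_s_mpc):
    """Return 1/H0 in gigayears for H0 given in km/s/Mpc."""
    h0_per_s = h0_km_s_mpc / KM_PER_MPC   # H0 in units of 1/s
    return (1.0 / h0_per_s) / SECONDS_PER_GYR

print(hubble_time_gyr(70.0))  # roughly 14 Gyr, close to the measured ~13.8 Gyr
```

The estimate lands near the true age because the expansion has decelerated and accelerated at different epochs in a way that roughly cancels out.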
The history of measurements of the Hubble constant resembles the Wars of the Roses: for years, the house of Sandage and the house of de Vaucouleurs fought bitterly, Sandage claiming a value near 50 km/s/Mpc and de Vaucouleurs claiming one near 100 km/s/Mpc. The two scientists' error bars, of course, were mutually exclusive. When Nobel Prize-winning physicist Adam Riess was a graduate student in the mid-1990s, he remembers hearing about the Hubble constant in class and thinking, "This may be one of the problems we don't solve for a long time." Now, however, we live in a time of peace on the Hubble front, and the best current measurements claim accuracy on the order of 3% (with Riess involved in these using supernovae!). Note that the current value is somewhere around 70 km/s/Mpc, give or take, so the smart money in the Sandage/de Vaucouleurs debate would have been on taking the average!
In this context, I was surprised that the Planck value of H_0 was so much lower than both WMAP-9's and Riess's supernova results. Planck found 67.4 +/- 1.4 km/s/Mpc, noticeably lower than Riess's 73.4 km/s/Mpc or WMAP-9's 70 +/- 2.2 km/s/Mpc. This may not seem like a big deal: after all, what's a few km/s between friends? But it is actually quite important, because lower values of the Hubble constant disfavor "phantom" dark energy models. These models treat dark energy as evolving with time (the amount per unit volume in the Universe changes with time), but they have the rather disturbing properties that they violate a certain energy condition (the null energy condition) and cause the Universe to end in a singularity in finite time. A higher value of the Hubble constant favors these models, while a lower value disfavors them relative both to a cosmological constant model (the amount of dark energy per unit volume is constant in time) and to quintessence models of dark energy (the dark energy density evolves in time, as with phantom models, but without violating the energy condition).
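For readers who want the standard bookkeeping behind these labels, dark energy is often described by an equation-of-state parameter w (the ratio of its pressure to its energy density, not introduced in the post itself); its density then scales with the cosmic scale factor a as:

```latex
\rho_{\rm DE}(a) \propto a^{-3(1+w)}
% w = -1:  cosmological constant (density constant in time)
% w > -1:  quintessence (density dilutes as the Universe expands)
% w < -1:  phantom (density grows as the Universe expands, driving a finite-time "Big Rip" singularity)
```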
As yet, the difference between Planck's Hubble constant value and the other two mentioned (WMAP-9's and Riess's supernova result) is not highly statistically significant. But ultimately, pinning down the Hubble constant is crucial to understanding the nature of dark energy, so any time a measurement shows a noticeable difference from previous values, it is worth paying attention! In closing, it is also worth noting that if you take the median of all Hubble constant measurements made in the last 15 or so years, you get 68 km/s/Mpc. As with the Sandage/de Vaucouleurs wars, I would put the smart money on the median.
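To see roughly why the Planck/WMAP-9 difference is not alarming, one can do a back-of-the-envelope Gaussian comparison using only the numbers quoted above (this is my own quick sketch, not the collaborations' full analysis, which accounts for correlations and non-Gaussian effects):

```python
import math

def tension_sigma(val1, err1, val2, err2):
    """Number of standard deviations separating two independent
    Gaussian measurements: |difference| / (combined error)."""
    return abs(val1 - val2) / math.sqrt(err1**2 + err2**2)

# Planck (67.4 +/- 1.4) vs. WMAP-9 (70 +/- 2.2), both in km/s/Mpc
print(tension_sigma(67.4, 1.4, 70.0, 2.2))  # about 1 sigma: no significant tension
```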