What is the Hubble constant? It is just the constant in the Hubble law, which tells you how fast you should expect an object at a given distance to be receding from you because the Universe is expanding (it is; see this post on the Friedmann equations for why!). The law was first observationally established by Edwin Hubble in 1929 (Georges Lemaître had derived it theoretically in 1927), and the constant has been a key parameter in our understanding of the Universe ever since. Indeed, one can roughly estimate the age of the Universe by just taking 1/(Hubble constant): doing so is equivalent to asking how long the Universe would have taken to expand from a singular point to its present size if it had always followed the Hubble law.
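As a quick sanity check, here is a minimal sketch of that back-of-the-envelope age estimate, assuming an illustrative value of H0 = 70 km/s/Mpc (the specific value is my assumption, not a measurement quoted here):

```python
# Back-of-the-envelope age of the Universe as 1/H0,
# assuming H0 = 70 km/s/Mpc (illustrative value only).

KM_PER_MPC = 3.0857e19      # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

H0 = 70.0                   # km/s/Mpc (assumed)

# 1/H0 has units of time once the distance units cancel:
# (km/Mpc) / (km/s/Mpc) = s
hubble_time_s = KM_PER_MPC / H0
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9

print(f"1/H0 ~ {hubble_time_gyr:.1f} billion years")
```

With these numbers the Hubble time comes out to roughly 14 billion years, pleasingly close to the actual measured age of about 13.8 billion years, even though the real expansion history has not always followed the Hubble law with today's constant.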

The history of measurements of the Hubble constant is similar to the Wars of the Roses: for years, the house of Sandage and the house of de Vaucouleurs fought bitterly, Sandage claiming a value near 50 km/s/Mpc and de Vaucouleurs one near 100 km/s/Mpc. Both scientists, of course, had mutually exclusive error bars. When Nobel Prize-winning physicist Adam Riess was a graduate student in the mid-1990s, he remembers hearing about the Hubble constant in class and thinking, “This may be one of the problems we don’t solve for a long time.” Now, however, we live in a time of peace on the Hubble front, and the best current measurements claim accuracy on the order of 3% (with Riess involved in the supernova-based ones!). Note that the current value is somewhere around 70 km/s/Mpc, give or take, so the smart money in the Sandage/de Vaucouleurs debate would have been on taking the average!

In this context, I was surprised that the Planck value of H_0 was so much lower than both the WMAP-9 and Riess supernova results. Planck found 67.4 +/- 1.4 km/s/Mpc, significantly lower than Riess’s 73.4 km/s/Mpc or WMAP-9’s 70.0 +/- 2.2 km/s/Mpc. This may not seem like a big deal: after all, what’s a few km/s between friends? But it’s actually quite important, because lower values of the Hubble constant disfavor “phantom” dark energy models. These models treat dark energy as evolving with time (the amount per unit volume in the Universe changes with time), with an equation of state w < -1, and they have the rather disturbing properties that they violate the null energy condition and cause the Universe to end in a singularity (a “Big Rip”) in finite time. A higher value of the Hubble constant favors these models, but a lower value disfavors them in comparison to both a cosmological constant model (the amount of dark energy per unit volume is constant in time, w = -1) and quintessence models of dark energy (the dark energy density evolves in time, as with phantom models, but with w > -1, so the null energy condition is respected).

As yet, the difference between Planck’s Hubble constant value and the other two mentioned (WMAP-9 and Riess’s supernova result) is not statistically highly significant. But ultimately, pinning down the Hubble constant is crucial to understanding the nature of dark energy, so any time a measurement shows a noticeable difference from previous values, it’s worth paying attention! In closing, it is also worth noting that if you take the median of all Hubble constant measurements made in the last 15 or so years, you get 68 km/s/Mpc. As with the Sandage/de Vaucouleurs wars, I would bet the smart money on the median.
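For concreteness, here is a minimal sketch of how such a tension between two independent measurements is quantified: divide the difference of the central values by the quadrature sum of the errors. The post quotes no error bar on the supernova value, so the ±2.4 km/s/Mpc below is an assumed illustrative number:

```python
from math import sqrt

# Tension between two independent H0 measurements, in units of sigma:
# |difference of central values| / sqrt(sum of squared errors).
h0_planck, err_planck = 67.4, 1.4  # km/s/Mpc (Planck, quoted above)
h0_sn, err_sn = 73.4, 2.4          # km/s/Mpc (supernovae; error ASSUMED)

tension_sigma = abs(h0_sn - h0_planck) / sqrt(err_planck**2 + err_sn**2)
print(f"tension ~ {tension_sigma:.1f} sigma")
```

With these numbers the discrepancy comes out around 2 sigma: noticeable, but, as stated above, not yet highly significant on its own.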

#### Zachary Slepian


“I would bet the smart money on the median”? Read, e.g., this recent pre-Planck review and think again: the discrepancy between the locally determined H0 values, which across different techniques have all been clustering around 74 lately, and the value dropping out of the Planck analysis is several sigma and may well point to deeper physical insights (which the Planck data per se do not).

Yes, this is also an exciting option. If the Planck value and SNe values are genuinely both correct, then it is possible that there is new physics there. For instance, if underdense regions of the Universe expand at a different rate than overdense ones, then a different distribution of such regions along the lines of sight used for the two sets of measurements could produce this effect.

An even “deeper” solution would be time-variable dark energy … this was already raised on Planck announcement day during a NASA telecon with science journalists. In any case, one cannot and should not dismiss decades of direct H0 determination work (as the authors of Planck paper XVI do) just because the Planck model fit is so nice.