In Part 1 of a series examining the apparent acceleration of the universe, we looked at determining astronomical distances using parallax and standard candles like Cepheid variables. A good standard candle needs two properties: uniform intrinsic brightness across space and time, and enough intrinsic brightness to be seen at ever greater distances. While Cepheids are great standard candles, their relatively low brightness means they can’t be used out to distances that sample the universe at a scale where we might see things like acceleration. What’s needed is a brighter class of objects that can also function as standard candles. So here, in the “Empire” of the series (still trying to figure out how to end Part 3 with dancing teddy bears and/or magical ghosts), we examine supernovae: what they are, what their properties are, and how they might be used as standard candles.
Supernovae are massive stellar explosions, so bright that they can outshine the galaxy in which they occur. So they satisfy one of our criteria for a standard candle – they can be seen at large distances. How about uniformity? That requires a bit more explanation. I should note up front that there are broadly two classes of supernovae: Type I (largely binary systems) and Type II (core-collapse). Since astronomers seem to love taxonomy, there are apparently as many sub-classes as objects, but we’ll stick to these two types.
Stars are supported by nuclear fusion energy – they are constantly converting lighter elements to heavier elements in their cores, and the energy produced balances the weight of the outer layers, resulting in stable objects. Stars spend most of their lives converting hydrogen to helium in their cores. This is the stable, long-lived portion of a star’s life. Our sun is roughly halfway through its main-sequence lifetime, and the stability of this phase is what has allowed the solar system to form and life to develop and evolve.
Of course, though hydrogen is the most abundant element in the universe, the amount in a star is finite. As the hydrogen is used up, the core contracts, heats up, and starts fusing the helium ash into carbon, providing a new source of pressure support against gravity. Depending on the mass of the star, it may fuse only hydrogen before burning out (very low mass stars) or go all the way up through iron (very massive stars). Our sun will stop at fusing helium – its fate is a tale for a different day. But while more massive stars can continue fusing heavier elements to provide pressure support, there’s still an end point. When the core reaches iron, fusion becomes endothermic – rather than producing energy, it requires energy (a lot!), so fusion can no longer provide pressure support against gravity and the core contracts. It will reach a point where a quantum effect called electron degeneracy pressure can support the upper layers, and the star is once again stable. But electron degeneracy pressure can only support a core up to a certain mass – roughly 1.4 times the mass of the sun, the Chandrasekhar mass.

In the core of a massive star, small amounts of nuclear fusion continue at the boundaries between the layers, continuously depositing material onto the inert core. At some point, enough material accumulates that the core exceeds the magic 1.4 solar mass limit for electron degeneracy pressure; it can no longer support itself, it collapses catastrophically, and the outer layers come crashing in. Depending on the initial mass and other details, this is the process by which a neutron star or “stellar mass” black hole is formed. In the former case, electrons are forced into the nuclei, where they combine with protons to form neutrons, releasing neutrinos. The resulting neutron core is supported by neutron degeneracy pressure and stops collapsing. Of course, the outer layers don’t stop. They ‘bounce’ off the new neutron core.
The huge flux of neutrinos traveling through the dense overlying layers, together with the bounce, creates a series of outward and inward traveling shock waves that completely disrupt the outer layers of the star (the core stays intact as a neutron star. Or not.) and produce the visible event. Stars that undergo this evolutionary sequence end their lives as core-collapse (Type II) supernovae. Incidentally, the energy and pressure in the explosion are such that elements heavier than iron are created – elements beyond the iron peak are largely created in supernova explosions. (Takes deep hit) We are all star stuff, man. OK, OK, that was just a weak excuse to link some metal.
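For the mathematically inclined, the 1.4 solar mass limit quoted above has a closed form (a standard result, quoted here without derivation):

```latex
M_{\rm Ch} \;=\; \frac{\omega_3^0 \sqrt{3\pi}}{2}
  \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_H)^2}
  \;\approx\; 1.4 \left(\frac{2}{\mu_e}\right)^2 M_\odot
```

where μ_e is the mean molecular weight per electron (μ_e ≈ 2 for a carbon-oxygen or iron core) and ω₃⁰ ≈ 2.018 is a numerical constant from the Lane-Emden equation.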
It may not be intuitively obvious at this point, but the process above violates our other criterion for a standard candle, that of uniformity. While the cores may be very similar, the visible explosion is caused by shock waves in overlying layers that vary (significantly) in mass and perhaps composition, and those shocks propagate into surrounding material of very different densities, along with many other variables. Hence, each core-collapse supernova may have a unique brightness. So why are supernovae considered standard candles? Well, if there’s a Type II supernova, there must be a Type I supernova, right? And there is. Very crudely, Type Ia supernovae occur on a bare core – what we’d call a white dwarf. These objects are the evolutionary end point of stars that only have enough mass for fusion to proceed up to helium or to carbon and oxygen. This is the fate of our sun, incidentally; it will end its life as a carbon-oxygen white dwarf, supported by electron degeneracy pressure, destined to slowly cool over billions of years.
To get to a Type Ia supernova, we’re dealing with carbon-oxygen white dwarfs in close binary systems. With a nearby companion, the white dwarf may steal matter from its companion and hence grow in mass. As it approaches the Chandrasekhar mass of approximately 1.4 solar masses, the pressure and temperature become high enough to trigger carbon fusion. Within a matter of seconds, the white dwarf undergoes a runaway fusion reaction, and the energy released in that reaction is seen as a Type Ia supernova. (Type Ia supernovae can also be formed in the merger of two white dwarfs.) Here we satisfy, to some degree, the second criterion for a standard candle. The explosion always starts from the same place – a degenerate CO white dwarf with a mass of ~1.4 times that of the sun. Therefore, these explosions tend to have the same peak brightness. That is why Type Ia SN are considered standard candles, and they are the objects used to measure the apparent acceleration of the universe. Parenthetically, Type II core-collapse supernovae also start from cores of roughly the same mass and composition – they are just surrounded by a bunch of crap, and that makes them much more variable in their dynamics and energy output.
The neat picture presented above is, of course, not so neat; Type Ia SN still show a range of observable physical characteristics: peak brightness, ‘colors’ (brightness at different wavelengths), and decay times (how long it takes the brightness to fade by a certain amount). A lot of work has gone into calibrating them so they can be treated as standard candles. One of the major steps is the observation that decay time is tightly correlated with peak brightness, a relationship thought to be driven by the radioactive decay of the nickel synthesized in the explosion. So if we sweep lots of hard work under the rug, we can assume that we know how intrinsically bright SN Ia are and use them as standard candles. The observation of an accelerating universe based on SN Ia thus rests directly on all the work that’s gone into calibrating them.
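The decline that underlies this calibration is powered by the radioactive decay chain nickel-56 → cobalt-56 → iron-56. As a rough illustration – not a real radiative-transfer model; the nickel mass of 0.6 solar masses and the assumption that all decay energy is deposited are simplifications – the instantaneous decay power can be sketched like this:

```python
import math

M_SUN_G = 1.989e33          # solar mass in grams
AMU_G = 1.6605e-24          # atomic mass unit in grams
MEV_ERG = 1.602e-6          # MeV in erg

# Half-lives of the 56Ni -> 56Co -> 56Fe chain, in seconds
T_HALF_NI = 6.08 * 86400.0
T_HALF_CO = 77.2 * 86400.0

def decay_power(t_days, m_ni_msun=0.6):
    """Radioactive decay power (erg/s) t_days after the explosion,
    assuming full energy deposition. The 56Ni mass of 0.6 solar
    masses is a typical assumed value, not a measurement."""
    lam_ni = math.log(2) / T_HALF_NI
    lam_co = math.log(2) / T_HALF_CO
    t = t_days * 86400.0
    n0 = m_ni_msun * M_SUN_G / (56 * AMU_G)  # initial 56Ni nuclei
    # Bateman solution for the two-step decay chain
    n_ni = n0 * math.exp(-lam_ni * t)
    n_co = n0 * lam_ni / (lam_co - lam_ni) * (
        math.exp(-lam_ni * t) - math.exp(-lam_co * t))
    q_ni = 1.72 * MEV_ERG   # approximate energy per 56Ni decay
    q_co = 3.61 * MEV_ERG   # approximate energy per 56Co decay
    return lam_ni * n_ni * q_ni + lam_co * n_co * q_co
```

Running `decay_power(0)` gives a few times 10^43 erg/s – the right ballpark for a SN Ia near peak – and the decline with time is exactly what the decay-time/peak-brightness calibration latches onto.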
One critical point – all the calibration of SN Ia is done ‘locally’. One uses SN Ia that occur in galaxies near enough for Cepheids or other methods to determine the distance independently. With distance no longer an unknown, the intrinsic brightness can be inferred and correlated with other observable properties. We then assume that SN Ia observed at much larger distances are ‘identical’ to local SN Ia. If we remember that the further away we look, the further back in time we are looking, we realize that we are assuming the properties of SN Ia in the distant past are no different from those today. The ‘technical’ term for this is ‘luminosity evolution’ – how the luminous properties of a class of objects might change with time. I should emphasize that this does not refer to how a given object changes with time, but rather to how the properties of a class of objects change depending on when/where members of that class formed. If I may be permitted a crude analogy: the average height of humans has changed throughout our history; hundreds of years ago (or in North Korea…) humans were on average much smaller than we are now. So if you looked at the size of underwear from hundreds of years ago and assumed that the average height of humans as a class was constant, you might erroneously infer that ancient humans enjoyed continuous wedgies. Similarly, if we incorrectly assume that SN Ia that formed billions of years ago are the same as those we observe today, we may make erroneous inferences about their distance. Critically, the papers arguing that observations of distant SN Ia require an accelerating universe addressed this issue as carefully as possible and suggested that luminosity evolution of SN Ia was negligible, or at the very least insufficient to explain the observations.
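Once the intrinsic (absolute) brightness is pinned down by that local calibration, turning an observed apparent brightness into a distance is just the distance modulus. A minimal sketch – the peak absolute magnitude of about −19.3 for a calibrated SN Ia is a commonly quoted value, used here as an assumption, and this naive formula ignores redshift effects, extinction, and the cosmology we are actually trying to measure:

```python
def sn_distance_mpc(apparent_mag, absolute_mag=-19.3):
    """Distance in megaparsecs from the distance modulus
    m - M = 5 * log10(d / 10 pc). The default absolute magnitude
    of -19.3 is an assumed calibrated SN Ia peak value."""
    d_pc = 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)
    return d_pc / 1.0e6
```

For instance, a SN Ia peaking at apparent magnitude 24 comes out at roughly 4,600 megaparsecs by this formula – the regime where cosmology starts to matter.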
If that’s not foreshadowing of Part 3, I don’t know what is.