
  3. Why Is It Dark at Night?

  Unless you live in the Arctic, night follows day as reliably as, well, night follows day. Most people never spare it a thought. But it turns out that the familiar darkness of the night sky tells us something of (literally) cosmic profundity.

  The fact that the sun has set is only part of the night-time story. There is also the matter of the stars. We don’t normally notice the illumination from starlight because it’s so feeble. That’s simply because the stars are extremely far away. Take Sirius, the brightest star in the night sky. It’s intrinsically about twenty-five times brighter than the sun, yet as seen from Earth the sun outshines it by a factor of about 13 billion. Sirius is 80 trillion kilometres away, whereas the sun is only 150 million kilometres away. Put the sun next to Sirius and the sun would be dim by comparison.
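
  To see how those numbers fit together, here is a minimal back-of-the-envelope check, using the inverse-square dimming law described in the next paragraph and only the rounded figures quoted above (a sketch in Python, so the result is approximate):

```python
# Rough check of the Sirius-versus-Sun comparison using the inverse-square law.
# The inputs are the approximate figures quoted in the text, not precise data.

L_RATIO  = 25.0     # Sirius is roughly 25 times as luminous as the sun
D_SIRIUS = 80e12    # distance to Sirius in kilometres (~80 trillion km)
D_SUN    = 150e6    # distance to the sun in kilometres (~150 million km)

# Apparent brightness falls off as 1/distance^2, so the ratio of apparent
# brightnesses is the luminosity ratio divided by the squared distance ratio.
apparent_ratio = L_RATIO / (D_SIRIUS / D_SUN) ** 2

print(f"The sun outshines Sirius by a factor of about {1 / apparent_ratio:.1e}")
# prints roughly 1.1e10 - on the order of ten billion, consistent with the
# 'factor of 13 billion' quoted above once more precise figures are used
```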

  The brightness of a glowing object diminishes with distance in a precise manner: a star seen from twice as far away appears one quarter as bright, from three times as far away one ninth as bright, and so on. The farther out in space you look, the dimmer the stars appear. But set against that is the fact that at greater distances there are more stars: the number of stars in a shell of space of given thickness grows as the square of its distance, which exactly offsets the inverse-square dimming. It’s estimated there are 400 billion stars in our galaxy alone, and there are billions of galaxies within the scope of modern telescopes. So why don’t we notice the combined light from all these glowing sources?
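
  That bookkeeping can be made concrete with a minimal sketch (in Python, using arbitrary illustrative values for the density and brightness of the stars):

```python
# Illustration of the reasoning behind Olbers' paradox: with a uniform density
# of identical stars, every thin spherical shell contributes the same amount of
# light, so an unlimited universe of eternal stars would give a blazing sky.
# The density, luminosity and thickness below are arbitrary illustrative values.
import math

STAR_DENSITY    = 1.0   # stars per unit volume (arbitrary units)
STAR_LUMINOSITY = 1.0   # light output per star (arbitrary units)
SHELL_THICKNESS = 1.0   # thickness of each imaginary shell

def light_from_shell(radius):
    """Total apparent brightness received from a thin shell at the given radius."""
    n_stars = STAR_DENSITY * 4 * math.pi * radius ** 2 * SHELL_THICKNESS
    brightness_per_star = STAR_LUMINOSITY / (4 * math.pi * radius ** 2)
    return n_stars * brightness_per_star   # the radius cancels out

for r in (10, 100, 1000):
    print(r, light_from_shell(r))   # the same value for every shell
```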

  The same basic conundrum troubled a little-known Swiss astronomer, Jean-Philippe Loys de Cheseaux, in the middle of the eighteenth century. It occurred to de Cheseaux that if, as Newton proposed, the universe is boundless, with stars scattered throughout space to infinity, then the night sky should be ablaze with starlight – and the Earth fried to a crisp. The same conclusion was drawn by the German astronomer Heinrich Olbers in 1823 and became dignified with the term ‘Olbers’ paradox’.

  The paradox would go away if the stars were distributed out to some particular finite distance, with nothing but a dark void beyond. The total amount of starlight might then not add up to much. While that’s true, it runs straight into the problem Newton fretted over: if the number of stars is finite, what’s to stop them all from falling into the middle to form a huge messy agglomeration? So we are caught between the proverbial rock and a hard place: either the universe should collapse, or the sky should be a permanent incinerator.

  Figure 3. Olbers’ paradox. If the universe is infinitely populated with stars, every line of sight ought to intersect a star eventually, so there would be no dark parts in the night sky.

  However, there’s another escape route from Olbers’ paradox. Even if the stars do indeed populate space without limit, could they nevertheless be limited in time? Today it seems obvious that stars can’t go on shining for all eternity. Whatever their source of power, sooner or later they will run out of fuel and fade. This was not so obvious in the nineteenth century, though; nobody knew what made the stars shine. In fact, it wasn’t until the 1940s that astrophysicists pieced the story together.

  Granted that the stars can’t shine for ever, they are certainly shining now, so if there really are an infinite number of them, why doesn’t their accumulated light turn night into day? To answer that one, we have to go back to 1676, and a landmark observation made by Ole Rømer, a Danish astronomer who studied the motion of Jupiter’s moon Io, discovered by Galileo. Io goes around the giant planet as regularly as hands go around a clock. Rømer spotted that the Jovian clock seemed to be running slow when Jupiter was on the far side of the sun from Earth. The mismatch could be explained by the fact that light takes tens of minutes to travel from Jupiter to Earth, and the journey time varies depending on where Jupiter and Earth are positioned in their orbits. Rømer knew the sizes of the planets’ orbits, so he used that information, together with careful timing of Io’s movements, to work out the speed of light. The answer he gave was 214,000 kilometres per second. Considering the crude methods then available, that’s impressively close to the actual value of 299,792.458 kilometres per second.
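
  The essence of Rømer’s method fits in a couple of lines of arithmetic: the extra delay in Io’s eclipse timings when Jupiter is on the far side of the sun corresponds to the time light needs to cross the diameter of Earth’s orbit. A hedged sketch, assuming the roughly 22-minute delay usually attributed to Rømer and the modern value of the Earth–sun distance (so this illustrates the principle rather than reconstructing his actual calculation):

```python
# Romer-style estimate of the speed of light: the extra delay in Io's eclipses
# corresponds to light crossing the diameter of Earth's orbit.
# The 22-minute delay is the figure commonly attributed to Romer; the orbital
# radius is the modern value, so the result is only illustrative.

EARTH_ORBIT_RADIUS_KM = 149.6e6   # mean Earth-sun distance in kilometres
DELAY_SECONDS = 22 * 60           # extra delay when Jupiter is on the far side

speed_of_light = 2 * EARTH_ORBIT_RADIUS_KM / DELAY_SECONDS
print(f"Estimated speed of light: {speed_of_light:,.0f} km/s")
# roughly 227,000 km/s - the same ballpark as the 214,000 km/s quoted above
```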

  The speed of light is so fast it might as well be infinite for everyday purposes. But in astronomy it makes a big difference. The light from Sirius, for example, takes 8.6 years to get here, so when you see Sirius in the sky, you are seeing it as it was 8.6 years ago. If it exploded today, we wouldn’t know about it for nearly a decade. The farther away the star is, the deeper in the past we see it. The Andromeda galaxy, for example, faintly visible to the naked eye as a fuzzy patch of light, contains stars seen by us today as they were two and a half million years ago. The distance light travels in a year, known as a ‘light year’, is a convenient unit in astronomy: one light year is about 10 trillion kilometres.
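
  The ‘about 10 trillion kilometres’ figure is easy to check from the exact speed of light quoted above (a quick sketch in Python):

```python
# How far light travels in one year, using the speed of light quoted above.

SPEED_OF_LIGHT_KM_S = 299_792.458
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # one Julian year

light_year_km = SPEED_OF_LIGHT_KM_S * SECONDS_PER_YEAR
print(f"One light year is about {light_year_km:.2e} km")       # ~9.46e12 km

# Cross-check with Sirius: 8.6 light years comes out near 80 trillion km,
# matching the distance quoted earlier in the chapter.
print(f"Sirius is about {8.6 * light_year_km:.2e} km away")    # ~8.1e13 km
```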

  Back to Olbers’ paradox, then. The finite speed of light transforms the entire argument. Suppose a particular star has been shining for a billion years. If it’s more than a billion light years away we wouldn’t be able to see it anyway, because its first light would not have arrived on Earth yet. So even if the universe is infinite as Newton suggested, we could still see only a limited number of stars – those whose light has had time to reach us. The region beyond that would appear dark.

  Just in the last decade or so, the Hubble Space Telescope has actually been able to penetrate almost as far as the dark zone, beyond which there are no stars. It’s over 13 billion light years away and encloses a volume of space around Earth that encompasses about a trillion trillion stars. I just did a back-of-the-envelope calculation to estimate how much the combined light of all those stars adds up to, and it’s about the same in total as the light cast on Earth by Jupiter but spread evenly across the sky. No wonder we don’t notice it. At the end of 2020, astronomers announced that they had actually managed to measure the integrated starlight of the cosmos, using the New Horizons spacecraft which is zooming away from us beyond the orbit of Pluto (where it is very dark). The night sky turns out to be less than a ten-billionth as bright as the sun, but apparently twice as bright as predicted, for reasons as yet unclear.

  It’s remarkable that it took so long for astronomers to put two and two together in this way. At any time since the seventeenth century they could have drawn the obvious conclusion: the sky is dark at night because the universe cannot always have been the way it is now. Something very different – or perhaps nothing at all – must have preceded it.

  4. The Big Bang

  In Flagstaff, Arizona, there is a famous observatory built in 1894 by a rich businessman, Percival Lowell. His plan was to use the telescopes to look for Martians. In the latter half of the nineteenth century this wasn’t considered totally off the wall. Scientists openly discussed the possibility that Mars was inhabited, and astronomers searched eagerly for signs of life on the red planet. In 1877, an Italian astronomer, Giovanni Schiaparelli, said he could see straight lines, or ‘channels’, on the planet’s surface, and this stoked much speculation about ‘canals of Mars’. Mars fever was brilliantly captured in H. G. Wells’ 1898 science-fiction story The War of the Worlds. Fixated by the notion of Martian engineers, Lowell embarked on his own observations, producing elaborate maps of what turned out to be an entirely fanciful network of canals.

  While Lowell pursued his quixotic quest, rather more conventional astronomy was also being conducted at his observatory. By the late nineteenth century, telescopes had advanced to the point where they could probe far beyond the confines of our Milky Way galaxy. A big issue of the day concerned the large-scale organization of the cosmos. In particular, what were all those nebulae – wispy patches of light – seen scattered across the sky? Were they giant gas clouds located within our galaxy, or were they entire galaxies in their own right, too far away for the individual stars to be discerned?

  In 1909, a Lowell Observatory astronomer named Vesto Slipher set about examining the quality of the light from the mysterious nebulae. The only telescope he had was a relatively modest 24-inch instrument, so this was slow, painstaking, repetitive work. With no fancy electronic gizmos of the sort astronomers have today, all observations had to be done by hand and eye, often with on-the-fly improvisations to coax the best out of the equipment. Slipher analysed the faint nebulous glows with a device called a spectroscope, designed to split light into its constituent colours. He laboured away night after night, often in freezing conditions, recording the results on film. In those days, an astronomer’s lot was not a happy one. Yet the compulsion that drives the Sliphers of this world is that dedicated toil in some small corner of a subject can unexpectedly strike gold. And that’s just what happened at the Lowell Observatory. By 1912 Slipher had assembled enough data to conclude that most nebulae were measurably redder in colour than the Milky Way. Why was that? An explanation was immediately apparent. When a light source is receding at great speed, the emitted light waves are stretched, shifting the wavelength towards the red end of the colour spectrum. Slipher therefore concluded that most nebulae are rushing away from us.

  With hindsight, we can see that 1912 marked the true birth of modern cosmology. But there was no fanfare, no press conference, just a careful technical paper buried in the Observatory’s bulletin. It took several years and many more observations before Slipher’s discovery gained prominence, eventually coming to the attention of a certain Edwin Hubble, a lawyer turned astronomer, rarely seen without a pipe. In 1924, Hubble used the big 100-inch telescope at Mount Wilson in California, then the world’s largest, to detect individual stars in the Andromeda nebula and so measure its distance, proving that Andromeda is in fact another entire galaxy like the Milky Way – a result announced to the world in the New York Times on 23 November 1924. Hubble went on to estimate the distances to another twenty-three galaxies. Then he combined his results with Slipher’s red-shift measurements and glimpsed the outline of something systematic: the farther away a galaxy lay, the redder its light and the faster its recession, roughly in proportion. The simplest interpretation of this pattern, which Hubble published in 1929, was that the universe is gradually expanding, growing larger over a timescale of billions of years (see ‘Hubble wars’, below). It was undoubtedly one of the most momentous discoveries of the twentieth century, and plaudits were soon showered upon the pipe-smoking astronomer, leaving Vesto Slipher as the unsung hero of the expanding universe.

  With the realization that the universe isn’t just there – an unchanging collection of glowing oddments – but is a dynamic system, evolving with time, a host of questions arose about its trajectory. Where had this system come from and where was it heading? What determines how fast the universe is expanding? Could the expansion rate change with time? When did it begin, and will it go on for ever?

  Two implications of the expansion were immediately obvious. First, if the universe is getting bigger, it must previously have been smaller and denser. Second, the effect of gravity – a universal attractive force – would act like a brake on the expansion, slowing the recession of the galaxies as they pulled on each other. Because of this deceleration, the expansion must have been faster in the past. Hubble’s observations weren’t extensive or accurate enough to detect any change in the rate of expansion over the few million years that they encompassed. However, the braking effect of gravity was easy enough to study theoretically, and as early as 1922, the Russian physicist Alexander Friedmann had calculated precisely how the expansion rate would gradually slow, but his deliberations were mostly ignored. Astronomers were flirting with one of the most far-reaching discoveries in the history of science, but such was the tentative nature of these early results that no leading scientist was prepared to take the plunge and state the obvious implication. In the event, it fell to a young Belgian priest and theoretical physicist, Abbé Georges Lemaître, to come out and claim explicitly in 1927 that the universe we now observe must have begun billions of years ago as a ‘cosmic egg’, a state of enormous density that expanded with explosive force. This was the precursor of the modern ‘big bang’ theory.

  Hubble wars

  The rate of expansion of the universe is expressed as a number, known as the Hubble constant, or H. The man himself assigned H a value of 500. (In the peculiar system of units that astronomers prefer, that means a galaxy about 3.3 million light years away is receding on average at 500 kilometres per second.) Given H, and factoring in the braking effect of gravity, you can work out the age of the universe. Hubble’s original value put the universe at only about 2 billion years old – less than half the age of Earth! Astronomers lifted their game and produced estimates of H that progressively pushed back the inferred age, but they were divided into two warring factions. One touted a value of H of 180, the other 55, using the same methods and each insisting that the errors in measurement were far too small to close the gap. The discrepancy was an important issue because the smaller the number, the greater the age of the universe. Eventually, data from the Hubble Space Telescope put paid to the angst. H finally came out at 73 – a nice compromise – making the universe 13.8 billion years old by current estimates. But recently a new discrepancy has surfaced. Measurements of H using cosmic microwave background (CMB) data yield a value of only 67, implying an age for the universe of well over 14 billion years. Does this suggest something seriously amiss with our understanding of basic cosmology? Time will tell.
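
  The link between H and the age of the universe comes from the simplest possible estimate: ignoring the details of how the expansion rate has changed, the age is roughly 1/H, the so-called Hubble time. A minimal sketch of that conversion for the values mentioned in the box (the true age also depends on how gravity has braked the expansion, so these are only rough figures):

```python
# Convert a Hubble constant, in the astronomers' units of km/s per megaparsec,
# into a rough age of the universe via the simplest estimate: age ~ 1/H.
# Detailed age estimates fold in the full expansion history; these are approximate.

KM_PER_MEGAPARSEC = 3.086e19
SECONDS_PER_BILLION_YEARS = 1e9 * 365.25 * 24 * 3600

def hubble_time_in_billion_years(h):
    """Rough age estimate 1/H, converted from seconds to billions of years."""
    return (KM_PER_MEGAPARSEC / h) / SECONDS_PER_BILLION_YEARS

for h in (500, 73, 67):
    print(f"H = {h}: roughly {hubble_time_in_billion_years(h):.1f} billion years")
# H = 500 gives about 2 billion years; H = 73 about 13.4; H = 67 about 14.6
```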

  You might have expected that a declaration of such profound importance would be a scientific, not to mention theological, sensation. Yet again, however, the response was muted. Hubble himself doubted Lemaître’s conclusion. Albert Einstein, by then the world’s greatest authority on gravitation and cosmology, was equally dismissive. ‘Your calculations are correct,’ he wrote to Lemaître, ‘but your physics is atrocious.’ Einstein had also shrugged aside Friedmann’s earlier theoretical effort, and indeed he only accepted that the universe was actually expanding after visiting Hubble in California in 1931. After that, he did a U-turn and backed Lemaître’s work. In spite of this illustrious endorsement, speculation about the origin of the universe wasn’t taken very seriously in the 1930s; indeed, cosmology was hardly even a recognized subject.

  Happily, the theoretical work of Friedmann and Lemaître wasn’t forgotten, though it took another two decades before it was revivified by George Gamow, a defector from the Soviet Union working in the United States. Gamow wasn’t an astronomer; he was a nuclear physicist. It was he who explained the type of radioactivity known as alpha decay. Gamow reasoned that the young, highly compressed universe must have been hot enough to permit nuclear reactions. Consequently, it would have glowed like a furnace – which raised a fascinating possibility. Might a fading remnant of that primordial heat still pervade the universe today, forming a cosmic background of microwave radiation?

  Figure 4. The antenna that first detected the hiss of microwaves emanating from the birth of the universe.

  Did the big bang really happen?

  Not all astronomers accepted the link between the cosmic expansion and an explosive origin. Ironically, the familiar moniker ‘big bang’ was coined by the British astronomer Fred Hoyle in 1949, in the course of dismissing the idea. Hoyle thought Lemaître’s model of a universe exploding into being was nonsense, and developed a completely different interpretation of Hubble’s observations, called the ‘steady state’ theory. The basic idea is that as the universe expands and the galaxies move apart, so new matter is continually created, gradually aggregating into fresh galaxies to fill the ever-growing gaps. As a result, on a very large scale the universe would look much the same for ever – a mix of old and new galaxies sustained by a process of continual replenishment. There would be no beginning and no end, and no hot, dense primordial state.

  Hoyle fought fiercely for his theory, marshalling a band of loyal supporters. For about twenty years the two theories rivalled each other for support. But then the knockout blow came. The discovery of the CMB had no credible explanation within the steady state model, and support for it rapidly dwindled.

  I went to work with Hoyle in Cambridge at this critical juncture. Here was this world-famous astronomer and public celebrity, known for his science-fiction novels as well as his research, strangely isolated, casting around for some way to rescue the essence of his theory, perhaps by treating the big bang as an interlude rather than an absolute origin. He gave up on the idea that particles of matter were continually created in a thin soup throughout space in favour of concentrated ‘creation centres’. In the 1960s, highly compact objects called quasars were discovered that fling out huge quantities of energetic material. Hoyle believed these energetic sources were cosmic spigots, pouring brand new matter into the universe. But the simplicity of the original steady state concept was lost, and in 1972 Hoyle resigned his Chair at Cambridge in disillusionment and became something of a recluse, tucked away in a remote cottage in Cumbria, where he filled his days fell-walking and sniping at the scientific establishment.

  Gamow was on the right track. In 1964, two scientists working on satellite communications at Bell Laboratories in New Jersey – Arno Penzias and Robert Wilson – accidentally came across this remnant heat, bathing the universe at a temperature of about 2.7 degrees above absolute zero (absolute zero is about −273°C). It showed up as an annoying hiss in their receiver (see Figure 4). All attempts to explain it away as a defect in the equipment (including pigeon droppings in the antenna) failed, and the only explanation left was that the disturbance came from outer space. This was the big bang’s smoking gun.

  Suddenly, cosmology was propelled into the scientific mainstream, and began to attract some of the brightest minds in physics and mathematics – the likes of Roger Penrose and Stephen Hawking. The origin of the universe in a big bang was at last taken seriously and became the focus of intense theoretical analysis. Astronomers began to clamour for better observations of the cosmic microwave background – today referred to simply as the CMB – confident that it contained crucial clues about the early universe. Because of atmospheric absorption, a decent view of the CMB needs a satellite, and that’s what they got. In November 1989 NASA launched COBE, and, with it, cosmology’s golden age.