The fourth Oxford mini-course and workshop on the philosophy of cosmology took place last month. It was another event made possible by a grant from the Templeton Foundation, which has funded two large projects on the philosophy of cosmology at Rutgers and Oxbridge (see a previous post on this blog about another such event, the Summer Institute for the Philosophy of Cosmology). The meeting brought together physicists and philosophers and invited them to weigh in, over three days of lectures and one day of workshop, on the question of fine-tuning in cosmology. Here is a glimpse of some of the many topics discussed: the role of anthropic bias, the theoretical frameworks in which fine-tuning and the multiverse can be articulated, and the philosophical character of these debates.¹
In his introductory remarks, Joe Silk summed up the main object of inquiry: in Big Bang cosmology, only a few numbers are needed to characterize the structure of our universe. They fix the properties of elementary particles, the basic forces and their relative strengths, the properties of space itself, and the size, overall ‘texture’, and evolution of our universe (see, e.g., Martin Rees's book Just Six Numbers). Is the universe fine-tuned? Where do these numbers come from? Are they the result of selection effects, or mere coincidence?
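For concreteness, the numbers Rees singles out in Just Six Numbers (my summary of the book, not part of Silk's remarks) are roughly:

$$
\begin{aligned}
N &\sim 10^{36} &&\text{(ratio of the electric to the gravitational force between two protons)}\\
\varepsilon &\approx 0.007 &&\text{(fraction of mass released as energy when hydrogen fuses to helium)}\\
\Omega &\approx 0.3 &&\text{(cosmic matter density, in units of the critical density)}\\
\lambda &\approx 0.7 &&\text{(cosmological constant, in the same units)}\\
Q &\sim 10^{-5} &&\text{(amplitude of the primordial density fluctuations)}\\
D &= 3 &&\text{(number of spatial dimensions)}
\end{aligned}
$$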
What role for anthropic bias and selection effects?
Nick Bostrom argued that, in the absence of evidence to the contrary, one should assume, in the spirit of the Copernican principle, that one is a random sample from one's reference class (the “self-sampling assumption”). This assumption, Bostrom explained, must be supplemented with further rules (namely the “self-indication assumption”) in order to avoid its paradoxical implications.
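A rough gloss of these two assumptions (my schematic rendering, not Bostrom's notation): if a hypothesis $H$ says that $N_H$ observers in one's reference class exist, the self-sampling assumption assigns

$$ P(\text{I am observer } k \mid H) \;=\; \frac{1}{N_H}, \qquad k = 1, \dots, N_H, $$

while the self-indication assumption further weights hypotheses by how many observers they posit, $P(H \mid \text{I exist}) \propto N_H \, P(H)$. The second rule is what blocks Doomsday-style conclusions that self-sampling alone would license.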
During a Q&A, John Barrow objected that the point of anthropic selection effects is not to drive our theoretical inquiries but to keep us from drawing unwarranted conclusions from existing evidence: what appear to be coincidences (e.g., the fine-tuning of physical constants for life) may simply be due to selection effects. One could conclude from Jean-Philippe Uzan's talks, however, that determining anthropic selection effects isn't easy, since current observational constraints on the fundamental constants make it difficult to delimit an anthropic range.
Relatedly, one of the claims made by our own Chris Smeenk (drawing, e.g., on the work of Radford Neal) was that anthropic reasoning can be captured by standard accounts of theory assessment, such as the principle of total evidence in Bayesian confirmation theory, thus undercutting the need for new principles.
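Schematically (my rendering of the Bayesian point, in the spirit of Neal's “full non-indexical conditioning”, not a formula from the talk): rather than adopting a new anthropic principle, one assesses a theory $T$ by conditioning on the total evidence $E$, including the detailed fact that observers like us, making exactly these observations, exist:

$$ P(T \mid E) \;=\; \frac{P(E \mid T)\,P(T)}{\sum_{T'} P(E \mid T')\,P(T')}. $$

Selection effects then enter through the likelihood $P(E \mid T)$, with no extra machinery required.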
Fine-tuning in the multiverse
Wondering about the fine-tuning of physical constants presupposes that their values could be explained by a more fundamental theory. One of the themes Nima Arkani-Hamed raised is how String Theory might provide such a theory. But even though String Theory is a unique theory, it has “a zillion solutions” (zillions of vacua). In this context the multiverse, wherein different regions realize distinct vacua, arises immediately. Yet we don't even know what the correct observables are for quantum cosmology, nor how to verify claims about quantum gravity.
In his overview of inflationary cosmology, Andrew Liddle argued that the multiverse and anthropic bounds are worth considering because Weinberg (1987) used them to predict a non-zero cosmological constant, which was later observed. But it's not clear that Weinberg's result (also discussed by John Peacock at the conference) counts as a prediction, nor is it obvious that it vindicates appeals to String Theory and anthropic bounds, since these could have accommodated very different observed values of the cosmological constant. In this respect, Chris Smeenk argued that the versatility of anthropic predictions in the multiverse (with regard to the choice of measure or to initial conditions in the early universe) may be a vice rather than a virtue.
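For context, a simplified statement of Weinberg's 1987 argument (my summary, not a formula from the talks): a vacuum energy much larger than the matter density at the epoch of galaxy formation would have halted structure formation, so

$$ \rho_\Lambda \;\lesssim\; \rho_m(z_{\mathrm{gal}}) \;=\; \rho_{m,0}\,(1+z_{\mathrm{gal}})^3 \;\sim\; \text{a few hundred} \times \rho_{m,0} \qquad (z_{\mathrm{gal}} \sim 4\text{–}5). $$

The observed value, roughly $2\,\rho_{m,0}$, lies within this window but only a couple of orders of magnitude below the bound, which is part of why its status as a prediction is contested.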
Fay Dowker's talk on causal set cosmology made clear how this model may provide alternative explanations that dispense with fine-tuning (see also the work of Rafael Sorkin at the Perimeter Institute).
At the edge of science?
To some, discussions about the multiverse may appear speculative at best (or “philosophical”, as some physicists at this conference put it). George Ellis and Bernard Carr debated whether a line should be drawn between science and philosophy, and whether such a distinction would settle whether questions about the multiverse belong to legitimate science. The panel of philosophers at the conference claimed that methodological discussions aren't necessarily more speculative than empirical inquiry, particularly because an important part of them can be done a priori (see, e.g., the work of Jesús Mosterín, who was at the conference, or Tim Maudlin's recent article in Aeon).
Worries about the demarcation between science and philosophy aside, Andrew Liddle argued that even if the past successes of appeals to the multiverse and anthropic constraints turn out not to be genuine, this would not undermine the scientific motivation for such inquiries: these investigations simply extend the domain of the theories they build on. Moreover, the difficulties surrounding questions of fine-tuning, anthropics, and the multiverse (problems of predictive power and of discriminating among possible outcomes) already arise in areas of cosmology considered more standard, namely inflationary cosmology.
¹For the talks mentioned here that weren't filmed, I link to relevant articles instead. All videos from this conference are available on the Oxbridge Philosophy of Cosmology YouTube page.