Interview by Philippos Papayannopoulos
Michael Cuffaro is a philosopher of science who thinks about philosophical issues in quantum mechanics, about the nature of computing, about the interplay between them, about the notion of explanation in science, about the complexity of algorithms, about Solomonoff Induction (I don’t know what this is either, but we’ll soon find out!), and about music and songwriting.
He is a postdoctoral research fellow at the Rotman Institute of Philosophy and an external member of the Munich Center for Mathematical Philosophy at LMU Munich. He holds a Bachelor’s degree in Computer Science and a Master’s degree in Philosophy, both from Concordia University in Montreal. He did his philosophy Ph.D. at Western University and then spent two years as a postdoctoral research fellow at the Munich Center for Mathematical Philosophy in Germany.
So, let’s go..!
PP: You have a Bachelor’s degree in Computer Science and long professional experience in software development. What made you turn to philosophy after training in such a different field?
MC: I’ve had an interest in philosophy for a very long time. While studying for my undergraduate degree in computer science I did a minor in philosophy. And then after graduating and beginning to work in industry I would occasionally enroll in philosophy courses in my spare time (about one a year if my work schedule would allow it —often it wouldn’t). Eventually I moved to another company which was physically a bit closer to Concordia University. I took the opportunity to sign up for two graduate seminars: one on 19th century language and logic with Greg Lavers, the other on the philosophy of cognitive science with Murray Clarke. I had never taken a graduate seminar before, and I was pretty much instantly hooked both by the topics (I fell head over heels for Frege) and by the advanced level of discussion in those classes. By the end of the semester I decided that I just had to apply to Concordia’s master’s program. I spoke to Murray and Greg about it, and they encouraged me, so I applied and was admitted. Even then, the thought hadn’t occurred to me to go on to do a Ph.D. Murray suggested it one day, and I thought, why not? Greg suggested Western. I applied, got in, and I’ve continued on from there.
I’ve never lost interest in computer science and programming, though. Partly that’s just because I absolutely love to code. But also it’s because I think computer science just is philosophy (but not vice versa!), in the sense that the entire field was kind of directly born out of the philosophical discussions about the foundations of mathematics that took place around the turn of the last century —and especially out of Alan Turing’s absolutely beautiful philosophical analysis of the concept of an effective method as carried out by a human being. Turing’s paper, in my opinion, is a shining example of the power of philosophy; it’s the kind of philosophical analysis that philosophers should be trying to emulate. Alan Turing was a philosopher of the very highest calibre.
My impression is that many in philosophy departments forget this, and think of the study of computation as something very foreign to what philosophy is. But it’s not. It’s absolutely not. From programming language design, to manipulationist frameworks, to the legal and ethical status of intellectual property, to mathematical logic, to quantum computation, to AI, to computer simulation, to ideas about explication, there is a treasure house of potential connections and contributions that can be made to topics that philosophers care about. And computer scientists and programmers themselves are, in my experience, very philosophically minded people who think about many of these topics every day —they have a lot in common with philosophers, and philosophers should be engaging with them more, both with more eminent figures like Scott Aaronson, Judea Pearl, Richard Stallman, and so on, and also just with the general community.
PP: Your research combines two apparently disparate philosophical areas, namely the philosophy of physics and the philosophy of computation. Traditionally, the questions addressed in these two areas are taken to be unrelated, in the sense that philosophy of physics is concerned, for example, with conceptual and interpretational issues in our physical theories, whereas philosophy of computation is mostly concerned with topics such as the nature of computing, computer ethics, AI, algorithmic complexity, information theory, etc. Your research, however, seems to connect these areas. Can you explain what bridges your research builds between these two fields?
MC: Physics and computing connect in a bunch of different ways, but the particular bridge that I stand on mostly is quantum computation and information. It’s a field that I was introduced to, right after starting my Ph.D. at Western, by Wayne Myrvold in a wonderful class I took with him on the philosophy of quantum mechanics. Quantum computation and information is nice because on the one hand it’s just another branch of computer science. The point is to design algorithms and to build machines that can do things like solve equations and communicate messages. On the other hand, because we are explicitly considering what we can do with quantum systems in particular, we need to take account of the physics of those systems to see how it constrains us. Clearly this science is able to illuminate much in both physics and computer science, and elaborating on just how it does this is what I am in the business of doing.
Now, I don’t think quantum computation and information is going to help us at all with getting a clearer picture of the ontology of physics or of the world —or let me put it a little more weakly: this is not where I think the primary value of the study of quantum computation and information is. I’ve argued that it’s false, for example, that quantum computation provides evidence for the `Many worlds interpretation’ of quantum mechanics. I’m convinced that it’s also false that the world –the physical world– is really, deep down, made up of information in some way. Going in the other direction: quantum computation and information do not show us that computer science is, after all, just a branch of physics in any interesting sense.
But what I do think is that by reflecting on the interconnections between computer science and physics, as they are manifested in quantum computation and information theory, we can get a better understanding of just what it is we are doing in these two disciplines —what the basic presuppositions of the kinds of inquiry represented by these two disciplines are. Because if we don’t understand this then we are apt to get confused. For example, there are `no-go theorems’ –and they really are theorems– to the effect that certain types of correlations between quantum mechanical systems can’t be reproduced by certain kinds of classical physical theories. These theorems basically rule those classical theories out. Nevertheless, it’s possible (and practicable) to build classical physical systems to evade some of those theorems. The trick is that to do so you have to build in behaviour that would be too conspiratorial to believe if it were part of a physical theory. So what this shows is that results in quantum computation and information theory can’t just be blindly applied to interpretational debates in physics —the fact that we can build classical systems to evade them doesn’t show that the no-go theorems are invalid. Similarly what the no-go theorems say in the context of these interpretational debates shouldn’t be blindly applied to the kinds of things we are trying to achieve in quantum computation and information theory.
PP: Besides combining the philosophies of physics and computation, your research also draws connections between computation and problems in the general philosophy of science. Some of your interests have to do with the idea that examining characteristics of computer algorithms can help us better understand the concept of “scientific explanation” as well as illuminate the various conceptions of “mechanism” we have. This is an interesting idea and a way of approaching the subject which is different from the traditional one. Can you tell us a bit more about how exactly this works? Or, to put it more ‘provocatively’, why should philosophers of science, who are interested in explanation, also care about computability?
MC: Sure. Philosophers have, for a long time, debated about the concept of explanation as it is used in science: How do we recognise an explanation when we see one? What is an ideal explanation like? And so on. This debate is important because a proper account of explanation can help us to know when a question has been answered, what questions still need answering, and so on. So getting clear on what scientific explanation consists in can help to guide scientific research and help us to understand how science progresses.
One distinction that philosophers sometimes make is between `how-possibly’ explanation and `how-actually’ explanation. When we explain `how-actually’, we describe how something actually came about. For example, your mother might have asked you, when you were eight, how it is that your neighbour’s window broke. And if you were honest you’d describe how the baseball travelled a little further than you intended after you hit it with your bat, and how it flew toward the window and broke through it. If you’d had some scientific training at that time, that is, when you were eight, you might have gone into a little more detail, describing the trajectory of the ball and the physical properties of the ball and the window, etc., that caused the window to break.
A how-possibly explanation, on the other hand, is something like a story, not necessarily causal, about how something might have occurred. For example, you might ask me: how is it possible for the human eye to have evolved in the way it has? And if I had some knowledge of the evolutionary record and of the latest research on the topic, etc., I might then be able to give you some possible scenarios, each of which would represent a how-possibly explanation for the evolution of human vision. But note that I haven’t given you the actual story of how it occurred. Partly for this reason, many think that explaining how-possibly really isn’t explaining at all, or at best that it’s only a first step on the road to a how-actually explanation.
Lately I’ve been focusing on something that I’ve begun to call `algorithmic how-possibly’ explanation: the idea here is that, for some questions involving the properties of algorithms –for example, `why is algorithm A faster than algorithm B?’– it’s best to think of them as how-possibly rather than how-actually questions. Because if we want to answer the question of why algorithm A runs faster than algorithm B, the best thing to do is to look at how each algorithm is defined, that is, at their code. And when we do that we see that the code traces out a number of pathways through their respective possibility spaces. And when we consider the possible pathways traced out by algorithm A, and compare them to the possible pathways traced out by algorithm B, we understand how A is able to finish in fewer computational steps.
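To make this kind of comparison concrete, consider a toy example that is not from the interview itself: a linear scan versus binary search over a sorted list. Reading each algorithm’s code, you can see that binary search halves its remaining possibility space at every step, so any pathway it can trace is exponentially shorter than the pathways a linear scan can trace. A minimal, illustrative Python sketch that simply counts comparison steps:

```python
# Toy illustration (not from the interview): why is algorithm A (binary
# search) faster than algorithm B (linear search)?  Looking at the code,
# binary search halves its remaining possibility space at every step, so
# its pathways are at most ~log2(n) steps long; a linear scan's pathways
# can be n steps long.

def linear_search(xs, target):
    """Scan left to right; return (index, number of comparisons)."""
    steps = 0
    for i, x in enumerate(xs):
        steps += 1
        if x == target:
            return i, steps
    return -1, steps

def binary_search(xs, target):
    """Repeatedly halve a sorted list; return (index, number of comparisons)."""
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid, steps
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

if __name__ == "__main__":
    data = list(range(1_000_000))      # sorted input
    _, slow = linear_search(data, 999_999)
    _, fast = binary_search(data, 999_999)
    print(slow, fast)                  # roughly 1,000,000 vs about 20 comparisons
```

The point is not the measured numbers but that the answer to `why is A faster than B?' is read off from the possible execution pathways the two definitions allow.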
‘So what?’ you might say. Why should philosophers of science care? Well, they should care because many natural processes are algorithmic, and it’s plausible that many questions we want to ask about such processes can be fruitfully thought of as how-possibly questions, although I don’t think all of them can. And understanding what form a question takes is of course helpful in answering it.
PP: This is very interesting, indeed! In the opposite direction, now, you are also interested in how physics can illuminate some of the concepts and methods in the scientific field of computational complexity. Typically, these two disciplines follow different routes; to give a characteristic example, one could hardly find a physics course in a typical undergraduate computer science program. So, in what sense can physics inform us about complexity, and how do these two topics come together in your work?
MC: It depends on the computer science program! You’d be quite likely to find a physics course, or anyway a substantial amount of physics content, as part of the computer science program at the University of Waterloo or the University of Montreal, for example —at least in their graduate programs. In all seriousness, though, I do agree with you in the following sense: I think that some philosophers, and physicists and computer scientists, have made way too much of the apparent earth-shattering impact that physics –in particular physics as it’s used in quantum computing– has had on the foundations of computational complexity theory. For instance, you’ll often find claims to the effect that quantum computing overturns the foundations of computational complexity theory, forces a radical revision of its basic conceptual structure, and so on. I think this is hogwash.
The reason people say this is that, generally, computer scientists like to strive for model-independent definitions. In the context of computational complexity, one way you can have such a thing is if you have one model –say the Turing model– that can solve any problem that any other model can, and just as easily. In that case you can just not bother to mention the model at all when you talk about how easy or hard something is. What I mean is that in every case where you say `this problem is computable easily by model M', you can instead say `this problem is computable easily by a Turing machine', since a Turing machine can do everything that M can, and just as easily as M can do it. And if this is true for every model M, N, O, or whatever, then every one of those sentences which mentions one of those models can be changed so that it has `by a Turing machine' at the end of it. But repeating `by a Turing machine' at the end of every sentence is redundant. So from now on you can just not bother to mention a model at all anymore and say instead: `this problem is computable easily'. And there you have something like a model-independent definition of `easily computable'.
Now you might object that that’s just linguistic trickery: Just because one hasn’t bothered to add `by a Turing machine' to one’s sentence doesn’t mean it isn’t implicit in what one is saying. It doesn’t mean that the concept `easily computable' really is model independent. And of course if you objected this way you’d be right. But there’s more to the story. The Turing machine model, it’s claimed, is somehow special. Many computer scientists –Cobham, for example– have thought that the Turing machine model represents the minimal computational properties that any computational model should have. And if that’s true it seems kind of more legitimate to call complexity-theoretic concepts model-independent if they are defined with respect to Turing machines.
Now here’s why people think that quantum computation overturns complexity theory’s foundations: If quantum computers are able to efficiently solve problems that Turing machines can’t, then all of this model independence talk seems to go out the window. Because at best you will have `model independence' only in the first sense that I talked about; that is, every sentence will be of the form `problem P is easy for a quantum computer', and then you will just omit `for a quantum computer' since it is redundant. But that’s not model independence in a very substantive sense. And so quantum computing seems to break the model independence of complexity-theoretic concepts.
But does this really break the complexity-theoretic paradigm? I want to argue that it doesn’t. Model independence is a nice-to-have feature of complexity theory, but it isn’t essential to the theory. Ultimately complexity theory’s fundamental aim isn’t to somehow `get at’ some `thing’ called `efficient computation’ that exists independently of us out there in the world. As Edmonds pointed out decades ago, the fundamental –normative– aim of the theory is rather to help us to design algorithms and to help us build machines to realise our practical purposes. For these purposes the model independence of our concepts is convenient, but not necessary.
So how do I think quantum computation, and physics in general, contribute to complexity theory? By reminding us of this.
PP: Currently you are working in collaboration with Prof. Markus Müller on a project titled: “Emergent Objective Reality: From Observers to Physics via Solomonoff Induction”. What exactly is this project about?
MC: I’ve only just started the project and am still learning about it from Markus, so I can’t give you a whole lot of detail at this point, but I’ll do my best to give you a general overview.
Traditionally, physical theories begin with an ontic picture of the world, in the sense that we presuppose the existence of an objective world out there that evolves according to physical law. We test our theories by experimenting on the world and comparing the results of our experiments with our theoretical predictions.
The problems arise when we have a theory whose predictions are inherently probabilistic, like quantum mechanics. To make sense of these probabilities from within the ontic picture, we seem to have two main options: we can take these probabilities to reflect either (a) objective features of the world, or (b) a limitation of our knowledge of the world. But in either case this raises some conceptual puzzles. For example, neither option seems to give a completely satisfactory explanation of the kinds of probabilistic correlations that you find in general in quantum mechanics between two or more systems. At least there is nowhere near a consensus that any particular option is satisfactory.
The emergent objective reality project starts by putting aside the fundamental assumption of an ontic picture of the world, and instead begins with the notion of an `observer’ as fundamental. If you’re familiar with Kant this should remind you of his `Copernican revolution’ in philosophical method. To be just slightly more precise, what we ask is the question: ‘What if the probabilities are actually fundamental, and physics as we know it is an emergent phenomenon?’ And the main tool we use to go from probabilities to physics is Solomonoff Induction, a method of inference that is borrowed from algorithmic information theory.
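Solomonoff Induction, roughly, predicts the continuation of a data sequence by averaging over all programs (for a fixed universal machine) that reproduce the data seen so far, weighting each program p by 2^(-|p|), so that shorter, simpler hypotheses dominate; the full method is incomputable. The following resource-bounded Python sketch is only an illustration of that length-weighted mixture, not the formalism used in the project: it substitutes simple repeating bit patterns for arbitrary programs.

```python
# Toy, resource-bounded sketch of Solomonoff-style prediction.  This is
# an illustrative stand-in: real Solomonoff induction mixes over all
# programs for a universal Turing machine, weighting each program p by
# 2**(-len(p)), and is incomputable.  Here the "programs" are just
# finite bit strings interpreted as repeating patterns.

from itertools import product

def generates(pattern, observed):
    """Does repeating `pattern` reproduce the observed prefix?"""
    return all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed)))

def predict_next(observed, max_len=8):
    """Mixture prediction for the next bit: each consistent pattern p
    contributes prior weight 2**(-len(p)) to the bit it predicts."""
    weights = {"0": 0.0, "1": 0.0}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            pattern = "".join(bits)
            if generates(pattern, observed):
                next_bit = pattern[len(observed) % len(pattern)]
                weights[next_bit] += 2.0 ** (-len(pattern))
    total = weights["0"] + weights["1"]
    if total == 0:                      # no consistent pattern found
        return {"0": 0.5, "1": 0.5}
    return {b: w / total for b, w in weights.items()}

if __name__ == "__main__":
    print(predict_next("010101"))
```

Run on the observed string `010101', the mixture strongly favours `0' as the next bit, because the short pattern `01' carries most of the prior weight.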
Now I want to emphasise that it’s clear –i.e., there are theorems that show– that we can’t capture all of physics this way. We can capture a surprising amount of it, though, and the point of the project is partly to –as a sort of proof of principle– see just how much we can capture, and to consider what the philosophical and mathematical significance of this is. For example, there might be methodological implications: some of the methods developed in the project might prove useful in theory development and discovery. We will be exploring the metaphysical and epistemological implications of this line of inquiry as well —possible connections to the metaphysical viewpoint called ‘structural realism’, for example. And certainly many of the formal results obtained will be of independent interest to algorithmic information theorists.
PP: Besides the above research interests, you also have publications in other philosophical areas, such as the history of philosophy and legal and political philosophy. What sparked your interest in such different philosophical areas?
MC: Actually, my interest in most of those areas predates my interest in the philosophy of physics and computer science. It’s hard to say exactly what it was that piqued my interest in them all. Partly I think it is Kant. Early on in my graduate student career I was introduced to Kant by my professor, Vladimir Zeman (who sadly passed away not that long ago). I read each of Kant’s three critiques. The first critique was the most difficult thing I have ever read in my life. That’s not hyperbole, and I am including all of the technical physics and computer science books I have ever read when I say that. The first critique was also probably the most rewarding thing I have ever read. It’s no secret that I remain basically a Kantian to this day, or maybe some sort of neo-Kantian. It’s been a few years now since I last contributed to Kant scholarship, but Kant’s philosophy permeates pretty much everything that I do philosophically. And I guess partly because he was so systematic and because he contributed to so many areas, through him I find that I have things to say about many different areas as well.
But the other, probably more basic reason, is that in all of the different areas that I have worked in, the particular topics that tend to attract me the most are the very conceptual/logical ones; for example the conceptual distinctions between legal positivism and natural law, the foundations of logic and mathematics, the ontological argument, social contract theory, the conceptual structure of physics and computer science, and so on. This is exactly the kind of subject matter, because it’s so abstract, that’s easiest to transpose from one domain to another.
PP: Reaching the end of this interview, I’d like to ask you about some other aspects of your life as well. You play a variety of musical instruments (such as bass, guitar, etc.), you write music and lyrics, and you sing (and I became a fan of your music, I should admit). What role does music play in your life? Would you say that your musical and philosophical works are somehow related, or do they inhabit separate places in your mind?
MC: When I’m in London I jam about once a week or so with a few friends that I met when I first moved here years ago. Howard, the drummer, was talking one night a few years back about how daily life can be a drag sometimes, or something like that. But then he said, `But this time, you get for free. This time, you don’t have to pay for.’ What he meant by ‘this time’ was the time spent playing good music with good friends. That’s the role music plays in my life. I’ll always make time for it. That time doesn’t come at a cost.
I would not say that my music and my work in philosophy are related, nor do I think they should be. I love music for its own sake and philosophy for its own sake. Don’t get me wrong: obviously my sense of music and rhythm influences my work in philosophy (especially my writing style), and obviously working as a professional philosopher makes it kind of unavoidable that my thoughts and opinions on various philosophical topics find their way into the music that I play and compose. But the very thought of connecting these two worlds in a more conscious and substantive way just makes me recoil.
PP: What genre would you put your music under? What would you say your musical influences are?
MC: I honestly don’t know what genre my music falls under. Genres have become so specific nowadays and I’m not sure what label would fit. I’m not saying that nothing would fit, just that I don’t think it’s worth the mental effort to figure out what. I guess there is a bit of country in it, a bit of blues, a bit of ska, a bit of rock. Maybe other things too.
As a bass player my main influences are, first and foremost, Paul McCartney, then John Paul Jones, then Noel Redding, also Jimi Hendrix, strangely enough, since he’s actually a guitarist, and probably a few others but those are the ones that come to mind immediately. In terms of songwriting, sometimes I listen to something I’ve recorded and I find that a particular part reminds me a little of The Doors. Sometimes I hear The Beatles. Sometimes Pink Floyd. Sometimes Robert Johnson. Sometimes other people.
I love blues –I don’t mean a lot of the cookie-cutter stuff that passes as blues today– I mean Muddy Waters, Howlin’ Wolf, John Lee Hooker, Robert Johnson, The Band, Paul Butterfield, Jimi Hendrix, BB King, Stevie Ray Vaughan. I love `British blues’ a lot: Eric Clapton, Van Morrison, John Mayall, the Rolling Stones, the Beatles, Led Zeppelin, Pink Floyd. Don’t get me wrong, there is a lot of wonderful blues today too –if you listen to CBC on a Saturday night you will hear some of it– but I’m not up to date with the scene and can’t remember any artists’ names right now.
I love Bob Dylan.
PP: A Nobel laureate, very recently! Putting music aside, one of your favourite quotes in general is “In much wisdom there is much sorrow, and he who stores up knowledge stores up grief.” To what degree does this quote from Ecclesiastes resonate with you? Isn’t that, in a sense, what philosophers and scientists try to do, to accumulate knowledge?
MC: Ecclesiastes is in many ways a very pessimistic book. I guess this goes without saying, given that quote. A big part of the book’s message is that the universe is utterly indifferent to our best efforts and intentions. As Qoholeth, its stated author, says, “All is vanity and a chase after the wind.” (1:14). Thus seeking after wisdom and knowledge, in the end, will invariably bring this message home to us all the more.
This does ring true to me, at least in some respects. But there is another message that I take from this quote as well: it’s that one should seek wisdom for its own sake, not because it will bring you happiness, but despite the fact that it may –or, if Qoholeth is right, will– bring you unhappiness. Knowledge and especially wisdom are more important than happiness or unhappiness. I don’t know if that’s a more positive message than the first one or not, but it does resonate with me.
PP: Six desert island philosophical books and music albums?
MC: These lists are really tough to put together. If you asked me tomorrow, odds are they’d change a bit. But here they are as of October 20, 2016!
Philosophical books (in no particular order): Kant’s first critique, Frege’s Foundations of Arithmetic, Rawls’s Theory of Justice, Heisenberg’s Physics and Beyond, Cassirer’s Substance and Function, Arendt’s The Origins of Totalitarianism (I’ve so far only read a small bit of this last one, but a desert island is just the place to finish it, I think).
Albums (also in no particular order): Robert Johnson’s complete recordings, Abbey Road, The Last Waltz, Blonde on Blonde, L.A. Woman, Led Zeppelin III.
This is the second interview in our series introducing new postdocs for the 2016-2017 academic year. The first interview was with Marc Holman. Additional interviews will be published soon.