I've occasionally pondered, or at least tried to ponder, the phenomenon known as "quantum weirdness"—the just-about inconceivable, Twilight Zone-y way physical reality can behave at a fine-grained level, the level of electrons and photons and other tiny things. But this weirdness has never seemed as tangible as it did after I finished reading your excellent new book A Shortcut Through Time: The Path to the Quantum Computer.
For starters, I hadn't realized that last year someone built a computer that actually harnesses quantum weirdness. Now, granted, "built" is a slightly misleading term. This "computer" consisted of seven atoms (a subnotebook, I guess you'd say) that stayed in existence long enough to find that the factors of 15 are, as I'd long suspected, three and five. As you stress in your book, it will be a while before quantum computers show up in Dell ads, or even before we know whether they're really practical.
Still, it seems to me that even the modest success of that seven-atom quantum computer drives home the metaphysical weirdness of quantum physics in a whole new way. (I mean "metaphysics" in the philosophical sense, not the Shirley MacLaine sense, though sometimes quantum metaphysics seems only slightly less weird than MacLaine metaphysics.) So, before we discuss the social implications of a future in which the massive power of quantum computing is commonplace (for example, currently invulnerable encryption schemes being easy to crack), I'd like to find out if you agree with me on this point: Would you say that quantum computing has already proved philosophically unsettling in a more dramatic and pointed way than those famously unsettling textbook quantum physics experiments?
Take, for example, the classic "two-slits" experiment, which you recount very nicely in your book. I'll spare readers the details and cut to the apparent upshot: Until an electron's position is measured, the electron has no distinct existence as an electron per se. I've heard the electron's nebulous pre-measurement existence described in various ways: a) The electron is a wave, not a particle; b) the electron is in a number of different places at once; c) the electron is nowhere.
Now, as I understand it (and I don't, by the way), you can interpret proposition a in a way that is consistent with proposition b or in a way that is consistent with proposition c. If you opt for b—that the electron is in various places at once—then you can view the wave in a as an outline of the various places the electron is. And if you buy c—the electron is nowhere—then you can view the wave in a as a probability curve, defining the likelihood that the electron will show up in the various places it may show up once it actually comes into existence upon being measured.
Until I read your book, my layperson's intuition favored the latter interpretation, in which the pre-measurement electron doesn't exist. I know that's hard to imagine, but not as hard as imagining the electron being in more than one place at a given point in time. There are lots of things that don't exist until they exist: a snowflake, a car—but I don't know of any things that exist in more than one place at once.
However, as I reckon with quantum computing, I find myself trying to warm up to this idea of the electron's simultaneous existence in more than one place. Because one key to the power of quantum computing is that a single electron (or other tiny particle) can be involved in various calculations at once. And this multitasking isn't like a person walking and chewing gum at the same time. It's like a person walking, running, and skipping at the same time and thus finding out which mode of transportation got them to their destination fastest.
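(A toy picture, for readers who like to see these things written down: physicists describe a superposed state as a list of amplitudes, one per possibility, with the chance of observing each possibility given by the squared magnitude of its amplitude. The walking/running/skipping labels below are my own illustration, not anything from the book, and this classical sketch captures only the bookkeeping, not the physics.)

```python
import math

# Three "basis states" for our multitasking traveler. In an equal
# superposition, every state gets the same amplitude, chosen so that
# the measurement probabilities sum to 1.
states = ["walking", "running", "skipping"]
amp = 1 / math.sqrt(len(states))
amplitudes = [amp] * len(states)

# Probability of observing each state = squared magnitude of its amplitude.
probabilities = [a * a for a in amplitudes]

for s, p in zip(states, probabilities):
    print(f"{s}: {p:.3f}")  # each is about 0.333

assert abs(sum(probabilities) - 1.0) < 1e-9
```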
Before I took quantum computers seriously, my reaction to the claim that a single particle was doing various different things at once might have been a sarcastic: "Oh, really? And tell me, what exactly are the different things that it's doing?" But now we seem to have a powerfully vivid answer: In the case of a quantum computer, at least, a given particle is involved in simultaneously trying out different solutions to a problem—like what pairs of numbers, when multiplied, yield the number 15.
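(To make the problem itself concrete, here is what "pairs of numbers that multiply to 15" amounts to, written as an ordinary classical loop. This is emphatically not how the quantum machine did it; Shor's algorithm works by period-finding and interference, not trial division. This is just my sketch of the question the seven atoms were answering.)

```python
def factor_pairs(n):
    """Return all pairs (a, b) with a <= b and a * b == n."""
    pairs = []
    for a in range(1, int(n ** 0.5) + 1):  # only need to try up to sqrt(n)
        if n % a == 0:
            pairs.append((a, n // a))
    return pairs

print(factor_pairs(15))  # → [(1, 15), (3, 5)]
```

A classical computer checks these candidates one at a time; the quantum computer's trick is, loosely speaking, to entertain them all at once.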
So, George, do you buy what I'm saying: that quantum computing, to the extent that it's successful, makes more concrete, more undeniable, the weirdness of the quantum world—and that it favors particular interpretations of that weirdness?
I also have a bonus Shirley MacLaine question. The scenario I've been outlining, in which a particle has more than one existence at once, is, as I understand it, roughly what the revered physicist Richard Feynman meant by his "many paths" interpretation of the two-slits experiment. But, in addition, there's the really weird "many worlds" interpretation of the whole quantum situation. In this scenario, I gather, quantum fluctuations cause the universe to keep splitting into parallel, alternative universes. So, right now, in one such universe, there may be a George Johnson who didn't write a book on quantum computing and so isn't engaged in a Slate dialogue—not to mention the arid, meaningless universe in which Slate doesn't even exist.
You say in your book that one physicist, David Deutsch, thinks that if a quantum computer of any real size is ever built, the "many worlds" interpretation will have been proved true. Off the record: Do you think he's nuts?