The third condition for a machine takeover would be the existence of independent robots that could fuel, repair, and reproduce themselves without human help. That's far beyond the scope of anything that now exists. While our real-world robots have become very capable, they all still need humans. For instance, the Global Hawk drone, the replacement for the manned U-2 spy plane, can take off on its own, fly to a destination 3,000 miles away, and stay in the air for 24 hours as it hunts for a terrorist on the ground. Then it can fly back to where it started and land on its own. But none of this would be possible if there weren't humans on the ground to fill it with gas, repair any broken parts, and update its mission protocols.
Finally, a robot invasion could only succeed if humans had no useful fail-safes or ways to control the machines' decision-making. We would have to have lost any ability to override, intervene, or even shape the actions of the robots. Yet one has to hope that a generation that grew up on a diet of Terminator movies would see the utility of fail-safe mechanisms. Plus, there's the possibility that shoddy programming by humans will become our best line of defense: As many roboticists joke, just when the robots are poised to take over, their Microsoft software programs will probably freeze up and crash.
The counter to all of this, of course, is that a superintelligent machine would figure out a way around each of these barriers. In the Terminator story line, for example, the Skynet computer is able to manipulate and blackmail humans into doing the sorts of things it needs. It's also able to rewrite its own software, a scenario that may not be so far-fetched. There is much work today on "evolutionary" or self-educating artificial intelligence that can even begin to take on its own identity. Just as humanity ended up with both Gandhi and Hitler, there is no guarantee that our machines will evolve to feel only love and compassion.
Most importantly, we rarely take heed of the lessons of science fiction. The military routinely carries out research into systems against which writers and filmmakers have long warned. Indeed, the scientists are often directly inspired by those cautionary tales. For instance, H.G. Wells' dark fantasy of what he called an "atomic bomb" in the 1913 anti-war story The World Set Free actually helped guide the thinkers behind the Manhattan Project. In my book, I mention how one robotics firm was asked a few years ago by the military whether it could design a robot that looked like the "Hunter-Killer robot of Terminator." (It wasn't such a silly request. The design would be quite useful for the sort of fights we face now in Iraq and Afghanistan.)
In my final judgment, however, The Terminator may not be the best guide for how a machine takeover might take place in the real world. Instead, another science fiction series, The Matrix, may be more useful. By this I don't mean that we can look forward to a future of humans living in jelly bubbles and Keanu Reeves' avatar running about in leather pants. Rather, the films give us a valuable metaphor for the technological matrix in which we increasingly find ourselves enmeshed but barely notice. For all our pop-culture-stoked fears of living in a world where robots rule with an iron (or digital) fist, we already live in a world of technology that few of us even understand. It increasingly dominates how we live, work, communicate, and now even fight.
Why would machines ever need to plot a takeover when we already can't do anything important without them?