Future Tense

The Partnership of the Future

Microsoft’s CEO explores how humans and A.I. can work together to solve society’s greatest challenges.

From left, visually impaired Microsoft developer Saqib Shaikh stands next to CEO Satya Nadella during his keynote address at the 2016 Microsoft Build Developer Conference on March 30 in San Francisco.

Justin Sullivan/Getty Images

Advanced machine learning, also known as artificial intelligence or just A.I., holds far greater promise than unsettling headlines about computers beating humans at games like Jeopardy!, chess, checkers, and Go would suggest. Ultimately, humans and machines will work together, not against one another. Computers may win at games, but imagine what’s possible when human and machine work together to solve society’s greatest challenges, like beating disease, ignorance, and poverty.

Doing so, however, requires a bold and ambitious approach that goes beyond anything that can be achieved through incremental improvements to current technology. Now is the time for greater coordination and collaboration on A.I.

I caught a glimpse of what this might yield earlier this year while standing onstage with Saqib Shaikh, an engineer at Microsoft, who has developed technology to help compensate for the sight he lost at a very young age. Leveraging a range of leading-edge technologies, including visual recognition and advanced machine learning, Saqib and his colleagues created applications that run on a small computer that he wears like a pair of sunglasses. The technology disambiguates and interprets data in real time. In essence, technology paints a picture of the world for him audibly instead of visually. He experiences the world in richer ways, like connecting a noise on the street to a skateboarder or sudden silence in a meeting to what co-workers might be thinking. He can “read” a menu as his technology whispers in his ear. Perhaps most important to him, he finds his own loved ones in a bustling park where they’ve gathered for a picnic.

The beauty of machines and humans working in tandem gets lost in the discussion about whether A.I. is a good thing or a bad thing. Our perception of A.I. seems trapped somewhere between the haunting voice of HAL in 2001: A Space Odyssey and friendlier voices in today’s personal digital assistants—Cortana, Siri, and Alexa. We can daydream about how to use our spare time when machines drive us places, do our chores, help us make better decisions. Or we can fear a robot-induced massive economic dislocation later this century. Depending on whom you listen to, the so-called “singularity,” that moment when computer intelligence will surpass human intelligence, might occur by the year 2100—or it’s simply the stuff of science fiction.

I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, “The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.” It’s an intriguing question, and one that our industry must discuss and answer together.

At a developer conference earlier this year, I shared our approach to A.I. First, we want to build intelligence that augments human abilities and experiences. Ultimately, it’s not going to be about human vs. machine. We humans have creativity, empathy, emotion, physicality, and insight that can then be mixed with powerful A.I. computation—the ability to reason over large amounts of data and do pattern recognition more quickly—to help move society forward. Second, we also have to build trust directly into our technology. We must infuse technology with protections for privacy, transparency, and security. A.I. devices must be designed to detect new threats and devise appropriate protections as they evolve. And third, all of the technology we build must be inclusive and respectful to everyone.

This approach is a start, but we can go further.

Science-fiction writer Isaac Asimov also provides a good, though ultimately inadequate, start: In the 1940s, he conceived the “Three Laws of Robotics” to serve as an ethical code for the robots in his stories. Asimov’s Laws are hierarchical, with the first taking priority over the second and the second taking priority over the third. First, robots should never harm a human being, whether through action or inaction. Second, they must obey human orders. Third, they must protect themselves. A lower law can never override a higher one. While Asimov’s Laws have served as a convenient and instructive device, they don’t provide the values or design principles that researchers and tech companies should articulate. Nor do they speak to society about the capabilities humans must bring into this next era, when A.I. and machine learning will drive ever-larger parts of our economy.

Computer pioneer Alan Kay quips, “The best way to predict the future is to invent it.” In the A.I. context, he’s basically saying, Stop predicting what the future will be like and create it in a principled way. I agree. Like any software design challenge, that principled approach begins with the platform you’re building upon. In software development terms, A.I. is becoming a third “run time”—the next platform. In computer science, a run time is the system on top of which programmers build and execute applications. In other words, we wrote Office, with applications like Word and PowerPoint, for the PC. Today Office 365, which includes Office, Skype, and Outlook, is written for the web. In an A.I. and robotics world, these productivity and communication tools will be written for an entirely new platform, one that doesn’t just manage information but also learns from information and interacts with the physical world.

That platform, or third run time, is being built today. I remember reading Bill Gates’ “Internet Tidal Wave” memo in the spring of 1995. In it, he foresaw the internet’s impact on connectivity, hardware, software development, and commerce. More than 20 years later, we are looking ahead to a new tidal wave—an A.I. tidal wave. So what are the universal design principles and values that should guide our thinking, design, and development?

A few people are taking the lead on this question. Cynthia Breazeal at the MIT Media Lab has devoted her life to exploring a more humanistic approach to artificial intelligence and robotics. She argues that technologists often ignore social and behavioral aspects of design. In a recent conversation, Cynthia said we are the most social and emotional of all the species, yet we spend little time thinking about empathy in the design of technology. She said, “After all, how we experience the world is through communications and collaboration. If we are interested in machines that work with us, then we can’t ignore the humanistic approach.”

To that end, I have reflected on the principles and goals that we, as an industry and a society, should discuss and debate.

A.I. must be designed to assist humanity: As we build more autonomous machines, we need to respect human autonomy. Collaborative robots, or co-bots, should do dangerous work like mining, thus creating a safety net and safeguards for human workers.

A.I. must be transparent: We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyzes the world. Ethics and design go hand in hand.

A.I. must maximize efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future.

A.I. must be designed for intelligent privacy—sophisticated protections that secure personal and group information in ways that earn trust.

A.I. must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.

A.I. must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.

But there are “musts” for humans, too—particularly when it comes to thinking clearly about the skills future generations must prioritize and cultivate. To stay relevant, our kids and their kids will need:

  • Empathy—Empathy, which is so difficult to replicate in machines, will be valuable in the human–A.I. world. Perceiving others’ thoughts and feelings, collaborating, and building relationships will be critical.
  • Education—Some argue that because lifespans will increase and birth rates will decline, spending on education will fall. But I believe that to create and manage innovations we cannot fathom today, we will need increased investment in education to attain higher-level thinking and more equitable education outcomes. Developing the knowledge and skills needed to implement new technologies on a large scale is a difficult social problem that takes a long time to resolve. There is a direct connection between innovation, skills, wages, and wealth. The power loom was invented in 1810 but took 35 years to transform the clothing industry because there were not enough trained mechanics to meet demand.
  • Creativity—One of the most coveted human skills is creativity, and this won’t change. Machines will continue to enrich and augment our creativity. In a recent interview, novelist Jhumpa Lahiri was asked why an author with such a special voice in English chose to create a new literary voice in Italian, her third language. She replied, “Isn’t that the point of creativity, to keep searching?”
  • Judgment and accountability—We may be willing to accept a computer-generated diagnosis or legal decision, but we will still expect a human to be ultimately accountable for the outcomes.

And what is to become of equality? Will automation lead to greater or lesser equality? On the one hand, we’re told not to worry about it: Throughout history, substitutes for human labor have always made workers richer, not poorer. On the other hand, we’re told that economic displacement will be so extreme that entrepreneurs, engineers, and economists should adopt a “new grand challenge”—a promise to design only technology that complements rather than substitutes for human labor. In other words, we business leaders must replace our labor-saving and automation mindset with a maker-and-creation mindset.

The trajectory of A.I. and its influence on society is only beginning. To truly grasp the meaning of this coming era will require in-depth, multi-constituent analysis. My colleague Eric Horvitz at Microsoft Research is a pioneer in the A.I. field and has been asking these questions himself for many years. Eric and his family have personally helped to fund Stanford University’s One Hundred Year Study; over the coming century, it will report on near-term and long-term socioeconomic, legal, and ethical issues that may come with the rise of competent intelligent computation, the changes in perceptions about machine intelligence, and likely changes in human-computer relationships.

While there is no clear road map for what lies ahead, in previous industrial revolutions we’ve seen society transition, not always smoothly, through a series of phases. First, we invent and design the technologies of transformation, which is where we are today. Second, we retrofit for the future. For example, drone pilots need training; older cars’ steering wheels might need to be removed as they are converted into autonomous vehicles. Third, we navigate distortion, dissonance, and dislocation. What is a radiologist’s job when machines can read an X-ray better? What is the function of a lawyer when computers can detect patterns in millions of documents that no human can spot? But if we’ve incorporated the right values and design principles, and if we’ve prepared ourselves for the skills we as humans will need, humans and society can flourish.

Writing for the New York Times, cognitive scientist and philosopher Colin Allen concludes, “Just as we can envisage machines with increasing degrees of autonomy from human oversight, we can envisage machines whose controls involve increasing degrees of sensitivity to things that matter ethically. Not perfect machines, to be sure, but better.”

The most critical next step in our pursuit of A.I. is to agree on an ethical and empathic framework for its design.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture.