Incubating Artificial Intelligence: A Future Tense Event Recap

Levi Tillemann, David Vladeck, Hilary Cain, Lisa Ellman, and Corey Owens

New America

Artificial intelligence is on its way to ubiquity—and we’re not ready. Self-driving cars, facial recognition technology, and teenage chatbots are only harbingers of what’s to come. Industries are exploring new ways to harness A.I. technology, while policymakers are left trying to keep pace with innovation using antiquated laws and regulations.

On March 24, Arizona State University’s School for the Future of Innovation in Society and Future Tense—a partnership of Slate, New America, and ASU—held an event in Washington, D.C., to explore new ways for policymakers, community members, and industry leaders to respond to and plan for the development and deployment of A.I. technology. At the event, Levi Tillemann, managing partner at Valence Strategic and an ASU Resilient Futures fellow at New America, said, “Grappling with the promise and threats posed by artificial intelligence will be one of the major policy challenges of the 21st century.” Tillemann facilitated a conversation on this issue with key policymakers and experts in the fields of drones and self-driving cars—two sectors leading the way in bringing A.I. technology to consumers.

The proliferation of A.I. will require a suitable policy framework to address the ethical, social, and economic challenges that will come with its spread. Hilary Cain, director of technology and innovation policy for Toyota, stressed the importance of regulation keeping pace with innovation. She noted that in 2015, 38,000 Americans lost their lives in traffic accidents. Research suggests that more than 90 percent of vehicle crashes can be attributed to human error, so autonomous vehicles could drastically reduce the number of lives lost on the road each year. As the Google self-driving car’s recent run-in with a bus demonstrates, the technology isn’t ready for mass deployment yet—but we can’t judge the future of self-driving cars by a single test drive.

Like self-driving cars, drones are often misunderstood by the American public. The most common concerns about drones relate to safety, privacy, and security, and we have already seen early regulatory responses from state lawmakers as well as the Federal Aviation Administration. Despite increased regulation, Lisa Ellman, partner and co-chair of the global UAS practice at Hogan Lovells, said, “Drone fever is here and drones are here to stay, whether or not we have the policy to enable their use.” The solution she proposes is “polivation”: bringing policymakers together with innovators to ensure that policy promotes innovation. This is especially important since innovations such as drones are redefining what qualifies as an aircraft. Ellman pointed out that current FAA regulations require all aircraft to have a flight manual on board. I think both innovators and policymakers would agree that a manual on an unmanned aircraft is unnecessary unless, of course, it’s being delivered to someone.

But putting innovators and policymakers in the same room would require, well, a pretty big room. David Vladeck, professor of law at Georgetown University, explained that different agencies and government bodies at the local, state, and federal levels regulate how we can use technology, apply standards to design, and enforce prescribed guidelines. These governing bodies work in the interest of our safety, security, and privacy, but not all are equipped to meet the needs of these evolving technological fields. For example, the FAA, which regulates aircraft, can establish where and how high you can fly drones, as it would for other aircraft. But it is not as well suited to address the privacy and security concerns associated with drone technology.

Corey Owens, head of public policy for drone manufacturer DJI, reminded us that we can’t let policy debates distract us from A.I.’s tremendous potential. Artificial intelligence can monitor our health, assist in medical research, and replace humans in war. With these applications comes a host of value judgments that we will have to trust technology to make on our behalf. Consider the case of a self-driving car forced to choose between a catastrophic accident with multiple fatalities and swerving to hit a lone cyclist.

As we deploy devices that think and act without human intervention, we must understand that how we develop policy and social norms in response to technological innovation will change us. A.I. will open a whole new world in which the relationship among machines, industry, institutions, and people will never be the same.