Future Tense Newsletter: Evolved Consciousness and Its Discontents

Given time, A.I. may develop a moral consciousness of its own.

bestdesigns/thinkstock.com

Greetings, Future Tensers,

Conversations about artificial intelligence tend to fixate on the dangers such systems might present to human life. But what if we humans were the dangerous ones? That’s a possibility that ethicist Carissa Véliz raises in an article on the difficulty of recognizing A.I. sentience for this month’s Futurography course. “Because sentient beings can feel, they can be hurt, they have an interest in experiencing wellbeing, and therefore we owe them moral consideration,” Véliz writes. If we fail to take such considerations seriously, we risk “committing atrocities such as enslavement and murder” against the virtual minds we’re bringing into being.

Of course, even if we learn to act ethically toward our creations, we still need to calibrate their moral compasses. That problem may, however, take care of itself, Michael Chorost argues in Future Tense, if we simply give A.I. the opportunity to evolve under the proper conditions. Observations of other species suggest that a capacity for mutual care confers evolutionary advantages. Accordingly, Chorost writes, “If moral intuitions confer fitness, and if organisms can pass on those intuitions to successors, then the species is on the road to having morality itself.” Maybe that’s when we’ll finally get A.I. that stops trying to squeeze all the fun out of our busy schedules!

On the biomedical front, Dan Engber looked into the surprising difficulty of replicating cancer research. Similar problems have struck various scientific fields, and attempts to allay them may actually be making things worse by encouraging us to fixate on positive results. But as Monya Baker proposes, worrying over reproducibility may also be making science better by reminding researchers to take more care as they set up their studies and analyze their results in the first place. If you’d like to know more, click over to Slate’s Facebook page on Thursday morning, where I’ll be talking about these issues with Rachel E. Gross in advance of Future Tense’s event on the topic (see below).

Here are some of the other stories we read this week while we were wondering about the design of the Apple car:

  • Computer Science: Before we dump millions of dollars into cybersecurity courses, we need to figure out what we want such curricula to achieve, argues Josephine Wolff.
  • E-Sports: Competitive video gamers exhibit a rich ingenuity that’s taking us beyond familiar conversations about video games as art.
  • Neurotech: A brain implant allowed a paralyzed man to regain much of his lost mobility, but FDA regulations may mean that he can’t keep it for long.
  • Internet law: Thanks to an absurdly broad statute, Matthew Keys was sentenced to two years in prison for contributing to a minor act of internet vandalism.

Events:

  • Biomedicine’s current reproducibility crisis is challenging the very idea that scientific knowledge expands as research studies build upon one another. Reliable studies show you should join Future Tense on Thursday, April 21, in Washington, D.C., to explore the debates about this issue. For more information and to RSVP, visit the New America website, where the event will also be webcast.
  • Join Future Tense from 6:30 to 8:30 p.m. on Tuesday, April 26, at Landmark’s E Street Cinema in Washington, D.C., to watch The Terminator with our experts Kevin Bankston, director of the Open Technology Institute at New America, and Sean Luke, director of the Autonomous Robotics Laboratory at George Mason University.

Shooting down a drone,

Jacob Brogan

for Future Tense