Future Tense
The Citizen's Guide to the Future

Oct. 18 2017 6:51 PM

Google’s A.I. Has Made Some Pretty Huge Leaps This Week

When DeepMind’s AlphaGo artificial intelligence defeated Lee Sedol, the South Korean Go champion, for the first time last year, it stunned the world. Many, including Sedol himself, didn’t expect an AI to have mastered the complicated board game, but AlphaGo won four of the five games—proving it could compete with the best human players. More than a year has passed, and today’s AlphaGo makes last year’s version seem positively quaint.

Google’s latest AI efforts push beyond the limitations of its human developers. Its artificial intelligence algorithms are teaching themselves how to code and how to play the intricate yet easy-to-learn ancient board game Go.


This has been quite the week for the company. On Monday, researchers announced that Google’s project AutoML had successfully taught itself to program machine learning software on its own. While it’s limited to basic programming tasks, the code AutoML created was, in some cases, better than the code written by its human counterparts. In a program designed to identify objects in a picture, the AI-created algorithm achieved a 43 percent success rate at the task. The human-developed code, by comparison, scored only 39 percent.

On Wednesday, in a paper published in the journal Nature, DeepMind researchers revealed another remarkable achievement. The newest version of its Go-playing algorithm, dubbed AlphaGo Zero, was not only better than the original AlphaGo, which defeated the world’s best human player in May—it had also taught itself how to play the game, all on its own, given only the basic rules. (The original, by comparison, learned from a database of 100,000 Go games.) According to Google’s researchers, AlphaGo Zero has achieved superhuman-level performance: It won 100–0 against its champion predecessor, AlphaGo.
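DeepMind’s actual system pairs deep neural networks with Monte Carlo tree search, but the core idea—improving by playing against yourself, starting from nothing but the rules—can be illustrated with a toy. The hypothetical sketch below (all names and numbers are invented for illustration, and bear no resemblance to AlphaGo Zero’s real architecture) learns the simple game of Nim purely through self-play:

```python
import random

random.seed(0)  # make the toy run reproducible

# Nim: 21 stones, take 1-3 per turn, taking the last stone wins.
N = 21
ACTIONS = (1, 2, 3)
# value[s] = learned estimate that the player to move with s stones wins
value = {s: 0.5 for s in range(N + 1)}
value[0] = 0.0  # no stones left: the player to move has already lost

def pick_move(s, explore=0.1):
    moves = [a for a in ACTIONS if a <= s]
    if random.random() < explore:
        return random.choice(moves)          # occasionally try anything
    # otherwise leave the opponent in the worst position we know of
    return min(moves, key=lambda a: value[s - a])

for _ in range(20_000):                      # self-play episodes
    s = N
    while s > 0:
        a = pick_move(s)
        # temporal-difference update: our winning chances are the
        # complement of the opponent's chances after our move
        value[s] += 0.1 * ((1.0 - value[s - a]) - value[s])
        s -= a

# The learner should discover that multiples of 4 are lost positions
losing = sorted(s for s in range(1, N + 1) if value[s] < 0.5)
print(losing)
```

No game records and no human strategy go in; the classic "multiples of 4 lose" result comes out of nothing but the rules and self-play, which is the same spirit as Zero’s training, at vastly smaller scale.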

But DeepMind’s developments go beyond just playing a board game exceedingly well. There are important implications that could positively impact AI in the near future.

“By not using human data—by not using human expertise in any fashion—we’ve actually removed the constraints of human knowledge,” AlphaGo Zero’s lead programmer, David Silver, said at a press conference.

Until now, modern AIs have largely relied on learning from vast data sets. The bigger the data set, the better. What AlphaGo Zero and AutoML prove is that a successful AI doesn’t necessarily need those human-supplied data sets—it can teach itself.

This could be important in the face of our current consumer-facing AI mess. Written by human programmers and trained on human-supplied data, algorithms (such as the ones Google and Facebook use to suggest articles you should read) are subject to the same defects as their human overlords. A dataset can be flawed or skewed—for example, a facial recognition algorithm can have trouble with black faces because its programmers didn’t feed it a diverse enough set of images. Without that human interference and influence, future AIs could be far superior to what we’re seeing employed in the wild today. An AI that teaches itself wouldn’t inherently be sexist or racist, or suffer from those kinds of unconscious biases.

In the case of AlphaGo Zero, its reinforcement-based learning is also good news for the computational power of advanced AI networks. Early AlphaGo versions operated on 48 Google-built TPUs. AlphaGo Zero works on only four. It’s far more efficient and practical than its predecessors. Paired with AutoML’s ability to develop its own machine learning algorithms, this could seriously speed up the pace of DeepMind’s AI-related discoveries.

And while playing the game of Go may seem like a silly endeavor for an AI, it actually makes a lot of sense. AlphaGo Zero has to sort through a lot of complicated information to decide what moves to make in a game. (There are approximately 10¹⁷⁰ possible positions on a Go board.) As DeepMind co-founder Demis Hassabis told the Verge, AlphaGo Zero could be reprogrammed to sort through other kinds of data instead. This could include particle physics, quantum chemistry, or drug discovery. As with Go, AlphaGo Zero could end up uncovering new techniques humans have overlooked or reaching conclusions we hadn’t yet explored.
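That eye-watering figure can be sanity-checked with back-of-the-envelope arithmetic: each of the 361 points on a 19-by-19 board is empty, black, or white, which gives a crude upper bound of 3³⁶¹ arrangements (most of them illegal—the exact count of legal positions, computed in 2016, is about 2×10¹⁷⁰):

```python
from math import log10

# Crude upper bound on Go board arrangements: 3 states per point,
# 361 points on a 19x19 board. Count the digits of 3**361.
upper_bound_digits = int(361 * log10(3)) + 1
print(upper_bound_digits)  # 3**361 has 173 digits, i.e. roughly 10**172
```

For comparison, the number of atoms in the observable universe is commonly estimated at around 10⁸⁰—which is why no amount of brute force, and no database of human games, can come close to covering the space.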

There’s a lot of reason to fear AI, but DeepMind’s AIs aren’t programming themselves to destroy the human race. They’re programming themselves in a way that will shift some of the tedium off human developers’ shoulders and look at problems and data sets in a fresh new light. It’s astonishing to think how far AI has come in just the past few years, but it’s clear from this week that progress is going to come even faster now.

Oct. 18 2017 4:09 PM

Former Tesla Factory Workers Allege Racist Insults and Graffiti in Lawsuit

Three former Tesla employees filed a lawsuit Monday in a California state court alleging that the company’s Fremont factory was a “hotbed for racist behavior.”


Oct. 18 2017 3:46 PM

Last Month’s Jobs Report Hints at Our Climate-Changed Economy’s Future

After seven years on the up, the U.S. economy took a big hit in September. Some 33,000 jobs were lost, according to the latest monthly report issued by the Bureau of Labor Statistics. While there are plenty of factors at play, from the man in the White House to the insistent specter of nuclear war, experts attribute part of the autumnal dip to extreme weather.

Because of natural disasters like Hurricanes Harvey and Irma, the government reported, a whopping 1.5 million Americans were unable to work in September. Business owners in Texas, Florida, and elsewhere put new hiring on hold, and many industrial plants in storm-ravaged communities are still offline. But, economists say, the take-home message is clear: Things will return to normal. As the New York Times reported, economists are certain the U.S. labor market is fundamentally strong.


Some climate change researchers aren’t so certain, though. As “normal” grows nebulous and once-rare weather events become stronger and more frequent, it’s hard not to wonder if September’s inclement job numbers are not a fluke, but a preview.

Moustapha Kamal Gueye of the UN’s International Labour Organization says it’s looking like a bit of both. “The question is, ‘How frequent will [external shocks] be in the future?’” he said. “If these happen once in a year, then one could discount it… But if they happen twice, five times a year, in many places around the world?” Well, then they’re not aberrations, they’re a new reality.

Depending on where you live and what kind of work you do, Gueye says this brave new world could look rather different. People who depend heavily on natural resources will be the hardest hit. In fact, they already are. In the Caribbean, at least 2.3 million people work in the $35 billion tourism industry, which largely relies on, well, a natural resource of sorts. This hurricane season has totally upended this crucial sector of the economy, prompting cancelled reservations and ravaging infrastructure. Senegal, a coastal African nation that similarly relies on tourism dollars, is also seeing a decline in tourism, due to the diminishment of its major attraction—its beaches—thanks to sea level rise, erosion, and resource mismanagement, as Reuters reported.

Agriculture is also threatened by both the slow creep and the sudden debilitating outbursts of climate change. Many jobs were burnt to a crisp in the recent California wine country wildfires, which are increasing in frequency as the West warms. And as overfishing, ocean acidification, and swelling dead zones destroy fish populations, the fishing industry is increasingly imperiled. Because neither the fisherman nor the farmer lives in a vacuum, the other people in their economic ecosystems are threatened too. Gone are the bait and tackle salesman, the boat builder, and the shore-side restaurateur. Gone, too, could be a source of protein that a billion people rely on.

While none of this looks good, it doesn’t actually look all bad. In areas that will be more regularly affected by natural disasters, we could see the rise of a rebuilding gig economy. After Hurricane Katrina, the Times reported, the job market took some time to rebound, but the task of putting New Orleans back together eventually stimulated employment:

Employment gains averaged 249,000 in the six months before the storm. After New Orleans found itself underwater, gains averaged 76,000 over the next couple of months before soaring to 341,000 in November 2005.

These aren’t as desirable as more permanent positions, but remaking damaged cities and disaster-proofing could become big business. As many people slowly trickle out of communities besieged by extreme weather, more resilient communities could see immense growth, stimulating their own climate-caused construction boom.

There’s also the question of the jobs “lost” to a greening economy. Trump campaigned in part on the premise that the Paris climate accord was a “bad deal” for the country. This framing, that greening the economy is a job killer, is false: There are jobs lost in the fossil fuel industry, but the jobs gained in the renewable energy sector more than make up for that. As of September, the Bureau of Labor Statistics reports, just 52,000 Americans made up the entire coal industry. By contrast, Slate’s Dan Gross reported in June, Tesla was seeking to fill almost 2,000 jobs—and that was just one company. (As of May 2017, more than 800,000 Americans were employed in the renewable energy sector.) Last month, PRI’s The World reported that in West Virginia, where many young people thought they’d grow up to be coal miners, many are actually finding work in the booming solar energy industry.

As the evidence and cost of climate change pile up, our president may be willing to put his head in the sand, but ordinary Americans are starting to take notice: A recent poll suggested that 85 percent of Americans believe man-made climate change played some role in Hurricanes Harvey and Irma. Connecting the dots between climate change, natural disasters, and a depressed economy might take time, but as the link becomes increasingly apparent, anxiety will follow. Trump ran on saving our economy—is he willing to acknowledge one of the major threats to it?

Oct. 18 2017 12:29 PM

Twitter Is Rolling Out Stricter Rules on Sexual Abuse, Violence, and Hate Speech

We now have some specifics on the new rules that Twitter CEO Jack Dorsey promised on Friday to curb abuse, harassment, and hate speech on the social media platform.


Oct. 18 2017 12:08 PM

Future Tense Newsletter: Unhappy Cybersecurity Awareness Month

Greetings, Future Tensers,

Just in time for Cybersecurity Awareness Month, Reps. Tom Graves, R–Georgia, and Kyrsten Sinema, D–Arizona, introduced a revised version of the Active Cyber Defense Certainty Act, which would allow companies to access computers that don’t belong to them in the name of self-defense. Josephine Wolff explains (again!) why “hacking back” is the worst cybersecurity policy that just won’t die, writing, “At its heart it would just serve as an excuse to let anyone access anyone else’s computer systems with impunity.” Are you feeling more cybersecure and aware yet?


With Wi-Fi security flaws leaving our communications exposed to eavesdroppers, Russia using antivirus software to spy on the U.S., and another Equifax security flub rounding out the news from the first half of Cybersecurity Awareness Month, it may seem like there is no end in sight to our cybersecurity woes. However, you may find comfort in Neel V. Patel’s proposition that the future of cybersecurity might look a lot like Snapchat. Or you may not.

Other things we read this week while booking our stay at the Airbnb apartment building near Disney World:

  • Online political ads: In light of recent allegations involving 2016 presidential campaign ads bought by Kremlin-backed forces, the Federal Election Commission is exploring how it can strengthen regulations around political advertisements online.
  • Halfway technology: David Guston writes that engaging in criticism of technology is a constructive enterprise necessary to ensure we are getting the most from our innovation system.
  • Future of mobility: Autonomous vehicle technology has the potential to revolutionize how people with disabilities get around, writes Srikanth Saripalli.
  • Forgotten history: A massive archive of science fiction culture at the University of Iowa is available to the public online as part of DIY History, a project that invites people to transcribe objects that can’t be read by a machine. Jacob Brogan shares what he discovered about the history of science fiction when he started transcribing.

Upcoming events in Washington, D.C., and New York:

Sneakers screening
Need a break from news about data breaches and election meddling? Join Future Tense and Alvaro Bedoya, founding executive director of the Center on Privacy and Technology at Georgetown Law, for a screening and discussion of the 1992 film Sneakers on Oct. 18 (tonight!) in Washington. RSVP for yourself and up to one guest.

The Water Will Come
Scientists and policymakers are fighting to hold back the devastating effects of a drowning world, as Jeff Goodell chronicles in his new book, The Water Will Come: Rising Seas, Sinking Cities, and the Remaking of the Civilized World. Join Future Tense for a discussion of the new book with Goodell and other experts in Washington on Oct. 24 and in New York on Oct. 25. RSVP to attend the event in New York or RSVP to attend the Washington event in person or online.

Poetry in Space
With space exploration no longer being monopolized by scientists and government agencies, artists are now getting in on the act. Join Future Tense for a happy hour conversation on Oct. 26 in Washington, with artists Juan José Diaz Infante (who launched the poetry-bearing Ulises I Mexican nanosatellite) and Tavares Strachan to discuss connecting the arts and sciences. RSVP to attend online or in person here.

More aware but feeling less secure,
Emily Fritcke
For Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

Oct. 17 2017 7:28 PM

Would Google's New Super-Secure Email Have Protected Hillary's Campaign?

A targeted phishing scam and an unfortunate typo helped hackers infiltrate the Gmail account of Hillary Clinton’s campaign manager John Podesta during the 2016 election season.

Podesta received an email requesting a password change, and suspecting a phishing attempt, he forwarded it to IT. But the IT guy accidentally said it was “a legitimate email” rather than “not a legitimate email,” and the rest is history: Podesta’s email was accessed, 60,000 Gmail messages were leaked, and the juiciest made their way into the press.


A hack like this is, and was, entirely preventable. And on Tuesday, Google announced new protections for high-profile individuals who could be targeted like Podesta.

“We took this unusual step because there is an overlooked minority of our users that are at particularly high risk of targeted online attacks,” Advanced Protection project manager Dario Salice wrote in Google’s introductory blog post about the feature. “These might be campaign staffers preparing for an upcoming election, journalists who need to protect the confidentiality of their sources, or people in abusive relationships seeking safety.”

It’s designed to make hacking and phishing attempts virtually impossible to execute. But would it have been enough to stop the Podesta hack and prevent future similar attacks?

Google isn’t the first company to make changes as we’ve learned how technology influenced the election. Amid growing revelations of how much Russia used social media and advertising to affect its outcome, Silicon Valley juggernauts have been trying to shore up their services against interference. Facebook began battling fake news with a campaign to educate users, utilize third-party fact checkers to verify stories, and show readers related articles alongside stories in their feed. Twitter has suspended accounts tied to Russia and changed how it handles abuse in its app.

Google, which has since discovered Russia-linked ads on its platforms, has adjusted its search algorithms and offered more opportunities to provide feedback about search and autocomplete results. And now with Advanced Protection, it’s trying to eliminate the threat of email hacks.

Advanced Protection does three things: It protects accounts against phishing, blocks fraudulent account access, and offers safeguards against sharing sensitive data with malicious applications.

The Advanced Protection Program incorporates a physical security key (a small USB or wireless device that costs around $25) to protect against phishing. The key, which participants need to buy themselves, uses public-key cryptography and digital signatures. Without the key, even someone with your password would be unable to access your account. Advanced Protection limits your Google data access to only Google apps and adds additional safeguards in the account recovery process to prevent someone from social engineering their way into your account. It also performs additional scans on files and attachments to ensure no malware is piggybacking on the download.
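The challenge-response flow at the heart of such a key can be sketched in a few lines of Python. This is a conceptual toy, not Google’s implementation: it uses a shared-secret HMAC as a stdlib stand-in for the real public-key signature, and real FIDO/U2F keys additionally bind each response to the website’s origin, which is what actually defeats phishing sites:

```python
import hashlib
import hmac
import secrets

# Toy challenge-response login. The "device secret" never leaves the
# (simulated) hardware token; a password alone is useless without it.
device_secret = secrets.token_bytes(32)

def key_sign(challenge: bytes) -> bytes:
    # Performed inside the hardware token. Real keys use a public-key
    # signature (e.g., ECDSA); HMAC is a stand-in here.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)            # fresh nonce per login
assert server_verify(challenge, key_sign(challenge))

# An attacker who captured a password -- or even an old response -- still
# cannot answer a *new* challenge without physical possession of the key.
stolen_response = key_sign(challenge)
new_challenge = secrets.token_bytes(16)
assert not server_verify(new_challenge, stolen_response)
print("login ok; replayed response rejected")
```

Because every login uses a fresh random challenge, a recorded response is worthless the next time around—unlike a password, which works forever once stolen.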

This is one of the highest levels of security a public company has offered to consumer-level users to date. And, a Google representative confirmed, Advanced Protection isn’t limited to only the crème de la crème of Google users. Anyone with a Gmail account can enroll in Advanced Protection and will be granted access. There’s no vetting process. It’s not an elite club.

If this security had existed a year ago, it could have stopped the hack of Hillary's campaign—in theory. Let’s say all of her campaign staff had Advanced Protection enabled, and Podesta had received the same phishing email, and the IT guy had made the same typo. The physical key would’ve prevented the hacker from accessing Podesta’s account. Unlike even a one-time code delivered via SMS, there’s no way for this security token to be hijacked over a carrier network or insecure Wi-Fi.

But Advanced Protection is not without a fatal flaw: Like two-factor authentication and other security measures, it’s opt-in. It’s up to the user to join the program and take advantage of its additional layers of security. As the saying goes, you can lead a horse to water, but you cannot make it drink. And a security measure is only effective if you’ve put it in place.

Since Podesta’s email could be hacked with a simple password change, it’s clear he did not have two-factor authentication enabled. And if he and other staff members weren’t willing to take advantage of that simple but important security precaution, it’s unlikely they would have jumped through the extra hoops required by Advanced Protection.

That’s one of the biggest issues with email security today: People are unwilling to undergo some degree of inconvenience in order to secure their accounts and information. Google has supported physical security keys for three years—but do you know anyone who uses one? It’s only in recent months that we’re seeing how truly important it is to lock down your information.

It’s possible the Podesta hack will motivate a large number of people—and in particular the people running, or hoping to run, our government—to get onboard with services like Advanced Protection. But if history is any indicator, the people who most need to get on board with Advanced Protection never will. And with their decision to choose convenience over security, we’ll continue to be susceptible to ever sneakier hacking attempts trying to undermine our country’s democratic way of life.

Oct. 17 2017 4:12 PM

Twitter Is Shutting Down a Conservative Group’s Automated Tweets

Twitter has restricted a web application crucial to the operations of a conservative group known for barraging the network with tweets to promote political messages.

Oct. 17 2017 1:46 PM

Unpopular Opinion: The “1 Like = 1 Unpopular Opinion” Meme Is Bad

Memes sweep through Twitter like ocean currents. Sometimes they help us vent collective frustrations in deeply personal ways, and on other occasions they become opportunities for self-referential cleverness. Though even the best memes grow old fast, this willingness to be goofy together makes the platform worth returning to.

One of the network’s latest memes, however, shows off the social network at its most exhaustingly annoying. The posts almost all start the same. A user tweets an image with white characters that say, “1 like = 1 unpopular opinion,” on a field of black. Though the graphic speaks for itself, a short message typically accompanies it. “Okay, let’s do this,” wrote Jeet Heer, one of the progenitors of the Twitter essay format. “Ok. I’ll give this a try,” offered Vox’s Matthew Yglesias.


Replying to their original tweet, these users proceed to offer a steady, numbered stream of abbreviated takes. “Short stories > novels,” read one of Heer’s contributions. “Dogs are bad,” Yglesias proposed. While those are debatable proclamations, in other cases, users stake out positions that approach something like a widespread consensus, as Yglesias did when he challenged the supposed delusions of Californians.

Many of these positions feel less like truly unpopular opinions than they do like curmudgeonly complaints. More often than not, they will only be truly “unpopular” within specific milieus. Criticizing the West Coast’s fast food, for example, is sure to resonate with Shake Shack and Five Guys–loving Northeasterners. (I, for the record, have no opinion on In-N-Out Burger. Please don’t @ me.) Similarly, even those who live in San Francisco might agree that the city could improve its public sanitation.

The lists themselves, meanwhile, primarily serve to index a user’s social capital, showing both that others care what they think and that they think about a lot of things. Among those who reared themselves in the early-21st-century blogosphere, participating in this exercise isn’t so much a confession as it is a ritualistic act of self-affirmation. Indeed, for internet personalities like Yglesias (who used to work at Slate), nothing could be more on brand than a rejection of received wisdom. As the journalist Mark Harris joked, tweeting your unpopular opinions may not be all that different from just … tweeting.

In the broadest sense, the “1 like = 1 opinion” format is familiar and not entirely unwelcome. One variant, for example, found Twitter users suggesting books in response to likes. That approach yielded a cornucopia of compelling options, especially from the minds of enthusiastic readers like Slate contributor Isaac Butler. The mode works in large part because it plays off the infrastructure of Twitter itself, requiring a modicum of interaction from those on the sidelines: What could be simpler than clicking the mysterious heart button? Simultaneously, it demands almost virtuosic displays of knowledge and thoughtfulness from the site’s most popular contributors—displays that many are more than prepared to offer.

The “unpopular opinion” version of the meme, however, threatens to lay bare the underlying laziness of the convention. At its best, Twitter can open up unlikely pathways of communication between otherwise distinct individuals and communities. Tweet at a celebrity and sometimes the celebrity will tweet back. Enter into an argument with a stranger and sometimes (rarely, but sometimes!) you’ll make a new friend.

By contrast, unpopular opinions are designed to be just that. They are often little more than showy humblebrags: To assert one (or 100) is to suggest that you stand out from the crowd, whether or not you truly do. The long lists assembled by personalities such as Yglesias and Heer threaten to devolve into performances of reflexive contrarianism. Freed of context, they discourage argument, evidencing little more than the pique of those who espouse them. In aggregate, the meme’s popularity demonstrates only that the oppositional stance many attribute to Slate itself has colonized the internet as a whole.

There are, of course, ways to use the unpopular opinion meme well. Prior to its current surge, the writer Beth McColl effectively parodied it avant la lettre, composing a thread in 2016 that included such gems as “crime is fine when i do it” and “let the donut crop replenish before eating more . otherwise more global warming will happen.” More recently, other Twitter users have offered some positions that are engagingly provocative.

The best of these provocations, however, frequently feel more like quick pitches than simple opinions. Whether or not you’re inclined to agree with them, you’ll likely find yourself hoping the writer would expand upon them. In some other cases, the opinions in a thread do build on one another, suggesting openings for more complex assertions. Heer included a pair of tweets about anti-racism and decolonization in Canada that reflect themes he has explored before, even as they suggest the rough outlines of a promising argument. But tossing them into a list that also includes the claim “Superhero movies and comics should be made for kids” potentially trivializes them, giving the impression that these thoughts are less worthwhile than they have the potential to be.

And that’s the rub of it. If an opinion is worth expressing, it’s worth letting it breathe on its own. I, for my own part, tried to do just that, tweeting my own distaste for the meme on Monday night and hoping to leave things there. To my dismay, Jack Hamilton, Slate’s pop critic, liked my brief note in order to make me “go again.” I responded with the opinion that Jack is a cool guy. Please fave if u agree.

Oct. 17 2017 12:38 PM

Are Self-Driving Cars the Future of Mobility for People With Disabilities?

This piece originally appeared in The Conversation.


Self-driving cars could revolutionize how disabled people get around their communities and even travel far from home. People who can’t see well, or who have physical or mental conditions that prevent them from driving safely, often rely on others—or on local government or nonprofit agencies—to help them get around.


Autonomous vehicle technology on its own is not enough to help these people become more independent, but simultaneous advances in machine learning and artificial intelligence can enable these vehicles to understand spoken instructions, observe nearby surroundings and communicate with people. Together, these technologies can provide independent mobility with practical assistance that is specialized for each user’s abilities and needs.

A lot of the necessary technology already exists, at least in preliminary forms. Google has asked a blind person to test its autonomous vehicles. And Microsoft recently released an app called “Seeing AI” that helps visually impaired people better sense and understand the world around them. “Seeing AI” uses machine learning, natural language processing, and computer vision to understand the world and describe it in words to the user.

In the lab I run at Texas A&M, along with the Texas A&M Transportation Institute, we are developing protocols and algorithms that let autonomous vehicles and people, with and without disabilities, communicate with one another in words, sound, and on electronic displays. Our self-driving shuttle has given rides to 124 people, totaling 60 miles of travel. We are finding that this type of service would be more helpful than current transportation options for disabled people.

Under the Americans with Disabilities Act of 1990, all public transit agencies must offer transportation services to people with physical handicaps, visual or mental conditions, or injuries that prevent them from driving on their own. In most communities, this type of transport, typically called “paratransit,” is sort of like an extra-helpful taxi service run by public transit. Riders make reservations in advance for rides to, say, grocery stores and medical appointments. The vehicles are usually wheelchair-accessible and are driven by trained operators who can help riders board, find seats, and get off at the right stop.

Like taxis, paratransit can be costly. A Government Accountability Office report from 2012 provides the only reliable nationwide estimates. Those numbers suggest that per trip, paratransit costs three to four times what mass transit costs. And the costs are increasing, as are the number of people needing to use paratransit. At the same time, federal, state, and local funding for transit authorities has stagnated.

In an attempt to meet some of the demand, many communities have reduced the geographic areas where paratransit is available and asked disabled people to use mass transit when possible. Other places have experimented with on-demand ride-hailing services like Uber and Lyft. But in many cases the drivers are not trained to help disabled people, and the vehicles are not usually wheelchair-accessible or otherwise suitable for certain riders.

Autonomous shuttles, like the one we’re testing on the Texas A&M campus, can be a solution for these problems of access and funding. We envision a fully integrated system in which users can connect to the dispatching system and create profiles that include information on their disabilities and communications preferences as well as any particular frequent destinations for trips (like a home address or a doctor’s office).

Then, when a rider requests a shuttle, the system would dispatch a vehicle that has any particular equipment the rider needs, like a wheelchair ramp or, for instance, extra room to allow a service dog to travel.
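That dispatching step is essentially a matching problem between rider profiles and vehicle capabilities. A hypothetical sketch (the class names and equipment labels are invented for illustration, not taken from the Texas A&M system):

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class Shuttle:
    shuttle_id: str
    equipment: Set[str] = field(default_factory=set)

def dispatch(rider_needs: Set[str], fleet: List[Shuttle]) -> Optional[Shuttle]:
    # Return the first available shuttle whose equipment covers every one
    # of the rider's needs; None signals a fall-back to human-assisted
    # service, as agencies do today.
    for shuttle in fleet:
        if rider_needs <= shuttle.equipment:   # set containment check
            return shuttle
    return None

fleet = [
    Shuttle("S1", {"wheelchair_ramp"}),
    Shuttle("S2", {"wheelchair_ramp", "service_animal_space"}),
]
print(dispatch({"wheelchair_ramp", "service_animal_space"}, fleet).shuttle_id)
# prints S2: the only shuttle equipped for both needs
```

A production dispatcher would also weigh travel time, vehicle availability, and fairness across riders, but the stored-profile idea reduces the core decision to exactly this kind of capability match.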

When the shuttle arrives to pick up the rider, it could scan the area with lasers, cameras, and radar to create a 3-D map of the area, merging those data with traffic and geographic information from various online sources like Google Maps and Waze. Based on all of those data, it would determine an appropriate boarding spot, identifying curb cuts that let wheelchairs and walkers pass easily as well as noting potential obstacles, like trash cans out for collection. The vehicle could even send a message to the rider’s smartphone to indicate where it’s waiting, and use facial recognition to identify the correct rider before allowing the person to ride.

During boarding, during the ride, and when the rider reaches the destination, the vehicle could communicate any relevant information—such as estimated arrival time or details about detours—by speaking with the rider and listening to the responses, or by displaying text on a screen and accepting typed input. That would allow the rider and the shuttle to interact no matter what the passenger’s abilities or limitations might be.

In our lab we are exploring various elements of rider-assistance systems, including automated wheelchair ramps and improved seating arrangements for multiple wheelchair-using passengers. We are also studying elements that affect safety, as well as riders’ trust in the vehicles. For example, we are currently developing machine-learning algorithms that behave like good human drivers do, mimicking how humans respond to unforeseen circumstances.

Self-driving cars present fundamentally new ways to think about transportation and accessibility. They have the potential to change neighborhoods and individuals’ lives—including people who are disabled and often both literally and figuratively left behind. With proper planning and research, autonomous vehicles can provide even more people with significantly more independence in their lives.

The Conversation

Oct. 17 2017 11:19 AM

The Future of Cybersecurity Might Look a Lot Like Snapchat

Snapchat isn’t just the favored social media platform of millennials everywhere—it’s also becoming an under-the-radar model for the future of cybersecurity. Think about it: At its most basic, Snapchat lets you send a picture or video message, deletes it in just seconds, and makes it impossible to retrieve it afterward. A self-destructing message as a perfect way of safeguarding data and information—who knew it was going to take a generation of meme-obsessed weirdos to popularize a Mission: Impossible gimmick?

A growing number of tools mimic the Snapchat model for professional use. Apps like Whisper, Confide, and Signal have come into vogue in industries trafficking in sensitive information, especially among government sources leaking to reporters. But these apps still have limitations, like the need for an internet connection or the lack of anonymity, as well as flaws that tenacious (and creative) hackers can exploit.

So the only real way to ensure information can't be stolen isn't just to burn the message after reading—you have to burn the messenger as well. Enter the one-time program: software that handles sensitive information and is deleted immediately afterward, so it can't be compromised later by malicious parties. The original data stays safe after being processed exactly once, as intended. This could obviously be a boon for businesses trying to keep trade secrets in-house, government agencies corresponding with operatives abroad, and private citizens trying to keep their Social Security numbers private. Anne Broadbent, a quantum computing researcher at the University of Ottawa who specializes in cybersecurity development and testing, says one-time programs could help safeguard other software as well by acting as gatekeepers to valuable tools (for instance, a one-time password application that grants access to a military arsenal).

There’s just one problem: It’s extremely difficult to build a one-time program, at least by using conventional technologies. This would not be a way to keep office gossip on Slack about the boss’s weird ticks from being leaked. According to Broadbent, conventional information, including the code for a one-time program, can be too easily copied. For one-time programs to actually work on, say, a modern-day MacBook, you’d have to physically destroy the computer afterward to guarantee the single-serving software can’t be resurrected and run again and hacked. Not ideal.

So we have to move past conventional technologies, and the key might be quantum computing. In the quantum world, information is impossible to copy because it doesn't exist in the static form we normally associate with information. The thinking goes that if you can't copy quantum information, a quantum computer offers the best opportunity to run a one-time program as envisioned.
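The impossibility of copying here isn't an engineering hurdle but a theorem. A standard textbook sketch of the no-cloning argument (general quantum mechanics, not anything from the new paper): suppose a single copying machine, a unitary $U$, could duplicate every state,

\[
U\big(\lvert\psi\rangle \otimes \lvert 0\rangle\big) = \lvert\psi\rangle \otimes \lvert\psi\rangle .
\]

Unitaries preserve inner products, so applying this to two states $\lvert\psi\rangle$ and $\lvert\varphi\rangle$ gives

\[
\langle\psi\vert\varphi\rangle = \langle\psi\vert\varphi\rangle^{2},
\]

which forces $\langle\psi\vert\varphi\rangle$ to be exactly $0$ or $1$. In other words, a machine can copy only the particular states it was built for, never arbitrary unknown ones, and that guarantee is what a quantum one-time program leans on.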

Can this work? An international team of researchers thinks so. A new paper, currently out for peer review, demonstrates a proof of concept: a one-time program that runs and then deletes itself on a quantum-computing device.

Here’s how it works. Let’s say you worked at a very large credit bureau—maybe one of the largest in the nation, with a recent history of, say, putting millions of Americans’ financial information at risk—and wanted to run a program that would allow you to share—just once!—analyses of different individuals’ credit histories. Under a one-time framework, the program would run an analysis, make that information available to a client, and then delete that data along with the program to ensure total security on the company’s end.

Unfortunately, there’s a catch. Because quantum physics operates on probabilities and odds, the result of the analysis—and really, any resulting action the software takes with the data, even if it’s just to repackage it for delivery to someone else—isn’t 100 percent guaranteed to be accurate or successful. For this study, the researchers achieved a success probability of 75 percent, which isn’t bad for a first-of-a-kind trial. It’s certainly not ideal for someone trying to prove to the bank they should be allowed to buy a house, but perhaps it’s good enough for the transfer of information one wants to destroy anyway. And even if the analysis is off, the data will still be better safeguarded than nearly any other security measure.

The researchers added another key feature for good measure: Even if the program doesn't delete itself automatically, it's impossible to reverse-engineer it to recover the information that's supposed to stay safe.

It’s unclear yet how the researchers intend to follow up on these results—the team is not taking any inquiries from the media until after the paper is formally peer-reviewed and published in a journal. But they will presumably want to minimize the probability the program produces an erroneous result. Broadbent, who wasn’t part of the research team but has read a preprint of the study, also emphasizes that there is always a trade-off between security and functionality, so the researchers will probably need to figure out a way to stretch the capabilities of a program while ensuring it still provides safety. She’s pretty optimistic, however, about the results.

Although this type of cybersecurity measure can exist only in a quantum computing process, the commercial sector, especially businesses and research firms handling enormous amounts of data, is beginning to adopt quantum computing technology rapidly, which suggests it could become mainstream. The advent of one-time program security is probably closer than you think.