Future Tense
The Citizen's Guide to the Future

May 24 2016 2:50 PM

Even the Most Well-Intentioned Hashtags (#YesAllHashtags) Quickly Devolve Into Kitsch

A Twitter bio is an opportunity to inform and play. The site caps bios at a scant 160 characters—barely more room than a tweet—but the platform’s users still manage to cram a great deal of personality into that compact space. Donald Trump’s fans frequently nod to their even less savory political beliefs, for instance. Freelance journalists, meanwhile, will often name publications that they’ve written for, both to brag and to promote their own availability.

And then there’s Twitter co-founder and CEO Jack Dorsey. For months now, Dorsey’s bio has simply read “#withMalala!” That hashtag is a reference to a social media drive inspired by Malala Yousafzai, the teenage human rights activist who received the Nobel Peace Prize in 2014. An advocacy campaign for an advocacy campaign, #withMalala “is a global digital art project” supported by the Malala Fund in which users submit images to a gallery that “support[s] Malala’s global campaign to guarantee 12 years of free, quality, and safe education to millions of girls worldwide.” None of this information appears in Dorsey’s bio—the hashtag stands alone.


Dorsey probably needs no introduction on the site, but the brevity of this message is nevertheless odd. Composing a telling Twitter bio is like choosing the perfect Facebook photo: If you don’t bother to write one, you’d better be cool enough to pull it off. Some get away with an icily brief self-description (say “writer,” with the first letter pointedly lower case) while others are just badass enough to manage no bio at all. Dorsey’s enthusiastic proclamation, by contrast, is just kind of dorky.

The #withMalala campaign has a worthy goal, of course, whatever one might say about the project’s execution. It also feels authentic to Dorsey, who’s marched in protests and otherwise trumpeted his political commitments. Though some have dismissed hashtag activism as mere virtue signaling, this message may well be very real, and very deeply felt, for Dorsey. But it’s still tempting to make fun of him for it because the way he’s conveying that message still feels so silly—and because that silliness gets at the nature of the modern internet.

By design, hashtags are concessions to the excess of our technological present, an excess embodied more fully by Twitter than virtually any other online destination. The site’s users compose and send thousands of messages per second. Words pile up too fast to be read, meaning that something always gets lost: Follow too many accounts, and your feed will devolve into chaos. But try to dive into the larger conversation, and you’ll soon be adrift, unable to discern who’s speaking and what they’re speaking about.

Over the years, Twitter has deployed a number of features to help users sort through its morass of content. In late 2015, it debuted Moments, curated narratives that help explain what users are discussing on the site, and in 2016 it started organizing users’ timelines algorithmically. While both of these much-mocked and much-maligned features are better than most acknowledge, neither is quite as functional as a simple hashtag, which allows you to see at a glance what everyone’s saying about a particular topic.

Precisely insofar as they pare away at our information overload, hashtags also tacitly acknowledge that communication is ephemeral on the internet. They help direct our attention in the flux of our online experiences. Sometimes they can broaden your field of view—like many others, I use them to see what other fans are saying during live television broadcasts, for example—but they’re still tied to the immediacy of perception itself.

It’s this capacity of the hashtag that makes Dorsey’s bio—and other lingering messages like it—so dopey. Even the best hashtags work because they’re tied to the specificity of an event; urgency makes them effective. Later, when we’ve had the opportunity to dig through the rubble of experience, we no longer need them quite so much. With the distance of retrospection, even the most useful hashtags feel like so much kitsch. While the Malala Fund describes the still in-progress #withMalala as “a 12-month social action and advocacy campaign,” in hashtag form it forever feels like yesterday’s news.

Someone who continues to use a hashtag that has outlived its moment is like a friend who insists on recounting his fast-fading dreams. It represents a fundamental failure to read the room—or, as the case may be, how the internet works. There’s nothing necessarily wrong with the Malala Fund’s campaign, and there’s arguably something noble about trumpeting its importance. But hashtags are the wrong way to promote such a program, since they make it feel like a project of merely temporary significance.

There’s a certain irony to seeing Dorsey fall into this trap, not least of all because he helped engineer the snare. Twitter succeeds—to the extent that it does—by embracing the internet’s culture of impermanence. Dorsey looks silly because he doesn’t grasp the way his own platform works. Of course, Dorsey isn’t the only public figure with politicized hashtags in place of a proper bio. There’s also Donald Trump, who allows himself two: #MakeAmericaGreatAgain and #Trump2016. Let us hope that they too soon feel like so much forgotten internet fluff.

May 23 2016 6:13 PM

FBI Says the Sketchy Software It Uses in Investigations Isn't Malware

The Federal Bureau of Investigation has been sneaking surreptitious code onto computers for years as part of its inquiries. This software is often referred to as "malware," a portmanteau of "malicious" and "software," because targets don't know that they've downloaded the programs or that some part of their digital lives is being monitored. It seems like a pretty accurate descriptor, but in testimony last week the FBI objected to the characterization.

As Motherboard spotted, FBI special agent Daniel Alfin said Thursday that software used to identify thousands of anonymous users on the child porn site Playpen shouldn't be labeled as malware. The comment came during court testimony for a case against one of the identified Playpen users. He said that the “Network Investigative Technique” (NIT) used on Playpen “was court-authorized and made no changes to the security settings of the target computers to which it was deployed. As such, I do not believe it is appropriate to describe its operation as 'malicious.' ”


Malware is a concept, not a codified industry standard, so there's no official definition. The networking company Cisco describes malware as "code or software that is specifically designed to damage, disrupt, steal, or in general inflict some other 'bad' or illegitimate action on data, hosts, or networks." Massachusetts Institute of Technology's Information Systems and Technology Department describes it as "Any software that gets installed on your machine and performs unwanted tasks, often for some third party's benefit."

A clandestine FBI surveillance tool seems to squarely fit these definitions, though Alfin made it clear that the NIT didn't cause damage or harm to anyone's computers. The FBI may be getting sensitive about defending its digital investigation practices as situations like the Apple/FBI dispute call the agency's methods into question. The bureau will have to work hard to soften the image of secret spy software, especially in the wake of Edward Snowden's revelations about the National Security Agency. Whether or not it's a fair comparison, Americans are bound to feel like they've seen where these things can go.

May 19 2016 9:13 PM

What Glenn Beck Gets Right About Facebook and Bias

Glenn Beck, of all people, may have just helped to defuse a controversy over allegations of liberal bias at Facebook.

After sitting down with Facebook CEO Mark Zuckerberg and a host of prominent conservatives this week, the pundit wrote a blog post provocatively titled “What disturbed me about the Facebook meeting.” It’s a clever headline because it sets you up to expect a diatribe against the social network. Instead, Beck points the finger at his fellow conservatives for overreacting to one ex-Facebook contractor’s anonymous allegations. Those allegations, published by the tech blog Gizmodo last week, incited a media frenzy and unleashed a wave of conservative outrage in which Facebook became the latest emblem of the vast liberal media conspiracy. (Here’s a primer on the whole convoluted controversy.)


Beck came away from the meeting impressed by Zuckerberg—but by his fellow conservatives, not so much. “I looked around the room, I heard the complaints, I listened to the perspectives, and not a single person in the room shared evidence of any wrongdoing,” he writes. Instead, he goes on:

I sat there looking around and heard things like:
1) Facebook has a very liberal workforce. Has Facebook considered diversity in their hiring practice? The country is 2% Mormon. Maybe Facebook’s company should better reflect that reality.
2) Maybe Facebook should consider a six-month training program to help their biased and liberal workforce understand and respect conservative opinions and values.
3) We need to see strong and specific steps to right this wrong.
It was like affirmative action for conservatives. When did conservatives start demanding quotas AND diversity training AND less people from Ivy League Colleges.
I sat there, looking around the room at ‘our side’ wondering, ‘Who are we?’ …
What happened to us? When did we become them?

He adds:

The overall tenor, to me, felt like the Salem Witch Trial: ‘Facebook, you must admit that you are screwing us, because if not, it proves you are screwing us.’

Beck gets some important things right here. For the conservative politicians and talking heads who fanned this firestorm, it was never about “evidence.” (It rarely is.) It was about seizing an opportunity to stoke resentment and mistrust of the media. That resentment and mistrust is crucial to causes like convincing people that climate change is a hoax or that Donald Trump is qualified for the presidency.

That the controversy is largely the product of cynical conservative grandstanding is not Beck’s only insight. He also recognizes that it is very much in Facebook’s own business interests to appeal to conservatives every bit as much as liberals, and he sees that Facebook is smart enough to recognize that, too. He writes that Facebook has in fact been as much a boon for conservative media as it has for liberal outlets. (If there’s anyone suffering from Facebook’s reign over the media, it’s probably centrists and nonpartisan organizations, whose messages tend to be less conducive to reflexive likes and clicks.) He recognizes that Facebook’s “trending” news feature, which lies at the center of the controversy, is peripheral to what the platform is really about.

Beck is also right that big Silicon Valley tech companies and their leaders share some values in common with him:

These are people who want to innovate and disrupt, they want the government to stop regulating their businesses, they want small business to succeed, they value personal responsibility, etc. Why they are liberal? I don’t know, but in general, they’re not Progressives, at least not the folks I met with today (though I’m sure there were a few).

No, Mark Zuckerberg is not a progressive. That said, it’s no mystery why he and other Silicon Valley CEOs consider themselves liberal. They’re liberal mainly on social issues because social conservatism is rooted in fear and resistance to change, whereas Silicon Valley’s ethos is one of boldness and embrace of change. Still, Beck is not wrong to find some common ground.

But the biggest thing Beck gets right, at least partly, is that bias is human and natural, and that the key is not to deny one’s biases but to acknowledge them. Early in his post, Beck writes:

Before I dig in, since I’ll be talking about bias, let me share a bit about mine. I have been an avid Facebook user for about 8 years. I have 3.2 million followers. I consistently see high engagement on my Facebook page. We have begun using Facebook’s live video streaming platform and are encouraged by the results and plan on utilizing it more. The Facebook staff has always treated me and my staff kindly. They have been responsive, helpful, and available. I came into the meeting today wanting to believe that Facebook was a good, if not perfect, actor.

By acknowledging those biases, Beck allows us to better evaluate his arguments and understand how he arrived at his conclusions. In the same spirit, I should say here, in case it wasn’t blindingly obvious, that I am not a political conservative; that I’m generally not inclined to view Beck favorably; and that, as a writer for an online opinion magazine, I tend to view bias as something to be acknowledged, disclosed, and confronted rather than denied. As for my views on Facebook, they’re mixed, but based on my years of experience covering the company, I regard it as generally well-intentioned but also, like most for-profit multinational corporations, deeply self-interested. That probably helps to explain why I’m so certain that, as a matter of policy, Facebook would not intentionally suppress news of interest to conservatives, whose ad dollars are worth just as much as liberals’.

But to get back to Beck’s biases: It’s not hard to see, given his stated desire to think well of Facebook, how he came out of Wednesday’s meeting with Zuckerberg thinking just that. The fact that he saw “no evidence” of deeper problems at Facebook is also not surprising given that he was there for at most a few hours, in a carefully staged setting, no doubt surrounded by a phalanx of PR handlers.

And for all the things Beck got right, I think he also got a few wrong. Facebook is not as committed to “openness” as he seems to believe, and while Zuckerberg may be “earnest” in some ways he is certainly not guileless. Most significantly, Beck misses what I believe is Facebook’s own crucial role in creating the conditions for this controversy. Specifically, I’ve argued that it’s the opaqueness of Facebook’s product and its own refusal to admit the possibility of human bias that set it up for its public drubbing, silly though it may have been.

Several days into the story, after it had spiraled out of the company’s control, Facebook published the guidelines its curators rely on to decide which stories belong in the trending news section, and which sources to link to. (It did this only after the Guardian had published a leaked version of them.) That act of transparency, belated and grudging though it may have been, was exactly what the company needed to demystify its decision-making process. If only Facebook had been more open about this from the start, it might never have had to meet with Beck or his cohorts. Beck himself seems to get that. Why doesn’t Zuckerberg?

May 19 2016 12:03 PM

It’s So Hard to Get Good Digital Help These Days!

You don’t have to be a CEO to have an executive assistant anymore. Meet Siri, Alexa, Jibo, and Cortana, just four of the new artificially intelligent digital assistants from prominent technology companies designed to make your life easier. With a simple voice command, they will answer your every question, restock your groceries, order lunch, remind you of your next appointment, and fire up the latest episode of your favorite TV show.

But before you welcome them into your home, you might want to ask where that friendly (why do they all tend to be female?!) voice is really coming from. Behind this technology are companies vying to bring you into the conversation age. Whoever succeeds will know us more intimately than ever before—our needs, thoughts, and desires—allowing them to exert even more influence over our lives. Which raises the question: Will they help keep our secrets?


Join Future Tense for a happy hour conversation in Washington, D.C., at 6 p.m. on June 8 to discuss the technology behind our new helpers and their cultural implications for society. Slate senior technology writer Will Oremus will discuss the topic with Brigid Schulte, director of New America’s Better Life Lab program and the Good Life Initiative, and author of Overwhelmed: Work, Love, and Play When No One Has the Time. The conversation will be moderated by Washington Post opinion writer Alexandra Petri. Refreshments will be served. For more information and to RSVP, visit the New America website.

May 18 2016 6:00 PM

Google Launches New Chat and Video Apps Alongside ... Existing Chat and Video Apps

There's Google Messenger, there's old Gchat, there's Hangouts, there's all of Hangouts' specialized video chat services, but today at its I/O developer conference, Google announced two new apps: Allo for messaging and Duo for video. Huh.

Both incorporate tech like photo vision and machine learning to make it easier to get information and services from Google without ever having to leave the apps. Both are connected to your phone number so you can use them more like Apple's iMessage and FaceTime. Allo offers pre-written predictive responses to messages you receive and lets you use Google's AI assistant to do things like browse restaurants and book a dinner reservation from within a chat. You can also chat with the assistant directly to find out whether your flight is delayed and what the weather is where you're going.


Duo offers HD video calling but can also adjust to a lower quality if you try to use it on a slow connection. It switches between mobile data and Wi-Fi when possible, and generally seems designed to work in varied conditions—not just places with top connectivity infrastructure.

Both apps will be available on Android and iOS this summer.

Duo offers end-to-end encryption on all calls, and Allo has an Incognito mode (similar to turning your Hangouts history with someone off or going "off the record") that is also fully end-to-end encrypted and has a separate type of notification. The FBI may not be happy about it, but these security measures are becoming more standard for communication apps.

Google seems to be debuting Allo and Duo as alternatives to its main offerings, similar to how its Inbox service for Gmail has been hanging out for a couple of years alongside main Gmail. For something like chat, though, where it's more convenient when everyone you know is on the same platform, it's unclear why Google would choose to fragment its users instead of just updating Hangouts with a bunch of new features.

May 18 2016 5:56 PM

The Google Home Is Like the Amazon Echo, Only Smarter. And Maybe Creepier.

For more than a year now, there has been a popular tech gadget that is the only one of its kind on the market. The Amazon Echo, a “smart speaker” that you control by voice, was the company’s end run around the smartphone industry, which it failed to break into with the Fire Phone. Widely viewed as quixotic upon release, the Echo gradually won over many of its critics, and a surprising number of consumers, with its dead-simple interface and just enough practical use cases to insinuate itself into one’s daily routines. It was only a matter of time before one of the other big companies copied it. And now Google has.

At its annual developer conference Wednesday, the company announced Google Home, a “smart speaker” that—well, I probably don’t need to repeat it. It does basically the same stuff the Echo does, plus or minus a few features. It’s also very similar in design, if perhaps a little friendlier-looking. It bears some resemblance to an air freshener, or perhaps a modernist salt shaker.


Usually when big tech companies copy each other’s ideas, they put up some pretense of originality. Google, to its credit, barely bothered to pass off Home as its own innovation. In fact, in a moment of honesty and magnanimity that is nearly unheard of in the world of tech product launches, Google CEO Sundar Pichai explicitly cited the Echo’s success, saying, “Credit to the team at Amazon for creating a lot of excitement in this space.”

I can think of a reason, beyond politeness or human decency, why Pichai might feel comfortable offering this sort of credit to a rival product before extolling the virtues of his own. It’s that he’s supremely confident that Google can beat Amazon at its own game.

Yes, Amazon has a head start in the “smart speaker” space, and the Echo offers more integrations with services like Spotify and Domino’s and 1-800-Flowers than the Google Home will at launch.

But what Google knows is that “speaker” isn’t the operative word here, and the Echo isn’t the real product. The operative word is “smart,” and the real product is the voice-control virtual-assistant software that animates the speaker. In Amazon’s case, that’s Alexa. In Google’s case, it’s the newly rebranded “Google assistant,” which builds on the company’s already successful Google Now software.

Viewed through this lens, it’s actually Google that’s the incumbent here, with years and years of experience developing industry-leading voice recognition, natural language understanding, and conversational search technology. What Amazon found with the Echo was really just a fresh use case for the type of software that Google has been building all along.

As a result, Google Home will enjoy two big advantages over the Echo right from the beginning. First, the virtual assistant that lives inside it (or, more precisely, that resides in Google’s server cloud), will be essentially the same one that already lives inside some 1.5 billion people’s Android devices. As a result, it will connect directly and seamlessly to the many Google services that people know and use, like Google Maps, Gmail, and Google Calendar.

Second, Google assistant is likely to be far more intelligent than Alexa, in the sense that it will be better at both understanding your queries and answering them. Ask Alexa a question about the world, and it will recite an answer straight from Wikipedia, one of a very limited number of information sources to which it has access. Ask Google assistant a question about the world, and it will tap into all of the knowledge and power of Google search. Not only that, but it will draw on Google’s state-of-the-art “conversational search” technology, which intuits not only the denotative meaning of a given query, but some of the conversational context that surrounds it.

So, as Pichai demonstrated, Google assistant will not only answer the question, “What is Draymond Green’s jersey number?”, but if you then ask it, “Where did he go to college?”, it will recognize that “he” refers to Green and will answer that question too. Alexa simply can’t do that yet. Which is why Pichai was not exaggerating when he bragged that Google assistant will boast capabilities “far beyond what other assistants can do.”
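
The context-carrying behavior Pichai demonstrated can be illustrated with a toy sketch. This is a drastic simplification of real conversational search, not Google's actual approach; the class, the fact table, and the matching rules are all hypothetical. The assistant simply remembers the last entity named, so a follow-up pronoun resolves against it:

```python
class ToyAssistant:
    """Remembers the last named entity so follow-ups like "he" resolve."""

    def __init__(self, facts):
        self.facts = facts      # {entity: {attribute keyword: answer}}
        self.last_entity = None

    def ask(self, question):
        q = question.lower()
        # If a known entity appears in the question, remember it.
        for entity in self.facts:
            if entity.lower() in q:
                self.last_entity = entity
        # A pronoun-only question falls back on the remembered entity.
        entity = self.last_entity
        if entity is None:
            return "Who do you mean?"
        for attribute, answer in self.facts[entity].items():
            if attribute in q:
                return answer
        return "I don't know."

facts = {"Draymond Green": {"jersey": "23", "college": "Michigan State"}}
bot = ToyAssistant(facts)
bot.ask("What is Draymond Green's jersey number?")  # → "23"
bot.ask("Where did he go to college?")              # → "Michigan State"
```

The second question never names Green, yet it answers correctly because the state persists between turns—the essence of the "conversational context" the demo showed, minus the machine learning that makes it work at scale.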

Like the Echo, the Home will also serve as a remote control for various household devices: “Turn the lights on in Kevin’s room” was Pichai’s example. Here, too, Google enjoys an incumbent advantage, thanks to its 2014 acquisition of the smart thermostat company Nest.

All of which might make the Google Home sound far more appealing than the Echo. But don’t forget that virtually everything Google does has a shadow purpose that it doesn’t talk much about, which is to collect data on users’ behavior and harness it to build a hyper-detailed profile of their likes, dislikes, buying habits, and nonbuying habits.

People have given the Echo somewhat of a pass in the privacy department, despite its radically intrusive potential as a surveillance device. (It listens to literally everything you say in your own home.) That may be because Amazon has relatively limited access to the rest of our private information. Not so for Google, which will now be as privy to everything we say and do offline as it is to our online behavior. To the extent that Google assistant is smarter than Alexa, it’s also likely to be that much creepier.  

May 18 2016 11:30 AM

Future Tense Newsletter: The Real Problem With Surveillance

Greetings, Future Tensers,

For some reason, whenever people worry about drones, they seem to worry about sunbathers. In an article for this month’s Futurography course on the alleged creepiness of drones, Margot E. Kaminski traces some of those references to accidental exhibitionists, proposing that they zero in on many of our most prominent anxieties about surveillance. “The problem with letting the sunbather narrative dominate drone privacy coverage is that it provides a woefully incomplete account of the kinds of privacy concerns that drones raise,” Kaminski writes.


Some of those concerns get a little easier to understand when you look into the ways drones operate in the wild. Slate video producer/editor Aymann Ismail tried to peer in on the private moments of some willing participants (that’s right, his boss let him spy on her). He found that his quadcopter was so loud, it was hard to use it without attracting notice, confirming a critical point made by Faine Greenwood in her article about drone myths from last week.

If we’re really concerned about privacy, drones might not be the right target. A Philadelphia-based computer science professor made national news last week when he noticed that someone (no one’s really sure who) had disguised an unmarked police SUV loaded with license plate reader tech as a Google Maps car. Meanwhile, Belgian police warned that Facebook’s new emoji reactions make it easier for the social network to track its users. And Stanford researchers showed that you can discern a great deal of personal information about individuals from their phone metadata. If anything, the relative obviousness of drones makes them far less problematic than some of these other surveillance technologies.

Here are some of the other stories that we wish we could inscribe on nickel plates this week:

  • Government hacking: The Supreme Court recently approved a change to Rule 41 of the “Federal Rules of Criminal Procedure,” effectively giving the government permission to hack almost anyone’s devices, almost anywhere in the world.
  • Internet magic: The Simpsons image quotation site Frinkiac just made it easier than ever to make GIFs from the series. Frinkiac’s creators talked to me about how it works.
  • Social media: Do Facebook’s supposed political biases affect what you see online? Will Oremus explains the latest controversies.
  • Wildfires: Stephen J. Pyne argues that the tragic Fort McMurray fires aren’t just indicative of climate change. They speak to a larger, more global shift toward what Pyne calls the Pyrocene.

Reconsidering my hashtags,

Jacob Brogan

for Future Tense

P.S.: Do you read the comments? Do you leave comments? Do you ignore the comments? Whatever your answer, we’d like to know about it! Please take a few minutes to complete our survey.

May 17 2016 6:25 PM

Why Does It Still Take Five Hours to Fly Cross-Country?! A Future Tense Event Recap.

In January 1959, the first transcontinental commercial jet trip flew from Los Angeles to New York City in five and a half hours. Today, the same trip will take a half hour to an hour longer (that is, if your flight isn’t delayed). A lot has changed since 1959—fares are less expensive, planes have reduced effects on the environment, and we’ve reached astonishing levels of safety—yet the speed hasn’t increased, and the romance of flying is gone. So why is the Concorde, the fastest commercial airliner ever built, currently sitting in a museum collecting dust? And what’s next for aviation? On Wednesday, May 11, Future Tense—a partnership of Slate, New America, and Arizona State University—brought together industry experts, leaders, and innovators to weigh in on the future of flight at an event in Washington, D.C.

Greg Zacharias, chief scientist of the U.S. Air Force, joined NASA Deputy Administrator Dava Newman in conversation with moderator James Fallows, national correspondent for the Atlantic, to discuss the historic role the Air Force and NASA have played in driving the research and investment that gets adopted by the private sector and creates jobs in the U.S. economy. In February, NASA announced the arrival of a new era of cleaner, quieter, and faster aircraft. “New Aviation Horizons,” an initiative included in the president’s budget, will design, build and fly a series of X-planes, or experimental aircraft, during the next 10 years. Newman emphasized the importance of investing in such new initiatives to ensure the United States is a leader of this field. According to her, the public/private partnerships are stronger than they’ve ever been with the goal of “transition[ing] these technologies sooner, quicker, and cheaper” into commercial markets.


The private sector, however, faces the financial challenge of taking designs to market. Richard Aboulafia, vice president of analysis for the Teal Group Corp., reminded the audience that even with the support of public sector partnerships, the commercial aviation industry is a low-margin business. So private sector companies that aim to design paradigm-shifting planes face the additional challenge of making them economically viable. Joining Aboulafia in conversation were representatives of three companies—Airbus, Boom Technology Inc., and Lightcraft Technology Inc.—that are attempting to do just that. Airbus and Boom are aiming to build and market the next supersonic jet that can achieve what the Concorde could not: unmatched speed at a cost-effective price. Leik Myrabo’s lightcraft technology aspires to achieve speed and environmental sustainability within an entirely new infrastructure for air travel that includes light-ports and laser-projecting satellites.

But it’s not just about the cool new technology. David Lackner, vice president and head of research and technology for North America Airbus Group Innovations, reminded the audience that the industry must also grapple with existing policy and infrastructure. For instance, one of the greatest barriers to supersonic air travel is bans on flight over land. When supersonic jets travel at a speed of Mach 1 and above, they generate the sound we know as the sonic boom. Today, NASA is working with Lockheed Martin on a preliminary design for Quiet Supersonic Technology, aircraft that can fly at supersonic speeds while only registering a soft thump. As the technology moves to market, the public’s appetite will change and so will the policies that once limited supersonic travel. For example, when consumers realize they can travel faster from Los Angeles to Tokyo than from L.A. to New York due to regulations of supersonic travel over land, policymakers will feel the need to respond. Michelle Schwartz, chief of staff of the Federal Aviation Administration, said the FAA is more collaborative with industry than ever before and she understands that with “industry moving at the speed of Silicon Valley, FAA can’t be moving at the speed of government.”

But new technology won’t fix our aviation system. We still have other problems to deal with—like long lines at airports and an air traffic control system that needs modernization.  Justin Powell, principal at Arup Group, and Diana Pfiel, CTO of Resilient Ops Inc., believe that innovation in the private sector can respond to the infrastructure problems that affect passengers’ journeys. For instance, Pfiel and her team use crowdsourcing and data sharing to increase transparency and give passengers more control of their experience by identifying the source of delays in airports.

As Fallows noted, “Flight today is both a miracle and a frustration.” Perhaps in the future the romance of flying will once again return.

May 17 2016 6:15 PM

You Can Learn a Scary Amount From Someone’s Telephone Metadata

The National Security Agency used the Patriot Act to justify its large-scale telephone metadata collection program until the relevant sections expired last year. But the NSA and other government agencies can still access and collect metadata in various ways. Now, a new study from Stanford University is reasserting the problems this poses for individual privacy.

The study, published Monday in the Proceedings of the National Academy of Sciences, used a custom smartphone app to collect the telephone metadata (phone numbers called, duration of calls, etc.) of 823 participants. The app logged 251,788 calls and 1,234,231 text messages. All told, these communications involved 62,229 unique phone numbers.


The researchers analyzed the metadata using a mix of automated techniques and manual work. They found that they could establish personal details about the study participants fairly easily by sorting the data in different ways, and with “limited resources—far below those available to a large business or intelligence agency.” Depending on which numbers people called or texted, how long the calls lasted, and whom they contacted next, the researchers could figure out things like people’s medical conditions, romantic relationship statuses, and identities.
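The kind of inference described above can be illustrated with a toy sketch. Everything below is invented for illustration—the phone numbers, the directory, and the code are not from the study—but it shows the basic idea: matching called numbers against a directory of known organizations turns supposedly anonymous metadata into a profile.

```python
from collections import Counter

# Hypothetical directory mapping known phone numbers to the kinds of
# organizations they belong to (analogous to public listings).
DIRECTORY = {
    "555-0100": "cardiology clinic",
    "555-0123": "firearms retailer",
    "555-0199": "pharmacy",
}

# Each record is (number called, duration in seconds) -- metadata only,
# with no call content involved.
calls = [
    ("555-0100", 420),
    ("555-0199", 90),
    ("555-0100", 310),
    ("555-0177", 60),  # unlisted number; tells us nothing here
]

def infer_interests(call_log):
    """Count contacts with categorized numbers. Repeated calls to a
    cardiology clinic followed by a pharmacy suggest a medical condition,
    without anyone hearing a word of the conversations."""
    return Counter(DIRECTORY[num] for num, _ in call_log if num in DIRECTORY)

print(dict(infer_interests(calls)))
# {'cardiology clinic': 2, 'pharmacy': 1}
```

The real study worked at far larger scale and combined this kind of lookup with timing and sequence patterns, but the underlying principle is the same: who you call is itself revealing.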

Telephone metadata has been used relatively freely by government agencies on the assumption that it is anonymized and meaningless without context. Director of National Intelligence James Clapper said in 2013 that the agency’s bulk metadata collection program “does not allow the Government to listen in on anyone’s phone calls. The information acquired does not include the content of any communications or the identity of any subscriber.” But the study results reflect what privacy advocates have been saying for years: Collecting telephone metadata can erode individual privacy.

“Telephone metadata is densely interconnected, susceptible to re-identification, and enables highly sensitive inferences,” the researchers wrote. “The results of our study are unambiguous: there are significant privacy impacts associated with telephone metadata surveillance.”

The researchers are far from the first to reach these conclusions, but their empirical approach serves as a valuable reminder.

May 17 2016 12:54 PM

Belgian Police Caution Facebook Users About the Privacy Implications of “Likes”

In February, Facebook announced that the “like” button would be joined by five other emoji reaction options. For users who had wanted a dislike button, it was a welcome change. But people recognized from the start that Facebook stood to benefit from giving its users this enhanced flexibility, too, by gaining more insight into their preferences and moods—and that this raised potential privacy issues.

On Wednesday, the Belgian federal police published a statement discouraging citizens from using Facebook’s reaction buttons. It explains that giving Facebook data about your opinions and moods allows the company to serve you ads based on what it thinks you will be most receptive to in a particular moment or on a particular day.


My colleague Will Oremus wrote on Slate in February, “Facebook has come to believe that the key to its long-term success lies in gathering ever more and ever richer data on how its users react to the posts they see in their feed.” Limiting the reaction buttons to six choices, rather than opening up the whole emoji library, gives Facebook a simplified, distilled signal of what its users think instead of endless combinations to wade through.

“By limiting their number to six, Facebook is counting on the fact that you’ll express your thoughts more easily, which will allow the algorithms running in the background to track you better,” the Belgian police explain (as translated by Slate’s L.V. Anderson). “This will be a reason not to click too quickly if you want to protect your private life.”

Facebook’s “like” button has already been involved in legal questions about protected speech and the First Amendment. It feels like such a small thing to react to a photo of your friend’s new dog, but that tiny piece of data offers insight into who you are.