Future Tense
The Citizen's Guide to the Future

April 10, 2017, 5:38 p.m.

Someone Set Off All of Dallas’ Tornado Sirens Over the Weekend. But Was It a Hack?

Many computer security breaches are designed to stay under the radar, so they remain undetected and unmitigated for as long as possible. But every once in a while we see a breach meant to draw as much attention and wreak as much havoc as possible. If ever a security breach was designed to be difficult to ignore, it was the one exploited in Dallas last week to set off 156 emergency sirens—typically used to warn residents about tornadoes and other serious weather events—for more than an hour and a half on Friday night and into early Saturday morning, until the city finally unplugged and shut off the entire alert system.

It was an interesting security breach not just because it was loud but also because it targeted an emergency alert system. Those alert systems have become increasingly integrated with modern computing technology over the past few years. Where once people might have had to listen to the radio or watch a TV news program to learn about school and other closings due to inclement weather, now every snow delay is simultaneously conveyed via multiple automated emails, text messages, and voicemails. Without your even asking or signing up, your phone may abruptly warn you about serious weather events in your area (sometimes even with a loud, rather sirenlike noise).

Of course, there are good reasons to make these alerts more widespread and more intrusive, and to convey them over multiple different channels. If there actually is an emergency—weather-related or otherwise—it’s important for as many people as possible to find out about it as quickly as possible. But it shouldn’t come as a surprise that as we’ve upgraded and advanced the technology behind our emergency alert systems, they’ve become increasingly vulnerable to compromise.

Which makes it all the more notable that the Dallas system was apparently compromised not because of any high-tech computer-based vulnerability, but rather by a more old-school technology: broadcast. According to the Dallas News, the compromise was perpetrated by broadcasting tones via radio or telephone signal on the specific frequency that was used to communicate with the warning sirens. Since the emergency shutdown on Saturday, officials have apparently added “some encryption” to the broadcast system to make it harder to manipulate.*
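Officials haven’t detailed what that fix involves, and “some encryption” in this context more likely means authenticating commands than scrambling them: the core weakness of a tone-activated system is that anyone who records or reproduces the tones can replay them. Below is a minimal, purely illustrative sketch of authenticated activation commands using an HMAC tag plus a freshness check; all names are hypothetical, and real siren controllers work quite differently.

```python
import hashlib
import hmac
import os
import time

# Hypothetical shared secret, provisioned to the controller and each siren.
SHARED_KEY = os.urandom(32)

def make_command(key: bytes, action: str) -> bytes:
    """Build an activation command carrying a timestamp and an HMAC tag."""
    payload = f"{action}|{int(time.time())}".encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_command(key: bytes, message: bytes, max_age: int = 30) -> bool:
    """Reject commands with a bad tag or a stale timestamp, so a
    recorded broadcast can't simply be replayed later."""
    payload, _, tag = message.rpartition(b"|")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or tampered command
    _, _, ts = payload.partition(b"|")
    return int(time.time()) - int(ts) <= max_age

cmd = make_command(SHARED_KEY, "ACTIVATE")
print(verify_command(SHARED_KEY, cmd))              # fresh, authentic command passes
print(verify_command(SHARED_KEY, cmd[:-1] + b"X"))  # tampered command fails
```

The point of the sketch is the contrast: plain tones on a known frequency carry no proof of who sent them, while a keyed tag plus timestamp does.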

The officials investigating the incident believe it was perpetrated by someone locally within Dallas who had physical access to the central alert system hub that connected all of the sirens. If they’re right about that, it’s pretty good news—it’s easier to physically secure a main operations center, and it increases the odds of actually catching the perpetrator. An emergency alert system that someone needs physical access to in order to compromise is significantly more secure than one that can be activated remotely.

These lessons about manual fail-safe switches and the dangers of remote access are, in large part, the same ones we return to over and over again in discussions about how to protect the “critical infrastructure” from cybersecurity breaches. What precisely should be included in critical infrastructure has been the topic of long and ongoing debates—most recently, in January, when the Department of Homeland Security designated election equipment as critical infrastructure.

Those designations matter because the government gets more involved in handling the security of critical infrastructure systems (for instance, the electric grid and financial systems) than it does non-critical systems. These critical infrastructure systems are the ones DHS deems “so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety,” and the department has outlined infrastructure elements in 16 different sectors that meet these criteria. Depending on who you ask, organizations responsible for protecting critical systems either receive more support and intelligence from the government, or are subject to more onerous regulations and requirements put in place by the government. Each of those 16 sectors has a National Infrastructure Protection Plan, and emergency alert systems are mentioned in both the Communications Sector and the Emergency Services NIPPs.

The Dallas siren breach serves as a reminder that critical infrastructure is more than just the few things that we usually think of as “critical” like the energy grid. On the one hand, the Dallas incident seems almost like a silly prank (at least to those of us whose sleep wasn’t disrupted). But if people stop taking those sirens and other alert systems seriously when they go off, then all of the effort that’s been put into updating them to be more effective and widespread will have backfired completely.

*Update, April 10, 6:45 p.m.: This piece was updated with new information about the investigation into the security breach.

April 6, 2017, 1:37 p.m.

Futurography Newsletter: Synthetic Biology and the Space Race

Hello, fellow Futurographers,

This month, Futurography is focusing on synthetic biology, an emerging field that draws on engineering and computer science principles to reshape the basic stuff of life itself. It’s our final course for the current academic year, and we’re excited to lead you through it. We’ve already got a cheat sheet that lays out some of the basic terminology, debates, and other readings. And we’re also opening with our usual conversational introduction to get you ready for the other articles that will be coming in the weeks ahead.

In the meantime, here are the articles we published last month in our unit on the new space race:

Once you’ve read all of those, take our quiz to find out how much you’ve learned. And check out what your fellow readers think about the topic in our survey write-up.

Synthetically yours,

Jacob Brogan

for Future Tense

April 6, 2017, 1:15 p.m.

What Slate Readers Think About the New Space Race

Throughout March, we published articles about the new space race as part of our ongoing project Futurography, which introduces readers to a new technological or scientific topic each month. We’ve covered a range of issues, but we’re also interested in what you have to say, so we’ve written up the results of our survey on the topic. Meanwhile, Futurography continues with our April course on synthetic biology.

There was no universal agreement among readers on the question of what space projects are most exciting. Many enthused about the prospect of getting humans to Mars, while others were more intrigued by sending a lander to Europa—and perhaps even searching for life there. Some emphasized targets that are a little closer to home, such as setting up a colony on the surface of the moon. And though most readers took the question seriously, one joked (we hope!) that “Sex with green women” was the real goal.

Regardless of where we’re headed—or what we’re trying to do when we get there—most thought that some combination of robotic and crewed missions would be ideal. “The robotic exploration of the solar system is an absolute triumph, as is the Hubble. But human travel to space is going to happen, and we should be doing it,” one wrote. A few, though, were skeptical, offering opinions such as, “What added value do humans in space bring? Humans in space is a circus sideshow.” Or, as another put it, we should hold off on human exploration until we’re ready for serious extra-planetary colonization efforts, since, “Just sending people up to orbit the Earth does nothing.”

That said, most agreed that “efforts like landing a human on Mars” had potentially important political ramifications. “The first humans on Mars will either be American or Chinese. The political impacts could be large,” one proposed. Others suggested that the public enthusiasm drummed up by a major space mission might be the most important element: “Without public involvement, the political capital required for an effective space program will evaporate in favor of more immediate terrestrial concerns.” But some were concerned that it might be dangerous to let nationalist interests drive efforts in space. One such respondent wrote, “I’d like to think it could be done apolitically. No one owns the moon; no one should own mars.”

Many readers seemed convinced that more economically driven endeavors such as asteroid mining could yield real results. Indeed, one optimistically predicted, “Asteroid miners will be the first trillionaires.” Others were skeptical, suggesting that it would be decades before we see any real results. Another mused, “With the tremendous cost of sending things into deep space, I don’t see how we could get a decent return on our investment.” And a few worried about the potential risks, asking, for example, “What happens if there is a glut of asteroid minerals that crashes the base metals markets to the point that it's no longer profitable to launch rockets to mine asteroids?”

Readers were less divided on the question of new countries joining the space race, tending on the whole toward cautious optimism. While a few wrote that it was too early to say which efforts would be most successful, many singled out for praise projects underway in India and the United Arab Emirates. And others argued that it was good for any nation to give it a shot, since “new technology is always good for any country.” “These are very long gestation projects, so returns will take longer,” one wrote, and another suggested that they might be most effective “as incubators for high-tech engineering knowhow and carriers of national pride.” Even some of those who were unsure suggested that it might have merit, as did one who wrote, “[I]t’s a good thing to get everyone into the picture. Maybe that would be a first step towards a United Earth.”

Some of those who felt that less powerful nations shouldn’t be investing in space felt that these nations should instead be ceding the field to private companies. Indeed, many wrote that even NASA might need to take a backseat to commercial initiatives, though by and large they seemed to think that the future would entail a balance of public and private. “Governments have an important role to play in the future of space exploration, but it is a delicate role,” one said. Developing a similar line of reasoning, another argued, “In particular, government space programs will provide the framework around which private space companies can build.” And as one more reader wrote, “[A]ll collaborative efforts are exciting including returning to the moon, going to Mars, and further studying the climatology of the home planet.”

This article is part of the new space race installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month, we’ll choose a new technology and break it down. Future Tense is a collaboration among Arizona State University, New America, and Slate.

April 5, 2017, 5:54 p.m.

Netizen Report: Online Battles Break Out Amid Elections in Armenia and Ecuador

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Ellery Roberts Biddle, Rezwan Islam, Kevin Rothrock, Nevin Thompson, Laura Vidal, and Sarah Myers West contributed to this report.


Around the world, elections have become a time when internet censorship, online harassment, and the deliberate spread of misinformation escalate.

Last year, social media networks were blocked in Uganda and Montenegro during national elections. In the Gambia and Gabon, the internet was shut down altogether. Harassment of political candidates and activists alike was a hallmark of special parliamentary elections in Macedonia, presidential elections in the United States, and last week’s chief executive elections in Hong Kong. As elections approach in Iran, hardliners are pressuring President Rouhani’s administration to block the Telegram messaging app. And the tide of harassment is rising in Russia and France as both nations prepare to choose their next leader.

Elections in Armenia and Ecuador last week followed the trend. The lead-up to general elections in Armenia was marred by apparent attempts to thwart the online activities of journalists and at least one incident of fake news. A prominent civil society figure reported attempts by government-backed hackers to access his email account, and several Armenian journalists said their accounts on Twitter were suspended just before the vote. Their accounts have since been restored. Election monitors from the Organization for Security and Co-operation in Europe said the elections, which resulted in the victory of the current ruling Republican Party of Armenia, were generally well-administered, but still tainted by credible accusations of vote-buying and undue influence on civil servants and employees of private companies.

Meanwhile in Ecuador, opposition party candidates and supporters faced a wave of social media account suspensions, as did media rights nongovernmental organizations. Pro-opposition websites and a handful of NGOs publishing voter education information noticed a sharp decline in web traffic on local internet service providers on the evening of the election, leading to suspicions of interference. The website for the internet access group Usuarios Digitales also experienced a distributed denial-of-service attack a few days before the election, as did the website of opposition candidate Guillermo Lasso.

LiveJournal seals deal with Russia, bans “political solicitation”
The blogging platform LiveJournal—which was acquired by the Russian company SUP Media in 2007—has now moved its data servers from California to Russia. This means that the website’s data will now be fully accessible to Russian police, in accordance with Russia’s recently enacted “anti-terrorist” legislation. Along with other restrictions, an updated LiveJournal user agreement prohibits “political solicitation” but does not define the meaning of the term.

Will Google go back to school in China?
Google Scholar may be the first Google service to return to China, according to a statement by senior Chinese lawmaker Liu Binjie. “China has been in touch with Google through various channels,” Liu said, signaling hope that a part of Google’s business (which Liu described as “service functions that do not involve [politically] sensitive information”) would be the first to return to China and be gradually followed by others. Google pulled its services from the Chinese market in 2010 due to a conflict over the country’s strict censorship rules, but it has expressed interest in returning to the market. Liu’s remarks did not make clear whether censorship would be implemented in any of Google’s services.

Bangladesh regulator rejects Facebook bedtime ban
The Bangladesh Telecommunication Regulatory Commission rejected a proposal to block Facebook between midnight and 6 a.m., a measure proposed by lawmakers who argued that Facebook was “dimming the working capabilities of the youths” and should therefore be inaccessible during bedtime hours. The BTRC instead has suggested that parental controls and privacy and security features would be a better method to achieve this objective. The BTRC’s recommendation now leaves the final decision up to the government.

UAE intellectuals and activists face threats in court and online
Prominent academic Nasser bin Ghaith was sentenced by a United Arab Emirates court to 10 years in prison for “posting false information” on Twitter about UAE leaders “in order to harm the reputation and stature of the State.” According to Amnesty International, bin Ghaith said on Twitter that he had not been given a fair trial in an earlier case in which he and four others were prosecuted for “publicly insulting” the country’s leaders in online comments, in what is known as the UAE Five case. He may appeal the sentence within 30 days. Ahmed Mansoor, another member of the UAE Five, was arrested the week prior.

Supporters of Mansoor who tweeted about his arrest were attacked for defending a “traitor that deserves to die.” Global Voices volunteers analyzed the interactions between supporters of Mansoor and those who attacked them and found that some of the most influential tweets critical of Mansoor came from accounts that appear to belong to government employees or affiliates.

Stingrays, Canada-style
An investigation by the Canadian Broadcasting Corporation found that IMSI catchers—devices that spy on cellphones within a geographic area—are being used around Ottawa’s Parliament Hill. Also known as Stingrays, IMSI catchers work by mimicking a cellphone tower and scooping up the data available within an area, typically of all devices within a radius of about a third of a mile. Though CBC journalists confirmed that the IMSI catchers were being persistently used around Parliament, they were not able to determine who was using them.

Indian developers take on “fake news” on WhatsApp, Facebook
Two Indian technologists, Bal Krishn Birla and Shammas Oliyath, are working to build a website, Check4spam.com, that will help to detect fake messages spread using WhatsApp and Facebook. The initiative aims to curb the spread of false and malicious information and to “make life easy for the common man and life trouble for spammers.”

April 5, 2017, 4:12 p.m.

Free Police Body Cameras Come With a Price

On Wednesday, Taser International announced it is changing its name to Axon—and that it is offering every police department in the United States free body cameras, plus free software and data storage for one year. This announcement is a big deal, but not because it’s a great boon to policing. It isn’t.

Since the Ferguson protests in August 2014, lawmakers, watchdog groups, and even many police chiefs have embraced police body cameras as a tool of accountability. The Department of Justice has offered millions to local departments to purchase them. At first, this widespread enthusiasm seemed justified. A body camera on every cop would, in theory, record every controversial police encounter, and its very presence would deter misconduct.

But body cameras have not entirely lived up to that promise. Regulations about how, when, and whether to use them vary widely by jurisdiction. Many police departments have adopted the technology first, intending to figure out the details later. The absence of clear or uniform regulations has prompted concerns that body cameras are becoming surveillance tools of the police rather than an assurance of accountability to the public. In other words, when there are few limits on what can be recorded, it may be that everything and everyone will be.

Even if body cameras end up serving as surveillance tools, we should expect the police, along with state and local governments, to be the ones deciding what these tools do. But that hasn’t proven to be the case, either.

With body cameras, procurement is policy. The model and manufacturer a police department chooses will determine how the technology is used in the field. That’s because choices about substantive issues like data production, storage, and sharing are issues of design. Who should be able to decide whether a body camera is turned on: the officer or someone at the precinct? Should the camera feature a video or audio buffer that constantly records the often crucial seconds before anyone hits the record button? Should the camera record the video or livestream it? These decisions are made by private companies, not public police departments.
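That pre-record buffer question, for instance, comes down to a simple data structure: a fixed-length ring buffer that continuously discards old frames until the record button is pressed, at which point the buffered seconds are prepended to the clip. A toy sketch of the idea (the class and its behavior are hypothetical; real cameras operate on encoded video, not Python lists):

```python
from collections import deque

class PreRecordBuffer:
    """Keep only the last `seconds` of frames, so footage from just
    before the record button is pressed can be saved with the clip."""

    def __init__(self, seconds: int, fps: int):
        self.buffer = deque(maxlen=seconds * fps)  # old frames fall off the back
        self.recording = []
        self.active = False

    def push(self, frame):
        if self.active:
            self.recording.append(frame)
        else:
            self.buffer.append(frame)

    def start_recording(self):
        self.active = True
        self.recording = list(self.buffer)  # prepend the buffered seconds

# Simulate 10 seconds of idle footage at 1 frame/second with a
# 3-second buffer, then hit record for 2 more seconds.
cam = PreRecordBuffer(seconds=3, fps=1)
for t in range(10):
    cam.push(f"frame-{t}")
cam.start_recording()
for t in range(10, 12):
    cam.push(f"frame-{t}")
print(cam.recording)  # the clip starts 3 seconds before the button press
```

Whether such a buffer exists at all, and how long it is, is exactly the kind of substantive choice currently baked into hardware by the vendor rather than set by public policy.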

No one is more dominant in this field than the organization formerly known as Taser International, which already controls at least three-quarters of the existing police body camera business, according to a July 2016 New York Times article. Building on relationships with 17,000 of the country’s police departments established through its electric stun gun business, its Axon division has cornered the market on body camera contracts. In fact, the body cameras themselves aren’t nearly as profitable as Axon’s cloud service, which stores the massive amount of data generated by police body cameras and offers departments the software to analyze it. Axon’s cloud service subsidiary, Evidence.com, requires police to purchase yearly subscriptions. Every incentive exists to lock in law enforcement agencies early for these recurring, long-term services. Axon isn’t really a body camera company; it’s building a law enforcement platform.

In becoming the biggest vendor of police body cameras, Axon is exerting an undue influence on policing itself. Important questions about how body cameras operate and how their resulting data should be treated have been outsourced to a private company. Community oversight over policing is impossible when critical decisions about a surveillance technology have already been made by a vendor.

And Axon is eager to remain dominant in the body camera marketplace. Axon CEO Rick Smith has said that he expects to have facial recognition technology in his cameras sometime in the near future. The company recently acquired two artificial intelligence firms, which suggests that it intends to apply AI to sift through the petabytes of data in its possession. Police body cameras would be instrumental in collecting “one of the richest treasure troves you could imagine” of data for applications like predictive policing. Whether to incorporate these capabilities is a private choice, not a public one.

The usual mechanisms for police oversight don’t apply here. Want to find out what plans Axon has for future applications of its technology? Axon, like other technology vendors contracting with the police, isn’t subject to public records laws. Axon is a private company beholden to its shareholders, not the communities whose police officers adopt its cameras. Curious about the algorithm used to identify suspicious behavior in body camera video? Like other technology companies providing services to the government, Axon will likely invoke concerns about trade secrets in order to keep such information non-public. (I reached out to Axon for comment but had not received a response at the time of publication.)

Body cameras require a careful balancing of interests between privacy and law enforcement needs. The resulting data from body cameras, stored for ever-decreasing costs, might one day serve as a virtual time machine for the police, allowing them to watch the movements of people who were not targets at the time their movements were captured. The keys to that time machine ought to be held by governments accountable to their constituents, but it seems increasingly likely that they’ll be held by Axon.

April 5, 2017, 11:42 a.m.

Future Tense Newsletter: What Exactly Is Synthetic Biology?

Greetings, Future Tensers,

It’s a new month, which means a new Futurography—the series in which we at Future Tense introduce you to the technologies that will define tomorrow. Each month, we choose a new tech topic and break it down, and April’s topic will be synthetic biology. Jacob Brogan gets us started with a conversational introduction explaining what synthetic biology is exactly and a cheat sheet to guide us through the key players, the big debates, and the lingo we should know.

In the past week, we also wrapped up the Futurography unit on the new space race, with pieces on why the United Arab Emirates is building a space program from scratch and how international collaborations in space reflect politics on Earth. Once you’ve read everything from the unit, you can test your knowledge by taking our quiz and then share your thoughts on the topic by completing our reader survey.

Other stories we read this week while imagining what fun could be had with a Cards Against Humanity expansion pack inspired by members of Congress’ browser histories:

  • Internet censorship: Future Tense fellow Emily Parker writes that Russian authorities want to mimic China’s approach to the internet—but it may be too late.
  • Patent law history: Charles Duan takes us back to the early days of the motion picture industry to teach tech companies a lesson on what not to do when trying to control how the public uses their products.
  • Twitter replies: Twitter didn’t ruin itself by changing how replies work—it improved how people engage on the platform, says Will Oremus.
  • Internet privacy: This week President Trump made it easier for internet service providers to collect, mine, and sell customer information. Sharad Goel and Arvind Narayanan explain why we shouldn’t be comforted by ISPs’ promises to protect customer privacy.

Watching tweets burn,
Emily Fritcke
For Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

April 4, 2017, 6:08 p.m.

Why You Shouldn’t Be Comforted by Internet Providers’ Promises to Protect Your Privacy

This week President Trump signed a congressional resolution to repeal protections—scheduled to go into effect in December 2017—that would have prevented internet service providers like Comcast, AT&T, and Verizon from collecting, mining, and selling customer information without permission. Internet providers have sought to assure customers their privacy will still be protected. Comcast, for example, wrote that it has “no plans” to “sell our broadband customers’ individual web browsing history.”

But let’s be clear: Despite such declarations, letting internet providers monetize sensitive web browsing data is bad for consumers.

Let’s leave aside the fact that “no plans” is not the same as “never will,” and that selling a specific individual’s history is—despite stunts trying to buy records for members of Congress—an admittedly unlikely outcome. More worrisome is the possibility that governments will order internet providers to turn over their records in certain cases. The Federal Communications Commission rules would not have stopped the government from requesting data from ISPs, of course—but because ISPs can now monetize browsing data, they will collect far more of it, and all of that information becomes accessible to law enforcement. Though such requests might first be justified by national security, it’s not hard to imagine a world in which routine government background checks involve scrutinizing a job applicant’s online behavior. Data breaches carried out by domestic or foreign hackers, or by disgruntled employees, are an even more immediate risk of collecting and storing sensitive web records, exposing users to blackmail and scams.

In the near term, internet providers may monetize web browsing records by selling anonymized user data to advertisers in bulk. It’s unlikely, however, that these companies would be able to fully decouple browsing records from personal details. In a paper to be presented this week, we show—in collaboration with our Stanford colleagues Jessica Su and Ansh Shukla—that “anonymous” web browsing records often contain an indelible mark of one’s identity. We recruited nearly 400 users to send us their web browsing data stripped of any overt personal identifiers. In 70 percent of cases we could identify the individual from their web history alone.
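The paper’s actual technique links browsing histories to public social media activity, but the underlying intuition is simple: a distinctive combination of visited sites acts like a fingerprint. The toy sketch below illustrates that intuition with an overlap (Jaccard) score; every profile and site name is invented, and this is not the authors’ method.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two sets: shared items over total items."""
    return len(a & b) / len(a | b)

# Hypothetical candidate profiles: sites each user is known to
# frequent (e.g., links shared publicly on social media).
profiles = {
    "alice": {"nytimes.com", "arxiv.org", "knitting.blog"},
    "bob":   {"espn.com", "nytimes.com", "crypto.forum"},
    "carol": {"arxiv.org", "knitting.blog", "weather.gov", "crypto.forum"},
}

# An "anonymized" browsing history, stripped of overt identifiers.
anonymous_history = {"arxiv.org", "knitting.blog", "weather.gov"}

# The profile with the highest overlap is the best guess at the
# person behind the "anonymous" record.
best = max(profiles, key=lambda u: jaccard(profiles[u], anonymous_history))
print(best)  # the distinctive site combination points to "carol"
```

Even this crude matching works because few people share the same combination of niche sites, which is why stripping names from browsing records offers far less protection than it seems to.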

Proponents of deregulation argue that companies like Google and Facebook, which are not internet providers, were never barred from collecting and selling user data. FCC Chairman Ajit Pai decried this inconsistency as improper government intrusion in “picking winners and losers.” But Pai’s sentiment is misguided. Leveling the playing field by dismantling online privacy is a convoluted way to help consumers. It would be better to hold all companies to a higher standard, limiting the scale and scope of the data they can collect, store, and sell—protections that are already mandated in the European Union.

Internet providers also hold a unique position in the web ecosystem: They are the gateway to online activity, and there are few practical steps consumers can take to prevent surveillance. For example, using a virtual private network, or VPN, simply shifts privacy concerns from internet to VPN providers. In contrast, we found that consumers can employ ad blockers and other browser tools to prevent companies like Google and Facebook from tracking their online behavior.

Further, one-third of Americans have no choice in internet provider. When given a choice in online services, many do opt for privacy: The nontracking search engine DuckDuckGo has millions of users, and privacy worries likely contributed to the meteoric rise of social networks such as Snapchat and messaging apps like Signal. When competition is limited and the market doesn’t address consumer needs, government regulation is a natural solution.

Some insist that stricter regulation of internet companies would break the web. They claim that without detailed data on individuals, online advertising would be less effective, and less effective ads would mean less revenue for website operators, forcing sites out of business and ultimately hurting consumers. The reality, though, is complicated. In another recent study—with Ceren Budak, Justin Rao, and Giorgos Zervas—we show that highly targeted banner ads are not as big a source of revenue as one might expect. Many online news sites indeed recognize the limits of targeted advertising and have started charging readers for premium access. (Slate, for instance, has a membership program called Slate Plus.) Reasonable privacy policies won't fundamentally alter the economics of the web.

One bright spot in the impending repeal is that website operators may be catalyzed to take simple and long overdue steps to improve user privacy and security. For example, adult sites have already started to encrypt traffic to their domains, masking information from internet providers and other snoopers. A free and easy-to-use tool called HTTPS Everywhere ensures browsers use encryption when possible, even when sites don’t enable it by default. Though a welcome change, encryption is a partial solution: Internet providers can still record which sites users visit; they just can’t monitor what users view there.
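As a rough illustration of that split: even over HTTPS, the hostname typically leaks to the network through DNS lookups and the TLS handshake, while the path and query string travel encrypted. (This is simplified; the example URL is made up, and newer features like encrypted DNS can change what leaks.)

```python
from urllib.parse import urlsplit

url = "https://example.com/health/condition-search?q=symptoms"
parts = urlsplit(url)

# Visible to the ISP: the host being contacted.
print(parts.hostname)

# Hidden inside the encrypted session: which page, and what was searched.
print(parts.path, parts.query)
```

So an internet provider learns that a subscriber visited a given site, and when, but not the specific pages or searches within it.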

Online privacy regulation is unlikely to improve in the current political climate. Absent government intervention, the burden is on consumers to demand internet providers and websites respect their privacy.

April 3, 2017, 5:59 p.m.

Worried About Your Online Privacy? Mozilla’s Executive Director Has Tips for You.

For those of us who believe privacy is an essential part of a healthy internet, it’s been a bruising few weeks. Congress recently voted to overturn Federal Communications Commission rules that would have helped put people in control of how their internet service providers handle private data. As a result, Washington made the already-uncertain future of online privacy that much more bleak.

It’s a setback in a privacy fight that’s growing more and more challenging. The web today is largely an ecosystem where our personal information is currency. The “free” products and content we engage with every day may not lighten our wallets, but there’s a steep cost: Our most intimate searches, clicks, and habits are tracked and transacted.

Exchanging personal data for a service or app isn’t inherently bad. But too often, this system is broken. Users aren’t in control of how their data is being used: According to Pew, 91 percent of American adults say consumers have lost control of how personal information is collected and used by companies. And according to a recent survey from Mozilla, where I work as executive director, more than 90 percent of participants admitted they don’t know much about protecting themselves online.

Meanwhile, the internet is expanding exponentially, threading its way through our cars and cities and appliances. The web today is where we flirt, gossip, commiserate, and plug tawdry questions into search bars. And there’s no shortage of companies who want to watch.

All this is why last week’s news is so jarring. The FCC rules, had they not been dismantled, would have gone into effect later this year and offered some relief to those 91 percent of Americans in the Pew poll. But no longer.

The repeal should be a call to action. And not just to badger our lawmakers. It should be an impetus to take online privacy into our own hands.

The first step is privacy education. We teach typing skills in schools and we teach JavaScript, but what about the vast in-between? Like educating about the business models behind app stores, or how to adjust browser settings, or how to manage passwords safely? VPN and PGP needn’t be intimidating words. (A VPN is a virtual private network, and PGP is Pretty Good Privacy, an encryption program.)

This education needs to come from a network of experts, civil society organizations, and media who understand the value of a healthy internet. The good news: It’s starting to happen.

Last week, the nonprofit and open-source community FreeCodeCamp explained VPNs in plain English and sounded off on the importance of HTTPS. Meanwhile, the Electronic Frontier Foundation is offering surveillance self-defense kits. They’re rich resources that detail everything from encrypting your iPhone to deleting personal data off Linux, Mac, and Windows machines. At Mozilla, we’re demystifying online privacy in our Internet Health Report.

These education efforts go beyond tech organizations. Last month, Consumer Reports shared a playbook for choosing a secure messaging app, with input from security researchers and privacy activists. And after the congressional vote on the FCC rules, Vogue published a dispatch on encrypted messaging apps and VPNs.

After education comes action—everyday internet users intentionally choosing tools that value privacy. This means selecting messaging apps where encryption is the default, like Signal and WhatsApp. And consider tools that block invasive online tracking and collection of your personal data, like Disconnect. Use Tor, the free software that routes internet traffic through a labyrinthine system of relays, allowing you to surf the web anonymously. And exercise what control you do have online by managing preferences for Google, Yahoo, and Facebook ads.
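Tor’s “labyrinthine system of relays” rests on a simple idea: the sender wraps a message in nested layers of encryption, one per relay, and each relay peels off exactly one layer, learning only the next hop. The toy Python sketch below illustrates the layering concept only; it uses XOR as a stand-in for real cryptography and should never be mistaken for how Tor is actually implemented:

```python
import os

def xor(data, key):
    # Toy stand-in for encryption; real onion routing uses AES and TLS.
    return bytes(b ^ k for b, k in zip(data, key))

def onion_wrap(message, relay_keys):
    # Add one layer per relay, innermost layer for the exit relay.
    for key in reversed(relay_keys):
        message = xor(message, key)
    return message

def onion_peel(cell, relay_keys):
    # Each relay in path order removes its own layer; no single relay
    # sees both the sender and the plaintext destination.
    for key in relay_keys:
        cell = xor(cell, key)
    return cell

msg = b"hello"
keys = [os.urandom(len(msg)) for _ in range(3)]  # one shared key per relay
assert onion_peel(onion_wrap(msg, keys), keys) == msg
```

The privacy property comes from that division of knowledge: the entry relay knows who you are but not where you’re going, and the exit relay knows where you’re going but not who you are.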

Congress’ repeal of these FCC rules leaves internet users in the lurch on privacy. But it can be fixed—at least partly—through education and individual action. The silver lining to the recent news in Washington? Online privacy is finally getting the attention it deserves.

For more on protecting your digital privacy, read Future Tense’s series on cybersecurity self-defense.

March 31 2017 10:40 AM

Everyone Wants to Buy Congress’ Browser Histories, but It Probably Isn’t Possible

Shortly before Congress voted to reverse a Federal Communications Commission privacy rule, a move that allows internet service providers to sell customer data without consent, Cards Against Humanity co-creator Max Temkin tweeted, “If this shit passes I will buy the browser history of every congressman and congressional aide and publish it. cc @SpeakerRyan.”

It’s a nice thought and got some approving media attention. If elected officials are OK with Cox, Comcast, Verizon, Time Warner, and AT&T selling users’ browser history, location data, and more, then why shouldn’t Congress get its comeuppance?

The biggest problem: As Temkin admitted on Reddit, they “don’t know if there will be any data to buy, how it will work, or what will be available.”

You can say that again. It’s not entirely clear how the data will be sold or how easy it will be to identify individuals within bulk datasets sold to advertisers or correlate them with other existing datasets. And there’s no clear-cut way to tie specific searches, for example, to specific members of Congress. Recode senior editor Tony Romm, who tweeted that “stupid stunts” like Temkin’s cheapen political discourse, also tweeted, in part, that “you cannot call up Comcast and be like, ‘hi can I buy lawmakers’ browsing histories. that’s a no.’ ”

The Telecommunications Act prohibits carriers from disclosing individually identifiable information in most circumstances, so information sold to the highest bidder would come in aggregate form in order to comply with the law. So basically, a whole bunch of people’s individual datasets would be sold together, with a file on each person (minus the name and some other information). This isn’t to say that extracting identifiable information is impossible, which is one reason why privacy advocates are upset about this. Researchers have shown that anonymous data can often be reverse-engineered. But it’s still tricky. And even if you think you can extrapolate which web history belongs to which member of Congress, verifying that information isn’t exactly a cakewalk.
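The reverse-engineering those researchers describe is typically a linkage attack: quasi-identifiers in the “anonymous” records (a ZIP code, a distinctive site visit) are matched against a public dataset that does carry names. The Python sketch below is a hypothetical illustration with entirely invented records and field names:

```python
# Hypothetical linkage attack: the "anonymized" browsing records carry no
# names, but joining on quasi-identifiers (ZIP code plus a distinctive
# domain) can re-identify a person. All data here is made up.
anonymized_browsing = [
    {"zip": "20515", "domain": "niche-hobby-forum.example", "user_id": "a91"},
    {"zip": "20515", "domain": "news.example", "user_id": "b22"},
]
public_profiles = [
    {"name": "Rep. Doe", "zip": "20515",
     "known_interest": "niche-hobby-forum.example"},
]

def reidentify(browsing, profiles):
    # Match each nameless record against public profiles on shared attributes.
    matches = []
    for record in browsing:
        for person in profiles:
            if (record["zip"] == person["zip"]
                    and record["domain"] == person["known_interest"]):
                matches.append((person["name"], record["user_id"]))
    return matches

print(reidentify(anonymized_browsing, public_profiles))
# One distinctive attribute can collapse "anonymous" back to a name.
```

That single match is the whole point: stripping names from a dataset doesn’t help if some other column is unique enough to serve as one.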

This hasn’t stopped various GoFundMe campaigns from cropping up. Privacy activist Adam McElhaney from Chattanooga, Tennessee, has already raised $187,925 as of Friday morning, far exceeding his $10,000 goal. (He did offer to donate funds to the Electronic Frontier Foundation if he “can’t buy the data in the end for whatever reason” and said refunds would be possible, too.) Actor Misha Collins had higher aspirations but has only raised $78,710 of his $500 million goal at the time of writing. He, too, offered to donate proceeds to a nonprofit organization (the American Civil Liberties Union) if purchasing data is impossible or if there is a surplus. To Temkin’s credit, he hasn’t used his star power or the virality of his tweet to collect donations, though he did offer to match up to $10,000 of donations to the Electronic Frontier Foundation.

Temkin is urging people to be very skeptical of GoFundMe projects to buy the data, but he also continues to insist that he and Cards Against Humanity will do whatever they can to acquire said data, and publish it, should it become available. On Reddit, he noted that this could take a long time and may require FOIA requests or purchasing browsing data for ZIP codes where congressional office buildings are located. But it’s unclear what exactly Temkin would want to FOIA, since Congress (along with the federal courts and parts of the Executive Office) is not subject to FOIA.

The fact that collecting individual search data isn’t a piece of cake doesn’t make the resolution any less awful. If recent history has taught us anything, it’s that any collected data is vulnerable to hacking. And just because information is distributed in aggregate form doesn’t mean it’s stored that way. As EFF points out, there are still many creepy things your ISP can do in light of these repealed privacy protections, including tracking and recording HTTP traffic, tracking top-level domain names for HTTPS sites you visit, and injecting invasive ads based on your browsing history.

Fighting the effects of this legislation—whether that’s through state laws, more widespread adoption of HTTPS (which makes your web browsing more secure), individual use of virtual private networks, or donations to nonprofits like EFF—is commendable. But putting full faith in the creation of a database of congressional representatives’ browsing data may lead to disappointment, because it’s probably not going to happen.

March 30 2017 5:35 PM

No, Twitter Didn’t Just Ruin Twitter by Changing How Replies Work. It Improved Itself.

Twitter changed one of its oldest features on Thursday: the “at-reply” or “@reply,” which is now just a reply.

With the change, the text of a reply will no longer begin with the Twitter usernames of the people it’s replying to, and those handles will no longer count against the 140-character limit. Instead, the names of the other users involved in a conversation will appear in blue, small-type metadata above the reply itself.

If that sounds confusing, it really isn’t. What’s confusing is the way replies used to work. The at-reply was a hack that users came up with early in Twitter’s history because the service lacked a built-in reply feature. Twitter soon adopted and formalized the convention, and co-founder Ev Williams explained its intricacies in a 2008 blog post. Even back then, he recognized that the system remained confusing, and pledged to keep working toward a cleaner implementation.

Nine years and three CEOs later, replies on Twitter finally work roughly the way you might expect replies to work on a modern social networking service.