Future Tense
The Citizen's Guide to the Future

July 21 2017 3:34 PM

YouTube Starts Redirecting People Who Search for Certain Keywords to Anti-Terrorist Videos

On Thursday, YouTube announced a new effort to push back against terrorist recruitment efforts on the site. As the company announced in a blog post, “[W]hen people search for certain keywords on YouTube, we will display a playlist of videos debunking violent extremist recruiting narratives.” Arising out of partnerships with nongovernmental organizations, this new feature is part of a larger project called the Redirect Method, an effort specifically targeted at those vulnerable to ISIS’s messaging.
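The mechanics, at least as described, are simple: match an incoming search query against a curated list of keywords and, on a match, surface a counter-narrative playlist alongside the ordinary results. A minimal sketch of that idea in Python follows; the keyword list, playlist ID, and function names are illustrative placeholders, not YouTube’s actual implementation.

# Illustrative sketch of keyword-triggered redirection; the keywords
# and playlist ID below are hypothetical placeholders, not YouTube's data.
REDIRECT_KEYWORDS = {"example recruiting slogan", "another extremist phrase"}
COUNTER_NARRATIVE_PLAYLIST_ID = "PL_counter_narrative_demo"

def build_results_page(query: str, organic_results: list) -> dict:
    """Return search results, featuring a counter-narrative playlist
    when the query matches the curated keyword list."""
    normalized = query.strip().lower()
    featured = None
    if any(keyword in normalized for keyword in REDIRECT_KEYWORDS):
        featured = COUNTER_NARRATIVE_PLAYLIST_ID
    return {"results": organic_results, "featured_playlist": featured}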

It’s also part of a larger YouTube strategy, one that Kent Walker, general counsel of Google (YouTube’s corporate parent), laid out last month in a blog post. That announcement came partly in response to an advertiser boycott earlier in the year, driven by companies frustrated to find their own ads running in front of terrorist videos. In response, as Variety reported at the time, Google claimed that it would “be taking new steps to improve controls advertisers have over where their ads appear on YouTube.”


But as Walker explained in his June post, the company was “pledging to take four additional steps” as it worked to actively combat extremism on its platform: It was stepping up technological identification of terrorist videos, increasing human flagging of such content, more aggressively restricting some videos that don’t directly violate the terms of service, and “expand[ing] its role in counter-radicalisation efforts.” This newly announced redirection strategy seems to be a product of that fourth and final prong.

In framing both the problem and its approach to it, Google is careful to avoid rhetoric that would suggest it intends to engage in censorship. That’s less of a concern in Europe, where courts have found that free speech laws do not protect extremist videos. But tech companies walk a finer line in the United States, “where free speech rules are broader,” as the Verge observes in a post on related efforts to rein in terrorist content.

As it grapples with this potential concern, YouTube appears to be stressing that it stands in opposition to those who would silence others. Note, for example, how Walker opens his blog post with the sentence, “Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all.” If terrorists oppose “open societies,” then any attempt to combat them should be in the service of defending openness, a conceit that grows fuzzy if technology companies are seen to be silencing some of their users.

In this sense, YouTube’s embrace of the redirect method looks like a smart strategy. The company is, as it makes clear, actively removing content that violates its terms of service. But redirection also gives the impression of a company more focused on drowning out ugly voices than on actively eliminating them. Here, there’s a small but potentially important detail in its announcement: As it moves ahead, YouTube hopes to collaborate “with expert NGOs on developing new video content designed to counter violent extremist messaging at different parts of the radicalization funnel.” Significantly, redirection has the potential to reach those who come looking for terrorist videos, whether or not those videos are still present on the site.

All that said, it remains to be seen how effective the redirect method will be.

As the Verge reports, “An earlier pilot of the Redirect Method led to 320,000 individuals viewing ‘over half a million minutes of the 116 videos we selected to refute ISIS’s recruiting themes.’ ” While that’s promising, it may run aground against the ways that terrorists get around YouTube’s existing content restrictions. In a long article on the topic, Motherboard writes, “[I]n order to prevent users from flagging explicit or inflammatory extremist videos, terrorist media groups and disseminators like The Upload Knights and AQ’s As-Sahab Media Foundation often label YouTube videos as ‘unlisted,’ meaning that the videos cannot be searched—only accessed if you are given the link.” If potential recruits are finding extremist material by other means, search redirects may not make that much of a difference.

July 21 2017 11:01 AM

Twitter Claims Its Changes Have Led to “Significantly Less Abuse.” But Will They Be Enough?

Twitter, like many other social media platforms, can be a cruel place when people choose to make it one. Rude quips abound, as do, more troublingly, threats of assault and death.

For years, users have been calling on the company to make the site safer. And now, at least according to a recent blog post from Twitter’s General Manager Ed Ho, some of their appeals have been answered.


Back in January, Ho tweeted a thread about the social media network’s ramped-up efforts to tackle the issue, writing, “Making Twitter a safer place is our primary focus and we are now moving with more urgency than ever.” He admitted that the company hadn’t moved fast enough to address abuse in the past and said that his team would start speedily rolling out product changes. That particular week, he tweeted, they were introducing overdue fixes to muting and blocking, along with measures to prevent repeat offenders from creating new accounts.

On Thursday, Ho’s blog post detailed some of the other efforts Twitter has made in the past six months. Among other steps, he wrote, the company convened a Trust and Safety Council that included “safety advocates, academics, researchers, grassroots advocacy organizations and nonprofits focusing on a range of online safety issues—from child protection and media literacy to hate speech and gender-based harassment” to help Twitter tailor new policies and features. The team also conducted research and made algorithmic changes, like producing better search results and collapsing potentially abusive or low-quality tweets.

In the post, Ho seemed confident that the reforms were leading to substantial progress.

“While there is still much work to be done,” he wrote, “people are experiencing significantly less abuse on Twitter today than they were six months ago.”

But we’ll have to take the company’s word on the metrics. Twitter hasn’t released any internal data yet, though Ho did disclose a few positive measures.

For one, he said, Twitter is taking daily action against abusive accounts at 10 times the rate it did this time last year. It also began imposing temporary limits on abusive accounts, which he said resulted in 25 percent fewer abuse reports from those users. Of the accounts put on probation, 65 percent don’t have to be restricted again.

Yet on some other significant measures, the company remains opaque. Twitter hasn’t been as vocal about how other major changes, such as its new “algorithmic timeline,” have changed the nature of abuse, discourse, and engagement on the site. Nor has it addressed why its moderators are still missing some flagrant abusers.

As Slate’s Will Oremus detailed last year, Twitter also hasn’t explained how, exactly, it interprets its “hateful conduct” policy. The site reportedly retrained moderation teams to enforce stricter anti-harassment policies last year. After the November election, it banned several alt-right figures, including Richard Spencer, for espousing racist views—though it declined to say what specific tweets led to the suspensions.

Critics argue that this gives Twitter room for double standards. One prominent beneficiary: President Donald Trump. The commander in chief has posted content on his personal account that some think warrants a ban (does a GIF of him body-slamming CNN ring a bell?).

In a meeting with journalists at Twitter’s San Francisco headquarters in July, a Recode reporter asked Vice President of Trust and Safety Del Harvey whether the company treats Trump’s account like everyone else’s.

“We apply our policies consistently. We have processes in place to deal with whomever the person may be,” Harvey told Recode. “The rules are the rules, we enforce them the same way for everybody.”

In April, Twitter co-founder and CEO Jack Dorsey also told Wired that his company held all users to the same standards, but added that company policy also accounted for “newsworthiness.” He said he thought it was important to “maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.”

Though the post on safety updates this week said that users were experiencing significantly less abuse, it didn’t address whether individuals actually felt safer. Ho wrote that Twitter would continue to solicit feedback. He also said it would remain committed to making the site a safe place for free expression.

Its users will be the judges of that.

July 20 2017 6:02 PM

Netizen Report: Authorities in China and Indonesia Threaten to Ban Messaging Apps

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Angel Carrion, Leila Nachawati, Inji Pennu, and Sarah Myers West contributed to this report.


On July 15, Indonesia’s Ministry of Communications and Information Technology threatened to ban the secure messaging app Telegram, reasoning that it is being used to “recruit Indonesians into militant groups and to spread hate and methods for carrying out attacks. ...”


As a partial measure, the government has already blocked access to 11 URLs offering the web version of Telegram. In response, Telegram has vowed to double its efforts to remove “terrorist” content from the platform by forming a team of moderators tasked with monitoring networks in Indonesia and removing such content as swiftly as possible.

Although Telegram may prefer this solution to being banned altogether, it may also increase the likelihood that the company will overcomply, which could lead to censorship of lawful speech.

On July 18, Facebook’s popular messaging app WhatsApp was blocked in China, following the funeral of Chinese Nobel laureate Liu Xiaobo. The world-renowned democracy advocate was sentenced to 11 years in prison in 2009 for “inciting subversion of state power” for his involvement in Charter 08, a manifesto that called for democratic reforms in China, and died of liver cancer on July 15.

Liu’s passing brought a new wave of censorship on social media, also affecting conversations on WeChat and Sina Weibo. Before his death, discussion of Liu on WeChat was allowed as long as it did not touch on sensitive topics. After his death, any mention of his name has resulted in the blocking of messages, including images sent over one-to-one chat. Until this week, WhatsApp had been the only Facebook product accessible in the country.

Turkey detains human rights defenders with no charges
On July 5, 10 human rights defenders were arrested while attending a digital security and information management workshop in Istanbul. On July 18, they received a preliminary ruling: Four of the defenders were released on bail, and the remaining six will be held in pre-trial detention while they are assessed for charges. They have been detained on accusations that they “aided an armed terror group,” though the authorities have cited no evidence to support this accusation, and it is unclear whether they have been formally charged. Protests have been held around the world calling for their release.

Ethiopia’s resistance musicians face arrest, censorship
Ethiopian authorities are now cracking down on musicians. Seven producers of and performers in a popular YouTube video were arrested several weeks ago, and last week they were charged with terrorism for producing music videos and “uploading them on YouTube.” Musicians—such as Seenaa Solomon, a well-known singer who is among those recently charged—became an important source of inspiration and provided the soundtrack to the resistance movement against government plans to expand the capital, Addis Ababa, into the Oromo region. The plan led to wide-scale protests and a violent crackdown between 2014 and 2016. Despite her jailing, Solomon’s music continues to flourish on YouTube.

In UAE, another arrest for “showing sympathy” with Qatar on social media
An Emirati man was arrested for showing sympathy with Qatar on social media. Ghanem Abdullah Mattar was detained after posting a video urging Emirati citizens to show respect for their “Qatari brothers” during the UAE’s blockade of Qatar. The UAE has criminalized any show of sympathy toward Qatar, punishable by a jail sentence of up to 15 years and a fine of up to $13,500. Mattar’s whereabouts since his arrest remain unknown.

Bangladesh’s ICT Act spawns record number of lawsuits against journalists
More than 20 journalists have been sued over the past four months in Bangladesh under the country’s controversial Information and Communication Technology Act, which prohibits digital messages that “deteriorate” law and order, “prejudice the image of the state or person,” or “hurt religious beliefs.” The minister of law, justice and parliamentary affairs pledged in May to eliminate Section 57 of the law, which has been used to file these lawsuits, but has made no visible progress thus far. Nearly 700 cases have been filed under the law since it was amended in 2013.

China forces citizens in ethnic minority region to install mobile spyware
On July 10, mobile phone users in the Tianshan District of Urumqi City received a notification from the district government instructing them to install a surveillance application called Jingwang (or “Web Cleansing”). The notification from police said the application would locate and track the sources and distribution paths of terrorist content, along with “illegal religious” activity and “harmful information,” including videos, images, ebooks, and documents. Among other things, the application can negate the password requirement of a Windows operating system and access the computer hard disk with no restrictions.

Can Australia strong-arm U.S. tech giants into weakening their security standards?
The Australian government has proposed a new cybersecurity law that would force Facebook and Google to give government agencies access to encrypted messages. The law, which Australian Prime Minister Malcolm Turnbull said would be modeled on the U.K.’s Investigatory Powers Act, would grant the Australian government expansive surveillance authority and require companies to provide “appropriate assistance” in investigations. When asked how the government plans to prevent users from simply turning to other software beyond the reach of companies that could be compelled to turn over data, Turnbull asserted that the laws of Australia would override the laws of mathematics.

July 20 2017 5:52 PM

Congress Is Considering Letting 100,000 Self-Driving Cars Hit the Road

Most people have probably never even seen a self-driving car, but that could soon change.

A House subcommittee voted late Wednesday to allow up to 100,000 self-driving automobiles onto American roadways.


These new robo-cars won’t have to meet existing safety standards for manned automobiles, but manufacturers will have to petition the National Highway Traffic Safety Administration, the federal agency tasked with reducing vehicle-related crashes, for an exemption, explained Bryant Walker Smith, a law professor and self-driving car expert at Stanford University. That means automakers will need to make a clear case that their self-driving technology is safe enough to drive alongside cars with humans at the wheel.

If passed, the bill would permit autonomous cars to drive on U.S. roads before we have established benchmarks for what it means for self-driving technology to be designed safely. Instead of, say, regulators creating some baseline rules for safe self-driving cars, this legislation proposes that automakers self-certify that their autonomous cars are OK to drive on public roads.

The legislation would also bar states from passing rules to regulate self-driving cars, ostensibly to prevent a patchwork of legislation across the country. States can still set licensing, registration, and maintenance requirements for self-driving cars, though, which leaves them some room to control how the technology is deployed within their borders.

Ryan Calo, a law professor who specializes in technology policy at the University of Washington, is concerned about how this legislation could play out. He thinks that regulatory agencies don’t necessarily have the expertise in robotics and artificial intelligence to judge whether a self-driving car granted an exemption will actually be safe when the rubber hits the road. “This is an area where it’s especially important to make sure the technology is safe before it gets deployed,” says Calo.

If the bill passes, automakers could ask for their self-driving cars not to include a brake pedal, for example, because the vehicle would brake via software or a button rather than the typical pedal currently required.

Rep. Debbie Dingell, a Democrat from Michigan who voted to pass the bill, said in a statement that she backed the proposal because human drivers kill a lot of people. More than 35,000 people died on American roadways in 2015, up nearly 8 percent from 2014, according to federal data. In 2016, traffic deaths rose another 6 percent. In fact, the past two years represent the sharpest increase in automobile-related deaths in more than half a century.

Automakers gunning to bring new self-driving tech to market regularly contend that if the human element is taken out of the equation, thousands of lives could be saved. After all, robots can’t drive drunk, text while driving, or do any of the other idiotic things that humans get up to while behind the wheel. Some 90 percent of vehicle crashes can be traced to human error, says Walker Smith.

But there’s actually no data to back the claim that self-driving cars will lead to fewer vehicle-related deaths—after all, autonomous cars have yet to be deployed at any meaningful scale. And when the high-tech cars do hit the streets, things can go awry.

Take what happened in San Francisco earlier this year, when one of Uber’s self-driving cars ran a red light on the very first day it was on the road. Then there was the 2016 incident in Florida, in which a man whose Tesla was in Autopilot mode died after ignoring the car’s multiple warnings to take the wheel and crashing into a tractor-trailer.

If this particular legislation doesn’t pass, some other proposal to open roadways to more self-driving cars probably will soon. Even if the technology is ultimately safer than manned cars, when the rubber hits the road, it could get messy.

July 20 2017 3:11 PM

U.S. Customs and Border Protection Says It Doesn’t Look at the Cloud When Searching Digital Devices

Agents on the U.S. border have always had more latitude when it comes to searching people’s belongings. After Trump’s immigration ban was announced in late January, reports circulated that travelers’ personal digital devices were increasingly being searched when they tried to enter the country.

In response to those stories, Sen. Ron Wyden from Oregon introduced a bill that, among other things, would require Customs and Border Protection to get a warrant before searching travelers’ digital devices.


Back in February, he asked the Department of Homeland Security questions regarding this issue and then followed up with CBP. Now it looks like CBP might be changing some of its ways.

On July 12, NBC published a document—dated June 20—that said CBP looks only at what’s physically on a laptop, smartphone, tablet, or other device. According to the document, agents don’t use travelers’ personal devices to look at information stored on the cloud during checks.

The release appears to be a response to Sen. Wyden’s questions to the agency.

On July 17, the Electronic Frontier Foundation, a digital rights group, published its thoughts about the new information. It noted that this represents a change from CBP’s 2009 policy, which “does not prohibit border agents from using those devices to search travelers’ cloud content” and instead allows agents to search whatever information they find at the border. In other words, the EFF interpreted the 2009 rules to mean agents were allowed to look at cloud content.

Border Patrol agents have certainly taken advantage of the vagueness of the previous policy.

In November 2015, BuzzFeed reported the story of a journalist who was detained before a flight to Miami. Police reportedly looked through his phone and data, including emails with sources and intimate photos.

And then came the immigration ban. In January 2017, Megan Yegani, an immigration lawyer, tweeted about Border Patrol checks: “US Border patrol is deciding reentry for green card holders on a case by case basis - questions abt political views, chking facebook, etc.” Her tweet went viral.

In February 2017, the Associated Press reported that the American Civil Liberties Union and the EFF “have noticed an uptick in complaints about searches of digital devices by border agents.” But the AP also said that the numbers were on the rise before Trump was inaugurated: The number of electronic media searches increased to 23,877 in 2016 from 4,764 in 2015.

The EFF seems pleased with this recent announcement, but it’s also being a little cautious.

“EFF will monitor whether actual CBP practice lives up to this salutary new policy. To help ensure that border agents follow it, CBP should publish it,” the organization wrote.

As a next step, the EFF would like CBP to release information about how often it conducts searches for other agencies and to tell the public whether agents actually advise travelers that they have a right not to tell a border agent the passwords to their devices.

July 20 2017 1:29 PM

Two of the Six Missing Members of Burundi’s Robotics Team Spotted Crossing Into Canada

The FIRST Global Challenge robotics competition is making headlines again after six teens from the team representing Burundi disappeared. The mentor and chaperone for the team, Canesius Bindaba, informed FIRST organizers on Tuesday evening that he could not find the two girls and four boys, whose ages range from 16 to 18. They were last seen at 5 p.m. Tuesday, right before the competition’s closing ceremony. FIRST President Joe Sestak subsequently called Washington police, who began searching and tweeted out missing persons notices. Police later said that two of the teens had been spotted crossing the border into Canada.


July 20 2017 8:47 AM

This Ride-Sharing App Now Offers Matchmaking, Too

Careem, a popular ride-sharing app in the Middle East, North Africa, and South Asia, is introducing a new feature in a bid to attract lovelorn humans in Pakistan. In the wee hours of Wednesday morning, the company sent out text notifications and email alerts to its users in Pakistan offering them the coveted opportunity to find their “Halal” lover on their next trip.

“Your Rishta (match) has arrived, you are no longer to be alone, from now on your status will be taken,” said the email advertisement. Careem says the feature allows riders to opt for a “rishta aunty”—a matchmaker to accompany them on their rides and connect them with potential mates from her network of friends and family.


While Pakistan is no stranger to Tinder and nosy family relatives engaged in the lucrative business of arranged marriages, this is the first time a ride-sharing app has ventured into the matchmaking industry. Hopefully it will be the last, too.

Sanaa Jatoi, a friend of mine and a frequent Careem rider from Pakistan, told me in a Facebook chat that when she saw the notification, “I kind of panicked … I just wanted Careem credits, not an aunty. … I legit thought there was an aunty waiting for me downstairs.”

Careem’s foray into matchmaking also generated bewildered reactions on Twitter. “Careem now offers a ‘rishta aunty’ to accompany you on your ride….because my mom wasn’t enough! #DesiProblems,” one angry customer tweeted.

A staff writer at Express Tribune, a newspaper in Pakistan, experimented with the feature to see exactly how it works. According to the article, the rishta aunty was already sitting in the car when the ride arrived. She proceeded to interrogate the writer and his accompanying friends to gauge their particular personalities and preferences for women:

She spoke fondly about the wonderful world of rishta aunties where the demand and supply of good rishtas are infinite, and all you had to do to meet “the one for you” was to answer her unlimited questions about yourself. So there, she bombarded us with questions about what we did, where we lived, and whether we were actually serious about getting married. When we asked her what the appropriate age to get married was she responded, “There is no right time, marriage can happen anytime.. just look at those in villages.” To this, we replied that early-age marriages were not just a village phenomenon, but they happened quite often in cities as well.

The ride ended with the aunty handing out her WhatsApp number and email address.

While the ride-sharing app may be getting its fair share of laughs here, the company has also recently come under fire over allegations of sexual harassment from its female passengers. A young woman from Lahore, Pakistan, alleged in June that she was harassed by a Careem driver after she requested a ride to work. After the report surfaced, a company spokesman told Express Tribune that the safety and security of Careem customers is its top priority. While little is known of the Gulf-based company’s operations in Pakistan, Careem’s rival Uber is reportedly offering mandatory seminars on sexual harassment to all of its drivers in Pakistan in the wake of such allegations.

In that context, Careem’s matchmaking stunt feels, well, a little less funny.

July 19 2017 4:06 PM

New Form of Law Enforcement Investigation Hits Close to the Heart

Although it’s widely accepted that police use information from security devices or cellphone data to aid in tracking criminals, a new case has opened the possibility that police could use data that hits even closer to home—or maybe closer to the heart. Last week an Ohio judge ruled that a man’s pacemaker data could be used against him in a criminal case, opening the door for a new type of electronic surveillance.

Police were investigating a fire at 59-year-old Ross Compton’s home in Middletown, Ohio, as a potential arson. Compton claimed he was woken by the fire in the middle of the night, packed a few items in a suitcase, and broke a window to escape from the house. The Associated Press reports the police obtained a search warrant for his pacemaker data, which includes information about his heart rate and cardiac rhythms before, during, and after the fire.

July 19 2017 3:06 PM

The FBI Is Warning Parents About the Risks of Internet-Connected Toys Spying on Kids

The FBI sent a warning to parents earlier this week: Your children’s new internet-connected toy could be secretly spying on them.

"These toys typically contain sensors, microphones, cameras, data storage components, and other multimedia capabilities — including speech recognition and GPS options,” the agency wrote in its advisory on Monday, warning that these high-tech toys can be hacked to record video and audio of children unbeknownst to parents.


The FBI says that exposing this kind of information could open the doors to child identity fraud and put kids at risk for exploitation from criminals.

As more and more toys packed with microphones and cameras go to market, security researchers are finding more ways to remotely break into them and collect sound recordings, video feeds, and other sensitive data.

For instance, in February, Germany banned the smart doll My Friend Cayla from being sold in the country and ordered all the dolls to be taken off the shelves. Germany’s telecom regulator found that the doll could be hacked to record private conversations transmitted over its Bluetooth connection.

And back in December, a U.S. privacy watchdog, the Electronic Privacy Information Center, sent a complaint to the Federal Trade Commission about the security risks in My Friend Cayla. In response, Sen. Ed Markey, D-Mass., launched a congressional inquiry. The doll has not been banned in the U.S., though Markey noted that recording private conversations of kids 12 and under without parental consent is a violation of the Children’s Online Privacy Protection Act.

And then, in February, there was the case of CloudPets, cuddly, internet-connected stuffed animals. The toys are supposed to allow parents and kids to exchange cute messages recorded by the toy. But it turns out that Spiral Toys, the manufacturer, was storing the personal account information and voice recordings of CloudPets owners online in an easy-to-hack database. Two million personal recordings from the CloudPets were leaked online, according to Motherboard.

A researcher with the U.K.-based security firm Context later found that the teddy bear could be remotely turned on to collect audio and spy on kids. Though this hasn’t happened in the wild, hackers could theoretically use it to harass children playing with the toy.

Though this information has been coming out piecemeal, it’s a big deal that the FBI is calling attention to the problem of internet-connected toys. “I think this is the first time the FBI has issued such warning,” Tod Beardsley, director of research at cybersecurity firm Rapid7, told Reuters. He noted that this week’s FBI advisory could do a lot to raise awareness of the dangers of insecure internet-connected children’s toys. (Since parents always check with the FBI before buying their kids new toys, right?)

It’s true that anything connected to the internet can theoretically be hacked. But that doesn’t mean parents need to abandon smart toys altogether. Toys packed with artificial intelligence, microphones, and internet connections can help teach young people how to code and help families with busy schedules stay connected. Those are all good things.

The FBI recommends that parents do their research before shelling out cash for a new smart toy to make sure that security problems with the device haven’t been reported. It’s also important to only go online with the toys over a secure internet connection and to ask where data collected from the toy is stored and how.

One concrete thing that parents can look for when shopping for safe internet-connected toys is a seal on the box indicating that the toy complies with children’s data protection and privacy laws. The Federal Trade Commission, the U.S. regulatory agency that handles consumer protection, has a certification program that allows manufacturers to put a seal on their box or website if the product has been found to properly protect children’s privacy. One such seal program, KidSAFE, has three levels of certification to which websites and hardware companies can submit their products for approval.

Of course, as with all new, potentially privacy-invasive technologies, the other option is to just go analog and stick to regular old teddy bears.

July 19 2017 2:28 PM

Tillerson Is Reportedly Considering Shuttering State’s Cyber Office. That’s a Horrid Idea.

There’s no shortage of unfilled, important jobs in the federal government these days. Now the State Department appears poised to add to that list at the end of the month when it loses Christopher Painter, the department’s coordinator for cyber issues since 2011. No replacement for Painter has been announced, and Politico reports that the State Department is considering downgrading the position or even closing the cyber office altogether.

It sometimes seems like just about every U.S. government agency has a cyber division or task force of some sort, but the shuttering—or even shrinking—of the State Department’s cybersecurity efforts would leave a void that no other agency could fill. In recent years, the State Department has played a vitally important role in shaping international debates and decisions about internet security and internet freedom. That the administration is even considering such a move suggests it has almost no understanding of how integral international diplomacy is to trying to make real progress on these issues.


Unlike the Department of Homeland Security and the Defense Department—the two agencies in charge, respectively, of civilian and military cybersecurity efforts—the State Department has been one of relatively few voices in the U.S. government to acknowledge and grapple with the fact that trying to make the internet more secure often raises conflicting and contradictory priorities. In a speech at the Newseum in January 2010, then-Secretary of State Hillary Clinton tried to reconcile the importance of protecting critical computer systems and networks while enabling people everywhere in the world to have free access to online information, regardless of technical restrictions imposed by their governments.

“Countries or individuals that engage in cyber attacks should face consequences and international condemnation,” she said. She also blasted countries that “erected electronic barriers that prevent their people from accessing portions of the world’s networks” and pledged continued financial support from the State Department for tools that would help people in those countries circumvent those barriers, like Tor, a service that routes internet users through different servers to shield their online activity. These are two priorities fundamentally at odds with each other. How do you catch and punish people who break U.S. laws online, while also making it possible for people in other countries to violate their own governments’ restrictions on internet use without being caught and punished?

The State Department hasn’t resolved that tension, but it has, historically, understood that both of those goals are important. That understanding has put it at odds with other branches of the government at times, especially those whose primary focus is making it harder for people to get away with anonymous activity online. For instance, a Washington Post headline in 2013 pointed out, “The NSA is trying to crack Tor. The State Department is helping pay for it.” Both of those missions are important—finding vulnerabilities in online services that can be exploited for national security and intelligence purposes, and funding tools that help protect people’s anonymity online. And there are not a lot of other government agencies besides State that are likely to champion the second mission as well as the first. By understanding that there are multiple, often conflicting dimensions to achieving cybersecurity in an international context, and trying to straddle that divide, the State Department has played a unique role in the federal government’s cybersecurity efforts.

In 2012, at the World Conference on International Telecommunications in Dubai, at a moment when many countries—including Russia—were eager to see the internet controlled more directly by governments, the State Department led and coordinated the U.S. delegation, which advocated for an approach to internet governance that included companies and civil society, as well as government representatives. Over the course of the past decade, the State Department has had a profound impact on how the U.S. government understands what online security should look like, as well as how best to achieve it.

It seems almost ridiculously obvious to point out that cybersecurity for a global internet requires international perspectives and engagement—requires, in other words, the involvement of high-level State Department officials. That means not just working with governments in other countries to come to international agreements about cybercrime and policing, but also understanding what online security means to people in other countries and helping to supply the appropriate tools. No other government agency is poised to fill that role. Eliminating the State Department’s cyber office is likely to make the U.S. government’s stance on cybersecurity much narrower and less attuned to the complexities and contradictions of these issues. And a cybersecurity agenda dictated solely by domestic interests and priorities is unlikely to create internet policy that will be respected or accepted by other countries.
