Future Tense
The Citizen's Guide to the Future

June 23 2017 12:19 PM

Netizen Report: Arrest and Web Censorship Spark Online Protests in Palestine

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Afef Abrougui, Mahsa Alimardani, Renata Avila, Ellery Roberts Biddle, Marwa Fatafta, Leila Nachawati, Dalia Othman, Elizabeth Rivera, and Sarah Myers West contributed to this report.

Censorship has been on the rise in Palestine in recent weeks. On June 12, officials from the Palestinian Authority demanded that internet service providers in the West Bank block a reported 22 websites, most of which are affiliated with the opposition Islamist party Hamas or are otherwise critical of President Mahmoud Abbas. The websites appear to be blocked only in the West Bank.


An anonymous official from the attorney general’s office said the sites were blocked for violating “rules of publication” but did not offer further specification. The 1995 Press and Publication Law includes several vague restrictions on freedom of expression, including a rule that forbids the press from “contradict[ing] the principles of … national responsibility” or publishing material that is “inconsistent with morals.”

The Haifa-based Arab Center for Social Media Advancement, also known as 7amleh, denounced the order, saying, “[We] find that this move fully contradicts all international treaties and conventions, and marks a significant violation of the digital rights of segments of Palestinian society.”

Online, Palestinians have expressed frustration over the blocking and lack of transparency around the PA’s order. They have launched a campaign under an Arabic hashtag that translates to “no to blocking” and are demanding that the attorney general explain the decision in a public statement.

This spate of online censorship comes on the heels of the June 8 arrest of Nassar Jaradat, a young Palestinian Facebook user. The PA charged Jaradat with “insulting and defaming public officials” in a Facebook post critical of Jibril Al Rajoub, a prominent figure among PA leadership. In a recent interview with the Israeli news program Meet the Press, Al Rajoub said that the Western Wall in occupied East Jerusalem should “remain under Israeli sovereignty,” a statement denounced by many Palestinians.

In his Facebook post, Jaradat said of Al Rajoub’s statement: “To give what you don’t personally own to those who do not deserve it. This is the essence of deception and the terror of concession.”

Jaradat could face anywhere from three weeks to two years in jail, in accordance with a provision on “defamation, insult and abasement” in the Jordanian Penal Code of 1960, which still applies in the West Bank.

Activists expose Mexico’s multimillion-dollar surveillance tech market
Mexican human rights lawyers, journalists, and anti-corruption activists were targeted by spyware acquired by the government, according to research published this week by a group of nongovernmental organizations from Mexico and Canada. The spyware was purchased by Mexican authorities from the Israeli company NSO Group, under an explicit agreement that it be used only to investigate criminals and terrorists. Among those targeted were prominent journalists, lawyers investigating the mass disappearance of 43 students, and an American lawyer representing victims of sexual abuse by the police.

The government has denied engaging in surveillance and communications operations against human rights defenders without prior judicial authorization. However, research by the University of Toronto’s Citizen Lab suggests that the choice of targets and style of targeting “provide strong evidence the targeting was conducted without proper oversight and judicial accountability.”

Twitter censors Venezuelan government supporters
Venezuela’s information minister reported last week that at least 180 Twitter accounts belonging to government supporters and government-sponsored media channels have been suspended from the U.S.-based platform. On June 17, President Nicolas Maduro made a public statement condemning the suspensions as an “expression of fascism” and vowing to open thousands of new accounts. “The battle on social media is very important,” he said. Although Twitter’s guidelines prohibit violent threats, harassment, and “hateful conduct,” the company’s implementation of these rules is known to be uneven and unpredictable.

Spy tech threatens Chinese jaywalkers
Chinese cities including Jiangbei, Jinan, and Suqian have implemented facial recognition software to shame and fine citizens for jaywalking. Once captured, jaywalkers’ images appear on big screens at intersections, and their information—including a headshot, name, age, home address, registration, and ID number—is uploaded to a police system.

Japan’s anti-conspiracy bill puts citizens under microscope
On June 15, Japan’s parliament ratified a controversial “anti-conspiracy” bill into law. There are fears the vague nature of the new law, which covers nearly 300 crimes, will erode civil liberties in Japan by providing authorities with broad surveillance powers, leaving the question of who can be monitored open to interpretation. Joseph Cannataci, U.N. special rapporteur on the right to privacy, has criticized the bill and expressed concern that it may “legitimize and facilitate government surveillance of NGOs perceived to be acting against government interest.”

New Research

“#EgyptCensors: Evidence of Recent Censorship Events in Egypt”—Open Observatory of Network Interference

June 22 2017 5:52 PM

Why the Los Angeles Times Accidentally Tweeted About an Earthquake That Happened 90 Years Ago

On Monday, a bot led to incorrect information going out on Twitter. That’s hardly unprecedented. But this case was different: It was the Los Angeles Times telling the public there had been a 6.8 magnitude earthquake in Isla Vista, California, even though no such quake had taken place.

The tweet was prompted by an alert from the U.S. Geological Survey that said the earthquake happened on June 29, 2025. But it was actually referring to a real earthquake that happened almost a century ago.


So how did this happen?

A system at USGS records ground motion, compares it with other motion in the area to determine where it’s coming from, and then declares whether an earthquake is occurring. This week, researchers tried to update the location of an earthquake from 1925. Thanks to a bug in the software that sends out email updates, subscribers received an email notification telling them an earthquake had occurred in 2025.

The Los Angeles Times Quakebot received this information and did what it’s designed to do: It wrote up basic information about the quake (location and magnitude) and then tweeted it out. As Will Oremus explained in Slate in 2014, the bot is in place not to replace real, live journalists, but to make it easier to release quick information about emergency situations.
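The bot’s pipeline is simple enough to sketch. Below is a hypothetical reconstruction of the tweet-formatting step, not the Times’ actual code; the function name, the alert fields, and the message format are all assumptions for illustration:

```python
def format_quake_tweet(alert):
    """Turn a USGS-style alert into a short tweet.

    Hypothetical sketch: the real Quakebot also drafts a full article
    that a human editor reviews before publication.
    """
    return (
        f"A magnitude {alert['magnitude']} earthquake occurred "
        f"near {alert['place']} at {alert['time']}."
    )

# A bot like this has no way to sanity-check the feed: it faithfully
# repeats whatever the alert says -- including a date of 2025 attached
# to a quake that actually happened in 1925.
alert = {"magnitude": 6.8, "place": "Isla Vista, California",
         "time": "2025-06-29"}
print(format_quake_tweet(alert))
```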

However, this small mistake led some to think an actual, fairly large earthquake had taken place. Had that been true, it would have been guaranteed to change your day, especially if you’re a local reporter.

Soon enough people were taking to Twitter to ask what was going on. The Los Angeles Times quickly deleted the tweet and published an article explaining what had occurred.

So does this flub suggest that we should stop depending on bots to alert people to earthquakes? Lucy Jones, a seismologist and USGS scientist emerita, doesn’t think so.

Previously, updates would have to be run by a person, and Jones said that takes too much time in the case of a real emergency.

“The only way to not have misinformation is to not have information. [The bot] is completely accurate if the USGS email’s accurate,” she said. “This is the first mistake of this type that we’ve seen … and now it’s been fixed.” She thinks the bot is still worthwhile, particularly because when we expect flawless information, we stop thinking about where that data came from.

“I’m a little concerned about how people think information just sort of magically appears,” Jones said. “We get so accustomed to the weather showing up on our watch now, you don’t think about the billions of dollars of satellites and people and computers that go in to do weather modeling.”

Thankfully, this mistake doesn’t seem like a career-ending move for the Los Angeles Times bot. Hopefully other early detection systems can also stay in place, like the tsunami monitoring stations in the Pacific Ocean, for which this administration seeks to cut U.S. funding.

June 22 2017 12:21 PM

Study: Hispanic Americans Use the Internet Less Than Any Other Ethnic Group

Hispanics use the internet the least of any ethnic group, according to research from eMarketer.

The study found that 79.8 percent of Hispanics use the internet at least monthly from any device—cellphones, tablets, desktops, etc. That’s compared to 84.3 percent of whites, 83.6 percent of Asians, and 82.5 percent of blacks. The report also predicts the gap will continue to shrink, but Hispanics still won’t reach the same usage levels any time soon: By 2021, it anticipates that 82.6 percent of Hispanics and 86.2 percent of whites will use the internet monthly.
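The narrowing gap can be checked directly from the figures above; a quick calculation using only the numbers eMarketer reports:

```python
# eMarketer figures: share of each group using the internet at least monthly
hispanic_2017, white_2017 = 79.8, 84.3   # current usage rates (percent)
hispanic_2021, white_2021 = 82.6, 86.2   # projected rates for 2021

gap_2017 = round(white_2017 - hispanic_2017, 1)  # gap in percentage points
gap_2021 = round(white_2021 - hispanic_2021, 1)

# The gap shrinks by roughly a point over four years, but does not close.
print(gap_2017, gap_2021)
```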


These numbers are pretty similar to a 2016 report from Pew, which put the rate of Latinos using the internet at 84 percent. But in that survey, black Americans were the group that used it the least, with only 81 percent usage. (The Pew survey referred to its survey participants as Latinos, while the eMarketer survey used the term Hispanics. While many Latinos are Spanish-speaking, and the populations are similar, these terms are not interchangeable.)

According to Pew, a large part of the difference in usage comes from disparities in education and English proficiency levels.

Like any other segment of the population, Hispanics show a generational divide in how they use and think about the internet. A separate survey conducted by Simmons Research showed varying feelings among Hispanics of different age groups about watching television vs. playing online. Among young Hispanics ages 18–34, 43.5 percent said they watch less TV on television sets because of the internet, compared with 29.2 percent among the 35–49 group and 16.7 percent among those over 50. This probably has something to do with the fact that far fewer Latinos 50 to 64 use the internet (just 67 percent), while their younger counterparts use it at a rate of 90 percent, according to another section of the Pew Research Center study released in 2016.

However, Latinos have had high rates of usage when it comes to other technologies. According to the same Pew 2016 report, Latinos are very likely to “own a smartphone, to live in a household without a landline phone where only a cellphone is available and to access the internet from a mobile device.”

While the percentage of Hispanics who use the internet has continued to rise steadily, adoption rates have risen more slowly for whites. Between 2009 and 2015, the rate among Latinos rose about 20 percentage points, while for whites it rose only about 8 percentage points.

This digital divide is important because differences between internet usage can very easily translate to disparities in everyday life. The Joan Ganz Cooney Center, which has a series about how low-income families access Federal Communications Commission programs, says, “internet service and digital technologies are critical for accessing a broad range of resources and opportunities.”

In a 2015 report titled “Aprendiendo en casa,” the center examined media as a resource for learning among Hispanic-Latino families. It found that parents believe children develop academic skills from using educational media, but they still want to know more about the media their kids can use.

The report profiled a young girl named Alicia, a 9-year-old of Ecuadorian descent whose name has been changed for privacy reasons, who watches YouTube videos both to help her with her math homework and as a resource to teach her how to make dresses and accessories for her dolls. Her mother plays an active role in both these activities. The lesson here? An increase in technological resources can also help bridge the gap between generations.

June 21 2017 5:42 PM

DHS Is Starting to Scan Americans’ Faces Before They Get on International Flights

Air travel already features some attributes of a police state. Metal detectors. Bomb-sniffing dogs. Pat-downs. A gloved TSA agent peering at your toothpaste. But it could get worse. What if your check-in also involved a face recognition scan?

Decades ago, Congress mandated that federal authorities keep track of foreign nationals as they enter and leave the United States. If the government could record when every visitor stepped on and off of U.S. soil, so the thinking went, it could easily see whether a foreign national had overstayed a visa.


But in June of last year, without congressional authorization, and without consulting the public, the Department of Homeland Security started scanning the faces of Americans leaving the country, too.

You may have heard about new JetBlue or Delta programs that let passengers board their flights by submitting to a face recognition scan. Few realize, however, that these systems are actually the first phase of DHS’s “Biometric Exit” program.

For certain international flights from Atlanta and New York, DHS has partnered with Delta to bring mandatory face recognition scans to the boarding gate. The Delta system checks that a passenger is supposed to be on the plane by comparing her face, captured by a kiosk at the boarding gate, to passenger manifest photos from State Department databases. It also checks passengers’ citizenship or immigration status. Meanwhile, in Boston, DHS has partnered with JetBlue to roll out a voluntary face recognition system for travelers flying to Aruba. In JetBlue’s case, you can actually get your face scanned instead of using a physical ticket.

While these systems differ in details, they have two things in common. First, they are laying the groundwork for a much broader, mandatory deployment of Biometric Exit across the country. Second, they scan the faces of everyone—including American citizens.

Treating U.S. citizens like foreign nationals contradicts years of congressional mandates. DHS has never consulted the American public about whether Americans should be subject to face recognition. That’s because Congress has never given Homeland Security permission to do it in the first place. Congress has passed Biometric Exit bills at least nine times. In each, it has been clear: This is a program meant for foreign nationals. Indeed, when President Trump issued an executive order on Biometric Exit in January, it was later reissued to clarify that it didn’t apply to American citizens.

Why should you care? Well, think of what could happen when DHS’s airport face recognition systems misfire. And they will. With an error rate that could be as high as 4 percent for the JetBlue system—and with countless people flying—false rejections will be a daily occurrence. That could mean missing your flight because the system fails to recognize you. The best research available indicates face recognition performs worse when an image is more than six years old. That’s a serious problem when your passport or driver’s license photo may be a decade old. Other research suggests that face recognition systems have a harder time matching the faces of African Americans, women, and children. When these systems make mistakes, will DHS subject you to the more intensive Secondary Screening? Will you be taken to an interrogation room? Will you be turned away altogether?
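Rough arithmetic shows why even a small error rate matters at airport scale. The 4 percent figure is the upper bound cited above for the JetBlue system; the daily passenger volume here is an illustrative assumption, not a DHS statistic:

```python
false_reject_rate = 0.04   # upper-bound error rate cited for the JetBlue system
daily_scans = 100_000      # hypothetical number of travelers scanned per day

# Expected number of travelers wrongly flagged each day at that volume
false_rejections = round(false_reject_rate * daily_scans)
print(false_rejections)
```

Even if the real volume were a tenth of that assumption, hundreds of travelers a day would still face false rejections.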

What’s even worse is there is good reason to think Homeland Security’s face recognition systems will be expanded.

Behind the scenes, DHS is already handling your face recognition photo in ways many travelers might find alarming. For instance, after JetBlue scans your face, your photo is temporarily stored in DHS’s threat modeling ecosystem. What is it doing there? While there is no indication right now that DHS is, for example, comparing your face against a hotlist of known or suspected terrorists, it’s easy to imagine DHS pulling the trigger. People with foreign-sounding names have already struggled for years with false matches on the No Fly List. Will people with “foreign-looking” faces encounter the same discrimination?

And this may only be the beginning. According to U.S. Customs and Border Protection’s John Wagner, Homeland Security is in internal negotiations to bring face recognition to the TSA security checkpoint.

What might mission creep look like? One possible scenario involves DHS deciding to search your face against state and local law enforcement databases. Would you be comfortable with your face being compared to the faces of wanted criminals simply because you flew home to see your parents? Or maybe DHS could decide to share your face with the FBI. That could mean your face being compared with unknown suspects in security camera footage. Imagine being investigated for a crime you didn’t commit because, while passing through the airport, an algorithm matched your face to a suspect in a grainy surveillance video.

Must Americans really submit to a perpetual line-up to fly?

The Center on Privacy & Technology at Georgetown Law will host a conference on the surveillance of immigrants on June 22.

June 21 2017 3:27 PM

Future Tense Newsletter: Are Social Networks Profiting From Terrorism?

Greetings, Future Tensers,

Earlier this year, the families of three victims of the San Bernardino terror attack filed suit against Facebook, YouTube, and Twitter. Their claim? That the companies share responsibility because they profited from propaganda that helped radicalize the perpetrators.


The new legal theory, writes Nina Iacono Brown, is based on federal laws that make it illegal to provide “material support” to terrorists—including “communications equipment.” Lawsuits related to the 2016 Pulse nightclub shooting, the 2016 Brussels airport bombing, and the 2015 Paris attacks take a similar tack and will soon put the theory to the test in court.

How terrorist organizations like ISIS harness social media to recruit and disseminate propaganda is also on the mind of British Prime Minister Theresa May. In the wake of recent attacks, both she and other European leaders are moving forward with plans to “deprive the extremists of their safe spaces online.” However, as Molly Land explains, the policies would actually make us less safe in the long run. Moreover, she writes, such moves would act as an invitation for countries to censor and punish digital speech even more than they already do—a scary thought, considering what recently happened in Pakistan.

This week, our Slate writers have also been following the latest fallouts and rollouts from transportation companies Uber and Lyft. Jonathan Fischer bids good riddance to Uber CEO and founder Travis Kalanick. Henry Grabar has some questions about the company’s new tipping function. (We’ll add one more: Will we tip the robot drivers too?) Will Oremus explains how the new, not-exactly-novel Lyft Shuttle undermines city buses. And Rahul K. Parikh asks whether doctors should play along with the Uberization of health care.

Other things we read between trying to match Maluuba’s record Ms. Pac-Man score (damn it, Inky!):

On second thought: Should the Patent and Trademark Office be allowed to change its mind? Rochelle C. Dreyfuss explains how the Supreme Court will soon decide.

Blatant negligence: Josephine Wolff unpacks the story of the breach of 198 million voter files privately compiled for the GOP.

Back in the ring: A few weeks after seemingly leaving the cause behind, Netflix announced that it was rejoining the fight for net neutrality. Angelica Cabral writes why it may be best for business after all.

Rebooting the library: Chris Berdik explains how libraries are moving beyond quiet stacks to become “lively, high-tech hubs of collaborative learning.” Don’t worry, the books still have a place alongside the computers, robots, videos, circuitry kits, and 3-D printers.

Three years after the release of their best-selling book, The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee are back with a deep dive into the key forces driving our increasingly digital age. Join Future Tense on Thursday, June 29, in New York for a conversation with the pair about their latest book, Machine, Platform, Crowd, and about how to build a future that doesn’t leave humans behind. RSVP to attend here.

Prepping for what really happens after societal collapse,
Kirsten Berg
for Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

June 20 2017 6:03 PM

There Have Already Been 20 Government-Mandated Internet Shutdowns in India This Year

Back in April, 22 social networks and apps—including Facebook, WhatsApp, and Twitter—were banned in Kashmir as videos surfaced showing Indian forces engage violently with civilians.

This wasn’t an isolated incident. On Thursday, Human Rights Watch issued a report criticizing repeated shutdowns of internet and telecommunication networks this year. These aren’t country-wide disconnections; they’re local or statewide disruptions of service. Human Rights Watch identified 20 internet blockages between January and May 2017 alone. Since 2012, 79 such incidents have been reported in 14 of the country’s 29 states.


According to Human Rights Watch, these shutdowns are intended to suppress criticism of the government and Indian Prime Minister Narendra Modi. The Indian government, on the other hand, claims that it sometimes has to suspend services because of its national security interests.

The report focuses on three instances in which the government imposed restrictions in order to stem protests taking place in the country. In June, the Maharashtra state government, which is controlled by India’s ruling Bharatiya Janata Party, temporarily shut down internet and mobile services as protests by farmers escalated in the area. Farmers were demonstrating in favor of debt forgiveness and fair prices for their produce.

Human Rights Watch also noted that the government in the state of Jammu and Kashmir suspended communications in the area in June after a civilian was killed by security forces. Indian media earlier this month reported that authorities in Kashmir valley blocked access to these services after the separatist Kashmiri group, the Hurriyat Conference, called for statewide boycotts of businesses and schools in response to the killing.

“This was the fifth time the state government had suspended the mobile internet or broadband services in 2017 in a questionable attempt to prevent rumors from fueling violent clashes between government forces and street protesters,” Human Rights Watch said in a press release Thursday.

The report also discussed a similar incident in India’s most populous state, Uttar Pradesh, where the government proceeded to shut down internet services following protests by lower-caste Dalits.

David Kaye, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression, shared similar concerns in a report released earlier this month. He writes that these disconnections don’t just infringe on people’s right to free speech; they also hurt India’s thriving e-commerce market. The Hindu reported in March that between 2014 and the end of 2015, internet shutdowns cost Indian businesses almost $1 billion.

Meanwhile, the country is in the midst of a campaign called “Digital India.” According to its website, the Digital India campaign is “a flagship programme of the Government of India with a vision to transform India into a digitally empowered society and knowledge economy.” Indian digital rights activist Mishi Choudhary wrote in a blog post in March that it is hard for the campaign to “get fully realized when state or local governments keep turning the Internet off.”

June 20 2017 11:07 AM

Colorado Ballot Measure Moves to Ban Smartphones for Kids Under 13

A ballot measure in Colorado to ban the sale of cell phones to children younger than 13 was just cleared by Colorado officials, the Associated Press reports. The possible ban, which is backed by Parents Against Underage Smartphones, now needs to get 300,000 voter signatures—if it does, it would appear on the 2018 ballot.

The movement is being led by Tim Farnum, an anesthesiologist and father who was inspired to start the group and the ballot initiative after taking away his then–12-year-old son’s phone; he felt the boy’s constant smartphone use was having a psychological effect on him, according to the Coloradoan.


Opponents of the ban feel smartphone use should remain a family matter and that a law like this would allow the government to interfere too much in private citizens’ lives. Democratic state Sen. John Kefalas is against the ban.

“I know there have been different proposals out there regarding the internet and putting filters on websites that might put kids at risk. I think ultimately, this comes down to parents,” he said to the Coloradoan. “Making sure their kids are not putting themselves at risk.”

If passed, the law would not quite ban smartphone use for kids—it would require stores that sell smartphones to show proof, in the form of a monthly statement to the Colorado Department of Revenue, that they asked customers the age of the phone’s primary user during a purchase. A store that fails to do this would receive a warning, and repeat offenders could face a $500 fine.

In 2011, the American Academy of Pediatrics issued guidelines that stated no children under the age of 2 should be watching television. The decision was controversial, and in October 2016, the AAP revisited it, stepping back from it and releasing a wider statement on how kids, primarily 5- to 18-year-olds, interact with various technologies.

That statement suggested that there are both negative and positive effects to media use for children. Among the positive effects, they listed “new ideas and knowledge acquisition, increased opportunities for social contact and support.” Some negative effects are added risk of exposure to “inaccurate, inappropriate, or unsafe content and contacts; and compromised privacy and confidentiality.”

The AAP does not suggest banning smartphone use for those under 13, but it does recommend that parents designate media-free times for the family and media-free locations, like the bedroom, make sure their children get adequate sleep and exercise, and address what kind of media their children are using.

Essentially, the AAP errs on the side of leaving monitoring of usage to families and pediatricians. An interesting supplement comes from a 2014 statement from Zero to Three, a nonprofit that seeks to give society the knowledge needed to support infants and toddlers, which suggests that parents don’t just need to limit their kids’ screen time; they should limit their own when they are interacting with their children, too. Both AAP and Zero to Three found that parents sharing screen time with their kids can be a good thing if the adults can make the experience more interactive and make sure children are using their minds and bodies. It might be harder for a parent to control the screen time of a preteen than a toddler, but realistically, even if the measure passes, it would be even harder for the state to accurately control screen time for those under 13.

June 19 2017 7:35 PM

Lyft Isn’t Reinventing City Buses. It’s Undermining Them.

If it carries passengers like a city bus, and it has fixed routes and fares like a city bus, it might as well be a city bus.

That, at least, was the verdict rendered by a chorus of Twitter wags Monday in response to a Lifehacker post about a new service Lyft is testing in San Francisco and Chicago called Lyft Shuttle. Both the site and the company invited the criticism by treating the concept as if it were novel, without mentioning its resemblance to a certain well-established form of mass transit. Tech critics had a field day, because it looked like a classic instance of Silicon Valley’s exasperating habit of mistaking privatization for innovation.


The joke is funny—but a little too easy, at this point. (My Slate colleagues laughed off Uber’s “smart routes” service in similar terms two years ago.) Lyft’s marketing strategy for Shuttle may lack self-awareness. But to dismiss such services as redundant is to obscure both the conditions that give rise to them and the real threat that they pose to public infrastructure.

No doubt Lyft is aware that city buses exist—as are lots of people who either choose not to ride them, or who ride them while pining for a better alternative. Like Uber before it, the company has come to believe there’s unmet demand for a service that resembles a city bus in some important ways yet differs in others.

One rather obvious difference between Lyft Shuttle and the city bus is that the shuttles aren’t, well, buses. They’re cars, and that means both a different experience for the riders and a different level of flexibility for the drivers and route planners. Lyft can plan its shuttle routes according to its own fine-grained data on rider behavior, and can rearrange them at any time. That’s in contrast to city bus systems such as San Francisco’s MUNI, which gets painfully overcrowded on key routes during commute hours, but runs mostly empty at other times.

The more intimate setting of a car also puts more pressure on Lyft to ensure that troublesome riders are booted from the service, which it can do more easily because it’s private. At the same time, the fact that you need a smartphone to order a Lyft makes it likely to exclude older and poorer riders. The fact that you need a bank account makes it inaccessible to many undocumented immigrants.

Those might all be big selling points for Lyft Shuttle if you’re a young techie who doesn’t want to deal with drunks or belligerents on your daily commute. It’s more concerning if you believe in the egalitarianism of public transit, which is explicitly designed to serve the underserved. (The same goes for the flexibility of Lyft Shuttle’s route planning, which is great for Lyft but not so great for anyone who’s left relying on it because they lack alternatives.)

Lyft didn’t reinvent the city bus, any more than Uber reinvented the taxi. It’s offering a service that is likely to compete with city buses, for better or worse. And rather than glossing over the differences, we should be highlighting them, lest we forget what makes genuine public transit worth preserving.

Once the jokes subside, there are at least two lessons here for cities and transit advocates. One is that transit authorities need to adapt more quickly to new technologies (and taxpayers and policymakers need to give them the funds to do so). As my colleague Henry Grabar pointed out last year, bus ridership has been steadily declining nationally—not because people don’t want to ride the bus, but because routes have been cut and systems have failed to modernize in key ways that could make their service more convenient.

The second lesson is that if mass transit systems don’t adapt, startups (and wealthy employers) will step in, siphoning away not only riders but the public demand for funding and improvements. This has already been going on for years, and it’s only going to get worse as Uber, Lyft, and others keep growing and finding new niches to serve. Some local governments now justify cuts to transit service because of the appeal of companies like Lyft, setting up a cycle of further disinvestment.

It’s tempting to laugh at Lyft for reinventing something that already exists. But if tech critics and transit activists don’t take seriously the challenge that services like Lyft Shuttle pose, then the joke is on us.

June 19 2017 6:44 PM

The Main Revelation from the GOP Data Firm’s Leak? Political Data Is Vastly Overvalued.

Let’s say you had some data that you wanted to make accessible to anyone in the entire world. You might, as many people do, accomplish this by renting some server space from Amazon and posting your data there—and anyone who typed in the appropriate web address could then access it. It’s easy. It’s also exactly what Deep Root Analytics apparently did when the firm was hired by the Republican National Committee last year to compile voter data for a fee of nearly $1 million. Unfortunately, of course, Deep Root wasn’t supposed to have made that data public.

But it had. Last week, UpGuard cyber risk analyst Chris Vickery discovered more than 1 terabyte of the data, containing personal information about 198 million U.S. citizens, sitting completely unprotected on that Amazon server, available to anyone who had been given—or could find—its web address (which, incidentally, was the Amazon subdomain “dra-dw” for Deep Root Analytics Data Warehouse).
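To see why a misconfigured bucket is so exposed, it helps to know that Amazon S3 serves every bucket at a predictable web address derived from its name. The following is a minimal sketch of that addressing scheme, using the "dra-dw" bucket name reported in the story; the object key is a hypothetical stand-in, since the actual file paths were not published here.

```python
# Sketch: why an unprotected S3 bucket is world-readable. S3 exposes each
# bucket at a predictable subdomain, so once "dra-dw" was left open, anyone
# who guessed or discovered the name could fetch its contents with a plain
# HTTP GET -- no credentials required. The object key below is hypothetical.

def s3_public_url(bucket: str, key: str) -> str:
    """Build the virtual-hosted-style URL S3 serves for an object."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

url = s3_public_url("dra-dw", "voter_file.csv")
print(url)  # https://dra-dw.s3.amazonaws.com/voter_file.csv
```

The point is that there is no obscurity to hide behind: once a bucket's access policy allows public reads, its name is the only secret, and short, guessable names like "dra-dw" are routinely found by scanners.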

June 19 2017 2:11 PM

Theresa May's Ideas About Online Safety and Terrorism Might Backfire

This post originally appeared on The Conversation.


In the wake of the recent attacks in Manchester and London, British Prime Minister Theresa May has called on social media companies to eliminate “safe spaces” online for extremist ideology. Despite losing the majority in the recent election, she is moving forward with plans to regulate online communications, including in cooperation with newly elected French President Emmanuel Macron.


May’s statement is just one of several initiatives aimed at “cleaning up” the internet. Others include Germany’s proposal to fine social media companies that fail to remove illegal content and the Australian attorney general’s call for laws requiring internet companies to decrypt communications upon request.

It is understandable to want to do something – anything – to help restore a lost sense of security. But as a human rights lawyer who has studied the intersection of human rights and technology for the last 10 years, I think May’s proposal and others like it are extremely concerning. They wrongly assume that eliminating online hate and extremism would reduce real-world violence. At the same time, these efforts would endanger rather than protect the public by curtailing civil liberties online for everyone. What’s more, they could involve handing key government functions over to private companies.

Weakening security for all

Some politicians have suggested tech companies should build “back doors” into encrypted communications, to allow police access. But determined attackers will simply turn to apps without back doors.

And back doors would inevitably reduce everyone’s online safety. Undermining encryption would leave us all more vulnerable to hacking, identity theft and fraud. As technology activist Cory Doctorow has explained: “There’s no back door that only lets good guys go through it.”
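Doctorow's point can be made concrete with a toy model. The sketch below is not real cryptography (it uses a trivial XOR cipher purely for illustration); it shows that a "back door" is just an extra key, and the math grants identical power to whoever holds it, whether that is the police or a thief who steals a copy.

```python
# Toy illustration (NOT real cryptography): a "back door" is an escrowed key
# that decrypts everything. The cipher cannot tell a "good guy" holding the
# key from a "bad guy" holding a stolen copy -- both decrypt identically.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a repeating key (same op encrypts/decrypts)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

backdoor_key = b"escrowed-master-key"   # held by the authorities, in theory
message = b"meet at the station at noon"
ciphertext = xor_cipher(message, backdoor_key)

police_read = xor_cipher(ciphertext, backdoor_key)  # lawful access
thief_read = xor_cipher(ciphertext, backdoor_key)   # stolen copy of the key
assert police_read == thief_read == message
```

A real back-door scheme would involve far more machinery, but the asymmetry never appears: any mechanism that lets one party bypass encryption is a mechanism anyone who obtains it can use.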

The harms of speech?

May’s statement also reflects a broader desire to prevent so-called “online radicalization,” in which individuals are believed to connect online with ideas that cause them to develop extreme views and then, ultimately, take action.

The concept is misleading. We are only beginning to understand more about the conditions under which speech in general, and particularly online speech, can incite violence. But the evidence we have indicates that online speech plays a limited role. People are radicalized through face-to-face encounters and relationships. Social media might be used to identify individuals open to persuasion, or to reinforce people’s preexisting beliefs. But viewing propaganda does not turn us into terrorists.

If it isn’t clear that removing extreme or hateful speech from the internet will help combat offline violence, why are so many governments around the world pushing for it? In large part, it is because we are more aware of this content than ever before. It’s on the same platforms that we use to exchange pictures of our children and our cats, which puts pressure on politicians and policy makers to look like they are “doing something” against terrorism.

Overbroad censorship

Even if online propaganda plays only a minimal role in inciting violence, there is an argument that governments should take every measure possible to keep us safe. Here again, it is important to consider the costs. Any effort to remove only “extremist” content is destined to affect a lot of protected speech as well. This is in part because what some view as extremism could be viewed by others as legitimate political dissent.

Further, the exact same material might mean different things in different contexts – footage used to provoke hate could also be used to discuss the effects of those hateful messages. This is also why we are not likely to have a technological solution to this problem any time soon. Although work is underway to try to develop algorithms that will help social media companies identify dangerous speech, these efforts are in early stages, and it is not clear that a filter could make these distinctions.
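The context problem is easy to demonstrate with a naive filter. In this hypothetical sketch (the banned phrases and sample texts are invented), a simple keyword match flags a news report quoting extremist rhetoric exactly as it flags the rhetoric itself, because the filter sees only the words, not their purpose.

```python
# Minimal sketch of why keyword filtering over-blocks: a list-based filter
# treats journalism *about* extremist rhetoric the same as the rhetoric
# itself. Phrases and sample texts here are hypothetical.

BANNED_PHRASES = {"join the fight", "take up arms"}

def naive_filter(text: str) -> bool:
    """Flag text containing any banned phrase, regardless of context."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

propaganda = "Join the fight and take up arms against the state."
reporting = 'The group urged viewers to "join the fight," analysts reported.'

print(naive_filter(propaganda))  # True
print(naive_filter(reporting))   # True -- legitimate journalism flagged too
```

More sophisticated machine-learning classifiers soften this somewhat, but as the paragraph above notes, distinguishing incitement from commentary, satire, or documentation remains an open research problem, not a solved engineering task.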

The risks of private censorship

Trying to eliminate extremist content online may also involve broad delegation of public authority to private companies. If companies face legal consequences for failing to remove offending content, they’re likely to err on the side of censorship. That runs counter to the public interest in keeping censorship of free speech to a minimum.

Further, giving private companies the power to regulate public discourse reduces our ability to hold censors accountable for their decisions – or even to know that these choices are being made and why. Protecting national security is a state responsibility – not a task for private companies.

If governments want to order companies to take down content, that’s a public policy decision. But May’s idea of delegating this work to Facebook or Google means shifting responsibility for the regulation of speech to entities that are not accountable to the people they are attempting to protect. This is a risk to the rule of law that should worry us all.

The way forward

There is, of course, online material that causes real-world problems. Workers tasked with reviewing flagged content risk harm to their mental health from viewing violent, obscene and otherwise disturbing content every day. And hate crimes online can have extraordinary impacts on people’s real-world lives. We need to develop better responses to these threats, but we must do so thoughtfully and carefully, to preserve freedom of expression and other human rights.

One thing is certain – a new international treaty is not the answer. In her June 4 statement, May also called on countries to create a new treaty on countering the spread of extremism online. That is simply an invitation to censor online speech, even more than some nations already do. Nations need no additional incentives, nor international support, for cracking down on dissidents.

Human rights treaties – such as the International Covenant on Civil and Political Rights – already provide a strong foundation for balancing freedom of expression, privacy and the regulation of harmful content online. These treaties acknowledge legitimate state interests in protecting individuals from harmful speech, as long as those efforts are lawful and proportional.

Rather than focusing on the straw man of “online radicalization,” we need an honest discussion about the harms of online speech, the limits of state censorship and the role of private companies. Simply shifting the responsibility to internet companies to figure this out would be the worst of all possible worlds.
