Future Tense
The Citizen's Guide to the Future

July 11 2016 3:40 PM

Is Pokémon Go Actually Good for Your Mental Health?

As all mass cultural phenomena inevitably do, Pokémon Go—the hit augmented reality mobile game—almost immediately spawned a host of concerned criticism in the wake of its July release. One police department warned that criminals were using it to rob unsuspecting players, and a widely circulated blog post argued that it amplifies racial disparities. Others worried that it was inviting reckless driving or causing real injuries. Not all have been so critical of the game, though. To the contrary, some have suggested that it might actually be good for its players’ mental health. Framing the game in these terms may, however, be deceptive—perhaps even destructive.

Generally speaking, the idea seems to be that by encouraging us to leave our homes and head out into the world, Pokémon Go can help combat depression and overcome social anxiety. A roundup of such claims from Twitter Moments suggests the idea is relatively widespread, with one typical user testifying that the game “might just be the best thing for improving my mental health and positive body image.” BuzzFeed’s Alicia Melville-Smith interviewed a handful of players about their affirmative emotional responses to the game, each of them offering deeply personal anecdotes about their relationship to it.

While it’s entirely reasonable to say that the game is beneficial for some players, it’s important not to overgeneralize from their experiences. Moreover, given that the game has been available for less than a week, much of this enthusiasm—like the more hysterical responses it counters—may be a bit premature. In an otherwise enthusiastic article for Attn:, Laura Donovan acknowledges that research is inconclusive on whether merely going for a walk improves our emotional well-being—throwing a bit of cold water on the central premise of much of this testimony. The Harvard-based psychiatrist John Torous goes further, writing on Twitter that to claim the game is a mental health solution “belittles the struggles of many.” Nevertheless, others have taken the premise more or less on faith, inspiring at least one publication to write that “we can apparently expect a surprise health bonus” from the game.

Whatever the effects of low-level exercise, some of the good feelings players point to probably derive from other features of the game, features that have little to do with the demand to stroll. Most of all, there’s the simple pleasure of advancing your character and building your virtual menagerie. Even casual players will quickly gain in-game levels and improve their captured Pokémon in the early going, experiences that can feel like real accomplishments. Such features aren’t unique to Pokémon Go itself, though: You can find them in a host of existing mobile, console, and computer games, including in some more-or-less explicit Pokémon knock-offs. You might also point to Ingress, Pokémon Go’s more elaborate augmented reality predecessor. For many, attaching such features to a beloved franchise likely intensifies their delights, but the initial pleasures the game generates probably aren’t all that different from those you’d discover when first diving into World of Warcraft or another advancement-based role-playing game.

Significantly, it’s entirely possible that these games offer real benefits, even when they don’t encourage players to exercise. Game designer Jane McGonigal has argued as much in Future Tense, writing that “purposeful game play builds self-confidence and real-world problem-solving skills,” which can, in turn, help stave off depression. While some have criticized McGonigal’s embrace of gamification—including in Slate—there’s clearly something to her premise that games can help us engage with our lives, and that they only really go wrong when we use them to flee from ordinary concerns. In this sense, some of the qualities players are attributing to Pokémon Go may simply be the products of video gaming more generally, not features that are unique to the new title.

There’s a slightly troubling quality, then, to the widespread celebration of Pokémon Go’s supposedly restorative powers. By holding the game up over (and implicitly against) others of its kind, these accounts may be inadvertently blowing a familiar dog whistle, one that plays into myths about gamers as mentally ill shut-ins. But that’s not, of course, to say that there’s anything wrong with getting a little exercise while you play (so long as you stretch first).

July 11 2016 11:30 AM

New Mexico Supreme Court: Courts Can’t Use Skype to Get Around the Constitution

Guadalupe Ashford’s body was found behind a trashcan at the edge of a parking lot. She had been bludgeoned to death by a brick, and state forensic analysts collected DNA samples from her body and the murder weapon. This DNA matched that of Truett Thomas, whom the state tried and convicted for first-degree murder and kidnapping. One problem: The forensic analyst who matched the DNA samples had moved out of state, and she was constitutionally required to testify at Thomas’ trial. The analyst didn’t want to schlep back to New Mexico, so the court allowed her to testify via Skype. What a handy solution!

Except it is also a blatantly unconstitutional one, as the New Mexico Supreme Court unanimously ruled on appeal. The problem is the Confrontation Clause of the Sixth Amendment (and its exact analogue in the New Mexico state Constitution), which states: “In all criminal prosecutions, the accused shall enjoy the right … to be confronted with the witnesses against him.” (Laboratory technicians and forensic analysts are considered “witnesses” for Confrontation Clause purposes.) Courtroom confrontation of witnesses was an obsession for the framers of the Bill of Rights: The practice was well-established in English common law, but the British rulers of the pre-revolutionary colonies often jettisoned confrontation rights to punish unruly colonists. So once these colonists broke free from the crown and penned their own charter of liberties, the right to confrontation was restored as an integral component of the criminal justice system.

The Supreme Court has repeatedly recognized that the Confrontation Clause strongly favors physical, in-person, face-to-face confrontation. However, the court relaxed this preference in 1990’s Maryland v. Craig, to let minors who allege sexual abuse testify via a closed-circuit, one-way television procedure. This 5-4 decision came down at the height of America’s child abuse panic, when prosecutors sent thousands of innocent people to prison because children falsely accused them of molestation. Courts played a role in this travesty of justice by relaxing constitutional due process to spare children emotional trauma. Craig is perhaps the most egregious example of the special rules courts crafted for purported child victims. As Justice Antonin Scalia noted in dissent, the faux-confrontation procedure “gives the defendant virtually everything the Confrontation Clause guarantees (everything, that is, except confrontation).”

July 11 2016 10:03 AM

Welcome to Da Share Z0ne, the Coolest Uncool Place on the Internet

I’ve lol’d at more than my share of meme factories and Twitter comedians, and even written about a few along the way. Until recently, however, I’d never been mooned for my efforts. I’m happy to report that that changed when I set out to analyze Da Share Z0ne.

As its name suggests, Da Share Z0ne—available on both Twitter and Facebook—offers up a seemingly endless stream of goofy meme-ified images, all of them theoretically shareable, but only if you want your friends to think you’re a lunatic. Scroll rapidly through its steadily accumulating feed, and you’ll soon get a sense of its general style. Most of the images it uses feature the same borderline comical gothic aesthetic: Scythe-wielding grim reapers flip off the viewer while animated skeletons shred on flaming guitars.

While this mock-fascination with fatalist cool anchors the account’s aesthetic, the text running over the images dials in its silly tone. Inscribed in an array of tacky fonts, these messages operate in winkingly awkward contradistinction to the pictures. Though they sometimes open with tough-guy badassery, the phrases that Da Share Z0ne imprints over its images almost always undercut themselves by the end. One such post depicts a skeleton levitating over an endless field of lava. The text begins, “Get on my level,” only to conclude, “hurry up its lonely out here.” Plenty of other images simply rely on the contrast between image and text, like a picture of a motorcycle jacket–clad skeleton standing in the rain that reads “Sunday is almost Monday.” Another features a bullet-toothed skull in a cowboy hat chomping menacingly on a pipe, only to inquire, “Beggin your pardon, but could you point me to the nearest restroom.”

Key among Da Share Z0ne’s mysteries may be where its anonymous creator finds the steady supply of skeletal art that it incorporates into its images. The account occasionally reuses an image, but its repertoire of (mostly skeletal) art is formidable. When it does repeat an image, it’s mostly to deliberately jokey effect, as when it posted a celebration of its millionth post (“we fuckin did it”), before almost immediately following that up with a clumsily corrected version acknowledging that it was only 655 posts in (“thats still a lot”).

As is often the case with weird internet output, Da Share Z0ne employs deliberate ineptitude to comic effect. “I made some typos?” a skeleton behind the wheel of a luxury car in one recent post asks. “Lets see you zone anf drive.” It’s not just spelling “errors” and absent apostrophes, though: The fonts that the account uses also contribute, especially when it tries out multiple styles in a single post. In that “Get on my level” image, for example, the initial phrase appears in a faux-futuristic script, while the latter shows up in yellowish gothic characters. Switching between the two does double duty, at once nodding to the poseurish wannabe attitude of the account and providing a visual indication that the punchline has arrived.

As the Daily Dot’s Jay Hathaway observes, “The beauty of Da Share Z0ne lies in its universal appeal. Its humor crosses cliquish social boundaries.” In that, it stands apart from much of Weird Twitter, a comic subculture that relies heavily on entangled webs of in-jokes. Da Share Z0ne speaks to a broader audience because it’s not just poking fun at wannabe cool kids. Ultimately, it’s gently making light of the way we all present ourselves on social media—of the way we attempt to show off the best, brightest versions of our lives, only to accidentally reveal just how lame we really are.

There is, however, one area in which Da Share Z0ne remains stridently too cool for school, and that’s in its relationship to the press. When I reached out on Facebook in late June to request an interview, the read receipt indicated that someone had seen my message almost immediately, but no one wrote back for more than 12 hours. When someone finally replied, he or she simply sent a photograph of what appeared to be a hairy, naked butt. (Attempting a reverse image search turned up no other versions of the picture, which suggests that this really is the butt of someone connected to Da Share Z0ne.) I’m probably not alone in this experience: As Hathaway notes, Da Share Z0ne has repeatedly mocked attempts to write articles about it, with at least one of those posts going up right around the time when I sent in my own request for comment.

It’s hard to be too put off by such ribbing, in part because it’s in keeping with everything that makes Da Share Z0ne so much fun. Eager as I’ve been to share Da Share Z0ne with the world, I have to admit that finally doing so leaves me feeling profoundly uncool. Some of that’s structural (for example, Slate copy conventions oblige me to set its name in uppercase letters, even though the account itself favors an all-lowercase spelling), but mostly it just feels really dweeby to explain the greatness of something this self-evidently awesome. Fortunately, I suspect Da Share Z0ne itself has my back, however much its creators might mock me. After all, what do we learn from Da Share Z0ne if not that there’s nothing cooler than being uncool when you’re trying to be rad?

July 8 2016 11:46 AM

What We Know About the Bomb Robot Dallas Police Used to Kill Alleged Shooter

Three suspects have been apprehended after Thursday night’s Dallas shooting, which resulted in the deaths of five police officers and injuries to six more officers and one civilian. Following a prolonged standoff and negotiation with police, one suspect was killed by an explosion detonated by a bomb robot.

Robots have been proliferating in local policing over the last few years. The technology was largely developed for military and large-scale disaster response scenarios, but has obvious applications in local policing as well. It is used to defuse or detonate bombs, scout locations with cameras, work in rubble, and do other jobs that are dangerous for officers. The Dallas shooting appears to be the first time a police robot has been used to kill.

Dallas Police Chief David Brown explained in a press conference Friday morning:

We cornered one suspect and we tried to negotiate for several hours. Negotiations broke down, we had an exchange of gunfire with the suspect, we saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was. Other options would have exposed our officers to great danger. The suspect is deceased as a result of detonating the bomb.

The suspect who was killed claimed that he had planted bombs in the area, but the New York Times reports that officials said they swept the area and didn’t find any. The explosive that killed the suspect was the police department’s own, carried and detonated by the robot.

A June report from the Dallas Morning News (surfaced by the Atlantic’s Adrienne LaFrance) includes descriptions of at least one or possibly two Dallas Police Department robots. The account describes a bomb robot picking up a duffel bag that then exploded, severely damaging a nearby SUV. Later, the piece describes another bomb robot—or perhaps the same one if it survived the other explosion—surveying a scene with its onboard camera, revealing images of two pipe bombs in a van, and later detonating the explosives safely. The Dallas Morning News wrote, "An officer remotely controlled the robot’s movements as they watched the camera images on a screen." These types of police robots are not autonomous, meaning they do not make decisions using artificial intelligence on their own.

It’s not shocking that Dallas police are trained to use robots in the field, but the situation following Thursday’s shooting is unusual. As the Verge points out, police have been using robots in increasingly innovative ways, like to deliver food and a cellphone to a man on the brink of taking his own life in San Jose. But the scenario following the Dallas shooting is much more charged. Crucially, the robot did not make any decisions itself and would not be capable of doing so.

July 8 2016 9:58 AM

Why Do Social Media Platforms Suggest We Follow the Wrong Accounts After a Tragedy?

Social media has become central to the ways we experience and process tragedies of all kinds, making collective phenomena feel personal. Events that would have once been abstract or distant instead feel increasingly immediate, visceral, and—most importantly—human. More than a decade into the social media moment, though, the platforms themselves are still struggling with this capacity for connection—as Facebook’s muddled response to streaming video of police shootings shows. But these companies’ true fight may not be with their users, but with the algorithms that attempt to bring those users together.

On Thursday, in the wake of the fatal shooting of Philando Castile by the police, activist and artist Maira Sutton found that when she searched for information about Castile on Twitter, the site suggested she follow the National Rifle Association’s official account.

While Sutton and others, such as the columnist Trevor Timm, initially suspected this was an ad buy from the NRA, evidence suggested otherwise. Twitter does allow its users to pay to promote their accounts, but those accounts are always clearly labeled as such, according to the company’s official explanation. Since there was no “Promoted” tag on the NRA profile link in Sutton’s screen capture (and no corresponding option to dismiss the suggestion, as there normally would be with an advertisement on Twitter), it seems likely that the recommendation arose more organically.

What occurred, then, is probably much simpler, but no less troubling. As at least one other Twitter user proposed to Sutton, the suggestion likely cropped up because many on the site were tagging the NRA in their own tweets about Castile. That’s understandable, especially given the NRA’s poor record on social media responses (or lack thereof) to shootings. Noting that the two terms often appeared in concert, Twitter automatically suggested them to users who didn’t employ them together, on the assumption that they too might be interested in the connection—a connection that no human presumably reviewed or reflected upon.

Sutton would go on to observe that this possibility was—if anything—arguably worse than a simple ill-considered ad placement, writing that it was “a great example of how terrible Twitter’s algorithms are.” Timm affirmed the point, calling it a product of “Twitter’s awful algorithm.”

Twitter, for its part, hasn’t been especially transparent about how those algorithms work, though it is hardly alone among social media companies in its black-box approach to automated interaction. On its help page about account suggestions, the company discusses its approach in the broadest terms: “We may make suggestions based on your activity on Twitter, such as your Tweets, who you follow, accounts you interact with, and Tweets you engage with.” It further acknowledges that since the “suggestions are generated by algorithms … you may or may not know the accounts or find them relevant.”

A Twitter spokesperson said over email, “In search results, we show accounts that are relevant to the query. We determine what accounts to show based on several signals. For example, if numerous Tweets with the search term also mention a certain account, this account may appear.” Though this explains the mechanics of what happened, it does little to account for the lack of sensitivity exhibited by the pairing.
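
The mechanism the spokesperson describes is easy to caricature in code. Below is a minimal sketch of a co-occurrence recommender of the kind Sutton’s interlocutors hypothesized; the toy data, the suggest_accounts function, and its thresholds are all illustrative assumptions, since Twitter’s actual pipeline is not public:

```python
from collections import Counter
from itertools import chain

# Toy corpus: each tweet is a (text, mentioned_accounts) pair.
# All of this data is invented for illustration.
tweets = [
    ("Philando Castile was killed during a traffic stop", ["@NRA"]),
    ("Why is @NRA silent about Philando Castile?", ["@NRA"]),
    ("Justice for Philando Castile", []),
    ("Thinking about Philando Castile today", ["@ACLU"]),
]

def suggest_accounts(query, tweets, top_n=1):
    """Rank accounts by how often they're mentioned alongside the query term.

    This is the naive co-occurrence heuristic described above: raw counts,
    with no human review and no sense of why the terms appear together.
    """
    query = query.lower()
    mentions = Counter(
        chain.from_iterable(
            accounts for text, accounts in tweets if query in text.lower()
        )
    )
    return mentions.most_common(top_n)

print(suggest_accounts("philando castile", tweets))  # [('@NRA', 2)]
```

The point of the sketch is that the counter cannot distinguish an account tagged in criticism from one tagged in endorsement; both look identical as co-occurrence statistics, which is presumably how a grieving search query ended up paired with the NRA.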

It is clear, however, that Twitter isn’t the only offender on this front—or the worst. On Facebook, for example, Slate senior editor Gabriel Roth found that when he started posting about Brexit—about which he had extensively written—the site began recommending that he “like” white supremacist, libertarian, and men’s rights pages, positions distant from his own, however much they might be entangled with contemporary British politics more generally. (Facebook did not respond to a request for comment.)

Such examples speak to the fundamental awkwardness of social media algorithms, blunt instruments passing themselves off as precision tools. Last year, David Auerbach explored how Facebook might encourage you to friend an ex you never want to speak to again by mining your phone contacts, a system similar to the one Twitter uses for some of its suggestions. Because they, like other contemporary forms of artificial intelligence, lack anything resembling emotional intelligence, the algorithms—powerful as they may be—give little consideration to our human experience of their suggestions.

Trying to correct for algorithmic ineptitude can go bad, too, of course. Facebook famously ran into trouble earlier this year when reporting by Gizmodo suggested that its “Trending” section reflected the liberal political biases of the company’s employees. While Facebook has subsequently worked to address that situation, these suggestion mishaps indicate that social media companies might do well to dial their human sociopolitical sensitivity up just a little higher.

Twitter at least seems to be aware of the problem—and it appears to be grappling with its own systems accordingly. On Thursday, one of the company’s official accounts tweeted, “We’re sick of seeing names trend because they were killed brutally and unjustly.” We are too.

July 7 2016 6:15 PM

Live Video Platforms Should Assume They Will Eventually Host Footage of a Police Shooting

On Wednesday, 32-year-old Philando Castile, a black man from Falcon Heights, Minnesota, was shot and killed by a police officer during a traffic stop. From the front passenger seat, his girlfriend began filming a Facebook Live video of Castile bleeding in the driver’s seat and a police officer pointing a gun at Castile and talking. The video went on for 10 minutes.

After the stream ended, the video was posted to Facebook for playback, but it then went down for at least an hour in the immediate aftermath of the incident. Facebook told the Telegraph, “We’re very sorry that the video was inaccessible. ... It was down to a technical glitch and restored as soon as we were able to investigate.” This technical difficulty seems oddly coincidental given the sensitive nature of the video, but it is conceivable, especially if the video was receiving high traffic. It is now labeled, “Warning—Graphic Video. Videos that contain graphic content can shock, offend and upset. Are you sure you want to see this?”

Facebook has used this type of warning skin before, as with footage of the Walter Scott shooting last April. And certainly other social networks have struggled to make the right calls about policing inappropriate content. In 2014, for example, Twitter sparked controversy when it tried to suppress images of the beheading of journalist James Foley by ISIS. But there’s a whole other dimension when a video streams in real time for anyone to see and is later covered with a warning. Streaming services like Facebook Live, which launched in 2015 for celebrities and in April for all users, and Periscope, which is owned by Twitter and started in 2015, show events as they happen. As such, they bring an additional complication to the already fraught question of how social networks should react, if at all, to controversial user-generated content.

A service like Facebook Live can draw on Facebook’s base of more than 1.6 billion users. Though an individual person may not be able to instantly capitalize on this audience, footage that is societally significant, like the Philando Castile shooting video, can spread quickly. A service like Periscope has a smaller initial base: Bloomberg estimated in June that Twitter has fewer than 140 million active users every day. That’s still millions of people, and such a service can certainly surface important footage, but the scale is different.

The services also take different approaches to graphic video. Facebook writes in its Community Standards:

Facebook has long been a place where people share their experiences and raise awareness about important issues. Sometimes, those experiences and issues involve violence and graphic images of public interest or concern, such as human rights abuses or acts of terrorism. In many instances, when people share this type of content, they are condemning it or raising awareness about it. We remove graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence.

When people share anything on Facebook, we expect that they will share it responsibly, including carefully choosing who will see that content. We also ask that people warn their audience about what they are about to see if it includes graphic violence.

This commentary focuses mostly on how to share content and talks less about what Facebook will do if it feels that a user isn’t meeting these standards. Periscope’s Community Guidelines are similar, but put less emphasis on the inevitability of graphic content.

Periscope is intended to be open and safe. To maintain a healthy platform, explicit graphic content is not allowed. Explicit graphic content includes, but is not limited to, depictions of child abuse, animal abuse, or bodily harm. Periscope is not for content that is intended to incite violence, or includes a direct and specific threat of violence to others. Periscope reserves the right to allow sensitive content when it is artistic, educational, scientific or newsworthy.

Given the data they host, content-sharing platforms have a subtle but deep power. As Motherboard wrote Thursday, “Facebook has become the self-appointed gatekeeper for what is acceptable content to show the public, which is an incredibly important and powerful position to be in.” It may not have been obvious at first, but it’s been recognizable for years now, and it’s time for companies to own it and make their positions plain.

When someone begins to record and stream an in-progress terrorist attack, he or she doesn’t have time to research which site will most value this type of contribution. Facebook’s Community Standards seem to imply that the company is open to supporting content that promotes transparency, while Periscope’s guidelines are more hesitant. For Facebook, it’s time to walk that walk so its users can get a better sense of what to expect. Companies have the right to promote whatever values they want—but they need to make these attributes prominent in their brands and consistent in their application. That way, consumers can make informed choices about where to take their data when it counts the most.

July 6 2016 1:00 PM

Future Tense Newsletter: Genius Ex Machina?

Greetings, Future Tensers,

There are plenty of reasons to be both fascinated by and skeptical of A.I.-generated “creative” works. This week, Dartmouth College’s Dan Rockmore and Allen Riddell further affirmed that ambivalence with their article on a competition they ran for software-authored artworks. As Rockmore and Riddell note, “A Shakespearean sonnet is basically a high-level algorithm,” thanks to the form’s standardized structure. It’s no surprise, then, that an A.I. can create something that looks a great deal like Shakespearean verse, and some of the entries in the competition managed just that. But the genius lies in a poet’s ability to work within limitations—not in mere adherence to convention.
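
Rockmore and Riddell’s sonnet observation is easy to make concrete. Here is a rough sketch of the form’s constraints written as a checker; the function names are mine, the syllable count is a crude vowel-cluster heuristic, and real rhyme checking would need a pronunciation dictionary, so treat this as an illustration of the “high-level algorithm,” not a working validator:

```python
import re

# A Shakespearean sonnet's skeleton: 14 lines of (roughly) iambic
# pentameter, rhymed ABAB CDCD EFEF GG.

def rough_syllables(line):
    """Approximate a line's syllable count by counting vowel clusters."""
    return len(re.findall(r"[aeiouy]+", line.lower()))

def check_sonnet(poem):
    """Return a list of the formal constraints a candidate poem violates."""
    lines = [l for l in poem.strip().splitlines() if l.strip()]
    problems = []
    if len(lines) != 14:
        problems.append("expected 14 lines, got %d" % len(lines))
    for i, line in enumerate(lines, 1):
        syllables = rough_syllables(line)
        if not 8 <= syllables <= 12:  # pentameter is ~10; allow slack
            problems.append("line %d: ~%d syllables" % (i, syllables))
    # A fuller checker would also verify the ABABCDCDEFEFGG rhyme
    # scheme, which requires a pronunciation dictionary (e.g., CMU dict).
    return problems

# Prints 11 for a 10-syllable line; hence "rough."
print(rough_syllables("Shall I compare thee to a summer's day?"))
```

Satisfying a checker like this is exactly the kind of constraint problem software handles well; what it cannot supply is the reason a poet bends those constraints.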

Above all else, Rockmore and Riddell write, A.I. authors still struggle to produce anything that resembles a complete narrative. In the past, that’s meant that humans have frequently had to intervene if they wanted to make it seem like their A.I. “artists” were saying anything remotely meaningful. Accordingly, we’re unlikely to see A.I. independently produce anything with the kind of world-building “audacity” that Konstantin Kakaes identifies in J.R.R. Tolkien’s Lord of the Rings, a mythopoeic complexity that underscores and enables the series’ “intimate, original grandeur.”

For the time being, A.I. authors may have found their true calling in writing about baseball, if only because baseball is a machine for making stories from data. Examining the Associated Press’ new attempt to automate minor league game recaps, Will Oremus writes that the results resemble “what you’d expect from a reasonably competent human reporter, minus any telltale signs of abject boredom, self-loathing, or stifled literary ambition.” Ultimately, though, computers are still better suited for repetitive tasks such as lab work—where they can, Stephanie Wykstra suggests, help combat the reproducibility crisis in the sciences—than for more creative endeavors.

Here are some of the other stories that we read while puzzling over the weight of the kilogram:

  • Health: Investigating a blog that chronicles Victorian attitudes toward illness, Rebecca Onion finds that our contemporary anxieties about technology and the body have a long history.
  • Criminology: Some law enforcement agencies are looking to track and target individuals with particular tattoos. Yael Grauer explores the ways such automated policing might go wrong.
  • Cybersecurity: An Israeli program is training high-school students how to defend against digital threats.
  • Genetics: Home DNA tests may be a booming industry, but most of what they tell you about your ethnic makeup is nonsense.
Events:
  • What concerted steps should Canada, Mexico, and the United States take to ensure that North America will become the world’s leading energy power for generations? Future Tense and the Wilson Center’s Canada Institute invite you to join them in Washington, D.C., at noon on Tuesday, July 26, for a conversation on what it will take for North America to fulfill its energy potential. For more information and to RSVP, visit the New America website, where the event will also be streamed live.

Jacob Brogan

for Future Tense

July 5 2016 4:41 PM

Netizen Report: The U.N. Condemned Internet Shutdowns. But Does It Matter?

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Weiping Li, Hae-in Lim, Laura Vidal, and Sarah Myers West contributed to this report.

Notorious “internet enemies” like China, Russia, and Saudi Arabia approved a resolution at the U.N. Human Rights Council that condemns internet shutdowns and “human rights violations committed against persons for exercising their human rights and fundamental freedoms on the internet.” The resolution is nonbinding, meaning it cannot be enforced as law.

The resolution was hotly debated by the 47-member council, with the aforementioned states—along with South Africa, India, Indonesia, and others—pushing back against language surrounding internet shutdowns. Ultimately, with key support from certain member states and civil society groups, it was approved. The resolution will provide a concrete benchmark and accountability mechanism that advocates can use in efforts to promote online rights in the policy arena.

But what does it mean for internet users and digital activists in countries where shutdowns and threats for online speech are the norm? For those working in the area of online access and rights, the list of current U.N. HRC members inspires concern.

The council includes several countries that are known for mandating platform and internet service shutdowns in the face of political strain—aside from Russia and China, there’s Bangladesh, where Facebook and WhatsApp were blocked for three weeks in 2015 in an effort to quell public instability. There’s Ethiopia, where state persecution of journalists and online human rights advocates is commonplace and where multiple social media platforms in the Oromo region have been blocked this year in response to student protests. There’s Venezuela, which has experienced a smattering of platform shutdowns and wholesale internet blackouts since opposition protests peaked in 2014. Beyond these, member states like Cuba, Ecuador, Morocco, Saudi Arabia, the United Arab Emirates, Qatar, and Vietnam have abysmal records when it comes to human rights violations against online activists and media workers.

While the resolution provides a strong policy standard for what constitutes “good behavior” by national governments around the world, the distance between policy and practice may remain great.

Satirical “Street Children” stuck behind bars in Egypt
Four members of a satirical web video group in Alexandria, Egypt, have been behind bars since May 10 on accusations of undermining national stability. The Ministry of Interior described them as “instigators against the ruling regime” after the group, called “Street Children,” posted a video that mocked President Abdel Fattah al-Sisi and called for him to resign. Their lawyer, who works with Cairo’s Association for Freedom of Thought and Expression, said they had also been accused of spreading “false news,” despite the fact that the videos are clearly in jest. The video received more than 1 million views on Facebook before their page was deactivated.

Founder of protest reporting outlet detained in China
Lu Yuyu, founder of citizen media outlet Not in the News, went missing June 15 along with his girlfriend. Chinese human rights advocates reported June 26 that Lu and his girlfriend Li Tingyu are being detained on suspicion of “picking quarrels and provoking trouble.”

Since 2013, the citizen news outlet has reported and distributed news of mass demonstrations in China via Blogspot, YouTube, and the Twitter account @wickedonnaa. Not in the News also keeps track of the scale and number of incidents, the number of arrested demonstrators, and the reasons behind the demonstrations through its monthly statistics report. It recorded 28,950 incidents in mainland China in 2015 and 9,869 incidents in the first quarter of 2016.

China bulks up on internet governance
A draft security law presented before the standing committee of China’s National People’s Congress increases unspecified social responsibilities for network operators by requiring them to “comply with social and business ethics” and “maintain supervision by both government and public.” It is unclear when it may be passed.

Meanwhile, Lu Wei, the official in charge of overseeing cybersecurity and online censorship, unexpectedly stepped down from his post as director of the Cyberspace Administration of China. His deputy, Xu Lin, who is known as a loyal supporter of President Xi Jinping, will take over.

Peru slaps Google for denying citizen “right to be forgotten”
The Peruvian Data Protection Authority sanctioned Google for ignoring a Peruvian citizen’s right to be forgotten, setting a new precedent in the country. In the case, a Peruvian citizen petitioned the Data Protection Authority to request the removal of links relating to accusations of child pornography against him, after his case was dismissed due to lack of evidence. The Data Protection Authority deemed Google responsible for processing the personal data of Peruvians and issued a $75,000 fine. The company can still appeal the decision in court.

Facebook will keep tracking non-users, at least for now
People who do not have Facebook accounts are nevertheless tracked when they visit the site, unless they use an anonymous browser such as Tor. The Belgian Privacy Commission recently sought to change this by taking the U.S.-based company to court, but it has now officially lost its case against the company. The Brussels Appeals Court dismissed the case, holding that the regulator does not have jurisdiction over the company, which has its European headquarters in Ireland. The Privacy Commission plans to launch a final appeal with the Court of Cassation, which can throw out previous judgments but not deliver new ones; the commission notes that in the past the court has overruled the Court of Appeal on cases involving jurisdiction over foreign companies.

Researchers sue U.S. over cybercrime law
Four academic researchers are suing the U.S. government, claiming they could be prosecuted under the Computer Fraud and Abuse Act for researching algorithmic discrimination. According to the researchers, the CFAA criminalizes gathering or “scraping” publicly available data from websites or creating pseudonymous user accounts on them if the sites’ terms of service prohibit this activity. The CFAA has been criticized for its vague provisions, which have been used to indict a MySpace user for adding false information to her profile, convict an IT administrator for deleting files he was authorized to access, and threaten (now deceased) internet activist Aaron Swartz with 50 years in jail for downloading large quantities of academic articles at the Massachusetts Institute of Technology.

July 1 2016 8:00 AM

Victorians’ Fears About the Ills of Modern Technology Sounded a Lot Like Ours

Late-night overdrinking? Writer’s cramp? Spoiled kids? Such complaints haunted 19th-century Britons. The blog Diseases of Modern Life is run by a group of historians at the University of Oxford, working on a research project about Victorians who feared that new technologies and configurations of social life were making people sick. The body and the world—such commentators thought—had fallen tragically out of sync with one another, in an industrial age of the telegraph and train travel.

This blog is full of fascinating tidbits the researchers have come across in the course of their investigations. (It's also got great images, some of which I've reproduced here.) There are some delicious oddities: In one post, researcher Melissa Dickson describes the ammoniaphone, a creation of a Dr. Carter Moffat, who claimed to take the air of southern Italy, which, opinion agreed, produced the best opera singers, and distill it into a chemical formula that you could huff in order to purify and clarify your voice. Here was a technological solution to rampant air pollution, branded and packaged to offer immediate relief.

Ammoniaphone, ‘for voice cultivation by chemical means’

Science Museum, London. Wellcome Images

In another post, Dickson writes about H.G. Wells’ short story “The New Accelerator,” from 1901, about a chemist who creates a "nervous stimulant" that would let a person move so quickly that he could perform all of his duties at warp speed, thereby defeating the demands of a world that came at him too fast. This satire argued that the only way a person could possibly keep up with the rush of modern life was to become superhuman.

In many of the cases the researchers identify, Victorian ideas about “diseases of modern life” seem to be totally intertwined with ideas about class and social place. Jennifer Wallis analyzes widely circulated stories about new kinds of addiction and intoxication: lady cologne drinkers, who supposedly hid their alcoholism by chugging perfume; lawyers and clerks and other professional men who imbibed alcohol late at night, not at the pub but in their own offices; the well-to-do woman supposedly “picked up on Broadway,” having succumbed to a morphine addiction. (That last lady appeared in an advertisement for Lydia E. Pinkham’s patent medicine, which, the ad implied, she should have turned to instead.) In all of these cases, people who were socially expected to control themselves in every situation were driven to drink by the hectic pace of their work and social lives.

The consequences of such a world for the young were supposed to be dire. In another post, Wallis writes about fears of overpampered or “spoiled” children, who would not eat what they were served and rampaged through their family houses without censure from “indulgent” parents. “Nineteenth-century concerns for spoiled children drew on contemporary ideas about the development of the nervous system and laws of heredity, considering how the process of growing up might literally—and permanently—alter the fabric of the body and brain,” Wallis writes. (Here’s a fine contemporary joke from the London Journal: “There’s one good thing about spoiled children.” “What’s that?” “One never has them in one’s own house.”)

Robert Gavin, "The Flower Mission," undated.

Diseases of Modern Life

Well-to-do late-19th-century women brought flower arrangements into the drab, ugly houses of poor people who were stricken with illnesses, believing that the beauty of the flowers could cure any number of ills. While this idea now seems eminently condescending—surely the stricken poor might prefer deliveries of food for the rest of their family members to eat?—Wallis points out that no less an authority than Florence Nightingale believed in the practice, writing in her Notes on Nursing:

The effect in sickness of beautiful objects … and especially of the brilliancy of color is hardly at all appreciated. … Little as we know about the ways in which we are affected by form, by colour, and by light, we do know this, that they have an actual physical effect.

In offering this “cure,” “flower missionaries” sought to bring a little bit of nature to people who were totally trapped inside the urban cityscape industrialism had created.

Sometimes, the poor were recast as just part of a threatening landscape of urban ruckus, which encroached on the tranquility of the well-to-do class. Furious at the impositions of street musicians, who made it impossible for them to get work done in the city, in 1864 a group including Charles Dickens petitioned the House of Commons to pass a Bill for the Suppression of Street Music. “Your correspondents are professors and practitioners of one or other of the Arts or Sciences. In their devotion to their pursuits … they are daily interrupted, harassed, worried, wearied, and driven nearly mad by street musicians,” the group wrote. The petitioners cared far less for the conditions that might have driven the musicians into such a job, and more for their own peace of mind.

Is it helpful for us, contemplating our own “Diseases of Modern Life”—depression, anxiety, digital distraction, overwork—to know that people have been thinking roughly recognizable thoughts for hundreds of years?

Punch, 1906.

Internet Archive

Writing about a cartoon that appeared in Punch in 1906, Melissa Dickson notes the similarity between this satirical scene—a man and a woman, each with their separate wireless telegraph—and today’s worries about smartphone alienation: “Different technology, same statement ... As dramatically as technology changes, we, at least in the way we regard it, remain surprisingly unchanged.”

But I find it more helpful to think in terms of precursors. It’s not that “nothing has changed,” and so none of our contemporary concerns are valid. It’s more that the social world we live in has been rapidly evolving in response to an explosion in new technologies for at least two centuries. The value lies in thinking critically about the way these anxieties take shape. How many of today’s middle- and upper-class concerns about “Today’s World” would look just as class-bound as the Victorians’ dated worries, if examined from a vantage point of a hundred and fifty years hence?

June 30 2016 5:34 PM

A Tesla Driver Died in a Crash While His Car Was on Autopilot

A Tesla driver died in a crash while his Model S was on autopilot, the company disclosed in a blog post Thursday.

It’s not immediately clear to what extent Tesla’s autopilot system, which has been billed as the most advanced of its kind on the market, was at fault. According to the company, the U.S. National Highway Traffic Safety Administration has opened a “preliminary evaluation” into the system’s performance leading up to the crash. Here’s how Tesla described the accident (italics mine):

What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.

The phrase in italics above is exactly the sort of excuse that everyone involved in self-driving cars had hoped never to have to hear in conjunction with a deadly accident. It’s a classic “edge case” for computer vision, the sort of thing the engineers are supposed to thoroughly solve before we entrust their software with our lives.

According to Tesla, this is the first known death involving its autopilot system. The company reported that its drivers have collectively traveled about 130 million miles in autopilot mode. On average, Tesla noted, one person dies for every 94 million vehicle miles traveled in the United States.* “It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations,” the company added.
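
To see what that statistical framing amounts to, it helps to put both figures in the same units. This back-of-the-envelope calculation uses only the numbers Tesla cites; the single-event caveat in the comments is mine:

```python
# Tesla's figures: 1 known death in ~130 million autopilot miles,
# vs. a U.S. average of 1 death per 94 million vehicle miles.
MILES = 100e6  # normalize both rates to deaths per 100 million miles

autopilot_rate = 1 / 130e6 * MILES    # ~0.77 deaths per 100M miles
us_average_rate = 1 / 94e6 * MILES    # ~1.06 deaths per 100M miles

print("Autopilot:    %.2f deaths per 100M miles" % autopilot_rate)
print("U.S. average: %.2f deaths per 100M miles" % us_average_rate)

# Caveat: a rate built on a single death carries an enormous margin
# of error, so the comparison is suggestive rather than conclusive.
```

On these numbers, autopilot miles look modestly safer than average U.S. driving, but one fatality is far too small a sample to support that conclusion, which is part of why the company's framing reads as defensive.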

The implication is that people shouldn’t rush to label Tesla’s autopilot feature as dangerous. It’s a case the company makes persistently throughout its blog post announcing the crash, making for a tone that’s more defensive than apologetic. CEO Elon Musk did offer public condolences, however.

Tesla’s blog post is worth reading in full, as it lays out a blueprint for the ways in which the company is likely to defend itself in the face of the intense scrutiny that is sure to follow. It’s a test case for how the public and media will respond to the occasional deaths that will inevitably come as carmakers move gradually toward self-driving technology. Tesla appears ready to defend itself with statistics, reminding people that human drivers set a relatively low bar when it comes to safety. Whether statistics are enough to trump people’s fear remains to be seen.

It’s important to note, as Tesla does, that the company’s autopilot system officially requires the driver to keep his hands on the wheel, beeping warnings when it senses inattention. That differentiates it from the fully autonomous driving technology that Google, Uber, and others are developing and testing. And perhaps it will convince regulators and others that accidents such as this one are not to be blamed on the company or its software. Autopilot, Tesla insists, is a safety feature that is meant to be redundant to the driver’s full attention.

Yet there are pitfalls to this approach, as illustrated by YouTube videos showing drivers going “hands-free” or even vacating the driver’s seat while their Tesla is on autopilot. The company has taken steps to prevent this.

Still, as I argued last year after test-driving a Model S on autopilot, the technology—impressive as it is—does seem to tempt drivers to relax their focus. That’s why many other automakers and tech companies are taking a different approach to vehicle automation. Toyota, for instance, views its role as that of a backstop against human error, rather than a substitute for human effort. And Google has removed human drivers from the equation entirely, reasoning that it’s impossible to expect them to drive safely when they know a computer is doing much of the work.

Tesla’s description of the accident does not make it sound as if the autopilot system went rogue and crashed into something. Rather, it seems to have failed to avoid an accident that a fully engaged human driver may or may not have managed to avoid. And while Tesla doesn’t say so, it certainly seems possible that the driver in this case was devoting less attention to the road ahead than he might have if autopilot were not engaged.

I’ve emailed NHTSA for comment and will update when the agency responds.

*Correction, June 30, 2016: This post originally misstated how many miles drivers have collectively traveled with Teslas in autopilot mode. It is 130 million miles, not 130,000. It also misstated the rate at which people die in vehicle accidents. It is one person per 94 million vehicle miles traveled, not one person per 94,000 miles.
