Future Tense
The Citizen's Guide to the Future

Sept. 15 2017 2:52 PM

Future Tense Event: Franklin Foer to Discuss World Without Mind With Jacob Weisberg

Tech companies like Google, Amazon, Apple, and Facebook have revolutionized our lives, connecting us in ways that were once unimaginable—to one another, to information, and to entertainment. Conventional wisdom leads us to believe that the technologies unleashed by these corporations have empowered us as individuals. But is that really the case?

In World Without Mind: The Existential Threat of Big Tech, a powerful critique of the role these companies play in our economy and in our lives, Franklin Foer argues that the success of these tech juggernauts, with their gatekeeping control over our access to the world's information, has created a new form of dangerous monopoly in American life. Does our infatuation with the technological wonders these companies offer distract us from the price we pay as a society in terms of surrendered privacy, intellectual property rights, and diversity of worldviews? Is our sense of individual empowerment merely an algorithm-fed illusion?


Join Future Tense on Wednesday, Oct. 4, in New York for a conversation with Franklin Foer and The Slate Group Chairman Jacob Weisberg to discuss World Without Mind and the role of these new technologies in our lives. For more information and to RSVP, visit the New America website.

Sept. 15 2017 2:14 PM

We Need a Law Requiring Faster Disclosure of Data Breaches—Now

The Equifax hack is highly disturbing not only because of its massive scope, but also because of the specific type of personal data that was stolen. Credit reporting agencies are supposed to be one of our lines of defense in data security and privacy protection—and Equifax failed in its core mission. Moreover, by waiting six weeks to notify customers, Equifax robbed them of the crucial window during which they may have been able to stem some of the damage. Now, people claiming to be the hackers are demanding Equifax pay roughly $2.6 million in Bitcoin, threatening to dump data on nearly all those affected if they aren’t paid by Sept. 15.

In a world where one line of faulty computer code can mean the difference between normalcy and chaos, it is often not a question of if, but when, the most sensitive systems will be hacked. Given this reality, we must improve our ability to react at every level after companies have been breached. The Equifax debacle exposed three deficiencies in our laws that need to be corrected: We need better protections for consumers, a national reporting system for data breaches, and strong cybersecurity standards for credit reporting agencies.


Companies that hold our most sensitive data need to rethink their relationship with the public. Executives at major firms swear no oaths, but they are just as responsible for the well-being of the American people as any member of Congress—especially today, when companies collect and analyze more data on the average citizen than the government does. Equifax did not fail because it faced an unstoppable attack. Rather, it failed because it took its role as digital gatekeeper for granted. Reports show that Equifax failed to apply a known patch that may have prevented the data breach.

In the aftermath of an attack, every employee—from the CEO to the interns—has to focus on two key goals: stop the bleeding and restore confidence. Instead, Equifax customers were faced with predatory and woefully inadequate services. The company’s rollout of a website used to inform customers of their account status was riddled with technical flaws. In some instances, the very programs Equifax offered to monitor the status of user data were themselves flagged by antivirus software as phishing scams.

If users did manage to get a straight answer about the status of their data, they soon discovered they were barred from suing Equifax due to a fine-print mandatory arbitration clause. Thanks to New York’s attorney general, Equifax has changed its policy—at least in the case of this hack. Yet the fact remains: It is outrageous that Equifax was planning to take advantage of its customers’ precarious position by stripping their rights to sue if they relied on the company’s identity theft service.

To end this consumer abuse, I plan to introduce legislation that would prevent companies from enforcing forced arbitration clauses in the event of a data breach. While my colleagues and I will focus intently on Equifax during the digital autopsy phase to come, we also have to turn our gaze inward. We need to pass a national data breach notification law—now.

Currently, a muddled patchwork of 48 different state laws governs when and how companies are required to report data breaches. Aside from disadvantaging people who live in states with more lax reporting requirements, it also complicates things for companies that want to comply. Increasingly, data isn’t stored in one single place. Depending on a firm’s network architecture, a user’s account information can exist in, say, Newark, Los Angeles, and Chicago all at the same time. That means three—or often more—competing sets of laws.

Add to this the fact that Equifax and similar firms often fall through the regulatory cracks when it comes to oversight (credit reporting agencies are less heavily regulated and monitored than banks, although they hold a goldmine of data) and a stark picture emerges. Strong cybersecurity standards may have prevented this breach. On this front, I plan to offer legislation that would compel credit reporting agencies to adopt clear cybersecurity standards similar to those of the financial industry.

In the coming weeks, Equifax and its top executives will be scrutinized by investigators at the FBI, FTC, and several congressional committees. Congress must serve as a catalyst for action, bringing together consumers who demand better cybersecurity, encouraging agencies to conduct thorough oversight, and helping firms recognize that post-incident services are a crucial part of good data stewardship. Together, we can begin to develop a system that works for the 21st century.

Sept. 15 2017 10:10 AM

Netizen Report: If You Want to Run a Group Chat in China, Be Ready to Censor Your Friends


The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Mahsa Alimardani, Ellery Roberts Biddle, Oiwan Lam, Elizabeth Rivera, Nevin Thompson, and Sarah Myers West contributed to this report.

New regulations in China will make chat group administrators responsible—and even criminally liable—for messages containing politically sensitive material, rumors, violent or pornographic content, and news from Hong Kong and Macau that “has not been reported by official media outlets.” This represents a bold policy shift by extending the work of regulating online content beyond government workers and companies to the users themselves.


The new rules also require internet chat service providers such as WeChat and QQ to verify the identities of users and keep a log of group chats for at least six months. The rules require the companies to moderate users’ access to chat services depending on their “social credit” rating: Those who break rules may see their rights to manage group chats suspended and be reported to the government. Group administrators will be held responsible for what is posted in their groups.

Social media users take sides on Myanmar’s Rohingya conflict
More than 100,000 people from the ethnic minority Rohingya group have been displaced from their homes in northwest Myanmar in recent weeks. Tens of thousands of Rohingya refugees, who are mostly Muslim, are crossing into Bangladesh to escape the fighting between the Myanmar military and a pro-Rohingya insurgent group.

There is plenty of coverage of the situation by various media, ranging from mainstream wire services to independent Rohingya-run outlets like Rohingya Blogger. But it is still difficult to obtain accurate information about the conflict, as journalists both from the region and abroad have been struggling to gain access to the conflict areas, and local media have a history of being punished for—and barred from—covering the Rohingya. Aung San Suu Kyi, the de facto leader of Myanmar, has even accused various media of circulating “fake news” on the topic. Her government has established a Facebook page, known as the Information Committee, that claims to offer verified information about the conflict.

Ample anti-Rohingya propaganda has also spread online, reinforcing the Myanmar government’s contention that Myanmar-born Rohingya are in fact “Bengalis,” or undocumented immigrants from Bangladesh. While many such messages have spread organically, researchers saw a spike of 1,500 new Twitter accounts after clashes broke out on Aug. 25. The accounts are spreading pro-Myanmar government messages and feature hashtags such as #Bengali and #BengaliTerrorists. It is unclear who is behind the new accounts.

The conflict is a hot topic in South and Southeast Asian social media circles, and has proven to be a divisive issue for both citizens and governments in the region.

In Indonesia, which is majority Muslim, a veteran journalist was accused of defamation for comparing former Indonesian President Megawati Sukarnoputri to Myanmar's Aung San Suu Kyi in a Facebook post. In the post, journalist and documentary filmmaker Dandhy Dwi Laksono wrote that if Myanmar’s government is being criticized for its treatment of ethnic Rohingya, the Indonesian government should similarly be held liable for suppressing the independence movement on the Indonesian island of West Papua. He further compared Suu Kyi’s silence on the persecution of the Rohingya to Megawati’s role as party leader of the government, which has recently intensified the crackdown on West Papuan independence activists. If he is prosecuted for and convicted of defamation, Dandhy could face up to four years in prison.

At the other end of the spectrum, the Indian government requested that Twitter locally censor a tweet expressing solidarity with the Rohingya. An estimated 40,000 Rohingya live in India, where their citizenship status has been in legal jeopardy due to recent efforts by conservative legislators to render them “illegal” immigrants.

Palestinian human rights activist arrested over Facebook post
Palestinian human rights activist Issa Amro was arrested by the Palestinian Authority for criticizing a journalist’s arrest in a Facebook post. The post, which is no longer visible on the platform, denounced the arrest of Ayman Qawasmi, who was detained after openly criticizing the PA and calling for the resignation of Palestine’s president and prime minister. Qawasmi was released, but Amro remains under arrest, charged with stirring sectarian tensions and “speaking with insolence.” He is also facing challenges in an Israeli military court on disputed charges relating to his political protest activities. The U.N. High Commissioner for Human Rights published a statement expressing concern at his arrest and urging his release.

Salvadoran journalists face violent threats on social media
El Faro and Revista Factum, two highly regarded independent news websites in El Salvador, received violent threats on social media targeting specific journalists who have been covering corruption in the country’s criminal justice system. One threatening tweet said Factum and El Faro journalists would "end up like Christian Poveda," a French-Spanish journalist killed by members of the Mara Salvatrucha gang (also known as MS-13) in 2009. The head of the Salvadoran national police, Howard Cotto, and Vice President Óscar Ortiz said they were aware of reports of illegal activity by police officers and promised to open an investigation.

Japanese activists take to the streets, stomp on hateful tweets
Demonstrators gathered outside Twitter’s Japan headquarters in Tokyo, demanding the company take more action to rein in hate speech. Tokyo No Hate, a volunteer collective of activists, led the demonstration by covering the sidewalk in front of the office with printouts of abusive tweets. Protesters symbolically stomped on the tweets before crumpling them up and depositing them in recycling bins.

Chile doubles down on data retention (literally)
A secret decree by the Chilean government recently made public by investigative journalists modifies the country’s law about the interception of communications. It extends requirements for companies to retain data on digital communications made in Chile from one to two years, and asks companies to store additional metadata on communications. It also contains provisions that could stymie the use of encryption technologies that would hinder the delivery of this information. The Santiago-based digital rights group Derechos Digitales says the law may be unconstitutional.

New Research

“You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech”—Eshwar Chandrasekharan et al., ACM Transactions on Computer-Human Interaction

Sept. 14 2017 6:33 PM

Facebook’s Offensive Ad Targeting Options Go Far Beyond “Jew Haters”

ProPublica reported Thursday that it was able to use Facebook’s advertising platform to target users who had expressed interest in topics such as “Jew hater” and “German Schutzstaffel,” also known as the Nazi SS. And when ProPublica’s reporters were in the process of typing “Jew hater,” Facebook’s ad-targeting tool went so far as to recommend related topics such as “how to burn Jews” and “History of ‘why Jews ruin the world.’ ”

To make sure the categories were real, ProPublica tried to purchase three ads, or “promoted posts,” targeting those users. Facebook’s targeting tool initially wouldn’t place the ads—not because of anything wrong with the categories, but simply because the number of Facebook users interested in them was beneath its preprogrammed threshold. When ProPublica added a larger category to “Jew hater” and the others, however, Facebook’s ad tool reported that its audience selection was “great!” Within 15 minutes, the company’s ad system had approved all three ads.


Contacted about the anti-Semitic ad categories by ProPublica, Facebook removed them, explaining that they had been generated algorithmically. The company added that it would explore ways to prevent similarly offensive ad targeting categories from appearing in the future.

Yet when Slate tried something similar Thursday, our ad targeting “Kill Muslimic Radicals,” “Ku-Klux-Klan,” and more than a dozen other plainly hateful groups was similarly approved. In our case, it took Facebook’s system just one minute to give the green light.

Slate was able to place an ad that included the following targeting categories, among many others, with the help of Facebook's algorithmic targeting tool.

Screenshot / Facebook.com

This isn’t the first time the investigative journalism nonprofit has exposed shady targeting options on Facebook’s ad network. Last year, ProPublica found that Facebook allowed it to exclude certain “ethnic affinities” from a housing ad—a practice that appeared to violate federal anti-discrimination laws. Facebook responded by tweaking its system to prevent ethnic targeting in ads for credit, housing, or jobs. And last week, the Washington Post reported that Facebook had run ads from shadowy, Kremlin-linked Russian groups that were apparently intended to influence the 2016 U.S. presidential election.

The revelation that Facebook allows advertisers to target neo-Nazis and anti-Semites comes at a time when it and other tech companies are under growing scrutiny for their role in facilitating online hate and white supremacy. As our colleague April Glaser recently reported, that change in attitude from previously permissive tech companies has begun to give rise to a sort of right-wing shadow web that embraces controversial, offensive, and even hateful speech.

But in the meantime, it’s clear that major platforms such as Facebook have big messes of their own still to deal with. Facebook’s ad network, in particular, still seems to embody an “anything goes” approach to targeting, despite fixing a few high-profile problems such as the housing discrimination option.

About an hour after ProPublica published its story Thursday, Slate was able to place its own ad on Facebook using similarly offensive search terms for audience targeting. Though the company had removed the specific terms mentioned in ProPublica’s search, it took only a few minutes to find myriad other categories of the same ilk that were still available on the company’s ad targeting tool.

Following ProPublica’s methods, we built an ad to boost an existing, unrelated post. We used Facebook’s targeting tool to narrow our audience by demographics, including Education and Employer. We found and included 18 targeting categories with offensive names, each of which comprised a relatively small number of users, totaling fewer than 1,000 people altogether.

As with ProPublica’s ad, Facebook’s tool initially said our audience was too small, so we added users whom its algorithm had identified as being interested in Germany’s far-right party (the same one ProPublica used). That gave us a potential audience of 135,000, large enough to submit, which we did, using a $20 budget. Facebook approved our ad one minute later.

Below are some of the targeting groups Facebook allowed us to use in the ad. Many were auto-suggested by the tool itself—that is, when we typed “Kill Mus,” it asked if we wanted to use “Kill Muslim radicals” as a targeting category. The following categories were among those that appeared in its autocomplete suggestions under the option to target users by “field of study”:

  • How kill jewish
  • Killing Bitches
  • Killing Hajis
  • Pillage the women and rape the village
  • Threesome Rape

Under “school,” we found “Nazi Elementary School.” A search for “fourteen words,” a slogan used by white nationalists, prompted Facebook to suggest targeting users who had listed their “employer” as “14/88,” a neo-Nazi code. Other employers suggested by Facebook’s autocomplete tool in response to our searches:

  • Kill Muslimic Radicals
  • Killing Haji
  • Ku-Klux-Klan
  • Jew Killing Weekly Magazine
  • The school of fagget murder & assassination

Some of these categories had just one or two members; others had more. The group of users who had listed “Ku-Klux-Klan” as their employer included 123 people. This seems to imply that while Facebook’s ad tool rejects a total audience that is too small, by default it allows an ad to target groups as small as a single individual, as long as other, larger groups are also targeted.
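
To make that inference concrete, here is a toy sketch in Python of the kind of size check the reporting implies: The tool appears to validate only the combined audience, with no floor on any individual group. The threshold value and function name are hypothetical illustrations, not Facebook’s actual code or API, and summing sizes ignores any overlap between groups.

# Hypothetical minimum combined-audience size; the real cutoff isn't public.
MIN_TOTAL_AUDIENCE = 1_000

def ad_passes_size_check(group_sizes: list[int]) -> bool:
    """Approve the ad if the combined audience is big enough,
    even when some targeted groups contain a single person."""
    return sum(group_sizes) >= MIN_TOTAL_AUDIENCE

# 18 tiny, hateful groups (the largest had 123 members) fail on their own ...
tiny_groups = [123, 2, 1] + [1] * 15
print(ad_passes_size_check(tiny_groups))              # False

# ... but slip through once one large, innocuous group is added.
print(ad_passes_size_check(tiny_groups + [135_000]))  # True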

Facebook did not immediately respond to Slate’s request for comment.

Sept. 14 2017 1:56 PM

Why You Should Be Suspicious of That Study Claiming A.I. Can Detect a Person’s Sexual Orientation

Recently, the A.I. community was left largely stunned when a study released by two Stanford researchers claimed that artificial intelligence could essentially detect whether a person’s sexual orientation is gay or straight. For those of us who have been working on issues of bias in A.I., it was a moment we had long foreseen: Someone would attempt to apply A.I. technology to categorize human identity, reducing the rich complexity of our daily lives, activities, and personalities to a couple of simplistic variables. The now-infamous study is really only the tip of the iceberg when it comes to the dangers of mapping predictive analytics onto nuanced questions of human identity. The study, which used entirely white subjects who had posted their profiles and photographs on dating sites, concluded that its neural network could predict whether a person was gay or straight more than 70 percent of the time (though the accuracy depended on gender and on how many images were presented).

The study was deeply flawed and dystopian, largely due to its choices of whom to study and how to categorize them. In addition to only studying people who were white, it categorized just two choices of sexual identity—gay or straight—assuming a correlation between people’s sexual identity and their sexual activity.  In reality, none of these categories apply to vast numbers of human beings, whose identities, behaviors, and bodies fail to correlate with the simplistic assumptions made by the researchers. Even aside from the methodological issues with the study, just focus on what it says about, well, people. You only count if you are white. You only count if you are either gay or straight.


“Technology cannot identify someone’s sexual orientation,” said Jim Halloran, GLAAD’s chief digital officer, in a statement. “What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated.” Halloran continued, “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.”

Unsurprisingly, the researchers claimed that critics were rushing to judgment prematurely. "Our findings could be wrong,” they admitted in a statement released Monday. “[H]owever, scientific findings can only be debunked by scientific data and replication, not by well-meaning lawyers and communication officers lacking scientific training."

It may be tempting to dismiss this study as a mere academic exercise, but if this sort of research goes unchallenged, it could be applied in terrifying ways. Already, LGBT people are being rounded up for imprisonment (Chechnya), beaten by police (Jordan), targeted for being “suspected lesbians” (Indonesia), or at risk of being fired from military service (United States). What if homophobic parents could use dubious A.I. to “determine” whether their child is gay? If A.I. plays a role in determining categories of human identity, then what role is there for law to challenge the findings of science? What is the future of civil rights in a world where, in the name of science, the act of prediction can go essentially unchallenged? These are not just questions involving science or methodology. Indeed, they can often mean the difference between life, liberty, equality—and death, imprisonment, and discrimination.

The irony is that we have seen much of this before. Years ago, constitutional law had a similar moment of reckoning. Critical-race scholars like Charles Lawrence demonstrated how the notion of color blindness actually obscured great structural inequalities among identity-based categories. The ideals enshrined in the U.S. Constitution that were meant to offer “formal equality” for everyone, these scholars argued, were not really equal at all. Indeed, far from ensuring equality for all, the notionally objective application of law actually had the opposite effect of perpetuating discrimination against different groups.

There is, today, a curious parallel in the intersection between law and technology. An algorithm can instantly lead to massive discrimination between groups. At the same time, the law can fail to address this discrimination because the rhetoric of scientific objectivity forecloses any deeper, structural analysis of the bias that lies at the heart of these projects and the discrimination that can flow directly from them.

In this case, the researchers are right that science can go a long way toward debunking their biased claims. But they are wrong to suggest that there is no role for law in addressing their methodological questions and motivations. Instead, the true promise of A.I. does not lie in the information we reveal about one another, but rather in the questions it raises about the interaction of technology, identity, and the future of civil rights. We can use A.I. to design a better world. But if we leave civil rights out of the discussion, we run the risk of reproducing the very types of discrimination we hope to eradicate.

Sept. 14 2017 10:48 AM

Nintendo Unleashes Mario’s Nipples in a New Game, and They Are Super

Official portrait.

Nintendo

In his official portrait, Nintendo’s iconic hero Mario poses with confident insouciance, hands placed jauntily on his hips, gut projecting outward. This is the look of a man who knows he can get away with anything, no matter how internally contradictory or tacky, including a pair of white gloves that ill befit his blue overalls with their oversize gold buttons. Even a red cap with his own logo emblazoned on it—a true sign of class if ever there was one, I’m sure.

This week, however, we have seen a new Mario. Mario as never before. Mario, if you will, gone wild.


On Wednesday, Nintendo showed off a promotional video for the forthcoming Super Mario Odyssey. Though there was much to enthuse over here for the franchise’s most ardent fans—an ice kingdom! An airship shaped like a top hat!—one truly novel detail stood out: Mario takes off his shirt during a trip to the beach. What’s more, for what is apparently the first time, he appears to have nipples.

The reaction was immediate and overwhelming. “Shirtless Mario Leads To Widespread Pandemonium,” read the headline to a Kotaku post that rounded up so many tweets I stopped counting them. “Here’s What Mario’s Nipples Look Like, I Guess,” Entertainment Weekly obligingly reported. “Mario’s Nipples EXPOSED *NOT CLICKBAIT*,” promised a YouTube video.

If our protagonist’s partial nudity is shocking (and OK, yes, it is not), the surprise derives as much from its seeming break with tradition as it does from Nintendo’s family-friendly reputation. Indeed, Mario’s conventional attire says as much about the history of gaming hardware as it does about the character himself.

The stark contrasts of his traditional uniform—evident in that official portrait—are persistent evidence of the limited color palettes available on the Nintendo Entertainment System and other early gaming machines. Taking advantage of those carefully managed resources, Shigeru Miyamoto and his collaborators blocked out their hero’s look in a way that would ensure the player’s avatar stood out against the background. In Super Mario Bros., for example, his original overalls typically appear to be white or brown, presumably to create a clear contrast with the flat blue of the sky behind him.

As the power of Nintendo’s hardware—and the cleverness of its software designers—increased, the appearance of their hero began to change, but always conditionally so. The raccoon-like Tanooki suit, introduced in Super Mario Bros. 3, for example, demonstrates their growing ability to work within, and expand upon, the console’s limitations. Even as the underlying technology grew more sophisticated, however, they maintained many of the original choices structurally dictated by earlier constraints. What started as a response to a machine’s finitude became the hallmark of narrative continuity. That the delicately shaded and dynamically lit Mario of the newest title still resembles the character we met more than 30 years ago is, in other words, a testament to the evolutionary history of digital architecture.

But if Mario as we see him now is a consequence of the ways he has been, perhaps we should think differently about his newly revealed torso. Remember, Mario lives in a world peopled almost entirely by animate mushrooms and sentient turtles. Mario may look human, but how do we know that we’re not simply projecting on him, imposing our own anthropocentric expectations of what a hero should be?

Think again about those gilt buttons on his overalls in the portrait above, the sole remaining hallmark of his former working-class profession. (He is, we are told, not a plumber these days.) They seem to be positioned almost exactly where his “nipples” appear to be in the new footage. What if those round discolorations aren’t holdovers from mammalian prenatal development after all? Maybe they are, instead, evidence that his conventional costume fits his whale-like body just a little too tightly.

Still from Super Mario Odyssey.

Nintendo

Perhaps, reader, you think my hypothesis ridiculous, and it may well be, but there are other signs that things are not what they seem. As many (here at Slate and on the wider internet) were quick to note, Mario’s bare body is strangely hairless, as if his chest and newly revealed arms had been shorn clean by Bowser’s cleansing fire. His face is hirsute as ever, but strangely so. Pausing the video, we note that his mustache, eyebrows, and coiffure are all different shades and textures. Is Mario—like the Mephistophelian Judge in Cormac McCarthy’s Blood Meridian—beset with alopecia universalis? Is he, perhaps, wearing a wig? Does a hastily spirit-gummed toilet brush adorn his upper lip?

It’s possible that we will never know. We may think we’ve seen our hero as he really is, but I suspect that his body still holds many secrets. Gaming systems will keep advancing, but our hero will remain what he always has been. We have, I think, yet to see his true form. Let us pray that we never do.

Sept. 13 2017 2:39 PM

The Trump Administration Is Barely Regulating Self-Driving Cars. What Could Go Wrong?

We tend to think of self-driving cars in utopian terms, as benevolent conveyances that, once optimized, will make their passengers safer by removing their human shortcomings from the transportation equation. But until that future happens, they’re also large robots asking that we trust they won’t harm us. That includes not only passengers but also other drivers, kids that dart out onto the street, bicyclists, storefronts, and basically anyone or anything that might be hit by a hulking pile of steel capable of movement at more than 60 miles per hour.

That trust isn’t going to be easy to win. But President Trump’s Department of Transportation doesn’t seem too concerned. Transportation Secretary Elaine Chao released a set of guidelines on Tuesday for the budding self-driving car industry. Her approach: Let’s not regulate it.


The new guidelines, dubbed Vision for Safety 2.0, actually scale back Obama-era rules released last year that were already quite lenient. Like the old guidance, Chao’s new safety standards are as optional as a sunroof. “This Guidance is entirely voluntary, with no compliance requirement or enforcement mechanism,” reads the document. That means Lyft, Uber, and Waymo, Google’s self-driving car project, are free to ignore them. And considering Uber’s demonstrated disdain for regulations, it’s hard to imagine that, without hard federal requirements, the Silicon Valley entrepreneur types behind these companies will comply. (Of course, these companies are currently beholden to state and local regulations, which are hardly uniform.)

Enforceability aside, the new guidance includes fewer safety recommendations than last year’s, too. There’s now a 12-point safety standard, as opposed to the 15 questions that Obama’s DOT recommended carmakers consider. Self-driving car manufacturers are still being asked to think about things like how vehicles can safely pull over if something goes awry and how to safely operate on different types of roads, but the new guidelines exclude considerations like driver privacy, which may become important down the road, since driverless cars by design collect a massive amount of data. While a light regulatory touch will likely help developers innovate quickly and test what works and what doesn’t, a lack of real safety mandates could be a recipe for disaster—especially because these cars need to be tested on human-occupied roads.

A safety assessment is what the DOT asks carmakers to voluntarily submit to demonstrate their approach to safety and the guidelines. Yet as Deborah Hersman, the president and CEO of the National Safety Council, put it, since the first self-driving car guidelines were released last year, “DOT has yet to receive any Safety Assessments, even though vehicles are being tested in many states.” Surprise! When regulators don’t require safety compliance, manufacturers don’t comply.

The guidelines also no longer apply to cars with partial automation, where drivers are still asked to “remain engaged with the driving task.” The timing of the scaled-back guidelines is telling: Also on Tuesday, the National Transportation Safety Board found that Tesla’s semi-autonomous Autopilot system, which is supposed to robotically steer and control a car, “played a major role” in a fatal crash in Florida last year. Joshua Brown, the driver, was the first person to die in a car that drives itself. The NTSB found that Brown’s “inattention,” paired with Tesla’s self-driving system, “permitted the car driver’s overreliance on the automation.”

While the involvement of a self-driving car is tragic in this case, a disturbing number of people die in conventional car accidents every year, too. In fact, the past two years represent the sharpest uptick in automobile-related deaths in more than a half-century. Still, that doesn’t mean the answer is to barrel forward with bringing technologies to market that could cause a new wave of fatalities without proper regulations in place to ensure the safety of those systems.

But even if the regulatory agencies are taking a hands-off-the-wheel approach here, that doesn’t mean Congress has to. The House passed a proposal earlier this month that could force self-driving carmakers to make a clear case that their technology is safe enough to drive alongside cars with humans at the wheel. A companion bill is now being drafted in the Senate.

Still, the House proposal is designed to make it even easier for self-driving cars to hit the road by raising the number of exemptions from regular car regulations self-driving car manufacturers can request—meaning there could be up to 100,000 robocars on American roadways in a few years. Those exemptions could involve things like steering wheels, which autonomous carmakers may not want to include in their designs—or it could involve workarounds regulators haven’t yet thought of. What it ultimately means is we could have more autonomous cars driving on U.S. roads before we have a real sense of what it means for self-driving technology to be designed safely. The roads still belong to the rest of us. And so should the rules.

Sept. 13 2017 2:05 PM

Future Tense Newsletter: Is Science Fiction Predicting the Future?

Greetings, Future Tensers,

For our monthlong series, Future of the Future, we’re writing about the future of prediction. This week, Lawrence Krauss reminds us that there are some things we just can’t see coming. He makes the case as he explains why science-fiction writers couldn’t imagine the internet. “Their job is not to predict the future,” he writes, “it’s to imagine it based on current trends.”


Margaret Atwood, a speculative fiction author known for writing all-too-near tales of the future, affirms this assessment in a delightful interview with Ed Finn. “ … No, I didn’t predict the future because you can’t really predict the future,” the author of The Handmaid’s Tale and Oryx and Crake said. “There isn’t any ‘the future.’ There are many possible futures, but we don’t know which one we’re going to have. We can guess. We can speculate. But we cannot really predict.” That said, autocomplete seems like it’s doing a decent job—for better or worse.

Something else that has proven hard to predict: the end of the world. As Joshua Keating writes, it’s turning out to be a problem for ISIS, which recruits using apocalyptic prophecies that haven’t been coming true. But as he explains, the terrorist organization is hardly the first movement that’s had to adapt because of a false alarm about the End Times. As previous examples show, a failed prediction won’t necessarily mean the end of ISIS.

Returning to the present, here are some pieces we read this week while trying to figure out how bad the Equifax hack actually is:

  • Preparing for the next natural disaster: As we seek the best way to offer assistance to those devastated by recent extreme weather, Jason Lloyd and Alex Trembath consider how we can prevent suffering and loss from disasters like Hurricanes Harvey and Irma in the future.
  • Tesla helps drivers flee Irma: Florida Tesla drivers got a surprise earlier this week when the electric car company remotely extended vehicle battery ranges to help with evacuation efforts—a humane response to disaster that also serves as a reminder that we don’t own our devices the same way we once did.
  • Russian political ads: Last week, Facebook admitted to congressional investigators that it found evidence that Russian operatives bought $100,000 worth of ads targeted at U.S. voters between 2015 and 2017. Will Oremus explains why this is a big deal.
  • Time capsules: Rebecca Onion takes a look inside time capsules from America’s past to discover how our culture and values have changed over time.

Events:

  • From chatbots that provide therapeutic conversation to apps that can monitor phone use to diagnose psychosis or manic episodes, medical providers now have new technological tools to supplement their firsthand interactions with patients. Join Future Tense in Washington, D.C., on Sept. 28 to consider how these and other innovations in technology are reimagining the way we treat mental illness. RSVP to attend in person or watch online here.

Emily Fritcke

For Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

Sept. 12 2017 7:27 PM

You’ll Love Unlocking Your iPhone X With Your Face. So Will Police Trying to Access It.

It takes a lot of selling points to justify a $1,000 phone, and one of the most enticing features of Apple’s new iPhone X is Face ID, which unlocks your phone when you simply look at it. It is also the creepiest.

Rather than asking you to touch the home button with your finger to unlock it, the iPhone X will use its camera to compare your face to scans stored on the phone—a method the company claims is much more secure than a fingerprint.


But unlocking your phone with your face also unlocks a flood of privacy and usability concerns, not the least of which is whether someone will be able to unlock your phone with your picture. (Apple says that will probably be impossible.) And then there’s the potential scenario of police confiscating your phone and unlocking it by holding it up to your face.

Right now, police can’t force you to reveal your password so they can unlock your phone and search it without a warrant—just as they need a warrant to search your house. But some courts have ruled that law enforcement can force you to use your fingerprint to unlock a phone. And so there’s a real concern about what will happen when cops can simply hold your phone in front of you to get inside. As Elizabeth Joh, a law professor at the University of California–Davis who specializes in police use of technology, wrote on Twitter, it’s “a criminal procedure question waiting to happen.”

Some good news is that the new facial-recognition feature on the iPhone X also comes with a software upgrade that reportedly will allow users to disable Face ID unlocking by tapping the power button five times in quick succession. Doing so reverts the phone to requiring your passcode to unlock. But a little-known software trick like that won’t necessarily protect everyone who gets their phone confiscated.

Apple reportedly uses presence recognition to make sure it’s really you there, and not a photo of you. And as for the fear that someone who looks like you might be able to break into your phone, Apple’s thought about that, too. Face ID works by mapping a face with 30,000 invisible dots, which are used to create a mathematical 3-D model. The company calls the special camera hardware it uses to map faces its TrueDepth system. And since Face ID uses artificial intelligence, it actually gets more secure and exact every time you look at your phone, building on its model. In other words, it learns your face as you use it.

Life imitates Game of Thrones.

Apple
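
As a rough illustration of that on-device learning, here is a toy sketch in Python of how a self-updating biometric template might work in general. It is an assumption-laden illustration of the technique, not Apple’s actual Face ID implementation, whose internals are not public; the embedding size, similarity threshold, and learning rate are all invented for the example.

import numpy as np

MATCH_THRESHOLD = 0.90  # hypothetical similarity cutoff
LEARNING_RATE = 0.05    # how quickly the stored template adapts

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def try_unlock(template: np.ndarray, scan: np.ndarray):
    """Compare a fresh face scan (reduced to a numeric vector) to the
    enrolled template. On a match, nudge the template toward the new
    scan so the model tracks gradual changes like beards or aging."""
    if cosine_similarity(template, scan) >= MATCH_THRESHOLD:
        template = (1 - LEARNING_RATE) * template + LEARNING_RATE * scan
        return True, template
    return False, template

# Enroll a face, then attempt an unlock with a slightly changed scan.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
todays_scan = enrolled + rng.normal(scale=0.05, size=128)  # day-to-day drift
unlocked, enrolled = try_unlock(enrolled, todays_scan)
print("Unlocked:", unlocked)  # True: close enough, and the template updates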

And as you grow older or grow a beard or change your hair, Face ID is supposed to pick up on those changes, too. Phil Schiller, the senior vice president of marketing at Apple, said at the event that wearing a hat or glasses or changing your facial hair won’t cause the system to malfunction. But even if Apple thinks it can be sure it’s you, the phone doesn’t seem to have a way to tell whether you’re unlocking it under duress—like, for example, if an abusive partner is forcing you to look at your phone. Sure, Apple could one day add a feature that can read your facial expression, but that would probably be difficult to make reliable. Humans, after all, have a hard enough time as it is picking up on how we feel without special software.

One thing that differentiates Apple’s biometric play from those of other tech giants, like Facebook, is that Apple isn’t creating a massive database of faces that it may one day use to tailor advertisements to you. Rather, similar to the way Touch ID worked, the image of your face is stored locally on the phone; it’s not shipped back to Apple. Retail shops are already using facial recognition to find repeat customers or identify shoplifters, and Facebook could one day work with stores to point out who walked in, how they’re feeling based on their Facebook activity, and what kinds of ads they’re most likely to respond to. Apple won’t.

Privacy concerns with Face ID, unfortunately, might not be addressed until after a police officer actually forces someone to unlock their phone with their face. But a more immediate privacy concern, and one that also probably won’t be resolved until we get to play with the new iPhone X, is whether the facial-recognition sensors will allow us to be as private as we prefer to be when glancing at our phones.

Think about it: At the moment, if you want to slyly check your iPhone under the table or in your coat pocket, you can just pull it halfway out and unlock it with your finger. But with Face ID, it might be hard to be as inconspicuous, and that raises a whole other level of privacy concerns. Our smartphones, after all, are rather private places. We can have clandestine conversations, quietly and privately search the web without anyone really being able to look over our shoulder, or just check sports scores while pretending to listen to someone. But all of that is aided by an iPhone that allows us to quietly unlock it under the table—not one that requires you to look straight at it.

Sept. 12 2017 4:45 PM

The New iPhone’s Most Adorable Feature Is Also Its Most Troubling

The audiences at Apple’s annual announcement events are notoriously vocal. Presented with a parade of incremental advancements—and the occasional real leap forward—they dutifully hoot and applaud, their celebrations so routinized that it’s hard to distinguish real enthusiasm from mere signs of life.

One detail at this year’s event did, however, seem to produce a genuine reaction from the crowd: the company’s description of a new feature that it calls animoji, which has apparently been in the works since at least 2011. “We use emojis to communicate with others to express emotion,” declared Apple Senior Vice President Philip Schiller, gesturing broadly with his hands, his own face placid. “But of course you can’t customize emojis; they only have a limited amount of expressiveness to them.” Never mind that the relative simplicity of emojis is the key to their charm. It is, as many have argued, precisely their limitations that can make them such provocative tools. With animojis, Apple is prepared to change that, offering us the ability to bring these minimalistic characters to life.


Animojis “are emojis that you control with your face,” Schiller said as a toothy panda mask bobbed and grinned (somehow more menacing than charming) on the massive screen behind him. “Animojis track more than 50 facial muscle movements. They’ve been meticulously animated to create amazing expressiveness.” As Craig Federighi went on to explain, these animations are possible thanks to the facial-recognition hardware packed into the new iPhone X. It is, in other words, of a piece with the same technology that will let you unlock your phone by looking at it and swiping up.

His salt-and-pepper hair a CGI’d swoosh, Federighi demonstrated that animoji will be included directly within the phone’s messaging app. “It immediately starts tracking me, so I can make whatever expression I want,” Federighi said as he snarled like a “ferocious” cat, bawked like a chicken, and whinnied like a unicorn, “mythical creature, favorite of the startup.” The system even lets you bring the poop emoji to life. Or, as Federighi put it, “If you were, by chance, wondering what humanity would do when given access to the most advanced facial tracking technology available, you now have your answer.”

Why?

Apple

The demonstration concluded with an exchange of audio animoji messages between Federighi (speaking as a fox) and Apple CEO Tim Cook (as an alien) in which the latter told his subordinate to “wrap this up.” It was a cute bit, but it was also one that should trouble those concerned with the increasing reach of mobile technology. Here, in ascending order of significance, are three reasons:

The first is the least worrisome, but it may be the most irritating. In effect, the potential for spoken, animated messages that Federighi showed off promises to reinvent voicemail for a generation of users who prefer text messages. It does so, however, with one crucial—and potentially maddening—difference: To watch the animation, you have to look at your phone while listening to the audio snippets (as you would while reading a text message) instead of holding it to your ear (as you would while checking a traditional voicemail).

This approach will likely encourage users to play the messages aloud through the phone’s speakers—possibly over and over again, especially if they find the animation amusing. In other words, unless you happen to have headphones attached, Apple is inviting you to fill the surrounding space up with the sonic clutter of your friends’ voices. Those sitting next to you on the bus are unlikely to find the results as amusing as you do.

Second, animoji looks like a sly gambit designed to help sell the almost $1,000 iPhone X. The feature likely won’t be available on devices without facial-recognition technology—and the X seems to be the only handset in Apple’s lineup that has this capacity yet. But while I may not be able to create animojis on my iPhone 6, I almost certainly will be able to receive them through Apple’s messaging ecosystem.

Thus, like iMessage itself—which already distinguishes between your communications with Apple users and those on other platforms—the mere ability to use animoji will become a status symbol. It may, in effect, signify that your interlocutor was willing to plunk down a cool grand for the privilege of ventriloquizing an anthropomorphic pile of poop. While that arguably reflects poorly on them, your inability to respond in kind may say something even worse about you. In that respect, animoji itself is a kind of blackmail.

Third and finally, the very charms of the system are themselves troubling. As many have already noted, and as others will point out, employing facial recognition as a security feature comes at a risk, since it potentially makes it easier for others to unlock your device by holding it up to your face. It also represents the consumer-level creep of technology that can be used to identify protesters and otherwise put personal privacy in jeopardy.

Apple surely knows all of this, and that’s likely why it spent so much of its presentation focused on this adorable but largely inconsequential new feature. With animoji, Apple is, as the literary theorist Roland Barthes might put it, effectively inoculating us against such concerns. You may not, the company implicitly acknowledges, like living in a techno-surveillance state. But you’re going to love playing with this talking panda face. Have fun!!!!

The worst part? If the audience reaction is any indication, Apple is almost certainly right.
