A Tesla Driver Died in a Crash While His Car Was on Autopilot
A Tesla driver died in a crash while his Model S was on autopilot, the company disclosed in a blog post Thursday.
It’s not immediately clear to what extent Tesla’s autopilot system, which has been billed as the most advanced of its kind on the market, was at fault. According to the company, the U.S. National Highway Traffic Safety Administration has opened a “preliminary evaluation” into the system’s performance leading up to the crash. Here’s how Tesla described the accident (italics mine):
What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.
The phrase in italics above is exactly the sort of excuse that everyone involved in self-driving cars had hoped never to have to hear in conjunction with a deadly accident. It’s a classic “edge case” for computer vision, the sort of thing the engineers are supposed to thoroughly solve before we entrust their software with our lives.
According to Tesla, this is the first known death involving its autopilot system. The company reported that its drivers have collectively traveled about 130 million miles in autopilot mode. On average, Tesla noted, one person dies for every 94 million vehicle miles traveled in the United States.* “It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations,” the company added.
The implication is that people shouldn’t rush to label Tesla’s autopilot feature as dangerous. It’s a case the company makes persistently throughout its blog post announcing the crash, giving the post a tone that’s more defensive than apologetic. CEO Elon Musk did offer public condolences, however:
Tesla’s blog post is worth reading in full, as it lays a blueprint for the ways in which the company is likely to defend itself in the face of the intense scrutiny that is sure to follow. It’s a test case for how the public and media will respond to the occasional deaths that will inevitably come as carmakers move gradually toward self-driving technology. Tesla appears ready to defend itself with statistics, reminding people that human drivers set a relatively low bar when it comes to safety. Whether statistics are enough to trump people’s fear remains to be seen.
It’s important to note, as Tesla does, that the company’s autopilot system officially requires the driver to keep his hands on the wheel, beeping warnings when it senses inattention. That differentiates it from the fully autonomous driving technology that Google, Uber, and others are developing and testing. And perhaps it will convince regulators and others that accidents such as this one are not to be blamed on the company or its software. Autopilot, Tesla insists, is a safety feature that is meant to be redundant to the driver’s full attention.
Yet there are pitfalls to this approach, as illustrated by YouTube videos showing drivers going “hands-free” or even vacating the driver’s seat while their Tesla is on autopilot. The company has taken steps to prevent this.
Still, as I argued last year after test-driving a Model S on autopilot, the technology—impressive as it is—does seem to tempt drivers to relax their focus. That’s why many other automakers and tech companies are taking a different approach to vehicle automation. Toyota, for instance, views its role as that of a backstop against human error, rather than a substitute for human effort. And Google has removed human drivers from the equation entirely, reasoning that it’s impossible to expect them to drive safely when they know a computer is doing much of the work.
Tesla’s description of the accident does not make it sound as if the autopilot system went rogue and crashed into something. Rather, it seems to have failed to avoid an accident that a fully engaged human driver may or may not have managed to avoid. And while Tesla doesn’t say so, it certainly seems possible that the driver in this case was devoting less attention to the road ahead than he might have if autopilot were not engaged.
I’ve emailed NHTSA for comment and will update when the agency responds.
*Correction, June 30, 2016: This post originally misstated how many miles drivers have collectively traveled with Teslas in autopilot mode. It is 130 million miles, not 130,000. It also misstated the rate at which people die in vehicle accidents. It is one person per 94 million vehicle miles traveled, not one person per 94,000 miles.
Apple Patents Technology to Block iPhone Cameras in Sensitive Locations. That Could Be Dangerous.
Smartphone cameras have become an important tool for documenting unexpected events or situations, whether they're capturing Julius opening a door or an incident of police brutality. But a patent filed by Apple in 2011 and approved Tuesday describes technology that would allow the company to control iPhone owners' ability to use their cameras.
Spotted by 9to5Mac, the technology would work by scanning photos and videos for the presence of particular infrared signals. If someone wanted to block iPhones from taking photos in a specific location, they would set up an infrared beam that iPhones could detect through image processing. Photos and videos without the infrared signature would be stored to the phone’s memory as normal, but when the processor detected infrared signals, the phone would display a notification that recording had been disabled.
The patent explains, "A transmitter can be located in areas where capturing pictures and videos is prohibited (e.g., a concert or a classified facility) and the transmitters can generate infrared signals with encoded data that includes commands temporarily disabling recording functions."
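The mechanism the patent describes is simple enough to sketch. Here is a toy model in Python; every name and structure in it is illustrative, since the patent describes an idea rather than an API:

```python
from dataclasses import dataclass, field
from typing import Optional

# Toy model of the patent's mechanism: the camera's image-processing
# pipeline checks each frame for an encoded infrared command and, if a
# "disable" command is present, refuses to store the image. All names
# here are hypothetical, invented for illustration.

DISABLE_RECORDING = "disable_recording"

@dataclass
class Frame:
    pixels: bytes
    ir_command: Optional[str] = None  # stands in for the decoded IR payload

@dataclass
class Camera:
    recording_enabled: bool = True
    stored: list = field(default_factory=list)
    notices: list = field(default_factory=list)

    def process(self, frame: Frame) -> None:
        if frame.ir_command == DISABLE_RECORDING:
            # IR beacon detected: block recording and notify the user.
            self.recording_enabled = False
            self.notices.append("Recording is disabled at this location.")
        elif self.recording_enabled:
            self.stored.append(frame)  # no signal: save to memory as normal

cam = Camera()
cam.process(Frame(b"concert-photo", ir_command=DISABLE_RECORDING))
cam.process(Frame(b"another-photo"))  # still blocked after the beacon
print(len(cam.stored))  # prints 0
```

The key design point is that the check sits inside the capture pipeline itself, so the user never gets a frame to save in the first place.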
Though the technology is speculative and still only a patent, it's hard not to jump ahead to how potentially censorial it could be. There are definitely situations where the impulse to use this type of technology would be understandable, like performances where artists/backers/venues are looking to control media distribution. But as ThinkProgress points out, the ubiquity of portable cameras can help people hold powerful entities accountable for wrongdoing, and it might be tempting for institutions to use this technology to combat transparency. These hypotheticals are reminiscent of other ways technology can already be used to curb dissent, like the 2011 incident where San Francisco mass transit police shut off cell phone service in a subway station to make it harder for would-be protesters to organize.
As Chad Lorenz wrote on Slate last year in the wake of the Walter Scott shooting (which a bystander caught on video), "The most effective thing ordinary Americans can do to stop these shootings and the most effective way to make police departments accountable right now is to take more video of police confrontations. We’ve reached the point where this is the socially responsible thing to do."
Apple probably won't be building camera-blocking tech any time soon, if ever, but this patent could give other people ideas.
The AP’s New Baseball Reporter Is Not Human
The story is familiar enough at this point that I could probably automate it myself:
[Prominent news organization] announced on [day] that it has [contracted with/acquired/partnered with] [tech company] to automate its coverage of [routine news event]. The move is part of an accelerating trend of software-generated journalism, also somewhat misleadingly called “robot journalism,” in which reports once written by humans (if written at all) are composed instead by specially designed algorithms that can produce them almost instantaneously.
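For fun, here is that template as literal code: a minimal Python sketch filled in with this story's own particulars (the weekday is invented for illustration). The real Wordsmith system is, of course, far more sophisticated than a fill-in-the-blanks string:

```python
# The fill-in-the-blanks story, literally automated. The field values
# below come from this article, except the day, which is made up.
TEMPLATE = (
    "{org} announced on {day} that it has {deal} {company} "
    "to automate its coverage of {beat}."
)

story = TEMPLATE.format(
    org="The Associated Press",
    day="Tuesday",  # illustrative; the announcement day isn't given here
    deal="partnered with",
    company="Automated Insights",
    beat="minor league baseball games",
)
print(story)
```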
In this case, the prominent news website is the Associated Press, the tech company is Automated Insights, and the routine news events are minor league baseball games. The technology comes from Automated Insights’ Wordsmith program, the same one that already writes corporate earnings reports and college sports recaps for the AP.
The popular discussion around “robots writing the news” tends to presuppose that automation will displace human journalists. While that’s certainly possible in some cases, the AP says it hasn’t eliminated any reporting jobs as a result of its deals with Automated Insights. (In fact, it has added at least one: an “automation editor,” hired last year.) Rather, it says the automated earnings stories free journalists to focus on adding detail and context to the more newsworthy reports. And the automated sports recaps have allowed the agency to expand its coverage to events it had previously ignored. For instance, the AP did not previously offer recaps of most minor-league baseball games. Now its recaps will appear not only on its news wire, but on MiLB.com and on the official websites of the relevant minor-league clubs.
So, what does an automated minor-league baseball recap look like? About what you’d expect from a reasonably competent human reporter, minus any telltale signs of abject boredom, self-loathing, or stifled literary ambition.
Here’s one of the samples that Automated Insights provided to me:
Columbia beats Hickory 10-9 after Palsha induces double play
HICKORY, N.C. (AP) -- Alex Palsha got Ti'Quan Forbes to hit into a game-ending double play with the bases loaded, leading the Columbia Fireflies to a 10-9 win over the Hickory Crawdads on Tuesday.
Hickory grabbed a 7-5 lead in the sixth after Chuck Moorman hit a two-run double as part of a four-run inning.
Trailing 9-7, the Fireflies took the lead for good in the eighth inning when David Thompson homered to bring home Enmanuel Zabala and J.C. Rodriguez.
Craig Missigman (1-1) picked up the win after he allowed two runs and five hits over two innings. He also struck out one and walked one. Omarlin Lopez (4-3) went two innings, allowing three runs and four hits in the South Atlantic League game. He also struck out two and walked one.
Dash Winningham doubled and singled three times, driving home two runs in the win. Thompson homered and doubled, driving home five runs and scoring a couple.
Ricardo Valencia had four hits, while Yeyson Yrizarri and Dylan Moore recorded three apiece for Hickory in a losing effort.
Read a few, and you’ll quickly gather that Automated Insights’ software is significantly more sophisticated than the Mad Libs-style automation algorithm that I half-jokingly suggested in the first sentence of this post. The AP says its human baseball writers and editors worked with the company to customize the Wordsmith platform to conform to its house style for game recaps. And Automated Insights, which was acquired by the sports data firm STATS in 2015, has developed techniques for identifying and highlighting key plays and turning points in a given game just by analyzing the game data. That allows Wordsmith to write anecdotal ledes in some cases, making its stories sound less robotic.
Yet I’m still confident, as I wrote in 2014, that human journalists have little to fear from their mechanistic counterparts, at least for the foreseeable future. Programs like Wordsmith excel at quickly converting data into prose, making them well-suited to stories whose key elements can be quantified in a spreadsheet. (In fact, you can now try Wordsmith yourself, using data from your own Excel or Google Sheets documents.) But, generally speaking, they can’t make their own qualitative observations about the world, let alone pick up the phone and grill a source to get a scoop. Automated Insights’ CEO, Robbie Allen, acknowledged as much in an interview with Poynter in 2015, when his company began partnering with the AP to summarize college baseball games:
For instance, Allen acknowledges a computer won’t report an outfielder lost a flyball in the sun, allowing the winning run to score. “Our stuff is quantitative related,” Allen said. “We’re not able to make a statement about the quality [of a play].”
Perhaps I couldn’t have automated this story after all.
Introducing Software Heritage, the Library of Alexandria for Code
Like so many other elements of our digital culture, software tends to disappear rapidly, new versions supplanting old ones, even as unused and obsolete programs slip into disrepair. Where the Internet Archive’s Wayback Machine has long collected past versions of websites, there’s never been a similar repository for code. The new initiative Software Heritage is attempting to change that, pushing back against what it describes as the fundamental fragility of software. The site, which promotes itself as a centralized database for source code, already archives more than 2 billion source files, drawn from millions of projects.
As Software Heritage explains on its website, it aspires “to collect, preserve, and share all software that is publicly available in source code form.” Because it draws primarily from existing repositories such as GitHub, this isn’t a one-stop shop for obsolete software. You can’t, for example, just drop in to download old versions of MS Word or Final Cut, however much you might prefer them to the programs’ latest instantiations. In that regard, it’s unlike, say, Abandonia, which collects ancient DOS games whose creators no longer support them. Instead, Software Heritage will primarily serve as a resource for coders, one that will help them maintain a sense of their discipline’s constantly developing history.
Software Heritage is a project of Inria, the French national institute for computer science and applied mathematics, but a number of other organizations have already expressed support, including Microsoft and the Linux Foundation. Many of those companies and institutions have issued statements that resonate with Inria’s central premises, with Microsoft, for one, claiming that it believes the project "will help curate and conserve human knowledge in the form of code for future generations as well as help today’s generations of developers find and re-use code worldwide."
Though it presents itself as the software Library of Alexandria, for now, at least, the archive is relatively daunting, especially for the non-coder: It encourages visitors to check whether their own work is already in the database, but you can’t casually download programs, or even easily discern what it contains if you don’t know what you’re looking for. Moving ahead, however, it intends to organize things in a way that makes them more accessible—and in a way that emphasizes especially important projects and resources. It invites visitors to help it expand its coverage and lays out a variety of other features that it plans to develop, including full-text search.
Among other ideals, Software Heritage operates on the premise that preserving older software is important for the sciences. Since software is often important to replicating previous experiments, preserving it serves as a crucial means of pushing back against the reproducibility crisis. As the organization explains on its website, “Software Heritage will ensure availability and traceability of software, the missing vertex in the triangle of scientific preservation,” since it will help future researchers to know and employ “the exact version of the software used” by their predecessors.
In other words, Software Heritage is a project that has real, practical potential, offering an important reminder that we shouldn’t take the ephemeral quality of the internet for granted.
In Praise of Emoji as Tactful Conversation-Enders
Once upon a time we did not need to find a polite way to cut off a written exchange, because written exchanges happened through the mail, and days or weeks separated every dispatch. “Most sincerely, Katy,” I could scrawl, free to ignore that particular back-and-forth until the mail boat made its return trip. But the rapid-fire interaction of texting and IMing means that modern communicators face a new and modern conundrum: How do you tomahawk a conversation without making it weird?
Emoji beautifully solve this problem, magicking us out of interpersonal jams, especially when we are trying really hard to end transmission. You’ve said all you need to say, and yet you don’t want to leave the other person’s final message hanging. Without emoji, you’re in the affirmative badlands: yeah (too stoner-y and casual?), OK (bland and perfunctory), sure (sounds possibly sarcastic). Or you’ve got it (“got it!”) or you see (“I see”), even though such phrases are cheery, depersonalizing jargon and each one takes an icepick to your soul.
And what if you honestly don’t know what to say? Your friend is dog-sitting another friend’s dog, and you hate dogs (or friends). It makes sense to have a noncommittal dog emoji in reserve. Maybe your boss wants you to double-check a spreadsheet to make sure it’s up-to-date. You’d like to project affable proficiency; but in context, “will do” sounds surly, and “will do!” brown-nosey (because who genuinely likes double-checking spreadsheets?). So you Slack her the thumbs-up.
The age of the emoji is Homo sapiens’ time to shine, as we’re one of the few species that traffics and delights in the symbolic. (Were the Lascaux cave paintings slapped on the wall to silence some longwinded elder?) In general, these thumbnail runes are whimsical, playful, ambiguous. They feel inviting and social even when you are putting a pin in a conversation—they shut it down without “shutting it down.” Consider the hermeneutic openness of the winky-face, which inveigles so many possible interpretations from a viewer that she feels as though the dialogue is continuing long after it’s over. Consider the friendliness of cartoon-ified eyeballs that express: I’m curious to see what you come up with! They are funny, unthreatening, self-deprecating by nature; they presume that the speaker’s message can be summarized by a cute and not especially dignified pictogram.
What’s more, as has been oft-observed, emoji help replace the nonverbal signaling that, in a verbal dialogue, provides rhythmic punctuation to the exchange. You can tell by the way someone stands, moves, and talks when they are ready for a conversation to end; in text, those clues evanesce like the steam pouring out of your ears when you need to kibosh a meandering Gchat so that you can call your sister back. Emoji, though, return to us some of that nonverbal communication—they can even replicate it, with cartoon nodding heads and “OK” hands.
As intermediaries between language and image, they are transitional: mini-Charons ferrying you into and out of verbal interactions. Not all conversation-closing emoji function the same way, however. Here’s a brief user’s guide.
The uncannily apt emoji
The perfect emoji. A feat of Olympic virtuosity—a mic drop—on either your part or destiny’s. Unicode has furnished you with a precise pictographical representation of the natural response to what I just said. No more needs to be done here.
The humorously redundant emoji
You said you bought a truck and I responded with the “truck” symbol. You said there were bananas in the kitchen and I posted a “banana.” I am the world’s most obvious one-person Greek chorus. Somehow, my tautology reveals that our conversation is over.
The moonily random emoji
I am here. I hear you. Accept this pure phatic signifier of my presence and desire to connect. Also, I am conscious of this sweet subtext and thus gently mocking it by choosing the most unaccountably arbitrary image imaginable to convey my fellowship. Aren’t human relationships absurd? But they are all we have.
The shibboleth emoji
No one but us has a clue what this emoji means. Shhh.
The dutiful emoji
OK-hand. Thumbs up. Literal translation: Aye-aye. Yessir. 10-4. Roger. Secret translation: I’m on it, stop talking to me.
Technology Let Archaeologists Uncover Jewish History Without Disturbing It
Vilnius, the capital city of Lithuania, was known as a center of Jewish intellectual and cultural life before the Holocaust. Not far from the city itself is the Paneriai forest, known in that era as the Ponar. Thanks to new archaeological technology, as of this week the forest is also known as the site at which Jewish prisoners, over the course of three months, used spoons to dig a 112-foot tunnel out of the pit in which they were made to sleep at night—through which 11 managed to survive the war.
The prisoners were in what was known as the Burning Brigade, charged by their Nazi captors with digging bodies out of mass burial pits and burning the corpses as the Soviets advanced. According to one account, a prisoner named Isaac Dogim recognized the body of his wife by a medallion he had given her and then organized the escape of prisoners who feared that they would join their dead once they had finished burning them. On April 15, 1944, 40 prisoners attempted the escape; 12 made it out alive.
The tunnel is a repudiation of the common conception that victims of the Holocaust made no attempt to resist. According to Jon Seligman, an archaeologist with the Israel Antiquities Authority who participated in the project through which the tunnel was discovered, it is also “a little glimmer of hope within the dark hole of Ponar. … The tunnel shows that even when the time was so black, there was yearning for life within that.”
One challenge of archaeology is to keep history’s excavation from threatening its preservation. That task was particularly charged in Ponar, known, according to team member and archaeologist Richard Freund, as “ground zero for the Holocaust” (the mass pits pre-dated the gas chambers). It’s no meager feat—according to Scott Branting, assistant professor of anthropology at the University of Central Florida, “archaeology is a fundamentally destructive science.” But this expedition—completed by archaeologists, as well as scientists and Jewish historians, from Canada, Israel, Lithuania, and the United States—shows how technology can be used to uncover the past without disturbing it. By using electrical resistivity tomography, a noninvasive ground-scanning technique, the team (which, last year, used ground-penetrating radar to locate the Great Synagogue of Vilna, destroyed by Nazis and Soviets alike) was able to map the path of the tunnel without disturbing any human remains at the site.
In 2013, Sarah Parcak wrote in Future Tense that thanks to technology, “today is the most exciting time in history to be an archaeologist.” Parcak herself works in space archaeology, which she defined as “the use of space- and air-based sensor systems to discover ancient settlements, cultural remains, and natural features (like relic river courses) otherwise invisible to the naked eye, or hidden due to vegetation and water.” Space archaeology has been used in Egypt to reveal ancient settlements and tombs. It has shown hundreds of new structures at the Mayan site of Caracol and thousands of previously unknown settlements in Syria. It has allowed archaeologists to map looting from space. Satellite data have let archaeologists monitor damage to sites (such as in areas controlled by ISIS), and, where intervention is not possible, scanning and digitization technologies have let them make digital reproductions.
And, this week, technology let a team simultaneously uncover and respect the memory of individuals who couldn’t see light at the end of their tunnel—and so, with their very lives at stake, created one for themselves.
Will North America Become the Next Saudi Arabia? A Future Tense Event.
Not long ago Washington policymakers spent a great deal of time bemoaning our ever-increasing dependence on foreign (especially, alas, Middle Eastern) oil. Rarely has such pessimistic groupthink proven so misguided. North America is blessed with a number of comparative advantages when it comes to producing energy at a low cost, and Canada’s increased oil production, innovation in alternative energy research, Mexico’s historic energy reforms, and the shale revolution across the region have only accentuated North America’s potential to become the world’s dominant energy superpower.
Future Tense and the Wilson Center’s Canada Institute invite you to join them in Washington, D.C., at noon on Tuesday, July 26, for a conversation on what it will take for North America to fulfill its energy potential. People tend to obsess over the monthly gyrations of oil prices and the latest regulatory battle over shale or pipeline-building, but we want to look forward to 2050. What concerted steps should Canada, Mexico, and the United States take to ensure that North America will become the world’s leading energy power for generations? And how can this region lead the world not only in output and economic growth, but also in setting new standards of environmental responsibility and sustainability?
Follow the conversation online using #NAenergy and by following @FutureTenseNow. For more information and to RSVP, visit the New America website, where the event will also be streamed live.
Director of energy model for Mexico Initiative at Arizona State University
Commissioner, Mexican National Commission of Hydrocarbons
Former under secretary of energy of Mexico
Director of the Canada Institute at the Wilson Center
Former senior adviser on economic affairs at the U.S. Embassy in Ottawa
Senior adviser for international security and resource security at New America
Former assistant secretary of defense for operational energy
Create Delightful Emoji Ciphers for Your Friends to Decode
The word encryption comes up a lot. Messaging apps are adding end-to-end encryption, the “crypto wars” are back, smartphones should (or shouldn’t) have encryption turned on by default. It’s everywhere. But what exactly is it again? There are lots of great guides out there like Slate’s “Encryption 101,” but now there’s a way to understand the basics that’s even more fun ... and involves emojis!
On Tuesday, Mozilla announced Codemoji, a tool to teach people about the concepts that underlie encryption. Codemoji turns emojis into ciphers. You put a word or phrase into the system, choose a starting point emoji, and then it spits out a string of them representing an encoded version of the letters in your word or sentence.
This type of algorithm, which takes an input and encodes it using a series of steps so it can later be decoded by running the steps in reverse, is the foundation of the digital systems we call encryption. Beyond Codemoji, Mozilla has ongoing efforts to teach people about encryption and its value.
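Under the hood, a scheme like this is just a shift cipher over an emoji alphabet. Here is a rough Python approximation of the idea; the emoji alphabet and the details are mine, not Mozilla's actual implementation:

```python
# A toy cipher in the spirit of Codemoji: each letter maps to an emoji,
# shifted by a numeric key (standing in for the "starting emoji" you
# pick). This is a simple substitution cipher, useful for teaching the
# concept and nothing like real encryption.

EMOJI = list("🍎🍌🍒🍇🍋🥝🍑🍍🥥🍓🍉🍊🥑🌽🥕🍄🌶🥦🍞🧀🥚🍪🍩🍦🍫🍿")
ABC = "abcdefghijklmnopqrstuvwxyz"

def encode(text, key):
    """Replace each letter with its shifted emoji; pass other chars through."""
    return "".join(EMOJI[(ABC.index(c) + key) % 26] if c in ABC else c
                   for c in text.lower())

def decode(emoji_text, key):
    """Run the same steps in reverse to recover the original letters."""
    return "".join(ABC[(EMOJI.index(ch) - key) % 26] if ch in EMOJI else ch
                   for ch in emoji_text)

secret = encode("hello world", key=3)
assert decode(secret, key=3) == "hello world"
```

Exactly as Mozilla cautions, a fixed substitution like this can be broken by frequency analysis in minutes, which is part of the lesson.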
“We believe Codemoji is a first step for everyday Internet users to better understand encryption,” Mozilla writes. But don’t forget, “Codemoji is intended as a learning tool, not a platform for sharing personal data. Thankfully, modern encryption is much stronger than simple emoji ciphers. If you are going to be sending sensitive information, best to use a more sophisticated security tool.”
Codemoji won’t make you a master cryptographer, but given the dire tone of the news lately, it’s a nice change to see encryption in a fun, positive light.
“World’s First Robot Lawyer” Is a Chatbot That's Actually Useful
The bot, called DoNotPay, is designed to help people contest parking tickets. It starts by asking you a series of yes-or-no questions about the circumstances surrounding the ticket to see whether you have a valid legal basis to challenge it. If so, it then guides you through the steps necessary to file and win an appeal. (Often, it isn’t that difficult since local officials can’t be bothered to show up in court to contest the appeal.)
The result? According to the bot’s creator, Joshua Browder, DoNotPay has guided Londoners through 250,000 appeals—and they’ve won 160,000 of them. Browder, who was born in London and launched DoNotPay there, has since expanded the service to New York City. So far it has been used more than 9,000 times by New Yorkers, Browder told VentureBeat. It's available on the Web at donotpay.co.uk, or as an Android app.
Browder calls the bot “the world’s first robot lawyer.” That might be a little grandiose for such a rudimentary widget, as is Browder’s claim to VentureBeat that “people getting parking tickets are the most vulnerable in society.” But hey, he’s 19, and he’s got big dreams.
It always brightens my day to receive these emails. I wish the local governments would just follow the rules pic.twitter.com/1gz6NjYSs3— Joshua Browder (@jbrowder1) June 26, 2016
Those dreams include expanding DoNotPay to Seattle, adding a feature that helps people get compensation for flights delayed more than four hours, and even building a bot to help refugees apply for asylum. (That one will rely on IBM’s Watson software to translate between Arabic and English.)
Try DoNotPay and you’ll quickly see that this is not exactly the state of the art in artificial intelligence. (If you want to know more about the state of that art, read this instead.) Rather, it’s a relatively simple software agent that layers some basic language understanding on top of what is essentially a straightforward issue tree.
Yet what DoNotPay demonstrates is that bots don’t need to be particularly intelligent to be useful. There is a class of everyday problems that are relatively easy to overcome if you just know the relevant rules. But ascertaining those rules and applying them to your particular circumstances can feel like enough of a chore to deter people from trying. And in many cases, as with contesting parking tickets, people aren’t aware of just how simple the problem really is. In those cases, a bot such as DoNotPay feels like just the right approach to help people over the barriers that prevent them from exercising their rights.
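The issue-tree pattern such a bot is built on is easy to picture in code. This sketch is hypothetical: the questions, answers, and outcomes below are invented for illustration, not DoNotPay's actual logic:

```python
# A minimal issue tree: a nested dict of yes/no questions ending in an
# outcome. The questions and outcomes are made up for illustration.

TREE = {
    "question": "Were the parking signs clearly visible?",
    "no": {"outcome": "You may have grounds to appeal: unclear signage."},
    "yes": {
        "question": "Was the ticket issued within the posted hours?",
        "no": {"outcome": "You may have grounds to appeal: wrong hours."},
        "yes": {"outcome": "An appeal is unlikely to succeed."},
    },
}

def advise(node, answers):
    """Walk the tree with a sequence of 'yes'/'no' answers."""
    for a in answers:
        if "outcome" in node:
            break  # reached a leaf before the answers ran out
        node = node[a]
    return node["outcome"]

print(advise(TREE, ["yes", "no"]))  # the "wrong hours" outcome
```

All the "intelligence" lives in the tree itself; the walker just follows yes/no branches, which is why such bots are cheap to build yet genuinely useful.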
Why Did My Period-Tracking App Send Me an Anti-Brexit Email?
Five days after Britain voted to leave the European Union, I received a message from Clue, a menstrual health app, with the subject line “Better together” followed by a peace sign emoji and a message about unity. This is the age we now live in, a time in which the app that tracks my menstrual cycle and reminds me to take my daily birth control pill wanted to belatedly share its viewpoint on the issue with me.
“I think my period tracking app just sent me an anti-Brexit email?” I texted a friend. Apparently this seems so de rigueur in 2016 that all I got was a paltry “lol” in response.
What a time to be alive, I thought, when I can record observations about my shedding uterine lining on my mobile device and in turn receive input on important matters of foreign affairs!
To be clear, I am decidedly pro–period tracking and pro-Bremain. But even I wasn’t sure how I felt about my reproductive health and views on international politics intersecting in this particular way. I grew up in a feminist family that ran an abortion clinic in Texas, and I was raised with the belief that the personal is always political—but also with a deep suspicion of the political invading the personal, especially when it comes to my health.
Although the link between Brexit and the “blood coming out of [my] wherever,” as Donald Trump likes to say, may have seemed tenuous at first, it actually makes sense that an app to track the latter would want to educate its users about the consequences of the former. Initiatives like Clue and Planned Parenthood’s new Spot On aim to give medically accurate information to a generation raised with science-optional, abstinence-only sex education. Similarly, Team Bremain fought to spread its message as its opponents misled the public with factually inaccurate information—not unlike the Republican Party’s never-ending attack on facts under the dubious guise of “protecting women.”
Unsurprisingly, younger voters overwhelmingly flocked to Team Bremain's message of inclusion and unity, just as they have embraced technology to take control of their bodies and health. So while I may have scoffed a bit at first, I actually appreciate Clue’s effort to make the personal political and vice versa, and I hope my generation will continue to embrace it. We know that just as one missed birth control pill can have consequences, our votes do, too.
And that is why even though Britain may not remain, my period tracking app will. Because—like the U.K. and the EU—I believe we’re better together.