An Antivirus Scan Shut Down a Medical Device in the Middle of Heart Surgery
In theory, antivirus software is designed to keep users safe from their own mistakes. Who among us hasn’t occasionally visited a dodgy website or downloaded a dubious file? But while such programs can help offset our carelessness, we still need to be deliberate about how we deploy them. That’s a lesson one hospital recently learned the hard way when a medical device crashed in the midst of heart surgery. On investigation, it turned out that the culprit was the antivirus program on a computer to which the device was connected.
As Softpedia’s Catalin Cimpanu writes, the incident, which occurred in February 2016, involved a tool called the Merge Hemo, which contributes to cardiac data collection. The Merge Hemo itself gathers and evaluates information about the patient, then transfers that information to a connected computer. An incident report filed with the FDA explains that the crash happened because the computer automatically initiated its hourly malware scan while the procedure was in progress. That froze up the Merge Hemo app on the computer, which shut down the actual device’s interface.
Fortunately, in this case the interruption was only temporary. The FDA write-up goes on to explain “it was reported that the procedure was completed successfully once the application was rebooted.” Merge claims that fault lies with the hospital, gesturing to its own security recommendations, which note “that improper configuration of anti-virus software can have adverse effects including downtime and clinically unusable performance.”
While this story has a relatively happy ending, it still speaks to the larger cyberhygiene problem in hospitals. As my colleague Lily Hay Newman has regularly shown, virtually everything that connects to the internet has been hacked, including medical devices. And hospitals themselves have been subject to ransomware attacks by hackers, endangering patient safety. But as J.M. Porup has argued in Future Tense, the real danger in medical environments may not be malice but malware, invasive programs that could interrupt care, even if their developers didn’t actually intend to target hospitals.
It’s reassuring to see that hospitals are attempting to do something about such problems, but the Merge Hemo incident also provides an important reminder: Cybersecurity has to be an active enterprise, an ongoing, engaged process. Installing anti-malware security programs and calling it a day clearly isn’t enough. Indeed, it may make things worse.
Netizen Report: WhatsApp Briefly Blocked in Brazil, Again
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Juan Arellano, Marianne Diaz, Sam Kellogg, Weiping Li, Rezwan, and Sarah Myers West contributed to this report.
A Brazilian judge ordered internet service providers to block access to WhatsApp in the country for 72 hours, citing the company’s alleged failure to comply with an order to assist police in a drug investigation. The block was lifted the next day. If you think this sounds familiar, you’re not wrong: A judge also ordered WhatsApp to be blocked in 2015 for a period of about 12 hours before the decision was overturned by an appeals court.
Ethiopians face dire consequences for participating in digital culture
The Ethiopian Federal High Court convicted Zelalem Workagenehu under the country’s Anti-Terror Proclamation and will issue his sentence on May 10. Workagenehu was arrested along with Yonatan Wolde and Bahiru Degu for applying to attend a training session on digital communication, social media, and leadership, which the government has described as a “training operation to terrorize the country.” All three spent more than 600 days in prison without facing trial. Degu reported experiencing extensive torture during his first few months in detention, including beatings and being forced to remove his clothes and drink his own urine. Wolde and Degu were acquitted of the charges and released from prison, but they were re-arrested shortly afterward and forced to spend another night in prison before being released again and told that they remained under observation. Their relatives say state security officers told them that “they would be killed if they made any moves.”
Mexican NGO takes “stalker law” to Supreme Court
The Second Chamber of Mexico’s Supreme Court of Justice of the Nation ruled on May 4 that the controversial Telecom Law, nicknamed the “Ley Stalker” (“Stalker Law”), does not violate the nation’s constitution. The law requires telecommunications companies to retain records of users’ metadata for two years and to give state authorities unrestricted access to those records without any requirement for judicial oversight. The law, which came into effect in August 2014, has drawn criticism from digital rights groups for invading users’ privacy. The Mexico City–based NGO Network for Defense of Digital Rights now plans to challenge the Supreme Court ruling before the Inter-American Commission on Human Rights.
Russian activists sound alarm on Telegram security flaws
Two Russian opposition activists reported their Telegram messenger accounts were remotely hacked through the app’s SMS login feature, suggesting the app is not as secure as the company claims. They believe the Russian government was involved in the hack. Security researcher Frederick Jacobs pointed to similar attacks on Iranian accounts earlier this year, critiquing the safety implications of text message logins.
Myanmar activists tackle online hate, work to educate
As Internet connectivity has increased in Myanmar over the past six years, social media users have seen a rising tide of anti-Muslim sentiment on social platforms. In response, activists there have launched a campaign that aims to educate Internet users on how to identify and respond to hate speech, and how to engage in constructive debate online.
Iranian cartoonists released from prison
Two popular Iranian web cartoonists who were jailed for their artwork have been released from prison since our last report. Hadi Heidari, who was arrested for a cartoon marking the November 2015 terrorist attacks in Paris, was freed on April 26. Cartoonist Atena Farghadani was released on May 3, after her sentence was reduced from 12 years to 18 months.
• “The Right to Privacy in Venezuela”—Acceso Libre, the International Human Rights Clinic at Harvard Law School, and Privacy International
• “Watchtower: Mapping the Indian Government’s Cybersecurity Institutions”—Internet Democracy Project
Future Tense Newsletter: Disquieting Drones and Supersonic Flights
Greetings, Future Tensers,
Drones are used for everything from racing to aerial photography, but surveys suggest that large numbers of Americans still find them creepy. For this month’s Futurography course, we’re looking into why that is—and whether it’s likely to stay that way. As always, we’ve started out with a conversational introduction to the topic that looks into questions such as whether current drones are actually helping your neighbors spy on you (spoiler: probably not). And if you’re looking for some more schematic information, we have a cheat sheet that features a quick roundup of key players, major debates, and other topics of interest. We’ll have lots more in the weeks to come.
We also recently wrapped up another Futurography course—which looked into the supposed problem of killer artificial intelligence—with an article from Cecilia Tilli on the real dangers of A.I. As Tilli writes, “There is no reason to believe that we will be able to control generally intelligent, let alone superintelligent, systems,” but experts still disagree about the crises such systems might create. Once you’ve read that—and all the other great content we published on the topic last month—test your knowledge with our killer A.I. quiz (there’s a Dungeons and Dragons question, if that sways you) and share your own thoughts through this survey.
Coming up next week, we also have an event about the future of aviation (more details below), which started from the question of why it still takes five hours to fly cross-country. Whether or not you can make it for the actual conversation, read this Richard Aboulafia article, which shows that while there are still technological hurdles keeping us from supersonic travel, some of the real reasons are actually social. Consumer demand for faster flights has also plummeted as flying has become more pleasant, thanks to amenities such as onboard internet. But those very features introduce a new set of concerns, Josephine Wolff writes, potentially creating cybersecurity vulnerabilities that never would have arisen otherwise.
Here are some of the other articles we read while we guiltily ignored our email inboxes:
- Satellites: Forget E-ZPass—Singapore plans to start issuing tolls from space. Understandably, the system has created some serious privacy concerns.
- Fifth Amendment: In February, a federal magistrate judge ordered a woman to use her fingerprint to unlock her iPhone.
- Privacy: New research on the effects of mass surveillance suggests that it leads to self-censorship online, suppressing “the ideas of those on the fringes of society, while amplifying mainstream opinions.”
- Autonomous vehicles: Apparently committed to not being cool, Google is getting ready to deploy a fleet of driverless minivans. But there’s a good reason for its dorkiness.
- With the European Concorde in retirement and no American supersonic plane ever cleared for takeoff, airlines still travel at the same speed they did in the 1960s. Why is that? Join Future Tense in Washington, D.C., on Wednesday, May 11, to discuss this question and others about the future of aviation. For more information and to RSVP, visit the New America website.
Google’s Next Self-Driving Car Is a Minivan
Google’s next self-driving car isn’t a car. It’s a minivan.
The tech giant on Tuesday announced a deal with the Italian-American carmaker Fiat Chrysler to build 100 driverless 2017 Chrysler Pacifica Hybrid minivans, with the first ones hitting the road late this year. Google is not selling or licensing any of its technology to Fiat Chrysler, Bloomberg notes. Rather, it’s working with the company to make sure the vehicles are specially designed to accommodate the software. From Google’s announcement:
This collaboration with Fiat Chrysler Automobiles (FCA) is the first time we’ve worked directly with an automaker to create our vehicles. FCA will design the minivans so it’s easy for us to install our self-driving systems, including the computers that hold our self-driving software, and the sensors that enable our software to see what’s on the road around the vehicle. The minivan design also gives us an opportunity to test a larger vehicle that could be easier for passengers to enter and exit, particularly with features like hands-free sliding doors.
The deal will more than double Google’s autonomous vehicle fleet, which comprises several dozen retrofitted Lexus SUVs and Google’s own adorable driverless prototypes. (Note that Google didn’t work with Lexus on those SUVs. It bought them and then had its engineers modify them as autonomous vehicles.) The company has expanded its testing of self-driving vehicles in the past year from its home base in Mountain View, California, to three other cities: Austin, Texas; Kirkland, Washington; and Phoenix.
Some companies, such as Tesla, have tried to pave the way for our automotive future by building beautiful, racy cars that become objects of desire. Google seems to be taking pretty much the opposite approach, focusing on safety and practicality to the exclusion of curb appeal.
That makes sense when you consider how people are likely to use cars that are fully driverless, as opposed to those that simply offer an autopilot mode. If you can’t drive it, there’s no point in making it fun to drive. And if you don’t own it—that is, if it’s used primarily as a taxi that shuttles people from place to place on demand—then there’s not much point in making it pretty, either.
Google’s own driverless prototypes rethought a lot of what goes into a traditional vehicle, from the brakes to the steering wheel. One thing they didn’t rethink was the notion that people would still make a lot of solo trips when they no longer own their own cars. A driverless future might well be one in which people often travel in larger groups: think Lyft Line and UberPool. Hence the move into minivans.
That’s right, soccer moms and soccer dads: The robots are coming for your jobs too.
In This Beautifully Shot Sci-Fi Short, a Mind Swap Has Terrifying Consequences
In the opening moments of Trial, a new science fiction short film by the Brothers Lynch, the mysterious doctor Jennifer Bishop offers a paralyzed and disfigured soldier an opportunity: a new body. She promises, “Our biological hosts are created from the ground up, each one unique.” Soon enough, Bishop and her team have him on his feet—and not long after that things begin to go terribly, terribly wrong.
Anyone even passingly familiar with sci-fi body swap narratives will be able to guess many of Trial’s twists and turns. The pleasure here isn’t in what the filmmakers have to say—a grim moral summed up by Bishop’s admission that “all progress has a price”—so much as in the way they say it. Where some science fiction shorts go for big, bizarre effects, the Brothers Lynch take a more restrained approach. Each shot feels carefully composed and the pacing remains tense throughout.
Since most of the action is confined to the concrete corridors of Bishop’s hospital facility, we only see a portion of their near-future world. Those fragments are brought to life, however, by deft camerawork—especially in an impressive mirror sequence—and clever editing. Their elegant austerity suggests a world worn down by conflict and catastrophe, one in which those in power might be willing to try anything—no matter how dangerous—to carry on.
In a write-up of Trial for Short of the Week, Rob Munday says that the Brothers Lynch see the short as part of a larger project, one that they “are looking to expand in feature project Residual.” While it’s exciting to see what they might accomplish with a larger budget and more time to tell a story, their work in Trial is a testament to the power of minimalism and suggestion.
Tesla’s “Bioweapon Defense Mode” Sounds Like a Gimmick. It’s Actually Ingenious.
When Elon Musk announced last fall that Tesla’s new Model X SUV would come with a feature called “bioweapon defense mode,” people weren’t sure whether he was joking. It sounded like just the latest, and possibly craziest, in a long line of Hollywood-inspired marketing gimmicks by the sci-fi-loving CEO. (Think of the volume controls that go to 11, Spinal Tap style, and the Spaceballs-esque “ludicrous mode” on the Model S.)
But Musk wasn’t joking. And, far from being the Model X’s “most ridiculous feature,” as one tech blog dubbed it, bioweapon defense mode could end up being one of its biggest selling points in at least one key market: China.
It isn’t that drivers in China are paranoid about bioterror attacks. It’s that many of them deal on a daily basis with oppressive air pollution, a major quality-of-life issue in some of the country’s largest cities. For China’s wealthy, the Model X may offer a haven from the smog that no other vehicle can match.
Bioweapon defense mode, it turns out, is a bit of a misnomer. I mean, sure, you’d probably turn it on if you happened to be out for a drive when someone dropped a ricin bomb nearby. But the real purpose of Tesla’s hospital-grade HEPA cabin filtration system is to protect drivers from the more quotidian menace of pervasive air pollution.
In a blog post this week, Tesla presented the results of a dramatic test designed to demonstrate just how effective the system can be. The company says it had already fared well on California highways at rush hour and in major Chinese cities. So it took things a step further:
A Model X was placed in a large bubble contaminated with extreme levels of pollution (1,000 µg/m3 [micrograms per cubic meter] of PM2.5 [a harmful form of fine particulate] vs. the EPA's "good" air quality index limit of 12 µg/m3). We then closed the falcon doors and activated Bioweapon Defense Mode.
A chart in Tesla’s post shows what happened next:
Within two minutes, Tesla says, the system had scrubbed the pollution in the vehicle’s cabin to the point that it was no longer detectable by the company’s sensors. The Model X’s passengers then removed their gas masks and breathed clean air.
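As a rough sanity check on that two-minute claim (an exercise of mine, not anything Tesla published), you can model the cabin as a single well-mixed volume whose PM2.5 concentration decays exponentially under filtration. The sensors’ detection floor isn’t public, so the EPA “good” limit of 12 µg/m³ from Tesla’s own comparison stands in as a conservative threshold:

```python
import math

# Idealized well-mixed cabin: C(t) = C0 * exp(-k * t),
# where k is the effective filtration/air-exchange rate.
c0 = 1000.0     # starting PM2.5 in the bubble, in µg/m³ (from Tesla's test)
c_floor = 12.0  # EPA "good" AQI limit, used as a stand-in detection floor
t_min = 2.0     # minutes Tesla says the scrub took

# Minimum decay rate needed to fall from c0 to c_floor within t_min minutes
k = math.log(c0 / c_floor) / t_min   # per minute
air_changes_per_hour = k * 60        # equivalent clean-air changes per hour

print(f"k ≈ {k:.2f}/min, i.e. ≈ {air_changes_per_hour:.0f} air changes per hour")
```

That works out to on the order of 130 equivalent air changes per hour, an extraordinary rate for a vehicle cabin, which at least squares with the “hospital-grade” billing; but the single-compartment assumption is a simplification, not a measurement.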
“Bioweapon defense mode is not a marketing statement, it is real,” the company concluded. “You can literally survive a military grade bio attack by sitting in your car.”
Perhaps you could, although bioweapons experts are skeptical that you ever would. For one thing, as Gizmodo points out, you’re unlikely to realize there has even been a biological attack until it’s too late.
So, yes, the feature’s name is a marketing gimmick. Other luxury carmakers have installed high-tech air filters that didn’t get nearly this much attention.
But the feature itself is not. As Tesla points out, the World Health Organization calls air pollution “the world’s largest single environmental health risk,” contributing to more than 3 million deaths each year. Recent studies have put that number even higher. Either way, it’s more than twice the global death rate from auto accidents.
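The “more than twice” comparison is easy to verify, though the article gives no figure for crash deaths; the roughly 1.25 million annual road-traffic deaths used below is a separate, widely cited WHO estimate for this period, an assumption rather than a number from the piece:

```python
# Annual global deaths: the air pollution figure is the WHO number cited above;
# the road-traffic figure is WHO's separate estimate (assumed, not from the article).
air_pollution_deaths = 3_000_000
road_traffic_deaths = 1_250_000

ratio = air_pollution_deaths / road_traffic_deaths
print(f"Air pollution kills about {ratio:.1f}x as many people per year as crashes")
```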
Obviously an $80,000 luxury SUV is not going to save the vast majority of those people, especially in a country such as China whose per-capita GDP is less than one-tenth that amount. That said, it could be a huge draw for the country’s fast-growing upper class. Tesla is counting on those Chinese consumers to help drive growth in demand for its pricey electric vehicles. You could almost think of “bioweapon defense mode” as a diplomatic euphemism for “Beijing mode.”
But Tesla was quick to point out that this isn’t just about China. “I disagree about this being a gimmick in any market,” spokeswoman Alexis Georgeson told me. “The HEPA filter is valuable anywhere with less than pristine air quality, which is a lot of places in the U.S. and the world.” She cited WHO estimates that air pollution reduces average life expectancy by 23 months in Beijing, but also by 10 months in Mexico City, nine months in Hong Kong, eight in Berlin and Los Angeles, and seven in Paris and London. “Not to mention this is incredibly valuable for those suffering from (or who have children suffering from) asthma and allergies.”
Even if it didn’t save or extend drivers’ lives, the system would still hold appeal as a luxury feature, blocking out unpleasant smells when you drive past a landfill or a skunk, or pretty much anywhere in California’s dung-filled Central Valley. Here’s a Tesla driver testing it out while driving through a polluted stretch of highway in Arizona.
Granted, the video was made by a self-professed Tesla lover who won his Model X in a Tesla-sponsored referral contest. I hope to get a better idea of how well the feature works when I test-drive a Model X in the coming weeks.
Regardless, the feature looks like another ingenious marketing maneuver by Musk—one that has a surprisingly serious purpose. And yes, when you turn it on, the fans go to 11.
Mass Surveillance Chills Online Speech Even When People Have “Nothing to Hide”
Terrorist activity has reached the highest level ever recorded, according to the Institute for Economics and Peace. And yet, Americans are still more likely to die in a lightning strike or a bathtub than in a terrorist attack, an argument originally made by John Mueller in his 2006 book, Overblown.
Nevertheless, acts of terror are tangible—their gore, death tallies, and elevated warning levels flash across our television screens—and produce statistics at every turn, so they offer the U.S. government an implicit mandate to continue its mass surveillance programs. Programs that undermine the privacy protections constitutionally guaranteed to every American.
These programs persist because surveillance isn’t easily quantifiable. It’s clandestine, invisibly operating beneath our fingertips, siphoning away our data and, with it, our ability to vet the consequences. Admittedly, it’s hard to get excited about the slow erosion of our civil liberties.
But it’s time to level the playing field, with more investigation into the cold, hard—and chilling—effects of government surveillance.
A steadfast commitment to freedom of expression and privacy—even in the midst of threat—is what sets democracies apart from the rest of the world. However, the U.S.’s commitment is waning. Reporters sans Frontières ranks the United States 49th in terms of press freedoms, which means we have fallen out of the top quartile of countries in the world in protecting expression. Let that sink in. America, the longstanding beacon of free speech, performs worse than some partly democratic countries in the global south, like Burkina Faso and Niger. Our nation’s whistleblowers and journalists are not adequately shielded from undue prosecution and self-censorship. Nor are our citizens.
Despite this statistic, large swathes of the American public think they’re impervious to surveillance, as if opposition raises suspicions of guilt. As a researcher examining public attitudes toward surveillance, I often encounter the argument “I’ve got nothing to hide,” typically voiced in a tone of defensive indignation. But opposition to mass surveillance does not need to be grounded in defensively hiding information; it’s about the proactive protection of your online identity.
In an effort to see what average Internet users have to hide, my graduate students and I convened a short focus group to investigate if there were any types of online activities that they would like to remain private, and sure enough, they did.
Predictably, adult content, online purchases, and strange but innocuous Google searches topped the list. But we also noted some behaviors that have direct implications for democracy: discussions on online forums, browsing news sites, and social media posting. These latter three are capital-enhancing activities, meaning they have the potential to translate into political opportunities in the offline world, like acquiring the knowledge and attitudes necessary to vote, petition, and protest. Suppressing these activities threatens the vibrancy of our democracy.
It was surveillance’s effect on social media posting in particular that I wanted to quantify. So I set out to conduct the first study to test how these mass surveillance programs influence average Americans’ online behavior. I exposed a group of Internet users to a “terms of agreement” statement that reminded them—as most terms of agreements do—that their subsequent actions on our site were subject to interception and surveillance. The study’s participants were then shown a Facebook interface, where they could indicate whether they wanted to comment on, share, like, or create new Facebook posts about a current political issue.
I discovered that exposure to the terms of agreement dampened individuals’ willingness to express or otherwise support their political views. These effects were found among people who felt they held political opinions different from those of most Americans, among those who thought these programs were necessary for the sake of national security, and in a recent follow-up analysis I conducted, among racial and ethnic minorities. These individuals refrained from expressing opinions that would alienate them from both their fellow citizens and from the government.
The results were, quite literally, chilling.
And surveillance chills in a way that suppresses the ideas of those on the fringes of society, while amplifying dominant, mainstream opinions. This severely undermines the Internet’s ability to serve as a neutral platform for information sharing and discussion, instead catering only to those who speak the loudest.
The study was published online last month, right in the middle of the presidential primary campaign, when Facebook feeds were saturated with partisan endorsements and polarizing vitriol. Strangers, acquaintances, and friends approached me—almost all in person—to confide that they too had, at times, fallen victim to this type of self-censorship on social media. The study, which drew upon a sample of Internet users from across the U.S., shows it’s not just happening among my social network. It’s probably rampant among yours, too.
Their anxieties are not baseless. Recent reporting has shown that the U.S. federal government has poured money into private companies to monitor and mine social media content—for what, we’re not exactly sure. And data on Americans collected and archived by the NSA may be shared with other government agencies, without a warrant, to investigate and prosecute crimes unrelated to national security and terrorism, like drug offenses.
Even for the vast majority of us who aren’t guilty of any wrongdoing, our photos, posts, check-ins, search histories, and, above all, metadata, paint detailed summaries of our online lives. We’re entitled to privacy and the ability to choose what we want to reveal about ourselves, to the government, to our employers, and to one another.
As we continue the uninterrupted march into an era of big data, this study should serve as yet another red flag, signaling the need for greater transparency, skepticism, and quantifiable research.
For the First Time, Federal Judge Says Suspect Must Use Fingerprint to Unlock Smartphone
The recent standoff between Apple and the FBI over accessing a passcode-locked iPhone ended abruptly when the FBI was able to buy a cracking tool. But even if Syed Farook, the San Bernardino shooter who owned the iPhone, hadn't been killed, a similar conflict might have still played out. Current precedent indicates that being forced to enter a passcode runs counter to the Fifth Amendment's protection against self-incrimination. But if Farook were alive and had locked his iPhone with Apple's Touch ID, the FBI would have been looking to a very different and rapidly evolving precedent.
The Los Angeles Times reports that in February the FBI obtained a warrant from federal magistrate judge Alicia Rosenberg to compel a 29-year-old Los Angeles woman to unlock her iPhone with her fingerprint. The suspect, Paytsar Bkhchadzhyan, has a criminal record and was allegedly the girlfriend of Sevak Mesrobian, who is thought to be a member of the Armenian Power gang. Though the Times followed a trail of court records, some are sealed, so it's not totally clear why the FBI wanted to access the contents of Bkhchadzhyan's phone.
Judges at the state level have held that law enforcement can compel a suspect to provide his or her fingerprint for unlocking a computer. The distinction they draw between this method of unlocking and a password or numeric passcode centers around the precedent that thoughts and ideas are protected by the Fifth Amendment while "material evidence"—like blood, handwriting samples, or the key to a strongbox—is not. This contrast was notably discussed in a 1988 John Paul Stevens Supreme Court dissent.
Privacy advocates are still determining whether they believe that the emerging distinction between fingerprints and memorized passcodes is reasonable. Susan Brenner, a law professor at the University of Dayton, told the Times that, "It isn't about fingerprints and the biometric readers. [It's about] the contents of that phone, much of which will be about her, and a lot of that could be incriminating."
In 2014, Hayes Hunt, a criminal defense and government investigations lawyer at Cozen O’Connor, told Time, “I think the courts are struggling with this, because a fingerprint in and of itself is not testimony, ... [but] the concern is, once we put a password on something or on ourselves, we have a certain privacy interest.”
Since humans leave their fingerprints on surfaces in their daily lives, law enforcement might be able to lift prints and unlock devices even without suspects' fingers. From a legal perspective, for now it's probably safest to lock your phone with a passcode instead of your fingerprint.
The Best Place for Self-Driving Cars Is Not America. It's a Place Like Dubai.
The hotbeds of the nascent self-driving car industry today are places like Mountain View, California, home of Google, and Pittsburgh, where Uber and Carnegie Mellon University both have research hubs. Driverless cars already dot the streets there, giving residents a glimpse of the probable future in which fleets of commercial driverless vehicles share the road with human drivers.
I say “probable future” because, as inevitable as Silicon Valley technologists and prognosticators make it seem, it is still not a given that the American public and government will accept a world in which hordes of vehicular robots share the road with human drivers, even if all the still-daunting technological hurdles can be surmounted.
And yet it’s looking increasingly likely that mass adoption of commercial driverless cars will happen somewhere, even if that somewhere isn’t the United States.
For instance, it could happen in a place like Dubai.
The AP reported last week that the emirate’s leader has called for driverless vehicles to account for 25 percent of all trips on its public streets by the year 2030. Dubai has struck a deal with a French company called EasyMile to allow tests of its boxy, 10-passenger autonomous vehicle, the EZ10, on local roadways. And it is promoting a video of an autonomous concept car that looks like a Tesla Model S to get residents excited about the possibility of being ferried around by cars with no one in the driver’s seat. (It isn’t a real Tesla, and the carmaker has not announced any plans to expand sales to Dubai.)
That 2030 target would be pretty ambitious for a country as vast and diverse as the United States. Obama’s Department of Transportation does support the development of autonomous vehicle technology, and it recently mapped out a plan to create designated testing corridors around the country. However, this prudent approach is not likely to lead to rapid nationwide adoption.
For a city-state such as Dubai, however, 25 percent by 2030 seems like a target that should be relatively easy to achieve. For one thing, Dubai has the advantage of being both small in area and geographically homogeneous.
The hardest thing about building fully driverless cars for a market like the United States is ensuring that they have the capability to perform equally well in every conceivable driving situation. They have to navigate not only Mountain View and Pittsburgh, but the gridlock of Midtown Manhattan, the single-lane covered bridges of the rural Midwest, and the forested dirt roads of the Appalachians. They have to work in rain, snow, and fog.
In a place like Dubai, however, much of that isn’t necessary. There are some adverse conditions, such as heat and fog, and the urban roads can be heavily trafficked. But by and large, it’s a very car-friendly place: The streets are relatively well-paved and marked, and they aren’t as clogged with unpredictable bicycle, pedestrian, or rickshaw traffic as those of many other big cities around the world. It never snows, and it rains just 25 days a year. Oh, and the government is an absolute monarchy that can more or less impose its will on the populace.
In short, if driverless cars can work anywhere, they can probably work in Dubai. And if they do work in a place like Dubai, that makes them less likely to be a technological dead end, even if they suffer serious setbacks in the United States and elsewhere.
In Praise of Email Debt Forgiveness Day
We live in an age of Days.
On the internet, invented occasions proliferate, some inviting awareness, others memorializing events that never interested us in the first place. The most important will get Google doodles, and even the dopiest sometimes merit recognition on Wikipedia. This May alone, we’ll be asked to honor Star Wars Day (the 4th), Towel Day (the 25th), and many more. Our trajectory is clear enough: Before long we’ll be celebrating Day Awareness Day, dutifully familiarizing ourselves with under-recognized annual occurrences.
But if the calendar is cluttered, my email inbox is more so, a thorny thicket of unanswered missives buried beneath unwanted offers. I know I am not alone in this: Though some of my friends—and some Slate editors—brag of their adherence to the Cult of the Inbox Zero, others are drowning. When I inquired about the topic on Facebook, one posted a queasifying picture of his primary Gmail inbox, letting the number—87,946 unread messages—speak for itself. I count only 68 (read and unread) in my personal account, but each is a millstone.
It’s the consciousness of this burden that leads me to reluctantly endorse—praise, even!—Email Debt Forgiveness Day, which makes its second appearance on April 30. As Reeves Wiedeman explains in The New Yorker, Email Debt Forgiveness Day is the creation of Alex Goldman and P.J. Vogt, hosts of the internet culture podcast Reply All. On their show’s website, Goldman and Vogt write, “If there’s an email response you’ve wanted to send but been too anxious to send, you can send it on April 30th, without any apologies or explanations for all the time that has lapsed.” In lieu of further details about the delay, they invite you to simply link to their own explainer of the holiday.
If this feels necessary, it may be because the emails we fail to send often matter to us most. “The emails that I really want to respond to in a thoughtful way—put some time and heart into—are the ones I leave the longest, or in many cases, don’t end up answering,” my friend Mindi tells me. Like her, by the time I’ve called up an adequate response, I find that I’m paralyzed by the time that’s passed. A recent episode of Reply All goes deep into some such stories, tales of unresolved heartbreak and unexpected connection.
To be sure, this guilt isn’t the invention of email. Tucked away in a drawer somewhere, I probably still have a letter from one Brenna P.—a letter that she sent when we were 9, one that overwhelmed me even then, so much so that I never wrote back. Shame has always clung to human connection, a constant reminder that we’re never as good to others as we’d like to be. Digital communication hasn’t created this problem, then, but it may have intensified it, confronting us with the fact of our failures and giving them a numerical value.
Email Debt Forgiveness Day isn’t right for everyone, of course. One Slate employee told me that she tried it last year, only to receive baffled responses to her belated replies. An ex mistakenly thought she was trying to get back together with him, she said, while other correspondents were just offended. Lesson learned: If you plan to celebrate the holiday this year, don’t expect everyone to understand.
Still, as a reminder that we’re not alone in our guilt, Email Debt Forgiveness Day can provide a much-needed push. It might not be as socially important as, say, World Turtle Day, but in encouraging us to clear up some of the emotional clutter in our lives, Email Debt Forgiveness Day might free us up to be a little more conscientious about everything else.