This Animation Lets You Watch Global Warming Heat Up Over 166 Years
When it comes to climate change, it’s often difficult to convey an appropriate sense of urgency. After all, this is a problem that has been building for decades, and it will take decades of coordinated effort to solve.
Still, something especially troubling is happening at a planetary scale this year. The first three months of 2016 have been so ridiculously warm that our planet is already a shoo-in to record the warmest calendar year since records began. El Niño is a big factor in why this global step-change is happening right now, but it’s not the whole story. Most of the current warming, at least when compared to pre-Industrial Revolution levels, is the direct result of greenhouse gas emissions.
Using data from the U.K. Met Office’s Hadley Centre, Ed Hawkins, a climate scientist at the University of Reading, constructed an affecting animation:
The first thing you’ll notice is that the outward progression (warming) of the temperature spiral is picking up the pace in recent years. This year, temperatures are flirting with 1.5 degrees Celsius above pre-industrial levels, and it’s easy to see that 2 degrees isn’t that far off.
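The spiral's geometry is simple to reproduce: each monthly anomaly becomes a point in polar coordinates, with the angle fixed by the month of the year and the radius by the anomaly (offset so that negative anomalies still land at a positive radius). A minimal sketch in Python, using made-up sample data in place of the real HadCRUT record:

```python
import math

def spiral_points(monthly_anomalies, radius_offset=1.5):
    """Convert a sequence of monthly temperature anomalies (degrees C,
    in chronological order starting in January) into (x, y) points for
    a spiral plot. radius_offset shifts the anomalies so negative
    values still plot at a positive radius."""
    points = []
    for i, anomaly in enumerate(monthly_anomalies):
        theta = 2 * math.pi * (i % 12) / 12  # angle: month of year
        r = anomaly + radius_offset          # radius: degree of warming
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Illustrative data only -- a slow warming trend, not real HadCRUT values.
sample = [0.01 * m / 12 for m in range(24)]
pts = spiral_points(sample)
print(len(pts))  # one point per month
```

As the anomalies grow, each January point lands slightly farther from the center than the last, which is exactly the outward creep that makes Hawkins' animation so legible.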
Hawkins says the idea for the animation came from his fellow climate scientist Jan Fuglestvedt, a vice chair of the Intergovernmental Panel on Climate Change, after Fuglestvedt saw one of Hawkins’ previous visualizations of global temperature (below) and suggested a spiral version.
Earlier this year, according to a slightly different base line, global temperatures briefly crossed the 1.5 degree Celsius level—which vulnerable small island states warned at last year’s Paris climate negotiations was a dangerous threshold. Not coincidentally, over the past few months, large sections of coral reefs around the world have been killed by the abnormally warm temperatures—further stressing the fragile ecosystem and economic base on which many island societies depend.
Depending on how you calculate the years that define “pre-industrial,” humanity has probably already locked in crossing 2 degrees Celsius sometime in the next 15 years, at least briefly. That’s because there’s a lag in the climate system related to how long it takes the oceans to absorb the atmosphere’s excess heat. It’s at that point, or perhaps sooner, that further impacts will appear—like a virtually ice-free Arctic Ocean during the summer.
When looking at the spiral animation, Hawkins says, “the pace of change is immediately obvious, especially over the past few decades.” The dangerous acceleration of global temperatures as humanity approaches these thresholds is also clear, Hawkins says, “without much complex interpretation needed.”
Good visualizations won’t be enough for us to ramp up climate change efforts. But this particular visualization is just about the best I’ve ever seen.
What Slate Readers Think About Killer A.I.
Throughout April, as part of our fourth Futurography course, Future Tense focused on debates about the supposed dangers of artificial intelligence. We published essays and articles by experts and academics on a wide range of topics, but we’re also interested in what you have to say. To that end, we’ve written up some of your answers to our survey on the topic. We hope you’ll follow along as we explore the cultural conversations about drones this month.
Most of the Slate readers who wrote in were unconvinced that A.I. itself presents a direct threat, though many still felt that it has dangerous qualities. Arguing that it’s humans who are the greater risk, one wrote, “I’m worried about people deliberately exploiting A.I. in ways that are detrimental to human prosperity.” Others argued that we should be worrying more about the industries and governments developing the machines than about the machines themselves.
Those who did maintain that A.I. might endanger humans still tended to shy away from the premise that it presents a truly existential threat. Echoing Slate contributors such as Cecilia Tilli, some readers felt that the real problem might be that A.I. will “make many more jobs obsolete, which will severely strain the social fabric.”
To the question of which concerns about A.I. were overblown, an overwhelming number of respondents agreed that fears about actual murderous robots are largely silly. “I’m not sure we will ever get to a true A.I. in the Terminator sense,” one wrote, while another argued, “Evil robots probably won’t exist because there is no reason for them to be programmed that way.” Some went further still, proposing that even the possibility of humanlike intelligence seemed unlikely. “Computers just don't think like we do from what I understand,” one wrote.
Some readers were skeptical of the idea that A.I.’s interests could ever correspond with our own, despite the work of researchers such as Stuart Russell who are trying to ensure that computers can learn to recognize what’s most important to humans. As one who took such a position put it, “A.I. will likely just be a very, very smart machine and the concept of ‘interests’ might not bear.” Instead, “We should worry more about whether the interests of private and military A.I. R&D teams align with our public interests,” a reader suggested. Another wrote that “terrorist groups and … enemy nation states” presented the greater risk. And others continued to hold that the real trouble is that there’s no such thing as human interest per se, as did one who wrote, “We humans can’t even agree on what constitutes ‘good’ and what constitutes ‘evil.’ ”
Whatever their concerns, readers had a wide range of ideas for future A.I. research priorities. Many suggested that we should look more deeply into medical applications of A.I. such as epidemiological analyses and artificial pancreas technology. Others proposed that we should instead focus on sociological concerns, working to anticipate how A.I. will change the ways we live instead of simply developing the systems that will bring about such changes. Several others echoed a point made by Carissa Véliz in Future Tense, proposing that we need to think more fully about what constitutes consciousness. “Is AI consciousness possible? And how would we know it?” one typical reader asked.
When all was said and done, a few readers had other lingering questions about future developments. “When will Siri not be abjectly terrible?” one asked, while another inquired, “How quickly will Slate employees be replaced by A.I.?” To that we have a question of our own: How do you know we haven’t been replaced already?
This article is part of the artificial intelligence installment of Futurography, a series in which Future Tense introduces readers to the technologies that will define tomorrow. Each month from January through June 2016, we’ll choose a new technology and break it down. Read more from Futurography on artificial intelligence:
- “What’s the Deal With Artificial Intelligence Killing Humans?”
- “Your Artificial Intelligence Cheat Sheet”
- “Killer Robots on the Battlefield”
- “The Wrong Cognitive Measuring Stick”
- “The Challenge of Determining Whether an A.I. Is Sentient”
- An interview with A.I. expert Stuart Russell
- “Why You Can’t Teach Human Values to Artificial Intelligence”
- “Let Artificial Intelligence Evolve”
- “Mika Model,” a brand-new short story from sci-fi great Paolo Bacigalupi
- “When a Robot Kills, Is It Murder or Product Liability?”
- “The Threats That Artificial Intelligence Researchers Actually Worry About”
- “How Much Do You Know About Killer A.I.? Take Our Quiz.”
A Group of Researchers Are Planning to Sequence Leonardo da Vinci’s 500-Year-Old Genome
It would be pretty fascinating to study Leonardo da Vinci’s genome and try to find clues to the origin of his brilliance. Of course, genetics can’t tell us everything, but any clue about da Vinci, who died 497 years ago, is valuable. One problem, though. How do you sequence the genome of someone who’s been dead for centuries? A group of specialists think they can do it.
“The Leonardo Project,” which will honor the 500th anniversary of da Vinci’s death, published a schedule for the genome sequencing project in the journal Human Evolution on Thursday. Anthropologists, art historians, geneticists, genealogists, microbiologists, and other researchers will collaborate on the project to uncover new physical evidence linked to da Vinci.
The plan includes studying the microbiomes of da Vinci paintings and tracking the inventor’s descendants, both living and dead. The project will evaluate bones that may or may not be da Vinci’s, and will use radar in an attempt to locate da Vinci’s father’s remains, which are thought to be buried under an Italian church. Researchers will also try to verify da Vinci’s fingerprints and search for them on his works.
Information about da Vinci’s genome would lead to a better understanding of his talents, physical characteristics, and disease risks. “They hope to acquire an extensive enough genetic profile to understand better his abilities and visual acuity,” Jesse Ausubel, vice chairman of the Richard Lounsbery Foundation (a sponsor of the Leonardo Project), wrote in Human Evolution.
It will be complicated, however. A first phase of the project began in November 2014 to definitively identify da Vinci’s remains and get DNA samples from the bones. Ausubel describes such a specimen as “yet to be found,” though. The methodologies the project develops should be valuable, but it remains to be seen whether it can produce a full da Vinci genome by 2019.
As history-altering geniuses go, it seems like it would be easier to sequence Einstein’s genome, since he died much more recently and his brain was preserved. But the difficulty level will make the da Vinci genome a much more amazing achievement—if it happens.
An Antivirus Scan Shut Down a Medical Device in the Middle of Heart Surgery
In theory, antivirus software is designed to keep users safe from their own mistakes. Who among us hasn’t occasionally visited a dodgy website or downloaded a dubious file? But while they can help counteract our carelessness, we still need to be careful about the ways we use them. That’s a lesson that one hospital recently learned the hard way when a medical device crashed in the midst of heart surgery. On investigation, it turned out that the culprit was the antivirus program on a computer to which the device was connected.
As Softpedia’s Catalin Cimpanu writes, the incident, which occurred in February, involved a tool called the Merge Hemo, which contributes to cardiac data collection. The Merge Hemo itself gathers and evaluates information about the patient, then transfers that information to a connected computer. An incident report filed with the Food and Drug Administration explains that the crash happened because the computer automatically initiated its hourly malware scan while the procedure was in progress. That froze up the Merge Hemo app on the computer, which shut down the actual device’s interface.
Fortunately, in this case the interruption was only temporary. The FDA write-up goes on to explain “it was reported that the procedure was completed successfully once the application was rebooted.” Merge claims that fault lies with the hospital, gesturing to its own security recommendations, which note “that improper configuration of anti-virus software can have adverse effects including downtime and clinically unusable performance.”
While this story has a relatively happy ending, it still speaks to the larger cyberhygiene problem in hospitals. As my colleague Lily Hay Newman has regularly shown, virtually everything that connects to the internet has been hacked, including medical devices. And hospitals themselves have been subject to ransomware attacks by hackers, endangering patient safety. But as J.M. Porup has argued in Future Tense, the real danger in medical environments may not be malice but malware, invasive programs that could interrupt care—even if their developers didn’t actually intend to target hospitals.
It’s reassuring to see that hospitals are attempting to do something about such problems, but the Merge Hemo incident also provides an important reminder: Cybersecurity has to be an active enterprise, an ongoing, engaged process. Installing anti-malware security programs and calling it a day clearly isn’t enough. Indeed, it may make things worse.
Netizen Report: WhatsApp Briefly Blocked in Brazil, Again
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Juan Arellano, Marianne Diaz, Sam Kellogg, Weiping Li, Rezwan, and Sarah Myers West contributed to this report.
A Brazilian judge ordered internet service providers to block access to WhatsApp in the country for 72 hours, citing the company’s alleged failure to comply with an order to assist police in a drug investigation. The block was lifted the next day. If you think this sounds familiar, you’re not wrong: A judge also ordered WhatsApp to be blocked in 2015 for a period of about 12 hours before the decision was overturned by an appeals court.
Ethiopians face dire consequences for participating in digital culture
The Ethiopian Federal High Court convicted Zelalem Workagenehu under the country’s Anti-Terror Proclamation and will issue his sentence May 10. Workagenehu was arrested along with Yonatan Wolde and Bahiru Degu for applying to attend a training session on digital communication, social media, and leadership, which the government has described as a “training operation to terrorize the country.” All three were arrested and spent more than 600 days in prison without facing trial. Degu reported experiencing extensive torture during his first few months in detention, including beatings and being forced to remove his clothes and drink his own urine. Wolde and Degu were acquitted of the charges and released from prison, but were re-arrested shortly afterward and forced to spend another night in prison before being released again and told they remain under observation. Their relatives say they were told by the state security officers that “they would be killed if they made any moves.”
Mexican NGO takes “stalker law” to Supreme Court
The Second Chamber of Mexico’s Supreme Court of Justice of the Nation ruled on May 4 that the controversial Telecom Law, nicknamed the “Ley Stalker” (“Stalker Law”) does not violate the nation’s constitution. The law requires telecommunications companies to retain records of users’ metadata for two years and provide unrestricted access to state authorities without any requirement for judicial oversight. The law, which came into effect in August 2014, has come under criticism from digital rights groups for being invasive of privacy. The Mexico City–based NGO Network for Defense of Digital Rights now plans to challenge the Supreme Court ruling before the Inter-American Commission on Human Rights.
Russian activists sound alarm on Telegram security flaws
Two Russian opposition activists reported their Telegram messenger accounts were remotely hacked through the app’s SMS login feature, suggesting the app is not as secure as the company claims. They believe the Russian government was involved in the hack. Security researcher Frederick Jacobs pointed to similar attacks on Iranian accounts earlier this year, critiquing the safety implications of text message logins.
Myanmar activists tackle online hate, work to educate
As Internet connectivity has increased in Myanmar over the past six years, social media users have seen a rising tide of anti-Muslim sentiment on social platforms. In response, activists in Myanmar have launched a campaign against hate speech that aims to educate Internet users on how to identify and respond to it, and how to engage in constructive debates online.
Iranian cartoonists released from prison
Two popular Iranian web cartoonists who were jailed for their artwork have been released from prison since our last report. Hadi Heidari, who was arrested for a cartoon marking the November 2015 terrorist attacks in Paris, was freed from prison on April 26. Cartoonist Atena Farghadani was also released on May 3, after her sentence was reduced from 12 years to 18 months.
• “The Right to Privacy in Venezuela”—Acceso Libre, the International Human Rights Clinic at Harvard Law School, and Privacy International
• “Watchtower: Mapping the Indian Government’s Cybersecurity Institutions”—Internet Democracy Project
Future Tense Newsletter: Disquieting Drones and Supersonic Flights
Greetings, Future Tensers,
Drones are used for everything from racing to aerial photography, but surveys suggest that large numbers of Americans still find them creepy. For this month’s Futurography course, we’re looking into why that is—and whether it’s likely to stay that way. As always, we’ve started out with a conversational introduction to the topic that looks into questions such as whether current drones are actually helping your neighbors spy on you (spoiler: probably not). And if you’re looking for some more schematic information, we have a cheat sheet that features a quick roundup of key players, major debates, and other topics of interest. We’ll have lots more in the weeks to come.
We also recently wrapped up another Futurography course—which looked into the supposed problem of killer artificial intelligence—with an article from Cecilia Tilli on the real dangers of A.I. As Tilli writes, “There is no reason to believe that we will be able to control generally intelligent, let alone superintelligent, systems,” but experts still disagree about the crises such systems might create. Once you’ve read that—and all the other great content we published on the topic last month—test your knowledge with our killer A.I. quiz (there’s a Dungeons and Dragons question, if that sways you) and share your own thoughts through this survey.
Coming up next week, we also have an event about the future of aviation (more details below), which started from the question of why it still takes five hours to fly cross-country. Whether or not you can make it for the actual conversation, read this Richard Aboulafia article, which shows that while there are still technological hurdles keeping us from supersonic travel, some of the real reasons are actually social. Consumer demand for speed has also plummeted as flying has become more pleasant, thanks to amenities such as onboard internet. But those very features introduce a new set of concerns, Josephine Wolff writes, potentially creating cybersecurity vulnerabilities that never would have arisen otherwise.
Here are some of the other articles we read while we guiltily ignored our email inboxes:
- Satellites: Forget E-ZPass—Singapore plans to start issuing tolls from space. Understandably, the system has created some serious privacy concerns.
- Fifth Amendment: In February, a federal magistrate judge ordered a woman to use her fingerprint to unlock her iPhone.
- Privacy: New research on the effects of mass surveillance suggests that it leads to self-censorship online, suppressing “the ideas of those on the fringes of society, while amplifying mainstream opinions.”
- Autonomous vehicles: Apparently committed to not being cool, Google is getting ready to deploy a fleet of driverless minivans. But there’s a good reason for its dorkiness.
- With the European Concorde in retirement and no American supersonic plane ever cleared for takeoff, airlines still travel at the same speed they did in the 1960s. Why is that? Join Future Tense in Washington, D.C., on Wednesday, May 11, to discuss this question and others about the future of aviation. For more information and to RSVP visit the New America web site.
Google’s Next Self-Driving Car Is a Minivan
Google’s next self-driving car isn’t a car. It’s a minivan.
The tech giant on Tuesday announced a deal with the Italian-American carmaker Fiat Chrysler to build 100 driverless 2017 Chrysler Pacifica Hybrid minivans, with the first ones hitting the road late this year. Google is not selling or licensing any of its technology to Fiat Chrysler, Bloomberg notes. Rather, it’s working with the company to make sure the vehicles are specially designed to accommodate the software. From Google’s announcement:
This collaboration with Fiat Chrysler Automobiles (FCA) is the first time we’ve worked directly with an automaker to create our vehicles. FCA will design the minivans so it’s easy for us to install our self-driving systems, including the computers that hold our self-driving software, and the sensors that enable our software to see what’s on the road around the vehicle. The minivan design also gives us an opportunity to test a larger vehicle that could be easier for passengers to enter and exit, particularly with features like hands-free sliding doors.
The deal will more than double Google’s autonomous vehicle fleet, which comprises several dozen retrofitted Lexus SUVs and Google’s own adorable driverless prototypes. (Note that Google didn’t work with Lexus on those SUVs. It bought them and then had its engineers modify them as autonomous vehicles.) The company has expanded its testing of self-driving vehicles in the past year from its home base in Mountain View, California, to three other cities: Austin, Texas; Kirkland, Washington; and Phoenix.
Some companies, such as Tesla, have tried to pave the way for our automotive future by building beautiful, racy cars that become objects of desire. Google seems to be taking pretty much the opposite approach, focusing on safety and practicality to the exclusion of curb appeal.
That makes sense when you consider how people are likely to use cars that are fully driverless, as opposed to those that simply offer an autopilot mode. If you can’t drive it, there’s no point in making it fun to drive. And if you don’t own it—that is, if it’s used primarily as a taxi that shuttles people from place to place on demand—then there’s not much point in making it pretty, either.
Google’s own driverless prototypes rethought a lot of what goes into a traditional vehicle, from the brakes to the steering wheel. One thing they didn’t rethink was the notion that people would still make a lot of solo trips when they no longer own their own cars. A driverless future might well be one in which people often travel in larger groups: think Lyft Line and UberPool. Hence the move into minivans.
That’s right, soccer moms and soccer dads: The robots are coming for your jobs too.
In This Beautifully Shot Sci-Fi Short, a Mind Swap Has Terrifying Consequences
In the opening moments of Trial, a new science fiction short film by the Brothers Lynch, the mysterious doctor Jennifer Bishop offers a paralyzed and disfigured soldier an opportunity: a new body. She promises, “Our biological hosts are created from the ground up, each one unique.” Soon enough, Bishop and her team have him on his feet—and not long after that things begin to go terribly, terribly wrong.
Anyone even passingly familiar with sci-fi body swap narratives will be able to guess many of Trial’s twists and turns. The pleasure here isn’t in what the filmmakers have to say—a grim moral summed up by Bishop’s admission that “all progress has a price”—so much as in the way they say it. Where some science fiction shorts go for big, bizarre effects, the Brothers Lynch take a more restrained approach. Each shot feels carefully composed and the pacing remains tense throughout.
Since most of the action is confined to the concrete corridors of Bishop’s hospital facility, we only see a portion of their near-future world. Those fragments are brought to life, however, by deft camerawork—especially in an impressive mirror sequence—and clever editing. Their elegant austerity suggests a world worn down by conflict and catastrophe, one in which those in power might be willing to try anything—no matter how dangerous—to carry on.
In a write-up of Trial for Short of the Week, Rob Munday says that the Brothers Lynch see the short as part of a larger project, one that they “are looking to expand in feature project Residual.” While it’s exciting to see what they might accomplish with a larger budget and more time to tell a story, their work in Trial is a testament to the power of minimalism and suggestion.
Tesla’s “Bioweapon Defense Mode” Sounds Like a Gimmick. It’s Actually Ingenious.
When Elon Musk announced last fall that Tesla’s new Model X SUV would come with a feature called “bioweapon defense mode,” people weren’t sure whether he was joking. It sounded like just the latest, and possibly craziest, in a long line of Hollywood-inspired marketing gimmicks by the sci-fi-loving CEO. (Think of the volume controls that go to 11, Spinal Tap style, and the Spaceballs-esque “ludicrous mode” on the Model S.)
But Musk wasn’t joking. And, far from being the Model X’s “most ridiculous feature,” as one tech blog dubbed it, bioweapon defense mode could end up being one of its biggest selling points in at least one key market: China.
It isn’t that drivers in China are paranoid about bioterror attacks. It’s that many of them deal on a daily basis with oppressive air pollution, a major quality-of-life issue in some of the country’s largest cities. For China’s wealthy, the Model X may offer a haven from the smog that no other vehicle can match.
Bioweapon defense mode, it turns out, is a bit of a misnomer. I mean, sure, you’d probably turn it on if you happened to be out for a drive when someone dropped a ricin bomb nearby. But the real purpose of Tesla’s hospital-grade HEPA cabin filtration system is to protect drivers from the more quotidian menace of pervasive air pollution.
In a blog post this week, Tesla presented the results of a dramatic test designed to demonstrate just how effective the system can be. The company says it had already fared well on California highways at rush hour and in major Chinese cities. So it took things a step further:
A Model X was placed in a large bubble contaminated with extreme levels of pollution (1,000 µg/m3 [micrograms per cubic meter] of PM2.5 [a harmful form of fine particulate] vs. the EPA's "good" air quality index limit of 12 µg/m3). We then closed the falcon doors and activated Bioweapon Defense Mode.
The chart below shows what happened next:
Within two minutes, Tesla says, the system had scrubbed the pollution in the vehicle’s cabin to the point that it was no longer detectable by the company’s sensors. The Model X’s passengers then removed their gas masks and breathed clean air.
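Tesla hasn't published the filter's flow rate, but a two-minute scrub is consistent with a standard well-mixed-cabin model, in which the concentration decays exponentially at a rate equal to the clean-air delivery rate divided by the cabin volume. A rough sketch with assumed, illustrative numbers (the cabin volume and flow rate below are guesses, not Tesla's specs):

```python
import math

def concentration(c0, q, v, t):
    """Well-mixed cabin model: PM2.5 concentration (ug/m^3) after t
    minutes, given initial level c0, clean-air delivery rate q
    (m^3/min), and cabin volume v (m^3). Assumes perfect filtration
    and no leakage from outside."""
    return c0 * math.exp(-(q / v) * t)

# Assumed, illustrative numbers -- not Tesla's published specs.
c0 = 1000.0  # ug/m^3, the contaminated-bubble level Tesla cites
v = 3.0      # m^3, a rough guess at Model X cabin volume
q = 9.5      # m^3/min of filtered air pushed through the cabin

after_two_min = concentration(c0, q, v, 2.0)
print(round(after_two_min, 2))  # ug/m^3 remaining after two minutes
```

Under these assumptions the cabin drops below the EPA's "good" threshold of 12 µg/m³ well inside two minutes, which is broadly consistent with the behavior Tesla describes.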
“Bioweapon defense mode is not a marketing statement, it is real,” the company concluded. “You can literally survive a military grade bio attack by sitting in your car.”
Perhaps you could, although bioweapons experts are skeptical that you ever would. For one thing, as Gizmodo points out, you’re unlikely to realize there has even been a biological attack until it’s too late.
So, yes, the feature’s name is a marketing gimmick. Other luxury carmakers have installed high-tech air filters that didn’t get nearly this much attention.
But the feature itself is not. As Tesla points out, the World Health Organization calls air pollution “the world’s largest single environmental health risk,” contributing to more than 3 million deaths each year. Recent studies have put that number even higher. Either way, it’s more than twice the global death rate from auto accidents.
Obviously an $80,000 luxury SUV is not going to save the vast majority of those people, especially in a country such as China whose per-capita GDP is less than one-tenth that amount. That said, it could be a huge draw for the country’s fast-growing upper class. Tesla is counting on those Chinese consumers to help drive growth in demand for its pricey electric vehicles. You could almost think of “bioweapon defense mode” as a diplomatic euphemism for “Beijing mode.”
But Tesla was quick to point out that this isn’t just about China. “I disagree about this being a gimmick in any market,” spokeswoman Alexis Georgeson told me. “The HEPA filter is valuable anywhere with less than pristine air quality, which is a lot of places in the U.S. and the world.” She cited WHO estimates that air pollution reduces average life expectancy by 23 months in Beijing, but also by 10 months in Mexico City, nine months in Hong Kong, eight in Berlin and Los Angeles, and seven in Paris and London. “Not to mention this is incredibly valuable for those suffering from (or who have children suffering from) asthma and allergies.”
Even if it didn’t save or extend drivers’ lives, the system would still hold appeal as a luxury feature, blocking out unpleasant smells when you drive past a landfill or a skunk, or pretty much anywhere in California’s dung-filled Central Valley. Here’s a Tesla driver testing it out while driving through a polluted stretch of highway in Arizona.
Granted, the video was made by a self-professed Tesla lover who won his Model X in a Tesla-sponsored referral contest. I hope to get a better idea of how well the feature works when I test-drive a Model X in the coming weeks.
Regardless, the feature looks like another ingenious marketing maneuver by Musk—one that has a surprisingly serious purpose. And yes, when you turn it on, the fans go to 11.
Mass Surveillance Chills Online Speech Even When People Have “Nothing to Hide”
Terrorist activity has reached the highest level ever recorded, according to the Institute for Economics and Peace. And yet, Americans are still more likely to die in a lightning strike or a bathtub than in a terrorist attack, an argument originally made by John Mueller in his 2006 book, Overblown.
Nevertheless, acts of terror are tangible—their gore, death tallies, and elevated warning levels flash across our television screens—and produce statistics at every turn, so they offer the U.S. government an implicit mandate to continue its mass surveillance programs, programs that undermine the privacy protections constitutionally guaranteed to every American.
They work because surveillance isn’t easily quantifiable. It’s clandestine, invisibly operating beneath our fingertips, siphoning away our data and with it, our ability to vet the consequences. Admittedly, it’s hard to get excited about the slow erosion of our civil liberties.
But it’s time to level the playing field, with more investigation into the cold, hard—and chilling—effects of government surveillance.
A steadfast commitment to freedom of expression and privacy—even in the midst of threat—is what sets democracies apart from the rest of the world. However, the U.S.’s commitment is waning. Reporters sans Frontières ranks the United States 49th in terms of press freedoms, which means we have fallen out of the top quartile of countries in the world in protecting expression. Let that sink in. America, the longstanding beacon of free speech, performs worse than some partly democratic countries in the global south, like Burkina Faso and Niger. Our nation’s whistleblowers and journalists are not adequately shielded from undue prosecution and self-censorship. Nor are our citizens.
Despite this statistic, large swathes of the American public think they’re impervious to surveillance, as if opposition raises suspicions of guilt. As a researcher examining public attitudes toward surveillance, I often encounter the argument “I’ve got nothing to hide,” typically voiced in a tone of defensive indignation. But opposition to mass surveillance does not need to be grounded in defensively hiding information; it’s about the proactive protection of your online identity.
In an effort to see what average Internet users have to hide, my graduate students and I convened a short focus group to investigate if there were any types of online activities that they would like to remain private, and sure enough, they did.
Predictably, adult content, online purchases, and strange but innocuous Google searches topped the list. But we also noted some behaviors that have direct implications for democracy: discussions on online forums, browsing news sites, and social media posting. These latter three are capital-enhancing activities, meaning they have the potential to translate into political opportunities in the offline world, like acquiring the knowledge and attitudes necessary to vote, petition and protest. A suppression of these activities threatens the vibrancy of our democracy.
It was surveillance’s effect on social media posting in particular that I wanted to quantify. So I set out to conduct the first study to test how these mass surveillance programs influence average Americans’ online behavior. I exposed a group of Internet users to a “terms of agreement” statement that reminded them—as most terms of agreements do—that their subsequent actions on our site were subject to interception and surveillance. The study’s participants were then shown a Facebook interface, where they could indicate whether they wanted to comment on, share, like, or create new Facebook posts about a current political issue.
I discovered that exposure to the terms of agreement dampened individuals’ willingness to express or otherwise support their political views. These effects were found among people who felt they held political opinions different from those of most Americans, among those who thought these programs were necessary for the sake of national security, and in a recent follow-up analysis I conducted, among racial and ethnic minorities. These individuals refrained from expressing opinions that would alienate them from both their fellow citizens and from the government.
The results were, quite literally, chilling.
And surveillance chills in a way that suppresses the ideas of those on the fringes of society, while amplifying dominant, mainstream opinions. This severely undermines the Internet’s ability to serve as a neutral platform for information sharing and discussion, instead catering only to those who speak the loudest.
Published online last month, this study was released right in the middle of the presidential primary campaign, when Facebook feeds were saturated with partisan endorsements and polarizing vitriol. Strangers, acquaintances, and friends approached me—almost all in person—to confide that they too had, at times, fallen victim to this type of self-censorship on social media. This study, which drew upon a sample of Internet users from across the U.S., shows it’s not just happening among my social network. It’s probably rampant among yours, too.
Their anxieties are not baseless. Recent reporting has shown that the U.S. federal government has poured money into private companies to monitor and mine social media content—for what, we’re not exactly sure. And data on Americans collected and archived by the NSA may be shared with other government agencies, without a warrant, to investigate and prosecute crimes unrelated to national security and terrorism, like drug offenses.
Even for the vast majority of us who aren’t guilty of any wrongdoing, our photos, posts, check-ins, search histories, and, above all, metadata, paint detailed summaries of our online lives. We’re entitled to privacy and the ability to choose what we want to reveal about ourselves, to the government, to our employers, and to one another.
As we continue the uninterrupted march into an era of big data, this study should serve as yet another red flag, signaling the need for greater transparency, skepticism, and quantifiable research.