Future Tense
The Citizen's Guide to the Future

Oct. 17 2017 7:28 PM

Would Google's New Super-Secure Email Have Protected Hillary's Campaign?

A targeted phishing scam and an unfortunate typo helped hackers infiltrate the Gmail account of Hillary Clinton’s campaign manager John Podesta during the 2016 election season.

Podesta received an email requesting a password change, and suspecting a phishing attempt, he forwarded it to IT. But the IT guy accidentally said it was “a legitimate email” rather than “not a legitimate email,” and the rest is history: Podesta’s email was accessed, 60,000 Gmail messages were leaked, and the juiciest made their way into the press.

Advertisement

A hack like this is, and was, entirely preventable. And on Tuesday, Google announced new protections for high-profile individuals who could be targeted like Podesta.

“We took this unusual step because there is an overlooked minority of our users that are at particularly high risk of targeted online attacks,” Advanced Protection project manager Dario Salice wrote in Google’s introductory blog post about the feature. “These might be campaign staffers preparing for an upcoming election, journalists who need to protect the confidentiality of their sources, or people in abusive relationships seeking safety.”

It’s designed to make hacking and phishing attempts virtually impossible to execute. But would it have been enough to stop the Podesta hack and prevent future similar attacks?

Google isn’t the first company to make changes as we’ve learned how technology influenced the election. Amid growing revelations of how much Russia used social media and advertising to affect its outcome, Silicon Valley juggernauts have been trying to shore up their services against interference. Facebook began battling fake news with a campaign to educate users, utilize third-party fact checkers to verify stories, and show readers related articles alongside stories in their feed. Twitter has suspended accounts tied to Russia and changed how it handles abuse in its app.

Google, which has since discovered Russia-linked ads on its platforms, has adjusted its search algorithms and offered more opportunities to provide feedback about search and autocomplete results. And now with Advanced Protection, it’s trying to eliminate the threat of email hacks.

Advanced Protection does three things: It protects accounts against phishing, blocks fraudulent account access, and offers safeguards against sharing sensitive data with malicious applications.

The Advanced Protection Program incorporates a physical security key (a small USB or wireless device that costs around $25) to protect against phishing. The key, which participants need to buy themselves, uses public-key cryptography and digital signatures. Without the key, even someone with your password would be unable to access your account. Advanced Protection limits your Google data access to only Google apps and adds additional safeguards in the account recovery process to prevent someone from social engineering their way into your account. It also performs additional scans on files and attachments to ensure no malware is piggybacking on the download.
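The phishing resistance comes from a challenge-response handshake: the server issues a fresh random challenge, and the key answers with a signature computed from a secret that never leaves the device, so a stolen password alone is useless. The following is a minimal sketch of that idea, not Google's implementation. Real U2F keys use public-key signatures; this toy substitutes an HMAC shared secret so it runs on Python's standard library alone, and all class and method names are illustrative.

```python
import hashlib
import hmac
import secrets

class SecurityKey:
    """Stands in for the USB/Bluetooth token."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)  # never leaves the "device"

    def register(self) -> bytes:
        # A real key would hand the server a public key; here the
        # shared secret plays that role because the toy is symmetric.
        return self._secret

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, credential: bytes):
        self._credential = credential

    def issue_challenge(self) -> bytes:
        return secrets.token_bytes(16)  # fresh nonce per login attempt

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self._credential, challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

key = SecurityKey()
server = Server(key.register())

challenge = server.issue_challenge()
print(server.verify(challenge, key.sign(challenge)))  # True: key present
print(server.verify(challenge, b"password-alone"))    # False: no key, no entry
```

Because each challenge is a fresh nonce, a response captured by a phishing page can't be replayed against a later login attempt.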

This is one of the highest levels of security a public company has offered to consumer-level users to date. And, a Google representative confirmed, Advanced Protection isn’t limited to the crème de la crème of Google users. Anyone with a Gmail account can enroll in Advanced Protection and will be granted access. There’s no vetting process. It’s not an elite club.

If this security had existed a year ago, it could have stopped the hack of Hillary’s campaign—in theory. Let’s say all of her campaign staff had Advanced Protection enabled, and Podesta had received the same phishing email, and the IT guy had made the same typo. The physical key would’ve prevented the hacker from accessing Podesta’s account. Unlike even a one-time code delivered via SMS, there’s no way for this security token to be hijacked over a carrier network or insecure Wi-Fi.

But Advanced Protection is not without a fatal flaw: Like two-factor authentication and other security measures, it’s opt-in. It’s up to the user to join the program and take advantage of its additional layers of security. As the saying goes, you can lead a horse to water, but you cannot make it drink. And a security measure is only effective if you’ve put it in place.

Since Podesta’s email could be hacked with a simple password change, it’s clear he did not have two-factor authentication enabled. And if he and other staff members weren’t willing to take advantage of that simple but important security precaution, it’s unlikely they would have jumped through the extra hoops required by Advanced Protection.

That’s one of the biggest issues with email security today: People are unwilling to undergo some degree of inconvenience in order to secure their accounts and information. Google has supported physical security keys for three years—but do you know anyone who uses one? It’s only in recent months that we’ve begun to see how truly important it is to lock down your information.

It’s possible the Podesta hack will motivate a large number of people—and in particular the people running, or hoping to run, our government—to get on board with services like Advanced Protection. But if history is any indicator, the people who most need it never will. And by choosing convenience over security, we’ll remain susceptible to ever sneakier hacking attempts trying to undermine our country’s democratic way of life.

Oct. 17 2017 4:12 PM

Twitter Is Shutting Down a Conservative Group’s Automated Tweets

Twitter has restricted a web application crucial to the operations of a conservative group known for barraging the network with tweets to promote political messages.

Oct. 17 2017 1:46 PM

Unpopular Opinion: The “1 Like = 1 Unpopular Opinion” Meme Is Bad

Memes sweep through Twitter like ocean currents. Sometimes they help us vent collective frustrations in deeply personal ways, and on other occasions they become opportunities for self-referential cleverness. Though even the best memes grow old fast, this willingness to be goofy together makes the platform worth returning to.

One of the network’s latest memes, however, shows off the social network at its most exhaustingly annoying. The posts almost all start the same. A user tweets an image with white characters that say, “1 like = 1 unpopular opinion,” on a field of black. Though the graphic speaks for itself, a short message typically accompanies it. “Okay, let’s do this,” wrote Jeet Heer, one of the progenitors of the Twitter essay format. “Ok. I’ll give this a try,” offered Vox’s Matthew Yglesias.

Advertisement

Replying to their original tweet, these users proceed to offer a steady, numbered stream of abbreviated takes. “Short stories > novels,” read one of Heer’s contributions. “Dogs are bad,” Yglesias proposed. While those are debatable proclamations, in other cases, users stake out positions that approach something like a widespread consensus, as Yglesias did when he challenged the supposed delusions of Californians.

Many of these positions feel less like truly unpopular opinions than they do like curmudgeonly complaints. More often than not, they will only be truly “unpopular” within specific milieus. Criticizing the West Coast’s fast food, for example, is sure to resonate with Shake Shack– and Five Guys–loving Northeasterners. (I, for the record, have no opinion on In-N-Out Burger. Please don’t @ me.) Similarly, even those who live in San Francisco might agree that the city could improve its public sanitation.

The lists themselves, meanwhile, primarily serve to index a user’s social capital, showing both that others care what they think and that they think about a lot of things. Among those who reared themselves in the early-21st-century blogosphere, participating in this exercise isn’t so much a confession as it is a ritualistic act of self-affirmation. Indeed, for internet personalities like Yglesias (who used to work at Slate), nothing could be more on brand than a rejection of received wisdom. As the journalist Mark Harris joked, tweeting your unpopular opinions may not be all that different from just … tweeting.

In the broadest sense, the “1 like = 1 opinion” format is familiar and not entirely unwelcome. One variant, for example, found Twitter users suggesting books in response to likes. That approach yielded a cornucopia of compelling options, especially from the minds of enthusiastic readers like Slate contributor Isaac Butler. The mode works in large part because it plays off the infrastructure of Twitter itself, requiring a modicum of interaction from those on the sidelines: What could be simpler than clicking the mysterious heart button? Simultaneously, it demands almost virtuosic displays of knowledge and thoughtfulness from the site’s most popular contributors—displays that many are more than prepared to offer.

The “unpopular opinion” version of the meme, however, threatens to lay bare the underlying laziness of the convention. At its best, Twitter can open up unlikely pathways of communication between otherwise distinct individuals and communities. Tweet at a celebrity and sometimes the celebrity will tweet back. Enter into an argument with a stranger and sometimes (rarely, but sometimes!) you’ll make a new friend.

By contrast, unpopular opinions are designed to be just that. They are often little more than showy humblebrags: To assert one (or 100) is to suggest that you stand out from the crowd, whether or not you truly do. The long lists assembled by personalities such as Yglesias and Heer threaten to devolve into performances of reflexive contrarianism. Freed of context, they discourage argument, evidencing little more than the pique of those who espouse them. In aggregate the meme’s popularity demonstrates only that the oppositional stance many attribute to Slate itself has colonized the internet as a whole.

There are, of course, ways to use the unpopular opinion meme well. Prior to its current surge, the writer Beth McColl effectively parodied it avant la lettre, composing a thread in 2016 that included such gems as “crime is fine when i do it” and “let the donut crop replenish before eating more . otherwise more global warming will happen.” More recently, other Twitter users have offered some positions that are engagingly provocative.

The best of these provocations, however, frequently feel more like quick pitches than simple opinions. Whether or not you’re inclined to agree with them, you’ll likely find yourself hoping the writer would expand upon them. In some other cases, the opinions in a thread do build on one another, suggesting openings for more complex assertions. Heer included a pair of tweets about anti-racism and decolonization in Canada that reflect themes he has explored before, even as they suggest the rough outlines of a promising argument. But tossing them into a list that also includes the claim “Superhero movies and comics should be made for kids” potentially trivializes them, giving the impression that these thoughts are less worthwhile than they have the potential to be.

And that’s the rub of it. If an opinion is worth expressing, it’s worth letting it breathe on its own. I, for my own part, tried to do just that, tweeting my own distaste for the meme on Monday night and hoping to leave things there. To my dismay, Jack Hamilton, Slate’s pop critic, liked my brief note in order to make me “go again.” I responded with the opinion that Jack is a cool guy. Please fave if u agree.

Oct. 17 2017 12:38 PM

Are Self-Driving Cars the Future of Mobility for People With Disabilities?

This piece originally appeared in The Conversation.


Self-driving cars could revolutionize how disabled people get around their communities and even travel far from home. People who can’t see well, or who have physical or mental conditions that prevent them from driving safely, often rely on others—or on local government or nonprofit agencies—to help them get around.

Advertisement

Autonomous vehicle technology on its own is not enough to help these people become more independent, but simultaneous advances in machine learning and artificial intelligence can enable these vehicles to understand spoken instructions, observe nearby surroundings and communicate with people. Together, these technologies can provide independent mobility with practical assistance that is specialized for each user’s abilities and needs.

A lot of the necessary technology already exists, at least in preliminary forms. Google has asked a blind person to test its autonomous vehicles. And Microsoft recently released an app called “Seeing AI” that helps visually impaired people better sense and understand the world around them. “Seeing AI” uses machine learning, natural language processing, and computer vision to understand the world and describe it in words to the user.

In the lab I run at Texas A&M, along with the Texas A&M Transportation Institute, we are developing protocols and algorithms for people with and without disabilities and autonomous vehicles to communicate with each other in words, sound, and on electronic displays. Our self-driving shuttle has given rides to 124 people, totaling 60 miles of travel. We are finding that this type of service would be more helpful than current transportation options for disabled people.

Under the Americans with Disabilities Act of 1990, all public transit agencies must offer transportation services to people with physical handicaps, visual or mental conditions, or injuries that prevent them from driving on their own. In most communities, this type of transport, typically called “paratransit,” is sort of like an extra-helpful taxi service run by public transit. Riders make reservations in advance for rides to, say, grocery stores and medical appointments. The vehicles are usually wheelchair-accessible and are driven by trained operators who can help riders board, find seats, and get off at the right stop.

Like taxis, paratransit can be costly. A Government Accountability Office report from 2012 provides the only reliable nationwide estimates. Those numbers suggest that per trip, paratransit costs three to four times what mass transit costs. And the costs are increasing, as are the number of people needing to use paratransit. At the same time, federal, state, and local funding for transit authorities has stagnated.

In an attempt to meet some of the demand, many communities have reduced the geographic areas where paratransit is available and asked disabled people to use mass transit when possible. Other places have experimented with on-demand ride-hailing services like Uber and Lyft. But in many cases the drivers are not trained to help disabled people, and the vehicles are not usually wheelchair-accessible or otherwise suitable for certain riders.

Autonomous shuttles, like the one we’re testing on the Texas A&M campus, can be a solution for these problems of access and funding. We envision a fully integrated system in which users can connect to the dispatching system and create profiles that include information on their disabilities and communications preferences as well as any particular frequent destinations for trips (like a home address or a doctor’s office).

Then, when a rider requests a shuttle, the system would dispatch a vehicle that has any particular equipment the rider needs, like a wheelchair ramp or extra room, for instance, to allow a service dog to travel.

When the shuttle arrives to pick up the rider, it could scan the area with lasers, cameras, and radar to create a 3-D map of the area, merging those data with traffic and geographic information from various online sources like Google Maps and Waze. Based on all of those data, it would determine an appropriate boarding spot, identifying curb cuts that let wheelchairs and walkers pass easily as well as noting potential obstacles, like trash cans out for collection. The vehicle could even send a message to the rider’s smartphone to indicate where it’s waiting, and use facial recognition to identify the correct rider before allowing the person to ride.

During boarding, during the ride, and upon arrival at the destination, the vehicle could communicate any relevant information—such as estimated arrival time or details about detours—by speaking with the rider and listening to the responses, or by displaying text on a screen and accepting typed input. That would allow the rider and the shuttle to interact no matter what the passenger’s abilities or limitations might be.

In our lab we are exploring various elements of rider-assistance systems, including automated wheelchair ramps and improved seating arrangements for multiple wheelchair-using passengers. We are also studying elements that affect safety, as well as riders’ trust in the vehicles. For example, we are currently developing machine-learning algorithms that behave like good human drivers do, mimicking how humans respond to unforeseen circumstances.

Self-driving cars present fundamentally new ways to think about transportation and accessibility. They have the potential to change neighborhoods and individuals’ lives—including people who are disabled and often both literally and figuratively left behind. With proper planning and research, autonomous vehicles can provide even more people with significantly more independence in their lives.


Oct. 17 2017 11:19 AM

The Future of Cybersecurity Might Look a Lot Like Snapchat

Snapchat isn’t just the favored social media platform of millennials everywhere—it’s also becoming an under-the-radar model for the future of cybersecurity. Think about it: At its most basic, Snapchat lets you send a picture or video message, deletes it in just seconds, and makes it impossible to retrieve it afterward. A self-destructing message as a perfect way of safeguarding data and information—who knew it was going to take a generation of meme-obsessed weirdos to popularize a Mission: Impossible gimmick?

There is a growing number of tools that mimic the Snapchat model for professional use. Apps like Whisper, Confide, and Signal have come into vogue among industries trafficking in sensitive information, especially among government sources leaking to reporters. But these apps still have limitations, like the need for an internet connection or the lack of anonymity. They also have flaws of their own that tenacious (and creative) hackers can exploit.

Advertisement

So the only real way to ensure information can’t be stolen isn’t just to burn the message after reading—you have to burn the messenger as well. If a program handling sensitive information is deleted immediately afterward, it can’t be compromised later on by malicious parties. The original data stays safe after it’s processed the way it was meant to be. This could obviously be a boon for businesses trying to keep trade secrets in-house, government agencies corresponding with operatives abroad, and private citizens trying to keep their Social Security numbers private. Anne Broadbent, a quantum computing researcher at the University of Ottawa who specializes in cybersecurity development and testing, says one-time programs could help safeguard other software as well by acting as gatekeepers to valuable tools (for instance, a one-time password application that grants access to a military arsenal).

There’s just one problem: It’s extremely difficult to build a one-time program, at least by using conventional technologies. This would not be a way to keep office gossip on Slack about the boss’s weird tics from being leaked. According to Broadbent, conventional information, including the code for a one-time program, can be too easily copied. For one-time programs to actually work on, say, a modern-day MacBook, you’d have to physically destroy the computer afterward to guarantee the single-serving software can’t be resurrected, run again, and hacked. Not ideal.
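The copying problem is easy to demonstrate classically. Here's a minimal sketch (file names and the printed "analysis" are purely illustrative): a script that deletes itself after one run offers no real protection, because nothing stops an attacker from copying the bits before it ever executes.

```python
import os
import shutil
import subprocess
import sys
import tempfile

# A classical "one-time program": a script that deletes itself
# after a single run.
code = (
    "import os, sys\n"
    "print('secret analysis: 42')\n"
    "os.remove(sys.argv[0])  # self-destruct after one run\n"
)

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "one_time.py")
with open(path, "w") as f:
    f.write(code)

# An attacker simply copies the bits before the program ever runs.
backup = path + ".copy"
shutil.copy(path, backup)

subprocess.run([sys.executable, path], check=True)  # runs once, deletes itself
print(os.path.exists(path))    # False: the "one-time" original is gone
print(os.path.exists(backup))  # True: the copy survives and can run again
```

Quantum states, by contrast, can't be duplicated at all, which is exactly the property a genuine one-time program needs.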

So we have to move past conventional technologies, and the key might be quantum computing. In the quantum world, information is impossible to copy because it doesn’t exist in the static state we normally associate with information. The thinking goes that if quantum information can’t be copied, a quantum computer offers the best opportunity to run a one-time program as envisioned.

Can this work? An international team of researchers thinks it can. A new paper currently out for peer review demonstrates a proof-of-concept for a one-time program running and deleting itself on a quantum-computing device.

Here’s how it works. Let’s say you worked at a very large credit bureau—maybe one of the largest in the nation, with a recent history of, say, putting millions of Americans’ financial information at risk—and wanted to run a program that would allow you to share—just once!—analyses of different individuals’ credit histories. Under a one-time framework, the program would run an analysis, make that information available to a client, and then delete that data along with the program to ensure total security on the company’s end.

Unfortunately, there’s a catch. Because quantum physics operates on probabilities and odds, the result of the analysis—and really, any resulting action the software takes with the data, even if it’s just to repackage it for delivery to someone else—isn’t 100 percent guaranteed to be accurate or successful. For this study, the researchers achieved a success probability of 75 percent, which isn’t bad for a first-of-a-kind trial. It’s certainly not ideal for someone trying to prove to the bank they should be allowed to buy a house, but perhaps it’s good enough for the transfer of information one wants to destroy anyway. And even if the analysis is off, the data will still be better safeguarded than nearly any other security measure.

The researchers added another key feature for safe measure: Even if the program didn’t delete itself automatically, it’s impossible to reverse-engineer the task in order to find the information that’s supposed to stay safe.

It’s unclear yet how the researchers intend to follow up on these results—the team is not taking any inquiries from the media until after the paper is formally peer-reviewed and published in a journal. But they will presumably want to minimize the probability the program produces an erroneous result. Broadbent, who wasn’t part of the research team but has read a preprint of the study, also emphasizes that there is always a trade-off between security and functionality, so the researchers will probably need to figure out a way to stretch the capabilities of a program while ensuring it still provides safety. She’s pretty optimistic, however, about the results.

Although this type of cybersecurity measure can only exist in a quantum computing process, the commercial sector, especially businesses and research firms handling enormous amounts of data, is beginning to adopt quantum computing technology rapidly, which suggests it could become mainstream. The advent of one-time program security is probably closer than you think.

Oct. 16 2017 3:56 PM

Google Taught A.I. How to Program More A.I.

Researchers at Google accomplished a feat that had been largely restricted to the realm of science fiction dystopias: enabling artificial intelligence to produce more artificial intelligence.


Oct. 16 2017 3:23 PM

Tropical Storm Ophelia Really Did Break the Weather Forecast Grid

An already unusual hurricane season just got a lot weirder as Tropical Storm Ophelia set her sights on … Ireland and the United Kingdom. Unaccustomed to such weather, these decidedly not-tropical northern nations are buttoning up, for the first time in recent memory, in the face of the impending storm. Schools are closed, the power is out, and police report at least three people have already been killed in Ireland.

Adding to the sensation that we’re witnessing a glitch in the matrix, the storm has also brought along a literal glitch in the storm tracker maps issued by the National Weather Service. This hurricane, it seems, exists too far outside of the traditional tropical storm boundaries to be automatically mapped. As environmental activist and writer Bill McKibben pointed out on Sunday morning, a forecast of Ophelia’s wind speed probabilities abruptly ended at a latitude of 60 degrees north.

Advertisement

Michael Brennan, a hurricane specialist with the National Oceanic and Atmospheric Administration, says the graphical glitch shouldn’t worry anyone, but at the same time, he can’t yet explain Ophelia’s strange trajectory. The strange maps are a simple, if somewhat stupefying, result of an old grid: “When you set up a grid, you define boundaries of that grid,” he said. “Whenever that grid was created, it was decided it would cut off at 60 degrees north and just west of 0 degrees longitude.”
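In code terms, a cutoff like the one Brennan describes behaves like a simple bounding-box test. The sketch below is a toy illustration, not NOAA's software; the boundary values are assumptions drawn from his description (60 degrees north, just west of 0 degrees longitude).

```python
# Hypothetical grid bounds, per Brennan's description.
GRID_NORTH_LAT = 60.0   # degrees latitude: grid cuts off here
GRID_EAST_LON = -0.5    # degrees longitude: just west of the prime meridian

def on_grid(lat_deg: float, lon_deg: float) -> bool:
    """True if a storm position falls inside the renderable grid."""
    return lat_deg <= GRID_NORTH_LAT and lon_deg <= GRID_EAST_LON

# A typical Atlantic hurricane position renders fine...
print(on_grid(25.0, -75.0))  # True: near the Bahamas, well inside the box
# ...but a forecast pushing past 60N, as Ophelia's did, falls off the edge.
print(on_grid(61.0, -3.0))   # False: north of the grid boundary
```

Any point failing the test simply has nowhere to be drawn, which is why the probability map appeared to stop dead at a straight line.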

Under more normal conditions, the grid allows researchers to analyze probabilities for all tropical cyclones in the Atlantic basin and the eastern, central, and western Pacific basins—a big and important feat. But Ophelia threw a curveball, pushing farther north and farther east than previous storms. Brennan says before this storm hit the edge of the grid, even he didn’t know where its boundaries were.

In response, Brennan’s team set about making a more complete visualization of Ophelia’s path on their own computers. (The group wasn’t able to adjust the supercomputer and its ruptured renderings because that complicated process isn’t actually under their purview. Given that multiple groups use the grid, even NOAA can’t just adjust it without warning.) Knowing that comprehensive maps of the storm’s trajectory would prove essential for those in Ophelia’s path, Brennan says, “we were able to change that on the fly.”

NOAA’s more complete visualization of Ophelia’s path.

While Brennan and his team were able to make the ad hoc mock-up they needed, ultimately it was just that, a quick fix. They’re still left with a much bigger problem: the fact that no one—and no computer—thought much about a tropical storm traveling this far from the tropics. “That’s a pretty unusual place to have a tropical cyclone,” Brennan says. “Maybe that’s something we’ll have to go back and revisit what the boundary is.”

Everything about Ophelia seems unprecedented. Even the Weather Channel seems to have the heebie-jeebies, writing, “Only 15 hurricanes have passed within 200 nautical miles of the Azores [a collection of mid-Atlantic islands] since 1851, according to NOAA's historical hurricane database.” But for all its hair-raising attributes, it’s still too soon to tell what motivated Ophelia’s unexpected path, though abnormally warm surface water temperatures and a tragically perfect jet stream definitely played major roles. That means that, like most of our unprecedented hurricanes this season, there is a real chance that climate change played a role, though unpacking exactly how strong that role is will take a bit more time. Whether Ophelia proves to be a fluke or a new fixture of our climate-changed Earth, the fact that it’s forced forecasters to reconsider their maps is just another example of how extreme this hurricane season has been.

Oct. 16 2017 12:16 PM

Critical Wi-Fi Security Flaw Leaves Communications Exposed to Eavesdroppers

It’s time to pull out your old Ethernet cords. Security researchers have discovered critical vulnerabilities in the protocol that secures most Wi-Fi connections, which could allow attackers within range of a network to eavesdrop on its internet traffic. This leaves passwords, credit card numbers, emails, photos, and other communications you transmit over the internet potentially exposed to malicious actors.

Oct. 13 2017 6:18 PM

People Are Freaking Out About the Yellowstone Supervolcano. Again.

This week, new research on the Yellowstone supervolcano erupted on social media. Deep under the first national park lies a pretty sizable volcano that, the story goes, may soon explode and—at least according to USA Today—“wipe out life on the planet.”

So is that really going to happen? This disasterologist (as she calls herself) doesn’t seem so concerned.

Advertisement

Recent evidence suggests eruptions may develop quickly on a geological timescale, but in human years, that’s still as slow as molasses. While it’s certainly possible the next gargantuan gurgle could happen in the next few decades, it also could be centuries off. Even if the not-so-super volcano did erupt, it certainly wouldn’t end life on Earth; it’d just make living a lot harder. Yellowstone would likely be filled with magma, and the rest of the continent would suffer airport shutdowns and potentially tainted crops and water, but we’d all be alive. But this hasn’t stopped people from freaking out about—and, in some dark circles of the internet, fawning over—North America’s crusty pustule.

At this point in time, we are much more likely to be consumed in a wildfire or submerged in poopy floodwaters during this unusually active hurricane season than we are to witness all life on Earth consumed by a red river of magma. But the persistent fear of the supervolcano has erupted again, and again, and again. At this point, it’s not worth debunking. Rather, it’s time to ask: What fiery caldera did these myths even emerge from?

2000: Back at the turn of the new century, David Keys wrote a popular book, Catastrophe, about another “super” volcano—one that erupted over Java in 535 A.D. and really did block out the sun. While the book sparked some debate over the veracity of its claims, it brought the notion of a supervolcano and the related super-eruption into the public consciousness. (Though, it should be noted, this wasn’t the first time the word was ever used. According to the Oxford English Dictionary, supervolcano appears to have been first used back in 1925, as a description of a sunset in a travel book.)

2002: From there, the notion of a supervolcano was picked up and spread by R.B. Trombley, head of the “International Volcano Research Centre.” The name sounds grand, but the operation was actually headquartered in a trailer. Trombley claimed he could predict the next eruption of the Yellowstone supervolcano using his special earthquake-prediction software. According to Wired, in 2002 Trombley authored the “first truly geological publication focused on a ‘supervolcano’ that can be found in Google Scholar.” But “[v]olcanologists recognized in their field have questioned Trombley's credentials and his methodology,” the Arizona Republic wrote in 2010.

2004: Finally, super-eruptions made their way into the mainstream academic literature associated with Yellowstone, via a 2004 report in the Bulletin of Volcanology. In this study, B.G. Mason and his co-authors assessed hotspots throughout history—and called out researchers who used the “qualitative, but highly evocative, terms ‘supereruption’ and ‘supervolcano’ ” in research on past volcanic eruptions.

2005: But the Yellowstone supervolcano really had its moment when the BBC and the Discovery Channel teamed up to air Supervolcano, a 2005 factual drama about a hypothetical future where Yellowstone erupts. “The beauty of America's Yellowstone National Park masks one of the rarest and most destructive forces on Earth—a supervolcano,” the promotional material read.

While there isn’t publicly available data about how many people watched the documentary, it’s clear it had an impact. Use of the word supervolcano has only continued to rise since the docudrama’s release, according to a 2013 Wired article on the etymology of the word. For a stylish graphical representation of this point, I give you this Google N-Gram data.

April 2014: When the biggest earthquake in more than two decades hit Yellowstone, reports of animals fleeing the supervolcano’s path proliferated online. But as Slate wrote at the time, this was far from the case. Animals, for one, don’t actually have spidey senses that allow them to run away from impending deep-earth doom. And, as we now know for certain, the supervolcano definitely didn’t erupt that un-fateful day in 2014.

July 2017: Despite repeated false predictions, supervolcano fears have only heated up in 2017. In July, a 5.8-magnitude earthquake was recorded in western Montana, which butts up against the national park. The U.S. Geological Survey later attributed the admittedly unusual seismic event to the natural slip-and-slide of faults—not a foreboding rumble of Yellowstone magma.

September 2017: But the Montana earthquake hysteria was only compounded by the fact that Yellowstone later recorded an epic “earthquake swarm,” with more than 2,300 low-magnitude tremors between June and August. Several outlets, including Newsweek, naturally capitalized on this event, writing about the geographic jitters. While a source is quoted as saying, literally, that this is “nothing out of the ordinary,” our emotions were anything but.

October 2017: We’ve come full circle and are back in Spooktober 2017. The latest quiver was stirred when researchers from Arizona State University presented new findings about Yellowstone’s caldera at a volcanology conference. (Disclosure: ASU is a partner with Slate and New America in Future Tense.) Essentially, the scientists found that past eruptions on the site happened more quickly than many thought possible. “As such, scientists are just now starting to realize that the conditions that lead to supereruptions might emerge within a human lifetime,” the New York Times reported. Clearly, that’s far from saying that such an event will occur in our human lifetime, yet supervolcano fears were stoked once again.

Ultimately, a large-scale volcanic eruption wouldn’t be pretty. People would die and people would suffer, just as they did in the numerous earthquakes, hurricanes, and other natural disasters this year. But, at least for now, the Yellowstone “supervolcano” seems pretty regular—and its rumblings nothing to lose sleep over.

Oct. 13 2017 3:00 PM

Airbnb Is Opening an Apartment Building Near Disney World

Disney World will soon see a bit of Silicon Valley in its backyard. Airbnb is teaming up with a Miami-based developer, Newgard, to construct a 324-room building in Kissimmee, Florida—a few minutes away from the Disney World theme parks. The complex, which is set to open in early 2018, will be called “Niido by Airbnb.”


READ MORE STORIES