Future Tense
The Citizen's Guide to the Future

July 29 2016 6:02 PM

How The Handmaid's Tale Taught Me to Imagine Many Possible Futures

This piece was originally published in New America’s digital magazine, the New America Weekly. It is the first entry in Summer Reading, a series about the books that changed the way in which the writer sees the world off the page.

As a child insomniac, I often read through the night, eager for the next turn of plot or phrase. And so it is perhaps unsurprising that Margaret Atwood’s novels caught my attention with their moody, shadowy covers. The stories sounded forbidden and adult, and the first one I decided to read was The Handmaid’s Tale. It was one of the first times I had read something written in the first person as opposed to the third, and the perspective changed everything. I was seeing the dystopian future through the eyes of Offred, a woman trying to adjust to being enslaved after the hostile takeover of her country. It provided an entirely new level of intimacy with the story, and gave me a roadmap for how to create that intimacy in my own writing.

The Handmaid’s Tale, which Hulu is adapting into a series scheduled to premiere in 2017, takes place in a United States where religious zealots have taken over in a coup. They immediately strip women of their jobs and access to money. They eventually create a new caste system in which women are broken into categories based on childbearing potential and class. Families are split up, and women are sent to training camps to learn their new roles, then assigned to higher-class homes to fill the role of cook, wife, or concubine. At the time, it sounded terrifying and impossible. As the reader, you see only what Offred sees, and her vision is literally blocked by blinders she has to wear when not in the house. She vacillates between accepting her new reality and trying to find a way out. Glimpses of her past are parceled out in recollections, and I hoped for more details with each turn of the page. I couldn’t put it down.

I then read through everything of hers I could find, and discovered how to write about experience through a political lens. Her stories offered firsthand glimpses into feminism and environmentalism as lived through personal experience. As a pessimistic and political child who wrote outraged poems about the environment and justice, I felt I had found a home. The Handmaid’s Tale helped me begin to see how the world around me intersected with many potential future worlds. How every step today can lead you somewhere seemingly improbable tomorrow.

Many years later, I went to see Atwood when she was on tour for Oryx and Crake, another dystopian novel. I asked her why she was drawn to that form—the dark vision of a world torn apart. “I read the newspaper,” was her response.

Revisiting the novel now, I couldn’t help but hear her words again. In the dystopian future, women hold funerals for fetuses. In the too-real present, our Republican vice presidential nominee signed a bill that would require women in his state to have funeral services for their fetuses. Doctors who had provided abortions in the time before the war are executed and hung on a public wall in The Handmaid’s Tale. Republican presidential candidates Ted Cruz and Donald Trump welcomed the endorsement of a prominent anti-choice advocate who has said death is an appropriate penalty for abortion providers.

As I’ve gotten older and lived through a political climate that seems to increasingly legitimize stripping away reproductive rights, this story hangs on the periphery. It’s almost like a dare—try to get here from where you are. When I first read it, that seemed impossible. Now, I’m less sure. The brilliance of the book is that it acts like a warning at a trailhead—beware of what lies ahead.  It makes for a beautiful book precisely because it would make for such a hideous reality.

July 29 2016 3:42 PM

"Real Donkeys" Uses Digital Donkey Translator at London Event (for Some Reason)

We all know the feeling of staring into a donkey's expressive eyes and trying to understand what she's telling us as she hee-haws fervently. So frustrating! Luckily there's a new technology that can help.

Mark Ineson, a "donkey whisperer" who owns a donkey farm in West Yorkshire, England, hosted donkey rides in London's Jubilee Gardens this week that incorporated the digital translations. The technology purports to analyze the donkey sounds and associate them with particular emotions. To be clear, whatever merit this contraption has, a donkey translator is still not a real thing.

The technology firm Design Works collaborated with Ineson to develop the translation system, which turns donkey sounds into full English phrases and sentences like, "I do like to be beside the riverside," and "hold on tight." Design Works director of prototyping Sean Miles told Engineering and Technology Magazine, "We’ve been asked for all sorts of crazy contraptions over the years but getting donkeys to talk was a whole new challenge for us. ... While the technology that identifies sound as a trigger was relatively achievable, figuring out the animals’ emotions was much harder."

That was the easy part for Ineson, who owns 17 donkeys as part of his "Real Donkeys" organization and has worked with donkeys for more than 20 years. "We get the nudges, we get the facial expressions," he told Reuters. "[We] work very closely with them, day in day out, and pick up on their mannerisms, their emotions, what they're thinking basically." The "Real Donkeys" website notes that its stylish inhabitants have won awards like "Best Beach Donkey" and finished runner-up in the Donkey Sanctuary's "best group" award.

Chloe Couchman, a spokesperson for Merlin Events (the group that organized the Jubilee Gardens donkey fete), told ODN, "We know that people love donkeys." Can't argue with that.

July 27 2016 5:19 PM

Facebook Is So Afraid of Controversy, It May Take the News Out of Its Trending News Section

Facebook may have found a solution to the controversy over its “trending” news section. Not a satisfying solution, mind you—in fact, it’s an ugly, compromised, cowardly solution—but one that would at least deflect attention from the feature and head off future charges of editorial bias.

The potential solution is reflected in a test that the company is running on a subset of users, which Facebook confirmed to me on Wednesday. Mashable, earlier in July, reported on what appears to have been a crude, prior version of the test, and Huffington Post noticed the updated version on Tuesday. At least two of my Slate colleagues are now seeing it in their own feeds.

As a reminder, the trending box that got the company in so much trouble looked like this:

Facebook's current trending news feature includes a headline for each story. (Screenshot: Facebook.com)

For those in the test group, the trending box now looks like this:

The version Facebook is testing strips the keywords of context. (Screenshot: Facebook.com)

Spot the difference? The article summaries are gone, leaving only the keywords without context. In their place is a number showing how many people are talking about each keyword on Facebook.

The test is noteworthy for what it reveals about the company’s approach to controversy and how it perceives its role in the media. It’s noteworthy even if Facebook decides not to go through with these changes, but especially if it does.

The current trending feature is a fascinating thing—a mashup of algorithmically highlighted, of-the-moment topics, including celebrity gossip and viral memes, with human-written headlines so dry and passive and anodyne that they read like relics of a bygone print era. But it’s that very hybrid of machine and human selection that opened Facebook to criticism when former contractors told Gizmodo that their own judgments and biases came into play much more than Facebook had previously let on.

The flap’s political dimension was largely trumped up, as even Glenn Beck agreed. Yet it brought to light a very real tension between Facebook’s self-styled image as a “neutral” tech platform and the reality of its emergence as perhaps the country’s most influential arbiter of news. (Here’s a primer on the controversy for those who never quite got what the fuss was about.)

The outcry from political conservatives, as well as from others (like me) who scrutinize the company’s influence on the media, presented Facebook with a choice. It could step up, acknowledge the role human judgment plays in its products (and not just the trending news box), and take steps to make sure that judgment is being applied with rigor and care. Or it could shrink from the controversy, pulling the human decision-making safely behind the curtain of its code.

Facebook appears to have chosen ... both. A promise from Mark Zuckerberg to investigate the claims and a high-profile summit with prominent conservatives both defused some of the faux outrage and served as a tacit admission that the company’s human leaders bear ultimate responsibility for the machinations of its algorithm. The company followed up by adding political bias to its agenda for employee bias training and publishing a statement of values for its News Feed algorithm.

But there remained the problem of the trending news feature, a relatively insignificant part of the Facebook app and website that is now a target for future allegations of bias, real or imagined. One option would have been to make it better, and more significant, by hiring experienced journalists to turn it into a sort of curated, public front page for the News Feed. But that would have made it even more of a lightning rod.

Instead, Facebook first announced a set of tweaks to its trending news guidelines that reduced at least the appearance of human involvement, if not the reality of it. Now, it may be headed further in that direction. Reducing that box to a series of keywords and engagement numbers, sans context, would make it less noticeable, less user-friendly, and ultimately less interesting. It would be like Twitter’s trending topics or Google Trends—only uglier and in no discernible order, rendering it useless even as a leaderboard. I would be very surprised if Facebook’s testing doesn’t show a marked decline in engagement with the feature.

Yet that may all be worth it to Facebook simply to minimize the risk of another brouhaha over its news judgment. If so, that will be understandable on some level: This is hardly a core feature of its service, and it’s not a hill that Facebook wants to die on. But it will also signal that the company is willing to compromise its product in order to avoid offending anyone.

Make no mistake, Facebook will still be exercising editorial judgment—if not in the trending box, then in the values that shape its News Feed algorithm, or in its treatment of live videos that show young men being shot by police officers. And those judgments will still be subject to criticism. But those are choices Facebook can’t avoid making, because the News Feed and live video are central to its business. The trending section, in its current form, is not—and if these changes go through, it never will be.

July 27 2016 3:34 PM

Future Tense Newsletter: Lessons of the DNC Hack

Greetings, Future Tensers,

Late Friday, WikiLeaks released a cache of almost 20,000 emails from the Democratic National Committee, a trove that included a great deal of personally identifying information. While it’s difficult to definitively attribute cyberattacks like this one, evidence increasingly suggests that Russia was responsible. Laura K. Bate of the Cybersecurity Initiative at New America outlines six ways that the United States might respond, from sanctions to retaliation to doing nothing at all. (Disclosure: New America is a partner with Slate and Arizona State University in Future Tense.) That last choice would be a problematic one, though: If the U.S. chooses not to act, Bate warns, it risks setting “a very dangerous precedent.”

The safest move, of course, is to not get hacked in the first place, and for that, two-factor authentication is still your best bet. Nevertheless, recent research indicates that authentication by text message is much less secure than we’d like to believe. Even if you avoid that, one way or another, hacking happens, which is why cybersecurity expert Josephine Wolff argues that the DNC shouldn’t have even been maintaining its own email server. While an individual Gmail account can be breached, Wolff writes, it’s much harder to grab data from a whole organization’s correspondence, which is what appears to have happened here.

Here are some of the other stories that we read while waiting for chip card charges to go through:

  • Racism: To evaluate a group of people based on their inventions is to misunderstand how innovation happens—and how credit tends to get assigned.
  • Dead Media: The VCR, which had a 40-year run, is finally going out of production. A scholar of the technology explores its legacy.
  • Complexity: Samuel Arbesman discusses the difficulty of understanding modern technology—and thinks about how we can model those systems to help us understand them better.
  • Bioethics: Gene-drive technology allows scientists to meddle with the ordinary probabilities of genetic inheritance. Are we doing enough to control it?

Jacob Brogan
for Future Tense

July 27 2016 10:10 AM

Why Didn’t Google Include Donald Trump in Its List of Presidential Candidates?

Early Wednesday morning, a Columbus, Ohio–based NBC affiliate identified a peculiar quirk in Google’s results. When you plug the term “presidential candidates” into the search engine, an info box of “Active Campaigns” auto-populates at the top of the page. Three (mostly) familiar faces show up in that box: Hillary Rodham Clinton, Bernie Sanders, and Jill Stein. Donald Trump—along with Libertarian candidate Gary Johnson and cult favorite contenders such as Vermin Supreme—was nowhere to be found.

Though Donald Trump may not have a real campaign by any ordinary political standard, he clearly deserves to be on that list, as does Johnson, who has performed reasonably well in several recent polls. It’s not immediately clear what’s happening here, but it seems more likely that it’s the result of some glitch in the info box algorithm than of malicious manipulation. While several users on Reddit—a site that has, as Benjy Sarlin has reported, been taken over by Trump supporters—blame these results on bias or institutional censorship, it’s probably not that simple. Among other things, the presence of Sanders—whose campaign is, of course, definitively no longer active, to his supporters’ disappointment—makes it more likely that the site is simply pulling its data from bad sources.

The problem is that Google doesn’t tell you where it’s getting its information from—not here, at any rate. In the absence of such reference points, it’s hard to figure out where things went wrong. As Mark Graham has shown in Slate, the context for Google’s info boxes about other topics—Graham focuses on cities—is often similarly opaque. While Google probably doesn’t hide its sourcing in order to deceive users, that ambiguity nevertheless serves its purposes. By presenting information automatically and without clear reference points, the company presents itself as a godlike font of all knowledge, almost Delphic in its capacity for oracular revelation.

Tellingly, even when Google does explain where it’s pulling data from, it still goes wrong sometimes. Indeed, as a group of researchers recently demonstrated in Future Tense, that’s been an issue throughout the campaign. Simply put, Google isn’t great at autonomously explaining candidates’ positions, and its attempts to do so may end up introducing bias into the results it displays, despite its stated desire to do the opposite. A similar fuzziness plays out when Google presents campaign finance statistics, as I’ve written before.

As of 10 a.m. Wednesday, the “Active Campaigns” box was no longer in place. Below the spot it had once been, the NBC affiliate’s report showed up atop a list of related news stories.

Update, July 27, 3:00 p.m.: A Google representative sent the following statement to Slate in response to an earlier request for comment:

We found a technical bug in Search where only the presidential candidates participating in an active primary election were appearing in a Knowledge Graph result. Because the Republican and Libertarian primaries have ended, those candidates did not appear. This bug was resolved early this morning.

The info box is back in place. It now shows Trump, Clinton, Johnson, and Stein.

July 27 2016 8:32 AM

All Maps Are Biased. Google Maps’ New Redesign Doesn’t Hide It.

On Monday, Google rolled out its new Maps design. You’ve probably already forgotten what the old one looked like, but the new version is cleaner and makes more sophisticated use of its power to show different features at different zoom levels.

It also represents the company’s ongoing efforts to transform Maps from a navigational tool to a commercial interface and offers the clearest proof yet that the geographic web—despite its aspirations to universality—is a deeply subjective entity.

Instead of promoting a handful of dots representing restaurants or shops at the city-view level, the new interface displays orange-colored “areas of interest,” which the company describes simply as “places where there’s a lot of activities and things to do.” In Los Angeles, for example, there’s a big T of orange blocks around Wilshire Boulevard and Vermont Avenue in Koreatown, and again on Wilshire’s Miracle Mile, stretching up La Brea Avenue.* In L.A., areas of interest tend to cling to the big boulevards and avenues like the bunching sheath of an old shoelace. In Boston, on the other hand, they tend to be more like blocks than strips. In Paris, whole neighborhoods are blotted orange.

Roads and highways, meanwhile, take on a new, muted color in the interface. This marks a departure from Google’s old design, which often literally showed roads over places—especially in contrast to Apple Maps, as cartographer Justin O’Beirne has shown. The new map is less about how to get around than about where to go.

“Areas of interest,” the company’s statement explains, are derived with an algorithm to show the “highest concentration of restaurants, bars, and shops.” Google candidly explains that in high-density areas, it is using humans to develop these zones. Algorithms, of course, are tuned by human engineers. But like Facebook with its News Feed, Google has decided that some attributes of the digital world need a firsthand human touch.
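
Google hasn’t published how that density calculation works, but as a rough illustration of the general idea, here is a toy sketch that buckets points of interest into a coarse grid and keeps the fullest cells. The coordinates, cell size, and cutoff below are invented for the example; this is not Google’s method.

    # Toy sketch of a density map: count points of interest per grid cell
    # and keep the cells with the highest concentrations. Cell size and
    # cutoff are arbitrary illustration values, not Google's.
    from collections import Counter

    def dense_cells(places, cell_size=0.005, keep=3):
        """Return the grid cells containing the most (lat, lng) places."""
        counts = Counter(
            (int(lat // cell_size), int(lng // cell_size))
            for lat, lng in places
        )
        return counts.most_common(keep)

    # Hypothetical points clustered along one busy stretch plus an outlier.
    sample = [(34.0622, -118.3003 + i * 0.0002) for i in range(30)]
    sample += [(34.0900, -118.3500)]
    print(dense_cells(sample))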

Boston. (Screenshot via Google Maps)

You can learn a lot about a city from its patterns of commercial development, as a planner and a visitor. To me, Google’s new interface is an invitation to explore. Commercially busy districts are the ones where the people are. That’s good walking. It’s a flâneur’s guide to the city.

That’s not all, of course. Zoom in further to an “area of interest” and commercial establishments pop out like mushrooms after the rain. It’s been only seven years since Google added places to its maps, and for several years, the company stuttered through a series of half-measures to integrate commerce and mapping. But with Google My Business, which debuted in 2014, the company has made it easier for retailers, restaurants, concert venues, and everyone else to update their own information in the map. The platform has since absorbed much of the Yellow Pages and more. The interface shows virtually everything about a restaurant or store that Yelp does—and also shows what times of day people visit and, in some cases, interior maps.

Google seems to be betting that its map, as much as its search function, will lead you to spend money in the real world. Mobile usage surpassed desktop usage for Google Maps way back in 2011, and globally, consumers are buying more than six times more smartphones than computers. The more we research our destinations on the go, the greater the influence of the map on real-life commerce.

Even with its sliding scales, Google Maps can’t fit every shop in Tokyo on a two-dimensional map. So who gets a spot? It’s not an obvious choice: Analyzing Apple’s and Google’s maps of New York and London, O’Beirne found that the two companies’ maps had just 10 and 12 percent of their place labels in common. (Likewise, different people will have different businesses pop out at them—try it with a friend.)

With “areas of interest,” Google expands its geographic influence with methods that aren’t totally obvious. In New York, it’s not clear to me why 2nd Avenue in the East Village (orange) takes precedence over 1st Avenue (white), or why SoHo’s Prince Street (orange) takes precedence over Spring Street (white). And yet: if I had an afternoon in a new city, I’d sooner take a walk in an “area of interest” than elsewhere. Google Maps aims to capture the experiences of city-dwellers with its choices, and it will shape them too.

Tourist office paper maps have long done something similar, highlighting or enlarging commercial streets where their sponsors ply their trade. But the scale of Google’s enterprise, obviously, is quite different.

I’m sure internet cartographers are hard at work analyzing the distribution of “areas of interest” and where they fall. One thing I’ve noticed so far is that they don’t correspond to commercial density in a way that is neat or simple. Broadway in New York’s SoHo is not an “area of interest,” though it is among the busiest commercial thoroughfares in the world. It doesn’t need Google’s seal of approval. But why doesn’t it have one?

Mapping has always been as much art as science, less mathematical or objective than it purports to be. The new Google Maps makes that easy to remember.

*Correction, July 27, 2016: In an earlier version of this post, Wilshire Boulevard was misspelled.

July 26 2016 2:35 PM

It’s Official: Using Text Messages to Secure Your Passwords Is a Bad Idea

It’s hard to know how to protect your personal security online with data breaches happening at businesses and institutions all the time. But one thing you may have heard, including on Slate, is that enabling “two-factor authentication” (also called “multi-factor authentication” or “two-step verification”) is a relatively easy way to secure your digital accounts. This is absolutely true, but unfortunately nothing in security is ever quite as easy as people would want it to be.

On Monday, the National Institute of Standards and Technology released a draft of its new proposed Digital Authentication Guideline 800-63B. The document includes a lot of updates and changes, but one important one is a shift away from recommending SMS text messages as one of the “factors” in two-factor authentication. The most mainstream form of the security precaution up until now has been signing into a service with your username and password and then entering a onetime code received through SMS to complete the login process. The idea is that even if someone trying to access your account knows your username and password, it’s unlikely that they will also have access to your phone to see the code that’s texted to you.

Security researchers have become increasingly concerned about this system, though, as hackers find more and more ways to remotely access SMS texts. Additionally, as VoIP communication services (Google Voice, Skype, etc.) have proliferated, it has become harder to assess whether an SMS message is truly being sent over the cell network or whether it is being funneled through other transmission protocols with varying levels of security. The draft guidelines say, “Due to the risk that SMS messages may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators.”

NIST’s guidelines, which are directed at federal agencies, aren’t flat-out banning SMS as an authentication factor right now. But the draft does warn that things will eventually move in that direction and that SMS “may no longer be allowed in future releases of this guidance.” The idea right now is to discourage agencies from making new investments in two-factor infrastructure that involves SMS and to invest instead in other authentication options like biometrics, secure mobile apps that generate one-time codes, cryptographic chips, or dongles that generate single-use codes. The guidelines are basically encouraging futureproofing, and are acting as a warning to existing SMS-based systems that things will eventually need to change.
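
For the curious, this is roughly what those app-generated codes involve under the hood: a minimal sketch of the time-based one-time password (TOTP) scheme described in RFC 6238, which most authenticator apps implement. The code is derived from a shared secret and the current clock, so nothing has to travel over the cellular network. This is an illustration, not NIST’s reference implementation, and the base32 secret below is a made-up example, not a real credential.

    # Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
    # the kind of code an authenticator app computes locally instead of
    # receiving over SMS. The secret is an invented example value.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_base32, digits=6, period=30):
        """Derive the current one-time code from a shared secret and the clock."""
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int(time.time()) // period           # 30-second time step
        msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # prints a six-digit code that changes every 30 seconds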

“What we’re seeing now is that the investment required by a malicious actor [to hack SMS] is going down, it’s getting easier to do,” said Michael Garcia, the deputy director of NSTIC, an authentication research program at NIST. “The scalability of that is sufficiently high that it’s really becoming a problem. It’s certainly better than just a password to use SMS and password, but it’s insufficiently secure for a lot of applications.”

Going forward, NIST wants to encourage investment in security technology that makes it easier to switch between authentication factors, so if the efficacy of one approach is degraded by bad actors, a different one that still offers robust protection can take its place. For groups that already have SMS two-factor in place, “We’re not saying federal agencies drop SMS, don’t use it anymore,” Garcia notes. “But we are saying, if you’re making new investments you should consider that in your decision-making.”

With this generation of proposed guidelines, NIST is trying a new system for offering public previews of its drafts, so it can get additional comments and suggestions before a draft enters the standard open comment period, which will start for these proposals at the end of the summer. Garcia estimates that the guidelines will be revised and approved by the end of the year, depending on how much feedback NIST gets during the preview and open comment periods. And these recommendations don’t directly apply to the services you use from nongovernment companies like Facebook or Google. But eventually you should see these best practices trickling down to the products you use every day.

July 25 2016 2:52 PM

Was Russia Behind the DNC Leaks? It Sure Seems Like It.

On Friday, 19,252 emails sent by Democratic National Committee officials leaked on the controversial publishing platform WikiLeaks. The contents of the emails rocked the DNC, led to chair Debbie Wasserman Schultz’s resignation, and created a potentially damaging climate for Hillary Clinton’s presidential run. It’s a lot, but, incredibly, there’s much more to all of this.

The DNC announced in June that it had been hacked and was working with the well-known security firm CrowdStrike to investigate the breach. CrowdStrike said from the beginning that it had discovered two hacking groups lurking on the DNC’s networks—one that had been there for more than a year and one that had cropped up recently. Though it is difficult to definitively determine the source of sophisticated cyberattacks, the firm said it had strong forensic evidence that both hacking groups were tied to the Russian government, and it published its findings in a June report. CrowdStrike concluded that one network intruder was linked to Russia’s Federal Security Service and the other to the GRU military intelligence group.

But then an entity came forward on June 15, claiming to have hacked the DNC alone. “Guccifer 2.0” started publishing blog posts and posting stolen DNC documents. Soon after, WikiLeaks tweeted about a potential data dump, posting an encrypted 88-gigabyte “insurance” file for people to torrent. The idea was that WikiLeaks could publish a decryption key if it ever wanted people to access the trove (which is probably the stolen DNC files). In July, The Hill also published some DNC documents, writing, “Guccifer 2.0, the hacker who breached the Democratic National Committee, has released a cache of purported DNC documents to The Hill in an effort to refocus attention on the hack.”

But CrowdStrike was always skeptical of Guccifer 2.0. As it wrote in reaction to the original Guccifer 2.0 blog post:

CrowdStrike stands fully by its analysis and findings identifying two separate Russian intelligence-affiliated adversaries present in the DNC network in May 2016. ... Whether or not this [Guccifer 2.0 WordPress] posting is part of a Russian Intelligence disinformation campaign, we are exploring the documents’ authenticity and origin. Regardless, these claims do nothing to lessen our findings relating to the Russian government’s involvement, portions of which we have documented for the public and the greater security community.

Multiple prominent CrowdStrike competitors, including Mandiant and Fidelis Cybersecurity, independently confirmed CrowdStrike’s findings. One firm, ThreatConnect, laid out the evidence on both sides and put extensive energy into attempting to prove that Guccifer 2.0 is a real hacker. The firm concluded, though, that the evidence is sketchy:

There appears to be strong, yet still circumstantial, evidence supporting the assertions that Guccifer 2.0 is part of a [denial and deception] campaign, and not an independent actor. The most compelling arguments for this conclusion are the previously identified Russian D&D campaigns, coupled with remaining questions related to Guccifer 2.0’s persona and backstory.

Similarly, Michael Buratowski, a senior vice president at Fidelis, wrote in June, “Based on our comparative analysis we agree with CrowdStrike. ... The malware samples contain data and programing elements that are similar to malware that we have encountered in past incident response investigations and are linked to similar threat actors.” He added, “We believe this settles the question of ‘who was responsible for the DNC attack.’ ”

In the wake of last week’s data leaks, Democrats are rallying behind this idea and the FBI has announced that it is investigating the hack. The Clinton campaign, which itself was also allegedly breached by Russian hackers, has said that it believes Russia was behind the hacks, and Nancy Pelosi is on board, too. An additional narrative has emerged, exploring potential ties and sympathies between Republican presidential nominee Donald Trump and Russian President Vladimir Putin. It is troubling to consider that Russia may be using hacking to impact a high-profile democratic election. As security researcher Thomas Rid wrote on Motherboard, “American inaction now risks establishing a de facto norm that all election campaigns in the future, everywhere, are fair game for sabotage—sabotage that could potentially affect the outcome and tarnish the winner’s legitimacy.”

Putin’s spokesman is refusing to comment and the Trump campaign is firmly denying any involvement or collaboration with the Russian government. Edward Snowden pointed out on Monday that the National Security Agency probably has bulk surveillance of Web traffic surrounding the hack and could likely produce independent metadata pointing to the real culprit or culprits. Snowden added, though, that the office of the Director of National Intelligence generally doesn’t weigh in or offer assistance on these types of investigations.

Importantly, though, some still have doubts about the evidence that Russia was actually behind the hacks and leaks. It wouldn’t be hard to imagine that an embarrassed Democratic Party is simply seizing on the Russian explanation as a way to distract and deflect from the deeply problematic DNC behavior exposed by the leaks. One outspoken skeptic is Jeffrey Carr, author of Inside Cyber Warfare. Before WikiLeaks published the DNC files, he wrote on Medium, “It’s important to know that the process of attributing an attack by a cybersecurity company has nothing to do with the scientific method. ... Neither are claims of attribution admissible in any criminal case, so those who make the claim don’t have to abide by any rules of evidence (i.e., hearsay, relevance, admissibility).” And even if the Russian government did hack the DNC, some, like journalist and activist Glenn Greenwald, caution against concluding too quickly that Russia invented Guccifer 2.0. They could still be separate entities.

Whoever is behind the hacks clearly wanted to see what was going down at the DNC. And the leaker seemingly wanted to inject a little chaos into the Democratic National Convention, given the timing of the WikiLeaks post. The people or groups involved in this whole debacle are certainly succeeding at stirring things up.

July 22 2016 5:36 PM

WikiLeaks’ DNC Email Trove Includes Social Security Numbers, Credit Card Info

On Friday, the publishing platform WikiLeaks posted 19,252 searchable emails, including 8,034 attachments, from inside the Democratic National Committee. WikiLeaks says that the emails are from the accounts of seven top DNC officials from the period between January 2015 and May 2016. They are part of WikiLeaks’ “Hillary Leaks” initiative.

The emails contain interesting and potentially important political information, but they also include data that is sensitive in a different way. As Gizmodo points out, the data trove is easily searchable for personal information like credit card numbers, birthdays, and even Social Security numbers.

WikiLeaks advocates for radical transparency, and its releases are often at odds with personal privacy. The organization is frequently accused of doing more damage than good with its leaks, as in the 2010 diplomatic cable release and the Afghan war documents leak. In both cases, politicians and government agencies said that WikiLeaks had put people’s lives at risk, including military personnel, human rights activists, informants, and journalists.

It’s unclear whether this assertion was true, and it’s difficult to assess definitively. The situation is controversial because it’s easy to say from an ideological standpoint that holding powerful entities accountable is worth exposing a few people to credit card fraud or identity theft. Of course, you might not see things that way if your Social Security number were available online right now.

July 22 2016 10:52 AM

Canada’s Carbon Policies Try to Provide Something for Everyone

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Tuesday, July 26, Future Tense and the Wilson Center’s Canada Institute will host an event in Washington, D.C., on what it will take for North America to fulfill its energy potential. For more information and to RSVP, visit the New America website.

In the last two years, we’ve seen big commitments in the fight against climate change, including global initiatives such as the Paris Accords and national efforts like the U.S. Clean Power Plan. However, in the United States, momentum at a national level is incredibly difficult to sustain, in large part because of partisan politics and regional differences. The Clean Power Plan is languishing, after a February ruling by the Supreme Court, and the Waxman-Markey cap-and-trade bill met its end in the Senate in 2009, derailing what had been a steady bipartisan march toward a federal climate policy. California, meanwhile, remains the standard-bearer for sub-federal carbon reduction measures.

While federal efforts to arrest climate change have slowed in the United States, a majority of Canadians will soon live under some type of carbon pricing regime. These policies are designed at the provincial level and not coordinated by the federal government. This has led to uneven policies that have far-ranging economic consequences for producers and consumers throughout Canada. The progress in Canada is better than the stagnation in the United States, but it’s still inefficient, and the lack of coordination at a federal level is limiting the country’s ability to slow carbon pollution.

Because of the provinces’ constitutional authority over most types of carbon-reduction policies, they must take the lead on these issues. Alberta and British Columbia have implemented or are implementing carbon taxes. Quebec has linked its carbon market to California’s and implemented a variety of carbon reduction measures. Ontario recently joined the pack with a June 2016 Climate Action Plan that introduced a host of new instruments and incentives to drive down carbon emissions. The Ontario plan follows the now-familiar policy formula of either a carbon tax or cap-and-trade system, plus carbon-reduction incentives that are intended to blunt the realization that the inevitable result of either tax or cap-and-trade is higher consumer prices. Increased fuel prices are the obvious effect, but inevitably, it will cost more to emit carbon in the production of any good or service, and this expense is passed on to consumers. Yes, either the tax or the cap creates an incentive to invest in greener technologies that could eventually lower prices and taxes, but these results are mostly speculation with little testing in the real world.

In Ontario, we see a number of the other policy “innovations” designed to generate public support and show the public how cap-and-trade funds are being reinvested. These new measures include electric vehicle incentives, cash for clunkers, and funds to reduce the carbon footprint of cities, businesses, and homes. Of note, Ontario is offering some of the most generous electric vehicle subsidies in the world. Soon, buyers of electric vehicles in Ontario will receive between C$3,000 and C$14,000 in credits, and new funds will be allocated to private and public charging stations. The province is even changing the building code to ensure that all new home construction includes a 50-amp, 240-volt garage plug for car charging.

By offering a wide array of positive incentives for Ontarians, the government hopes it can more aggressively ratchet down its carbon emissions and avoid the morning-after effect that British Columbia is now experiencing. BC enacted the continent’s first carbon tax in 2008. Since then, the province has raised the tax from C$10 to C$30 per ton of CO2. However, additional increases have been put on hold until at least 2018 to ensure that, as BC’s environment minister explains, businesses remain competitive and consumers can afford the increase.

Even supporters of carbon pricing criticize incentive programs as reheated, inefficient, and unfair. They are also an attempt to divert public attention away from the risks of implementing a number of relatively untested policies, especially in a Canadian economy hit hard by low commodity prices and a weak dollar, and a manufacturing sector that never recovered from the 2008–2009 recession.

There is no doubt that the cost of climate change to future generations is worth the most serious investment that we can offer today, but the uncertainties of many of the policy “innovations” threaten to generate unnecessarily negative effects in the name of doing good. Among the most troubling is the lack of coordination between jurisdictions, which causes business and investment to move from more expensive to cheaper areas. In that same vein, there has been little research on the indirect effects of carbon pricing on emissions-intensive, trade-exposed industries. At what point will the carbon price become too high to grow and ship a bushel of wheat from Saskatchewan to New York City? And, at the core of these sub-federal initiatives, there are questions about whether and how carbon taxes will be reinvested or redistributed. Even more uncertain are the mechanisms for pricing, trading, and regulating carbon credits for cap-and-trade systems, not to mention verification and compliance costs.

Carbon reduction policies are necessary—the sooner the better—but planning and coordination will be critical so that the effects are as minimally disruptive as possible. With the tidal wave of new sub-federal policies, the need for coordination is acute. Esoteric mechanisms designed to buy public support or mask a lack of thorough planning are not innovation—they just make it harder to focus on policies and practices that actually work.
