Future Tense
The Citizen's Guide to the Future

July 27 2016 5:19 PM

Facebook Is So Afraid of Controversy, It May Take the News Out of Its Trending News Section

Facebook may have found a solution to the controversy over its "trending" news section. Not a satisfying solution, mind you—in fact, it’s an ugly, compromised, cowardly solution—but one that would at least deflect attention from the feature and head off future charges of editorial bias.

The potential solution is reflected in a test that the company is running on a subset of users, which Facebook confirmed to me on Wednesday. Mashable earlier this month reported on what appears to have been a crude, prior version of the test, and Huffington Post noticed the updated version on Tuesday. At least two of my Slate colleagues are now seeing it in their own feeds.


As a reminder, the trending box that got the company in so much trouble looked like this:

Facebook's current trending news feature includes a headline for each story. (Screenshot via Facebook.com)

For those in the test group, the trending box now looks like this:

The version Facebook is testing strips the keywords of context. (Screenshot via Facebook.com)

Spot the difference? The article summaries are gone, leaving only the keywords, without context. In their place is a number showing how many people are talking about each keyword on Facebook.

The test is noteworthy for what it reveals about the company’s approach to controversy, and how it perceives its role in the media. It’s noteworthy even if Facebook decides not to go through with these changes, but especially if it does.

The current trending feature is a fascinating thing—a mashup of algorithmically highlighted, of-the-moment topics, including celebrity gossip and viral memes, with human-written headlines so dry and passive and anodyne that they read like relics of a bygone print era. But it's that very hybrid of machine and human selection that opened Facebook to criticism when former contractors told Gizmodo that their own judgments and biases came into play much more than Facebook had previously let on.

The flap’s political dimension was largely trumped up, as even Glenn Beck agreed. Yet it brought to light a very real tension between Facebook's self-styled image as a "neutral" tech platform and the reality of its emergence as perhaps the country's most influential arbiter of news. (Here’s a primer on the controversy for those who never quite got what the fuss was about.)

The outcry from political conservatives, as well as from others (like me) who scrutinize the company's influence on the media, presented Facebook with a choice. It could step up, acknowledge the role human judgment plays in its products (and not just the trending news box), and take steps to make sure that judgment is being applied with rigor and care. Or it could shrink from the controversy, pulling the human decision-making safely behind the curtain of its code.

Facebook appears to have chosen ... both. A promise from Mark Zuckerberg to investigate the claims and a high-profile summit with prominent conservatives both defused some of the faux outrage and served as a tacit admission that the company's human leaders bear ultimate responsibility for the machinations of its algorithm. The company followed up by adding political bias to its agenda for employee bias training and publishing a statement of values for its news feed algorithm.

But there remained the problem of the trending news feature, a relatively insignificant part of the Facebook app and website that is now a target for future allegations of bias, real or imagined. One option would have been to make it better, and more significant, by hiring experienced journalists to turn it into a sort of curated, public front page for the news feed. But that would have made it even more of a lightning rod.

Instead, Facebook first announced a set of tweaks to its trending news guidelines that reduced at least the appearance of human involvement, if not the reality of it. Now, it may be headed further in that direction. Reducing that box to a series of keywords and engagement numbers, sans context, would make it less noticeable, less user-friendly, and ultimately less interesting. It would be like Twitter’s trending topics or Google Trends—only uglier and in no discernible order, rendering it useless even as a leaderboard. I would be very surprised if Facebook’s testing doesn’t show a marked decline in engagement with the feature.

Yet that may all be worth it to Facebook simply to minimize the risk of another brouhaha over its news judgment. If so, that will be understandable on some level: This is hardly a core feature of its service, and it’s not a hill that Facebook wants to die on. But it will also signal that the company is willing to compromise its product in order to avoid offending anyone.

Make no mistake, Facebook will still be exercising editorial judgment—if not in the trending box, then in the values that shape its news feed algorithm, or in its treatment of live videos that show young men being shot by police officers. And those judgments will still be subject to criticism. But those are choices Facebook can’t avoid making, because the news feed and live video are central to its business. The trending section, in its current form, is not—and if these changes go through, it never will be.

July 27 2016 3:34 PM

Future Tense Newsletter: Lessons of the DNC Hack

Greetings, Future Tensers,

Late Friday, WikiLeaks released a cache of almost 20,000 emails from the Democratic National Committee, a trove that included a great deal of personally identifying information. While it’s difficult to definitively attribute cyberattacks like this one, evidence increasingly suggests that Russia was responsible. Laura K. Bate of the Cybersecurity Initiative at New America outlines six ways that the United States might respond, from sanctions to retaliation to doing nothing at all. (Disclosure: New America is a partner with Slate and Arizona State University in Future Tense.) That last choice would be a problematic one, though: If the U.S. chooses not to act, Bate warns, it risks setting “a very dangerous precedent.”


The safest move, of course, is not to get hacked in the first place, and for that, two-factor authentication is still your best bet. Nevertheless, recent research indicates that authentication by text message is much less secure than we’d like to believe. And even if you avoid that, hacking happens one way or another, which is why cybersecurity expert Josephine Wolff argues that the DNC shouldn’t have been maintaining its own email server at all. While an individual Gmail account can be breached, Wolff writes, it’s much harder to grab data from a whole organization’s correspondence, which is what appears to have happened here.

Here are some of the other stories that we read while waiting for chip card charges to go through:

  • Racism: To evaluate a group of people based on their inventions is to misunderstand how innovation happens—and how credit tends to get assigned.
  • Dead Media: The VCR, which had a 40-year run, is finally going out of production. A scholar of the technology explores its legacy.
  • Complexity: Samuel Arbesman discusses the difficulty of understanding modern technology—and thinks about how we can model those systems to help us understand them better.
  • Bioethics: Gene-drive technology allows scientists to meddle with the ordinary probabilities of genetic inheritance. Are we doing enough to control it?

Jacob Brogan
for Future Tense

July 27 2016 10:10 AM

Why Didn’t Google Include Donald Trump in Its List of Presidential Candidates?

Early Wednesday morning, a Columbus, Ohio–based NBC affiliate identified a quirk in Google’s results. When you plug the term “presidential candidates” into the search engine, an info box of “Active Campaigns” auto-populates at the top of the page. Three (mostly) familiar faces show up in that box: Hillary Rodham Clinton, Bernie Sanders, and Jill Stein. Donald Trump—along with Libertarian candidate Gary Johnson and cult favorite contenders such as Vermin Supreme—was nowhere to be found.

Though Donald Trump may not have a real campaign by any ordinary political standard, he clearly deserves to be on that list, as does Johnson, who has performed reasonably well in several recent polls. It’s not immediately clear what’s happening here, but it seems more likely that it’s the result of some glitch in the info box algorithm than of malicious manipulation. While several users on Reddit—a site that has, as Benjy Sarlin has reported, been taken over by Trump supporters—blame these results on bias or institutional censorship, it’s probably not that simple. Among other things, the presence of Sanders—whose campaign is, of course, definitively no longer active, to his supporters’ disappointment—makes it more likely that the site is simply pulling its data from bad sources.


The problem is that Google doesn’t tell you where it’s getting its information from—not here at any rate. In the absence of such reference points, it’s hard to figure out where things went wrong. As Mark Graham has shown in Slate, the context for Google’s info boxes about other topics—Graham focuses on cities—is often similarly opaque. While Google probably doesn’t hide its sourcing in order to deceive users, that ambiguity nevertheless serves its purposes. By presenting information automatically and without clear reference points, the company casts itself as a godlike font of all knowledge, almost Delphic in its capacity for oracular revelation.

Tellingly, even when Google does explain where it’s pulling data from, it still goes wrong sometimes. Indeed, as a group of researchers recently demonstrated in Future Tense, that’s been an issue throughout the campaign. Simply put, Google isn’t great at autonomously explaining candidates’ positions, and its attempts to do so may end up introducing bias into the results it displays, despite its stated desire to do the opposite. A similar fuzziness plays out when Google presents campaign finance statistics, as I’ve written before.

As of 10 a.m. Wednesday, the “Active Campaigns” box was no longer in place. Below the spot where it had once been, the NBC affiliate’s report showed up atop a list of related news stories.

Update, July 27, 3:00 p.m.: A Google representative sent the following statement to Slate in response to an earlier request for comment:

We found a technical bug in Search where only the presidential candidates participating in an active primary election were appearing in a Knowledge Graph result. Because the Republican and Libertarian primaries have ended, those candidates did not appear. This bug was resolved early this morning.

The info box is back in place. It now shows Trump, Clinton, Johnson, and Stein.

July 27 2016 8:32 AM

All Maps Are Biased. Google Maps’ New Redesign Doesn’t Hide It.

On Monday, Google rolled out its new Maps design. You’ve probably already forgotten what the old one looked like, but the new version is cleaner and makes more sophisticated use of its power to show different features at different zoom levels.

It also represents the company’s ongoing efforts to transform Maps from a navigational tool to a commercial interface, and offers the clearest proof yet that the geographic web—despite its aspirations to universality—is a deeply subjective entity.


Instead of promoting a handful of dots representing restaurants or shops at the city-view level, the new interface displays orange-colored “areas of interest,” which the company describes simply as “places where there’s a lot of activities and things to do.” In Los Angeles, for example, there’s a big T of orange blocks around Wilshire Boulevard and Vermont Avenue in Koreatown, and again on Wilshire’s Miracle Mile, stretching up La Brea Avenue*. In L.A., areas of interest tend to cling to the big boulevards and avenues like the bunching sheath of an old shoelace. In Boston, on the other hand, they tend to be more like blocks than strips. In Paris, whole neighborhoods are blotted orange.

Roads and highways, meanwhile, take on a new, muted color in the interface. This marks a departure from Google’s old design, which often literally showed roads over places—especially in contrast to Apple Maps, as the cartographer Justin O’Beirne has shown. The new map is less about how to get around than about where to go.

“Areas of interest,” the company’s statement explains, are derived algorithmically to show the “highest concentration of restaurants, bars, and shops.” In high-density areas, Google candidly explains, it uses humans to develop these zones. Algorithms, of course, are tuned by human engineers. But like Facebook with its News Feed, Google has decided that some parts of the digital world need a direct human touch.

Boston. (Screenshot via Google Maps)

You can learn a lot about a city from its patterns of commercial development, whether as a planner or as a visitor. To me, Google’s new interface is an invitation to explore. Commercially busy districts are the ones where the people are. That’s good walking. It’s a flâneur’s guide to the city.

That’s not all, of course. Zoom in further to an “area of interest,” and commercial establishments pop out like mushrooms after the rain. It’s been only seven years since Google added places to its maps, and for several years, the company stuttered through a series of half-measures to integrate commerce and mapping. But with Google My Business, which debuted in 2014, the company has made it easier for retailers, restaurants, concert venues, and everyone else to update their own information in the map. The platform has since absorbed much of the Yellow Pages and more. The interface shows virtually everything about a restaurant or store that Yelp does—and also shows what times of day people visit and, in some cases, interior maps.

Google seems to be betting that its map, as much as its search function, will lead you to spend money in the real world. Mobile usage surpassed desktop usage for Google Maps way back in 2011, and globally, consumers are buying more than six times as many smartphones as computers. The more we research our destinations on the go, the greater the influence of the map on real-life commerce.

Even with its sliding scales, Google Maps can’t fit every shop in Tokyo on a two-dimensional map. So who gets a spot? It’s not an obvious choice: Analyzing Apple’s and Google’s maps of New York and London, O’Beirne found that the two companies’ maps had just 10 and 12 percent of their place labels in common. (Likewise, different people will have different businesses pop out at them—try it with a friend.)

With “areas of interest,” Google expands its geographic influence with methods that aren’t totally obvious. In New York, it’s not clear to me why 2nd Avenue in the East Village (orange) takes precedence over 1st Avenue (white), or why SoHo's Prince Street (orange) takes precedence over Spring Street (white). And yet: if I had an afternoon in a new city, I’d sooner take a walk in an “area of interest” than elsewhere. Google Maps aims to capture the experiences of city-dwellers with its choices, and it will shape them too.

Tourist office paper maps have long done something similar, highlighting or enlarging commercial streets where their sponsors ply their trade. But the scale of Google’s enterprise, obviously, is quite different.

I’m sure internet cartographers are hard at work analyzing where “areas of interest” fall. One thing I’ve noticed so far is that they don’t correspond to commercial density in any neat or simple way. Broadway in New York’s SoHo is not an “area of interest,” though it is among the busiest commercial thoroughfares in the world. It doesn’t need Google’s seal of approval. But why doesn’t it have one?

Mapping has always been as much art as science, less mathematical or objective than it purports to be. The new Google Maps makes that easy to remember.

*Correction, July 27, 2016: This article originally misspelled Wilshire Boulevard.

July 26 2016 2:35 PM

It's Official: Using Text Messages to Secure Your Passwords Is a Bad Idea

It's hard to know how to protect your personal security online with data breaches happening at businesses and institutions all the time. But one thing you may have heard, including on Slate, is that enabling "two-factor authentication" (also called "multi-factor authentication" or "two-step verification") is a relatively easy way to secure your digital accounts. This is absolutely true, but unfortunately nothing in security is ever quite as easy as people would want it to be.

On Monday, the National Institute of Standards and Technology released a draft of its new proposed Digital Authentication Guideline 800-63B. The document includes a lot of updates and changes, but one important shift is a move away from recommending SMS text messages as one of the "factors" in two-factor authentication. The most mainstream form of the security precaution up until now has been signing into a service with your username and password and then entering a one-time code received through SMS to complete the login process. The idea is that even if someone trying to access your account knows your username and password, it's unlikely that they will also have access to your phone to see the code that's texted to you.


Security researchers have become increasingly concerned about this system, though, as hackers find more and more ways to remotely access SMS texts. Additionally, as VoIP communication services (Google Voice, Skype etc.) have proliferated, it has become harder to assess whether an SMS message is truly being sent over the cell network or whether it is being funneled through other transmission protocols with varying levels of security. The draft guidelines say, "Due to the risk that SMS messages may be intercepted or redirected, implementers of new systems SHOULD carefully consider alternative authenticators."

NIST's guidelines, which are directed at federal agencies, aren't flat-out banning SMS as an authentication factor right now. But the draft does warn that things will eventually move in that direction and that SMS "may no longer be allowed in future releases of this guidance." The idea right now is to discourage agencies from making new investments in two-factor infrastructure that involve SMS and to invest instead in other authentication options like biometrics, secure mobile apps that generate one-time codes, cryptographic chips, or dongles that generate single-use codes. The guidelines are basically encouraging future-proofing, and are acting as a warning to existing SMS-based systems that things will eventually need to change.
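For a concrete sense of what those app-generated codes are, most authenticator apps implement the standard time-based one-time password algorithm (TOTP, RFC 6238), which derives a short code from a shared secret and the current time, so no code has to be texted over the phone network. Here is a minimal sketch in Python; the base32 secret is a made-up example, not a real credential:

    # A rough sketch of how an authenticator app computes a time-based one-time
    # password (TOTP, RFC 6238). The base32 secret is a made-up example value.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_base32, digits=6, period=30):
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int(time.time()) // period              # 30-second time step
        message = struct.pack(">Q", counter)              # 8-byte big-endian counter
        digest = hmac.new(key, message, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # a six-digit code that changes every 30 seconds

Because your phone and the service each compute the same code locally from that shared secret, there is no message in transit to intercept or redirect—the weakness the NIST draft flags in SMS delivery.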

"What we’re seeing now is that the investment required by a malicious actor [to hack SMS] is going down, it’s getting easier to do," said Michael Garcia, the deputy director of authentication research program NSTIC at NIST. "The scalability of that is sufficiently high that it’s really becoming a problem. It’s certainly better than just a password to use SMS and password, but it’s insufficiently secure for a lot of applications."

Going forward, NIST wants to encourage investment in security technology that makes it easier to switch between authentication factors, so if the efficacy of one approach is degraded by bad actors, a different one that still offers robust protection can take its place. For groups that already have SMS two-factor in place, "We’re not saying federal agencies drop SMS, don’t use it anymore," Garcia notes. "But we are saying, if you’re making new investments you should consider that in your decision-making."

With this generation of proposed guidelines, NIST is trying a new system for offering public previews of its drafts, so it can get additional comments and suggestions before a draft enters the standard open comment period, which will start for these proposals at the end of the summer. Garcia estimates that the guidelines will be revised and approved by the end of the year, depending on how much feedback NIST gets during the preview and open comment periods. And these recommendations don't directly apply to the services you use from nongovernment companies like Facebook or Google. But eventually you should see these best practices trickling down to the products you use every day.

July 25 2016 2:52 PM

Was Russia Behind the DNC Leaks? It Sure Seems Like It.

On Friday, 19,252 emails sent by Democratic National Committee officials leaked on the controversial publishing platform WikiLeaks. The contents of the emails rocked the DNC, led to chair Debbie Wasserman Schultz’s resignation, and created a potentially damaging climate for Hillary Clinton’s presidential run. It’s a lot, but, incredibly, there’s much more to all of this.

The DNC announced in June that it had been hacked and was working with the well-known security firm CrowdStrike to investigate the breach. CrowdStrike said from the beginning that it had discovered two hacking groups lurking on the DNC’s networks—one that had been there for more than a year and one that had cropped up recently. Though it is difficult to definitively determine the source of sophisticated cyberattacks, the firm said it had strong forensic evidence that both hacking groups were tied to the Russian government, and it published its findings in a June report. CrowdStrike concluded that one network intruder was linked to Russia’s Federal Security Service and the other to the GRU military intelligence group.


But then an entity came forward on June 15, claiming to have hacked the DNC alone. “Guccifer 2.0” started publishing blog posts and posting stolen DNC documents. Soon after, WikiLeaks tweeted about a potential data dump, posting an encrypted 88-gigabyte “insurance” file for people to torrent. The idea was that WikiLeaks could publish a decryption key if it ever wanted people to access the trove (which probably contains the stolen DNC files). In July, the Hill also published some DNC documents, writing, “Guccifer 2.0, the hacker who breached the Democratic National Committee, has released a cache of purported DNC documents to The Hill in an effort to refocus attention on the hack.”

But CrowdStrike was always skeptical of Guccifer 2.0. As it wrote in reaction to the original Guccifer 2.0 blog post:

CrowdStrike stands fully by its analysis and findings identifying two separate Russian intelligence-affiliated adversaries present in the DNC network in May 2016. ... Whether or not this [Guccifer 2.0 WordPress] posting is part of a Russian Intelligence disinformation campaign, we are exploring the documents’ authenticity and origin. Regardless, these claims do nothing to lessen our findings relating to the Russian government’s involvement, portions of which we have documented for the public and the greater security community.

Multiple prominent CrowdStrike competitors, including Mandiant and Fidelis Cybersecurity, independently confirmed CrowdStrike’s findings. One firm, ThreatConnect, laid out the evidence on both sides and put extensive energy into attempting to prove that Guccifer 2.0 is a real hacker. The firm concluded, though, that the evidence is sketchy:

There appears to be strong, yet still circumstantial, evidence supporting the assertions that Guccifer 2.0 is part of a [denial and deception] campaign, and not an independent actor. The most compelling arguments for this conclusion are the previously identified Russian D&D campaigns, coupled with remaining questions related to Guccifer 2.0’s persona and backstory.

Similarly, Michael Buratowski, a senior vice president at Fidelis, wrote in June, “Based on our comparative analysis we agree with CrowdStrike. ... The malware samples contain data and programing elements that are similar to malware that we have encountered in past incident response investigations and are linked to similar threat actors.” He added, “We believe this settles the question of ‘who was responsible for the DNC attack.’ ”

In the wake of last week’s data leaks, Democrats are rallying behind this idea, and the FBI has announced that it is investigating the hack. The Clinton campaign, which itself was also allegedly breached by Russian hackers, has said that it believes Russia was behind the hacks, and Nancy Pelosi is on board, too. An additional narrative has emerged, exploring potential ties and sympathies between Republican presidential nominee Donald Trump and Russian President Vladimir Putin. It is troubling to consider that Russia may be using hacking to influence a high-profile democratic election. As security researcher Thomas Rid wrote on Motherboard, “American inaction now risks establishing a de facto norm that all election campaigns in the future, everywhere, are fair game for sabotage—sabotage that could potentially affect the outcome and tarnish the winner’s legitimacy.”

Putin’s spokesman is refusing to comment and the Trump campaign is firmly denying any involvement or collaboration with the Russian government. Edward Snowden pointed out on Monday that the National Security Agency probably has bulk surveillance of Web traffic surrounding the hack and could likely produce independent metadata pointing to the real culprit or culprits. Snowden added, though, that the office of the Director of National Intelligence generally doesn’t weigh in or offer assistance on these types of investigations.

Importantly, though, some still have doubts about the evidence that Russia was actually behind the hacks and leaks. It wouldn’t be hard to imagine that an embarrassed Democratic Party is simply seizing on the Russian explanation as a way to distract and deflect from the deeply problematic DNC behavior exposed by the leaks. One outspoken skeptic is Jeffrey Carr, author of Inside Cyber Warfare. Before WikiLeaks published the DNC files, he wrote on Medium, “It’s important to know that the process of attributing an attack by a cybersecurity company has nothing to do with the scientific method. ... Neither are claims of attribution admissible in any criminal case, so those who make the claim don’t have to abide by any rules of evidence (i.e., hearsay, relevance, admissibility).” And even if the Russian government did hack the DNC, some, like journalist and activist Glenn Greenwald, caution against concluding too quickly that Russia invented Guccifer 2.0. They could still be separate entities.

Whoever is behind the hacks clearly wanted to see what was going down at the DNC. And the leaker seemingly wanted to inject a little chaos into the Democratic National Convention, given the timing of the WikiLeaks post. The people or groups involved in this whole debacle are certainly succeeding at stirring things up.

July 22 2016 5:36 PM

WikiLeaks’ DNC Email Trove Includes Social Security Numbers, Credit Card Info

On Friday, the publishing platform WikiLeaks posted 19,252 searchable emails, including 8,034 attachments, from inside the Democratic National Committee. WikiLeaks says that the emails are from the accounts of seven top DNC officials from the period between January 2015 and May 2016. They are part of WikiLeaks’ “Hillary Leaks” initiative.

The emails contain interesting and potentially important political information, but they also include data that is sensitive in a different way. As Gizmodo points out, the data trove is easily searchable for personal information like credit card numbers, birthdays, and even Social Security numbers.


WikiLeaks advocates for radical transparency, and its releases are often at odds with personal privacy. The organization is frequently accused of doing more damage than good with its leaks, as in the 2010 diplomatic cable release and the Afghan war documents leak. In both cases, politicians and government agencies said that WikiLeaks had put people’s lives at risk, including military personnel, human rights activists, informants, and journalists.

It’s unclear whether those assertions were true, and it’s difficult to assess definitively. The situation is controversial because it’s easy to say from an ideological standpoint that holding powerful entities accountable is worth exposing a few people to credit card fraud or identity theft. Of course, you might not see things that way if your Social Security number were available online right now.

July 22 2016 10:52 AM

Canada’s Carbon Policies Try to Provide Something for Everyone

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. On Tuesday, July 26, Future Tense and the Wilson Center’s Canada Institute will host an event in Washington, D.C., on what it will take for North America to fulfill its energy potential. For more information and to RSVP, visit the New America website.

In the last two years, we’ve seen big commitments in the fight against climate change, including global initiatives such as the Paris Accords and national efforts like the U.S. Clean Power Plan. However, in the United States, momentum at a national level is incredibly difficult to sustain, in large part because of partisan politics and regional differences. The Clean Power Plan is languishing after a February ruling by the Supreme Court, and the Waxman-Markey cap-and-trade bill met its end in the Senate in 2009, derailing what had been a steady bipartisan march toward a federal climate policy. California, meanwhile, remains the standard-bearer for sub-federal carbon reduction measures.


While federal efforts to arrest climate change have slowed in the United States, a majority of Canadians will soon live under some type of carbon pricing regime. These policies are designed at the provincial level and are not coordinated by the federal government, which has led to an uneven patchwork with far-ranging economic consequences for producers and consumers throughout Canada. The progress in Canada is better than the stagnation in the United States, but it’s still inefficient, and the lack of coordination at the federal level is limiting the country’s ability to slow carbon pollution.

Because the provinces hold constitutional authority over most types of carbon-reduction policies, they must take the lead on these issues. Alberta and British Columbia have implemented or are implementing carbon taxes. Quebec has linked its carbon market to California’s and implemented a variety of carbon reduction measures. Ontario recently joined the pack with a June 2016 Climate Action Plan that introduced a host of new instruments and incentives to drive down carbon emissions. The Ontario plan follows the now-familiar policy formula of either a carbon tax or a cap-and-trade system, plus carbon-reduction incentives intended to blunt the realization that the inevitable result of either approach is higher consumer prices. Increased fuel prices are the obvious effect, but it will inevitably cost more to emit carbon in the production of any good or service, and this expense is passed on to consumers. Yes, either the tax or the cap creates an incentive to invest in greener technologies that could eventually lower prices and taxes, but those results are mostly speculation, with little testing in the real world.

In Ontario, we see a number of the other policy “innovations” designed to generate public support and show the public how cap-and-trade funds are being reinvested. These new measures include electric vehicle incentives, cash for clunkers, and funds to reduce the carbon footprint of cities, businesses, and homes. Of note, Ontario is offering some of the most generous electric vehicle subsidies in the world. Soon, buyers of electric vehicles in Ontario will receive between C$3,000 and C$14,000 in credits, and new funds will be allocated to private and public charging stations. The province is even changing the building code to ensure that all new home construction includes a 50-amp 240-volt garage plug for car charging.

By offering a wide array of positive incentives for Ontarians, the government hopes it can more aggressively ratchet down its carbon emissions and avoid the morning-after effect that British Columbia is now experiencing. BC enacted the continent’s first carbon tax in 2008. Since then, the province has raised the tax from C$10 to C$30 per ton of CO2. However, additional increases have been put on hold until at least 2018 to ensure that, as BC’s environment minister explains, businesses remain competitive and consumers can afford the increase.

Even supporters of carbon pricing criticize incentive programs as reheated, inefficient, and unfair. They are also an attempt to divert public attention away from the risks of implementing a number of relatively untested policies, especially in a Canadian economy hit hard by low commodity prices and a weak dollar, and a manufacturing sector that never recovered from the 2008–2009 recession.

There is no doubt that the cost of climate change to future generations is worth the most serious investment that we can offer today, but the uncertainties of many of the policy “innovations” threaten to generate unnecessarily negative effects in the name of doing good. Among the most troubling: a lack of coordination between jurisdictions, which pushes business and investment from more expensive areas to cheaper ones. In that same vein, there has been little research on the indirect effects of carbon pricing on emissions-intensive, trade-exposed industries. At what point will the carbon price become too high to grow and ship a bushel of wheat from Saskatchewan to New York City? And, at the core of these sub-federal initiatives, there are questions about whether and how carbon taxes will be reinvested or redistributed. Even more uncertain are the mechanisms for pricing, trading, and regulating carbon credits for cap-and-trade systems, not to mention verification and compliance costs.

Carbon reduction policies are necessary—the sooner the better—but planning and coordination will be critical so that the effects are minimally disruptive. With the tidal wave of new sub-federal policies, the need for coordination is acute. Esoteric mechanisms designed to buy public support or mask a lack of thorough planning are not innovation—they just make it harder to focus on policies and practices that actually work.

July 21 2016 6:33 PM

Ted Cruz Keeps Trying to Protect Internet Freedom in Weirdly Wrong Ways

At the Republican National Convention on Wednesday night, Ted Cruz delivered a controversial speech. Most of it was about freedom and party differences, but did you know that Cruz also has strong views about the technical bodies that oversee the internet? Welp, he does.

Nestled in a list of superficial policy points, Cruz said, “The internet? Keep it free from taxes, free from regulation. And don’t give it away to Russia and China.” Seems weird, but here’s what he’s really referencing.


The U.S. government currently oversees the nonprofit group Internet Corporation for Assigned Names and Numbers, or ICANN. This organization assigns and manages domain names and basically maintains fundamental order online. The Obama administration has been working on an initiative for a few years that would allow an international coalition of stakeholders to replace the U.S. as the steward of some of ICANN’s key functions under the Internet Assigned Numbers Authority, or IANA. The idea is that this would more closely align with the fundamental principles of internet openness, instead of relying on one nation state as some type of moral arbiter.

As computer scientist and internet standards pioneer Jon Postel said in testimony to the House of Representatives’ Subcommittee on Technology in 1998, “There was one issue on which there seemed to be almost unanimity [among internet stakeholders]: the Internet should not be managed by any government, national or multinational.”

As you might have guessed, Cruz opposes this plan. He fears that it will allow authoritarian governments like oh, you know, Russia and China, to censor the internet internationally like they do within their own countries. In August 2015, Cruz told Wall Street Journal opinion writer L. Gordon Crovitz, “It’s a key issue that the U.S. not give away control of the Internet to a body under the influence and possible control of foreign governments.”

This is not what the transition plan proposes, though. The multistakeholder organization overseeing ICANN would incorporate not only a diverse set of government representatives but also private sector groups. One statement of support for the transition, signed by organizations like Human Rights Watch and the Center for Democracy and Technology, referenced a letter from Sens. Cruz, Lankford, and Lee to the Department of Commerce (which holds the United States’ current contract with ICANN). “While we share the Senators’ stated desire to protect Internet freedom, we note that their proposed solution of delaying the IANA transition will unintentionally have exactly the effect they hope to avoid: Delay would incur risk of increasing the role for foreign governments over the Internet and undermine free speech.” The statement added that it found the senators’ concerns “puzzling” because they are so contrary to the stated mission of the transition plan.

Cruz has a strange track record when it comes to internet openness. Though he’s fighting so hard to, by his estimation, protect the internet from influence and censorship, he opposed the Federal Communications Commission’s recent efforts to uphold net neutrality principles by reclassifying broadband as a utility. In a 2014 Washington Post opinion piece, Cruz wrote, “Net neutrality is Obamacare for the Internet. It would put the government in charge of determining Internet pricing, terms of service and what types of products and services can be delivered, leading to fewer choices, fewer opportunities and higher prices.” To borrow the word used above, this stance was puzzling for its sheer inaccuracy. Net neutrality itself aims to protect the very things Cruz says he wants to safeguard.

A weird stance on internet freedom is just the beginning when it comes to unpacking the fallout from Cruz’s RNC speech. You have to give the guy credit for being passionate about the issue, though. He even made a (very misleading) video for crying out loud.

July 21 2016 6:27 PM

Netizen Report: Some Iranian Hardliners Want the Government to Stop Blocking Twitter

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Mahsa Alimardani, Ellery Roberts Biddle, Weiping Li, Laura Vidal, and Sarah Myers West contributed to this report.


Reports of web censorship—including blocking specific websites and entire social media platforms, as well as overall internet blackouts—have been so widespread over the last two weeks that we’ve decided to dedicate this Netizen Report to the trend.


Iranian leaders are not so sure about Pokémon Go but might stop blocking Twitter
A group of Iranian hardliners has demanded that the government stop blocking Twitter, an unexpected change of tune from a faction that typically stands at the forefront of policies curtailing freedom of expression. The group wants to use Twitter to counter Saudi Arabian propaganda, which it argues is part of a “psychological operation” against Iran. Propaganda concerns have increased since the recent attacks in Nice, France.

Iranian officials have also responded to Pokémon Go, pledging to censor the game if the developers do not agree to cooperate with Iran’s National Foundation for Computer Games, which has censored multiple games in the past. They say they will seek to keep the game’s data servers inside Iran and to have the developers work with the government to keep the game from targeting locations that could raise national security concerns. The request to keep servers inside the country might be seen as an extension of the Supreme Council of Cyberspace’s demand in May that all foreign messaging companies move the data they hold on Iranians onto servers inside the country within a year or face censorship.

In other news, Iran has put Apple on notice, stating the company has just a “few days” to register or “all iPhones will be collected from the market,” according to a report by Tasnim News. Due to sanctions against Iran, Apple had previously not officially entered the Iranian market. Smugglers, however, have brought iPhones to the country. A 2015 report suggested that there were about 6 million iPhones in circulation in Iran at that time. This new ban would not affect existing iPhone owners but would block further sales of the phone on the market.

Zimbabwe: #ShutdownZim protests spark WhatsApp shutdown
Protests across Zimbabwe over an escalating economic crisis have brought on a new wave of censorship in the country: Zimbabweans have reported not being able to access WhatsApp, which was used to organize and circulate images of the protests, and the telecom regulatory authority issued a public notice warning users they were being closely monitored and could be “easily identified,” according to the Washington Post. The Zimbabwe Broadcasting Corp. and radio station STAR FM also received a warning from the CEO of the Broadcasting Authority not to “broadcast programs that incite, encourage, or glamorize violence or brutality” and to avoid “broadcasting obscene and undesirable comments from participants, callers and audiences.” The government is rumored to be working on licensing an internet gateway for the country, a mechanism that would force all traffic to pass through a single portal that would be operated by the government and allow authorities broad access to internet traffic and user data.

Brazil: WhatsApp is down, again, briefly
WhatsApp was also briefly blocked in Brazil for the third time in less than a year, after a judge ordered the block because the company had failed to surrender user data to police. The Supreme Court accepted an appeal that brought the service back online four hours later, calling the lower court’s decision “not very reasonable and not very proportional.”

Kashmiris report total suspension of internet and mobile amid unrest
Following the July 8 killing of Kashmiri rebel leader Burhan Wani, internet and mobile services in the region were shut down for at least six days. Thousands of Indian soldiers are patrolling the streets and have used tear gas and pellets on protesters. Several Kashmiris have also reported having their social media accounts suspended in what free expression advocates Baba Umar and Nighat Dad suspect might be a campaign by trolls to flag their accounts.

Turkey’s coup attempt sees a 50 percent drop in internet traffic
Meanwhile, during the attempted coup in Turkey, internet users reported having trouble accessing a range of websites and services including Facebook, Twitter, and YouTube. CloudFlare reported an approximately 50 percent drop in Turkey’s total internet traffic during the unrest. Yet what at first appeared to be at least a partial blackout typical of past periods of unrest in Turkey soon turned on its head, as President Erdogan turned to Twitter—which he described in 2013 as a “menace to society”—and Apple’s FaceTime to address the country. Websites continue to be blocked as of Thursday afternoon in the aftermath of the attempted coup, with the Turkish site Engelli Web (Disabled Web) reporting that a judge approved the censorship of 20 websites. And following WikiLeaks’ release of nearly 300,000 emails sent to and from officials of the AKP, Erdogan’s party, WikiLeaks was blocked, too.

Ethiopia: #OromoProtests trigger broad social media censorship
Ethiopian telecommunication company EthioTelecom blocked social media platforms including Twitter, WhatsApp, and Facebook Messenger for at least two months, beginning in December, in Oromia, where students are protesting the government’s plan to expand the capital city, Addis Ababa, into neighboring farmland in the state. The telco also reportedly plans to enforce a new price scheme to more heavily regulate data plans and what kinds of apps users can operate on their devices. Oh, and it intends to track, identify, and ban any mobile devices not purchased from the Ethiopian market, making it easier for the company to track data sent to and from subscribers on the network. The protests in Oromia, which began in November 2015, have been the largest and bloodiest demonstrations against the Ethiopian government in a decade, with at least 400 people killed, more injured, and thousands jailed.

Nicaragua might get rid of its “internet tax”
The Nicaraguan government is considering a repeal of its internet tax in order to improve national connectivity. Currently the government charges a 20 percent tax on mobile terminals, resulting in high costs for internet users. The announcement followed meetings between government officials and entrepreneurs in the telecommunications sector to explore ways to improve infrastructure.

New Research
•    “Examining internet Freedom in Latin America: Colombia”—Association for Progressive Communications
•    “FAST Africa: The 2016 Action Week and Beyond”—World Wide Web Foundation
•    “The Online Intermediary Liability Research Project”—Center for Advanced Studies and Research on Innovation Policy, University of Washington School of Law
