Future Tense
The Citizen's Guide to the Future

Aug. 1 2017 7:16 PM

Google Is Matching Your Offline Buying With Its Online Ads, but It Isn’t Sharing How

The Federal Trade Commission received a complaint Monday from privacy advocates requesting a full investigation into a new advertising scheme from Google that links individuals’ online browsing data and what they buy offline in stores.

The privacy group that launched the federal complaint, the Electronic Privacy Information Center, alleges that Google is using credit card data to track whether online ads lead to in-store purchases without providing an easy opt-out or clear information about how the system works. The complaint specifically calls out a new advertising program Google unveiled in May that reportedly relies on billions of credit card records, which are matched to data on what ads people click on when logged into Google services.

The ability to link online ads to actual in-store purchases is often described as the “holy grail” of data-driven advertising, according to David Carroll, a professor at the New School who studies the online data tracking industry.

Google says it can’t disclose which companies it works with to get customers’ offline shopping records because of confidentiality agreements it has with those partners. So at the moment, the only way for a Google user to prevent his or her offline purchasing history from being linked to their web browsing is to opt out of Google’s web and app tracking entirely, which could make it nigh-impossible to use other Google services.

If Google did share the names of its partners in its offline ad-tracking program, customers could presumably stop using those services. There are plenty of reasons why a person wouldn’t want their offline purchasing data to mingle with their online accounts. What you buy at a drug store alone can point to health concerns, sexual history, or other personal information that you may want to keep to yourself.

But Google says not to worry about that information seeping out, since it “does not learn what was actually purchased by any individual person (either the product or the amount). We just learn the number of transactions and total value of all purchases in a time period, aggregated to protect privacy,” a spokesperson said in an email. In other words, Google is saying that the advertiser doesn’t learn who clicked on their ads, just how many of those clicks translated to offline sales.

But even if the data is anonymized by both the credit card payment data holders and Google, those in-store linkages are not truly anonymous, despite what companies claim, according to Chris Hoofnagle, a law professor at Berkeley who specializes in data privacy.

“There’s a long history to this,” Hoofnagle said. Ten years ago, a digital advertiser industry group, the Data and Marketing Association, argued that phone numbers were not personally identifiable information since one number is usually shared within a single household linked to multiple individuals. That logic is being recycled. Hoofnagle says that digital marketers’ “new trick is to take personally identifiable information and hash it.” That means the personal data is run through a one-way mathematical function that converts it into a scrambled, fixed-length string. “That would be fine,” Hoofnagle continued, “but everyone uses the same hashes, and so these hashes are essentially pseudonyms.” Or, as Wolfie Christl, a digital privacy researcher and author of the book Networks of Control, explains in a recent report, data companies generally use the same hashing method. If everyone is masked with the same pseudonym process, it’s easy to track that pseudonym across the internet.
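Hoofnagle’s point is easy to demonstrate. Here’s a minimal Python sketch, using a made-up email address, of why hashing an identifier with a standard function like SHA-256 produces a stable pseudonym: any two companies that hash the same identifier the same way can join their “anonymized” records without ever exchanging a name.

```python
import hashlib

def pseudonymize(email: str) -> str:
    # A common industry practice: normalize the identifier, then hash it.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Two unrelated companies "anonymize" the same customer independently...
ad_network = {pseudonymize("Jane.Doe@example.com"): "clicked shoe ad"}
card_processor = {pseudonymize("jane.doe@example.com "): "$89 in-store purchase"}

# ...yet because both used the same hash, the records join trivially.
shared_keys = ad_network.keys() & card_processor.keys()
print(len(shared_keys))  # prints 1: the "anonymous" datasets link on the shared pseudonym
```

The hash never reveals the email address directly, but because it is the same everywhere, it works exactly like a pseudonym that follows you from database to database.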

Just last week, at the annual hacker conference Defcon in Las Vegas, a journalist and a data scientist shared how they were able to obtain a database tracking 3 million German users’ browsing history, spanning 9 million different websites. The data set was said to be anonymized, but the team was able to de-anonymize many of the users, according to a report in the Guardian. For some users, the browsing history alone was enough to give them away. For instance, a Twitter analytics page contains a URL with the username in it—so checking to see if a tweet went viral could give away your identity in “anonymous” browsing data.
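The Twitter example illustrates a general pattern: a single distinctive URL can undo the “anonymization” of an entire history. This toy Python sketch, which assumes the analytics URL shape described above and uses a hypothetical handle, shows how mechanically such a re-identification can work:

```python
import re

# A Twitter analytics page embeds the account's handle in the URL path,
# e.g. https://analytics.twitter.com/user/<handle>/tweets
ANALYTICS_URL = re.compile(r"analytics\.twitter\.com/user/([^/]+)/")

def identify_from_history(urls):
    """Scan an 'anonymous' browsing history for an identity-revealing URL."""
    for url in urls:
        match = ANALYTICS_URL.search(url)
        if match:
            return match.group(1)  # the handle, and very likely the person
    return None

history = [
    "https://www.example-news.de/politik/article123",
    "https://analytics.twitter.com/user/jane_doe/tweets",
]
print(identify_from_history(history))  # prints jane_doe
```

One such URL among millions of otherwise bland page visits is all it takes to attach a real identity to the whole record.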

EPIC’s complaint also points out that Google isn’t sharing enough detail about how it’s encrypting the data. The complaint alleges Google uses a type of encryption, CryptDB, that has known security flaws. While it’s unclear whether Google’s offline-to-online ad-tracking system uses CryptDB, Google has not shared details on the math and software that it’s using to implement its encryption.

“We don’t know a lot about how this is implemented,” said Joseph Lorenzo Hall, a technologist with the Center for Democracy and Technology, which is in part funded by Google. Hall says that typically Google would publish a white paper or some further explanation of how its encryption works.

Google also wouldn’t clarify whether users consent to having their web browsing linked to their offline purchase history, but a spokesperson did say that their “payment partners have the rights necessary to use this data.”

Carroll of the New School says that Google’s ad practices here can be manipulative. “Google is in the market of predicting consumer behavior and commoditizing our behavior at scale,” said Carroll. “We don’t know how it works. We don’t know how they are protecting us.”

Even if Google is able to anonymize its ad data, it should still make it easier for people to opt out of linking their browsing history to their offline shopping. Right now, you have to navigate to the privacy settings of your account and then find the Activity Controls page.

Screenshot of where to opt-out of web and app tracking on Google


It’s not super intuitive to find, but then again, Google, which is in the business of selling ads, would probably prefer you keep your personal data as accessible as possible.

Aug. 1 2017 3:31 PM

That Email “Prank” Illustrates the Horrific Digital Security at Trump’s White House

You may remember that we recently went through a national election that was, in large part, about email security. Well, about that: News broke Monday night that numerous members of the Trump administration had exchanged emails with a prankster pretending to be White House staff.

The whole situation is as troubling as it is ridiculous. The prankster—who also duped the CEOs of Goldman Sachs and Citigroup in June and goes by @SINON_REBORN on Twitter—even tricked the White House homeland security adviser, Thomas Bossert, into assuming he was writing to Jared Kushner. Bossert, who was a fellow at the Atlantic Council’s Cyber Statecraft Initiative before joining the White House, is supposed to be an expert on cybersecurity.

The absurdity of the exchanges, which were first published on CNN, illustrates just how unprofessional and at times hostile the White House staff can be—but perhaps more importantly, it also points to a weak culture of digital security that could pose a serious threat to national security.

After all, if senior officials—including, again, a cybersecurity expert—don’t have enough basic digital security training to spot fake or malicious emails, there’s no telling what else people in the White House have clicked on. The whole network and computer system used by Trump’s administration may well be infested with malware. That’s because one of the most common ways people are attacked online is by opening emails that look like they come from a trusted source. If the unlucky target clicks on a link or an attachment in the email, it can trigger the installation of spyware. This is how an attack over Gmail spread in May, when more than 1 million people were tricked into downloading malware that looked like a link to a Google Document.

Hackers may also lure people into responding to their fake emails with sensitive information, like passwords, bank numbers, or, in the case of the White House, national intelligence.

What’s alarming is how forthcoming Trump’s White House staff was with interpersonal details about other staff in the administration. For instance, the prankster tricked Anthony Scaramucci, the then-White House communications director, into thinking he was emailing with former White House Chief of Staff Reince Priebus, who had been fired the day before the fake email was sent.

That exchange is worth reading in full:

The fake Priebus wrote: “I had promised myself I would leave my hands mud free, but after reading your tweet today which stated how; 'soon we will learn who in the media who has class, and who hasn't', has pushed me to this. That tweet was breathtakingly hypocritical, even for you. At no stage have you acted in a way that's even remotely classy, yet you believe that's the standard by which everyone should behave towards you? General Kelly will do a fine job. I'll even admit he will do a better job than me. But the way in which that transition has come about has been diabolical. And hurtful. I don't expect a reply."
To which Scaramucci replied: "You know what you did. We all do. Even today. But rest assured we were prepared. A Man would apologize."
Fake Priebus:  "I can't believe you are questioning my ethics! The so called 'Mooch', who can't even manage his first week in the White House without leaving upset in his wake. I have nothing to apologize for."
Real Scaramucci:  "Read Shakespeare. Particularly Othello. You are right there. My family is fine by the way and will thrive. I know what you did. No more replies from me."

Other fake emails sent by the prankster include correspondence between the real Scaramucci and an email pretending to be from the Ambassador to Russia-designate Jon Huntsman Jr., as well as exchanges between the real Huntsman and emails sent by the prankster pretending to be Eric Trump. But the real Eric Trump, for his part, wasn’t so easily duped. CNN reported that he quickly caught on to the fraud and replied to tell the prankster his email had been forwarded to law enforcement.

This isn’t a new problem. Even the since-fired director of the FBI, James Comey, responded to a fake email sent by Gizmodo in April, as did Newt Gingrich, who is an informal adviser to the president.

Since becoming president, Trump has said that he aims to crack down on leakers. But considering how the email trickster, who told CNN he was only trying to be “humorous,” was able to spark conversations with individuals at the highest levels of the U.S. government with only a few fake emails, the White House’s shoddy cybersecurity protocols may be letting much of its own communications leak like a sieve.

Aug. 1 2017 3:06 PM

What’s the Point of a Robot Soccer Tournament if the Robots Are Terrible at Soccer?

Lionel Messi is generally considered the best soccer player in the world, so talented that, as a baby, he must have been dipped twice in the River Styx—once by each magical foot. Javier Mascherano, a teammate of Messi’s on the Argentine national team and F.C. Barcelona, said: “Although he may not be human, it’s good that Messi still thinks he is.”

But maybe nonhumans aren’t so good at soccer.

This weekend, the premier international robotics soccer tournament, the annual RoboCup, took place in Nagoya, Japan. The competition, which sees itself as a publicly appealing way to promote AI and robotics, has set a goal that “by the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.”

I may be shortsighted—after all, only 63 years passed between the Wright brothers’ first flight and Neil Armstrong’s small step on the moon—but it seems that they have a long way to go to reach that goal. Right now, the kid-size humanoid soccer players look like K’nex robots, walk like penguins, and fall over constantly.

The tournament, which has taken place every year since 1997, has nine different competitions, only six of which involve robots physically playing soccer. There are three humanoid divisions, in which kid-size, teen-size, and adult-size bipedal robots “play” soccer against one another. Next, there are “small size” and “medium size” non-humanoid leagues. There is also a “standard platform” league, in which all competitors use the same type of robot, instead of creating their own according to specs. For some reason, this standard player is a communication robot, more C3PO than HK-47, and has very limited movement skills, which allows observers to enjoy “smiling at cute robots taking a tumble when some distance away from the ball,” according to the event organizers.

This year, two teams were clear winners. Team NimbRo, from Germany’s University of Bonn, won the humanoid teen-size and adult-size Round Robin competitions by a goal differential of 25 and 26, respectively. Likewise, Team Rhoban, a team based out of the computer science department of Bordeaux University in France, won the kid-size division with a goal differential of 17. To give some idea of how dominant those teams were, the second-place goal differentials for each of those leagues were 3, 0, and 4. As usual with technology, success clusters near the top.

Certainly, a lot has improved in robot soccer in recent years. For instance, 2017’s robots are much more likely to make that easy shot than 2012’s robots. When a robot falls over (which, I can’t stress this enough, happens all the time), it no longer sets off a chain of $50,000 dominos. But they lack what makes the game beautiful. When they shoot, their AI adjusts by repeatedly stutter-stepping before pausing and kicking. If you’re betting on a robot soccer team, you’d better hope that your keeper is in the right place at the right time, because they almost never dive toward an incoming ball.

The RoboCup also consists of totally inhuman robots playing soccer, and they are actually quite good. The smallest look like little Roombas that clasp the ball in a pair of recessed forceps, and the larger size look like traffic cones on wheels. Both robots zip around the field, coordinating, passing in space, taking one-timers, and actually saving goals by moving toward the shot. Watching them approaches something like fun.

While soccer is the headliner at RoboCup, there are two other competitions that might be more practical, the RoboCup Rescue and the RoboCup Industrial competitions, which focus on goal-oriented tasks that don’t innately lend themselves to human ability. The goal of the first competition is for a robot to follow a fixed path occasionally blocked by obstacles on the route. It’s easy to imagine this robot traveling through a burning building or Fukushima. The goal of the second competition is for robots to ferry tools from table to table, imitating servers, lab technicians, or advanced factory workers. Those are impressive achievements exactly because they’re difficult and tedious for humans to perform.

So yes, today’s robots lack Messi’s grace, his lithe touches, mid-turn, slicing between a pack of defenders. But the “point” of the RoboCup isn’t to make soccer robots because soccer robots are cool. The point is that teaching robots to mimic full-body coordination is very hard, and by working on that problem, scientists can make technological breakthroughs relevant in other, more important tasks. A robot may never approach Messi’s jaw-dropping soccer abilities and that’s fine. It doesn’t mean we should stop trying. Because every time one of those robots misses a simple shot, or falls down for no reason, it makes them better at wading through flames to save trapped families—a task the still-human Messi could never accomplish.

July 28 2017 6:08 PM

The Absurdity of Honolulu’s New Law Banning Pedestrians From Looking at Their Cellphones

If the fusty sigh of “Kids these days!” were a law, it would look something like the new Honolulu ordinance making it illegal to cross the street while looking at a cellphone. The fines will start in October at $35 and increase to $75 for a second offense and $99 for a third.

The law, signed by Mayor Kirk Caldwell on Thursday, is intended to lower the city’s pedestrian-fatality rate, which is among the highest in the U.S. In practice, however, it will inject police discretion into another routine of daily life—while perpetuating the media-driven myth that pedestrians are responsible for their own deaths.

There is an epidemic of American pedestrians getting killed by drivers. But there is virtually no evidence that they are being run over because they are too busy reading Slate on their phones.

There are a few reasons why the “distracted walking” narrative has taken hold. The first comes from a 2013 Ohio State University study that reported that the percentage of pedestrians visiting an emergency room for injuries sustained while using cellphones has risen, from less than 1 percent in 2004 to more than 3.5 percent in 2010. But the number of victims remains quite small—in the low four figures, according to Consumer Product Safety Commission data—and injuries related to cellphone use seemed to track neatly between pedestrians and drivers.

July 28 2017 5:30 PM

Yes, U.S. Scientists Edited an Embryo’s Genes, but Super-Babies Are a Ways Away

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon, were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically diseased embryos by injecting the host egg with CRISPR, a DNA-based gene-editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is composed of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. But the climate around these issues has changed recently: The U.S. National Academy of Sciences has repeatedly endorsed basic research related to embryo editing, doing so again this February.

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Possibly chief among these obstacles is that a CRISPR intervention would have to be completed at or just after fertilization to yield a super child. The authors of the upcoming paper (which is apparently scheduled to be published, though it’s unclear where) used the donated sperm of men carrying inherited disease mutations to create embryos with those mutations with the goal of then editing out the genetic diseases. This required the authors to know the disease carried by the sperm, and to be able to correct for that disease at the time of fertilization. Since human eggs can be fertilized by sperm half an hour after sex, CRISPR editing would likely require IVF, which is increasingly common but still out of reach for many families.

Furthermore, Stanford University law professor Hank Greely tweeted that the “key point” was that no team had yet implanted a CRISPR-edited embryo in a uterus for development. Until this research is done with real embryos that are allowed to reach maturity, and not research embryos, we are still far away from CRISPR being used widely.

And no matter the amount of academic interest in the topic, further research and clinical trials won’t take place unless funding is given. Right now, all federal agencies in the U.S., including the National Institutes of Health, are prohibited from funding research that edits genes in embryos. Science magazine reports this is “because of a congressional prohibition on using taxpayer funds for research that destroys human embryos.” This means that funding for embryo editing must (and will) come from private sources, inherently reducing the degree to which the government can supervise and direct this kind of research.

There’s also the issue of price. Several commercial CRISPR-based gene therapies have gone to market abroad in the last couple years. They’re intended for already-born humans, not embryos. None of them has yet made it to the U.S., but one company that may be the first, Spark Therapeutics of Philadelphia, estimates that its treatment for a genetic eye condition will cost roughly $500,000 per eye if it finally gets FDA approval. Spark’s treatment isn’t even the most expensive. A 2012 drug called Glybera cost $1.4 million in Germany for genetic treatment of an ultra-rare disease called lipoprotein lipase deficiency.

So, while this research is an important building block for the future, it doesn’t mean the future is already here.

July 28 2017 2:07 PM

Federal Court: Public Officials Cannot Block Social Media Users Because of Their Criticism

Does the First Amendment bar public officials from blocking people on social media because of their viewpoint?

That question has hung over the White House ever since Donald Trump assumed the presidency and continued to block users on Twitter. The Knight First Amendment Institute at Columbia University has sued the president on behalf of blocked users, spurring a lively academic debate on the topic. But Trump isn’t the only politician who has blocked people on social media. This week, a federal court weighed in on the question in a case with obvious parallels to Trump’s. It determined that the First Amendment’s Free Speech Clause does indeed prohibit officeholders from blocking social media users on the basis of their views.

Davison v. Loudoun County Board of Supervisors involved the chair of the Loudoun County Board of Supervisors, Phyllis J. Randall. In her capacity as a government official, Randall runs a Facebook page to keep in touch with her constituents. In one post to the page, Randall wrote, “I really want to hear from ANY Loudoun citizen on ANY issues, request, criticism, compliment, or just your thoughts.” She explicitly encouraged Loudoun residents to reach out to her through her “county Facebook page.”

Brian C. Davison, a Loudoun denizen, took Randall up on her offer and posted a comment to a post on her page alleging corruption on the part of Loudoun County’s School Board. Randall, who said she “had no idea” whether Davison’s allegations were true, deleted the entire post (thereby erasing his comment) and blocked him. The next morning, she decided to unblock him. During the intervening 12 hours, Davison could view or share content on Randall’s page but couldn’t comment on its posts or send it private messages.

July 27 2017 4:29 PM

The New Wisconsin Foxconn Plant Will Probably Be Staffed By Robots—if It Ever Gets Built

On Wednesday, Foxconn—the Taiwanese manufacturing juggernaut that’s responsible for assembling Apple’s iPhone—announced that it plans to open a new plant in Wisconsin. If all goes to plan, Foxconn says it will create up to 3,000 new jobs initially, not including the labor that will go into building the plant. The company claims that eventually as many as 13,000 people could be employed.

But that’s a big if.

Any number of things could go wrong before shovels break ground: Foxconn could pull out, or it could decide to significantly reduce the size of the plant. Even if the factory is built, it’s probably going to be filled with robots, which could mean far fewer than the promised 3,000 jobs.

Of course, none of that stopped Trump from crowing about it on Wednesday at a White House event, while Wisconsinites (and Republicans) Gov. Scott Walker and Rep. Paul Ryan stood by his side. Earlier in the week Trump proclaimed that “three big, beautiful plants” are on the way from Apple.

The new plant, which will make LCD screens, is slated to amount to a $10 billion investment from Foxconn, and each job is supposedly going to clock in at an average salary of $54,000. Wisconsin sweetened the deal for the Taiwanese company by offering $3 billion in subsidies to offset the costs of coming to America. The simple math may make that sound like a good deal—$3 billion for $10 billion.

But as Tim Culpan at Bloomberg points out, $3 billion for 3,000 jobs means the state is paying $1 million per job. But let’s be generous and factor in the construction jobs that would go into building the plant, which the state estimates could total 16,000 jobs, and the long-term estimate of employing 13,000 people at the plant. Those 29,000 jobs would still cost more than $100,000 a person in state subsidies.
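The arithmetic here is simple enough to check yourself. A quick back-of-the-envelope Python calculation, using only the figures cited above:

```python
subsidy = 3_000_000_000      # Wisconsin's incentive package, in dollars
initial_jobs = 3_000         # Foxconn's initial hiring pledge
construction_jobs = 16_000   # state estimate for building the plant
long_term_jobs = 13_000      # long-term plant employment (includes the initial 3,000)

# Cost per job under the narrow reading: only the initial hires
print(subsidy / initial_jobs)  # prints 1000000.0

# Cost per job under the generous reading: all 29,000 jobs
print(subsidy / (construction_jobs + long_term_jobs))  # roughly $103,000 per job
```

Even on the most generous accounting, the per-job subsidy stays north of six figures.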

But there’s an even bigger problem than recouping the state’s investment. Foxconn’s history and the future of manufacturing in general both suggest Wisconsinites shouldn’t bust out the six-pack just yet.

For one, Foxconn has a track record of promising factories to cities in need of jobs and not coming through. It happened in 2013 in Harrisburg, Pennsylvania, when Foxconn promised a $30 million factory that would employ 500 workers. The announcement made headlines, adding to both Foxconn’s and the Pennsylvania politicians’ political capital, but it was never actually built, and there’s no sign it will ever happen. Very little was made of the deal’s quiet death. It also happened in Vietnam in 2007 and Indonesia in 2014.

Even if a plant gets built, it could fall short of expectations. In 2011, Foxconn promised a plant in Brazil that was projected to create 100,000 jobs. In 2015, the factory reported it employed roughly 3,000 people, and the company never explained why it fell short of its projections, according to Reuters.

Last year, Foxconn boasted that it replaced 60,000 workers with robots at a single factory in China. The company even makes its own industrial robots, dubbed Foxbots, that work on its assembly lines. Foxconn was making about 10,000 Foxbots a year in 2015.

Factories are increasingly coming to the U.S. in part because wages in China have been on the rise for the past 15 years—but also because the cost of robots is going down. It ultimately may prove cheaper to manufacture products closer to the major markets where they sell.

Foxconn isn’t the only company that’s made efforts to manufacture in the U.S. in recent years. Look at the Carrier plant Trump claimed he helped convince to stay in the country last year. Carrier’s decision was driven in part by a plan to save on labor by adding more automation to the plant. Adidas also shared plans last September to open a new factory in Georgia that will be highly roboticized, only employing about 160 human workers.

Even if Foxconn lives up to its promises, the introduction of automated factories may ultimately lead to fewer human workers in the long run. Annual shipments of manufacturing robots to the U.S. are projected to rise 300 percent in the next nine years, according to the research firm ABI.

Those robots are going to have to work somewhere, and they’ll most likely find a home at whatever new factories come to the country, even the new ones that Trump claims to have had a hand in wooing to U.S. shores. Factories aren’t built to employ people. They’re built to make money. If it’s ultimately cheaper to buy a machine than pay a salary and medical benefits, robots are more likely to get the job. There’s no promise here that anything will improve for American workers beyond cheaper shipping costs for factory made goods. After all, it’s cheaper to ship an iPhone from California to Chicago than it is from Shanghai.

July 26 2017 3:38 PM

Trump’s FCC Chairman Is Misleading Congress About Net Neutrality

If Trump’s Federal Communications Commission has its way, the internet of the future is probably going to look a lot like the internet of today—and that’s a bad thing. New websites and innovative startups will have a hard time finding an audience without network neutrality protections––the rules that prohibit internet providers from speeding up access to some websites and not others.

Ajit Pai, the current chairman of the FCC, is laser-focused on, as he eloquently put it late last year, “taking a weed whacker” to the network neutrality framework—known as the open internet rules—that the FCC passed in 2015 under President Obama. He’s already started whacking away. In May, Pai introduced a new proposal that would undo the Obama-era net neutrality rules, which the current Republican-led FCC could vote on by the end of the year.

On Tuesday, a House subcommittee held a hearing to discuss the future of the FCC and where its plans to unravel open internet protections are headed next. And Republicans at the hearing wasted no time backing Pai’s plan to rescind the open internet rules.

“Chairman Pai, we hope you’re keeping that ‘weed whacker’ handy because it has a lot of work to do,” Rep. Marsha Blackburn, a Republican from Tennessee, said in her opening remarks.

Pai’s proposal to undo the open internet rules argues that net neutrality has dissuaded internet providers, like Comcast, Verizon, and AT&T, from investing in building out and upgrading their networks. Likewise, at the congressional hearing, Pai said that a convincing argument that investment in internet infrastructure actually was on the rise could persuade him to stop trying to roll back net neutrality protections.

The problem with this argument, though, is that according to the internet providers themselves, investment in their networks actually has gone up since the net neutrality rules were passed.

In the first quarter of 2017, AT&T told investors it “expanded the company’s 100% fiber network powered by AT&T Fiber, which is now in parts of 52 metros with plans to reach at least 23 more metros, across the 21 states.” AT&T further pointed out that it “expects to add 2 million fiber locations in 2017.” If the 2015 network neutrality rules did dissuade infrastructure build out as Pai says, AT&T certainly didn’t get the memo.

In Comcast’s call to investors in February 2016, the telecom noted that its increased capital spending was due to an uptick in “investment in network infrastructure to increase network capacity.”

And Verizon, too, has told investors that it continued to spend on expanding its network post-net neutrality. “In 2015, Verizon invested approximately $28 billion in spectrum licenses and capital for future network capacity,” the company said in its January 2016 investor report. In April of this year, Verizon noted that its capital spending of $3.1 billion “was largely network-related to maintain leadership in our markets.”

Small internet providers have been even more explicit about the importance of network neutrality for their success. In June more than 40 small internet providers from across the country wrote a letter to the FCC to share that none had experienced “any barriers to investment” as a result of the 2015 decision. They also shared their concerns that repealing the rules would increase the market power of large internet providers like Comcast or Verizon, making it even more difficult for smaller providers to compete.

Now, it is possible that internet providers might have invested even more without the net neutrality rules. But even if that’s true, investment may have been stunted for any number of reasons, including the proposed merger between Time Warner and AT&T, the presidential election, or Verizon’s acquisition of Yahoo, to name a few. It’s unclear what kind of proof would persuade Ajit Pai that network neutrality rules have not caused investment in broadband infrastructure to slump.

To support his proposal, Pai cited research claiming that capital expenditure from internet providers has gone down 5.6 percent since 2014. But that doesn’t necessarily mean net neutrality is to blame. A drop in overall spending can result from shifting investments and priorities within a company—and again, internet providers themselves say that they are still investing in infrastructure.

But one thing is clear: Without network neutrality rules, internet providers stand to make a whole lot of money. That’s because the companies will be able to operate what’s essentially a two-way toll, collecting money from both internet subscribers and websites that want to reach those users at faster speeds. This will inevitably put new, smaller businesses at an extreme disadvantage.

One of the great promises of the internet is that there’s no telling what someone might innovate next. The possibilities are endless, that is, unless internet providers are no longer required to treat everyone equally.

Pai’s claim that internet providers aren’t investing in their networks is misleading at best, and potentially ruinous for the future of a vibrant internet if his proposal to gut net neutrality goes through unchallenged, without a big public fight. And the scary thing is that in the current political climate, with so many major changes underway all at once, net neutrality may become a casualty.

July 26 2017 3:02 PM

Future Tense Newsletter: Why the State Department Needs a Cyber Office

We’re on Day 187 of Trump’s presidency and still there’s no shortage of important jobs in the federal government that remain unfilled. The State Department appears poised to add a new vacancy to the USAJOBS website as Secretary of State Rex Tillerson reportedly considers shutting down the State Department’s cyber office. Josephine Wolff explains why this is a horrid idea, writing, “cybersecurity for a global internet requires international perspectives and engagement—requires, in other words, the involvement of high-level State Department officials.” At a time when the FBI is warning parents about internet-connected toys spying on their kids and even data from a pacemaker presents privacy concerns, international debates and decisions about internet security and internet freedom—two important areas for the State Department cyber office—are more important than ever.

If news about toys spying on kids has inspired you to search the web for more information on cybersecurity, you might be surprised to see some disconcertingly specific news recommendations from Google’s news feed. Last Wednesday, Google launched an expanded version of “the feed,” a feature in its mobile search app that draws in news stories and blog posts from around the web based on your search history. The result, said Will Oremus, is an almost creepy level of personalization. Yet Oremus noted that even with Google’s records of your online behavior, the feature remains fundamentally impersonal compared with Facebook’s news feed. Google’s feature falls short by comparison, Oremus writes, because “it delivers the topics you care about but not the people you care about.”

Other things we read this week while bracing ourselves for the app-ocalypse:

  • War on science: Lawrence Krauss warns us that the Trump administration’s censorship of government scientists, appointment of unqualified officials to senior scientific posts, and underfunding of scientific research programs are all part of a dangerous trend.
  • Apocalyptic thinking: Though there are risks to embracing pessimism and fear, Tommy Lynch explains how both are a necessary aspect of confronting the threat of climate change.
  • Radio dramas: While it can feel like we’re moving toward immersive forms of storytelling with the advent of virtual reality, the podcast boom has created something of a golden age of radio dramas, writes Angelica Cabral.
  • Fake images: As it becomes increasingly difficult to distinguish real images from computer-generated ones, Nick Thieme explores how we can start using technology to tell the difference.
  • Libyan robotics team: The all-female robotics team from Afghanistan wasn’t the only team to struggle to get to the U.S. for the FIRST Global Challenge robotics competition—the team from Libya faced major obstacles, too.

RIP Microsoft Paint,
Emily Fritcke
for Future Tense

Future Tense is a partnership of Slate, New America, and Arizona State University.

July 25 2017 12:19 PM

Don’t Blame Online Anonymity for Dark Web Drug Deals

Last Thursday, the Justice Department announced that it had worked with European authorities to shutter two of the largest destinations on the dark web to buy and sell illegal drugs, AlphaBay and Hansa.

The shutdown followed reports from earlier in the month that AlphaBay, the larger of the two, had mysteriously stopped working, causing users to flock to Hansa. But it turns out that Hansa had been taken over by the Dutch national police, who were collecting information on people using the site to traffic drugs.

European and American law enforcement collaborated to quietly arrest AlphaBay’s alleged founder Alexandre Cazes in Thailand on July 5. The 25-year-old Cazes later committed suicide in a Thai jail, according to the New York Times.

These dark web drug marketplaces are accessed using a service called Tor, which allows users to browse the internet anonymously. With Tor, you can circumvent law enforcement surveillance as well as internet censorship filters, which are often installed by governments or companies to restrict where people go online. Tor also allows for the creation of anonymously hosted websites or servers that can only be accessed via the Tor Browser. AlphaBay and Hansa were both hosted anonymously on Tor.
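The layering at the heart of Tor’s design can be illustrated with a toy sketch. To be clear, this is not Tor’s actual cryptography (real Tor circuits use negotiated per-hop keys and authenticated encryption, not XOR), and the relay names and keys below are purely hypothetical. But the structure is the core idea: the client wraps one encryption layer per relay, and each relay can peel exactly one layer, so no single relay sees both who is talking and what is being said.

```python
import hashlib

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR data with a keystream derived from key.
    Applying it twice with the same key restores the original bytes."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(message: bytes, hop_keys: list) -> bytes:
    """Client side: add one encryption layer per relay, exit relay's layer innermost."""
    for key in reversed(hop_keys):
        message = xor_bytes(message, key)
    return message

def peel_one(onion: bytes, key: bytes) -> bytes:
    """Relay side: each relay strips exactly one layer with its own key."""
    return xor_bytes(onion, key)

# Hypothetical three-hop circuit: guard -> middle -> exit.
keys = [b"guard-key", b"middle-key", b"exit-key"]
onion = wrap(b"GET http://example.org/", keys)

# The plaintext only reappears after every relay has peeled its layer.
for key in keys:
    onion = peel_one(onion, key)
print(onion)  # b'GET http://example.org/'
```

Because the first relay sees only the client’s address and the last sees only the destination, no single hop can link the two, which is why breaking Tor generally requires attacking the endpoints rather than the relays in between.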

Though AlphaBay, Hansa, and, most famously, Silk Road depended on Tor to run their illegal operations, the Tor Project, the nonprofit that maintains the anonymous browser and hosting service, says that only 2 percent of Tor traffic has to do with anonymously hosted websites. The vast majority of Tor traffic is used for browsing the web anonymously. More than 1.5 million people use Tor every day, according to a spokesperson.

The U.S. government has a rather complicated relationship with Tor. On the one hand, documents leaked by Edward Snowden showed how the National Security Agency had been trying to break Tor for years, searching for security vulnerabilities in browsers that would allow law enforcement to crack the online anonymity service. The Department of Defense has also invested in trying to crack Tor. During the 2016 trial of one of the administrators of Silk Road 2.0, another shuttered dark web drug-trafficking site, it emerged that the DoD had hired researchers from Carnegie Mellon University in 2014 to try to break Tor’s encryption.

Yet Tor also wouldn’t exist without the U.S. government—it was originally built as a project out of the U.S. Naval Research Laboratory. The State Department continues to fund Tor (at least someone has told Rex Tillerson about it, presumably) because internet users around the world rely on the anonymity tool to access information and communicate safely online, particularly in countries where the internet is heavily monitored or censored by the government, like in China with its national firewall, or in Thailand, where it’s illegal to criticize the royal family online.

Cazes, the AlphaBay ringleader, was caught thanks to investigative work, not a break in Tor’s encryption. Password recovery emails sent by AlphaBay included Cazes’ personal email address, which investigators used to find his LinkedIn profile and other identifiers. (And no, the FBI did not dig up an email from Cazes asking them to join his professional network on LinkedIn. According to The Verge, Cazes used the same address on a French technology troubleshooting website that listed his full name, leading investigators to a LinkedIn profile where he boasted of cryptography and web hosting skills, as well as involvement in a drug front.)

And that’s good news for the vast majority of Tor users who aren’t interested in scoring molly. In 2015, a report from the U.N. declared that anonymity tools “provide the privacy and security necessary for the exercise of the right to freedom of opinion and expression in the digital age.”

Anonymity tools, like so many technologies, have both good and bad applications. And in the same way cellphones aren’t evil just because some people use them to make drug deals, it’s important not to malign anonymity tools just because some people use them to sell drugs. If the U.S. government is ever successful in finding a way to disable Tor’s encryption to find criminals, it could put hundreds of thousands of people who depend on Tor at risk, too.