Future Tense
The Citizen's Guide to the Future

Aug. 22 2016 6:14 PM

Facebook Pulled Back the Curtain on Targeted Advertising. Yikes.

Who among us hasn’t signed on to Facebook, seen our newsfeed dotted with ads for Everton soccer jerseys, and wondered how Facebook knew we were literally just at a sports bar watching that team play? OK, so maybe you’re a Manchester United fan, but we’ve all puzzled over how exactly Facebook knows to tailor certain ads just for us.

Aug. 22 2016 4:30 PM

Massachusetts Is Going to Tax Uber to Subsidize the Taxi Industry

For tech startups that operate in a legal gray area, getting taxed is a bit like graduating from high school. It's a gesture of regulatory acceptance—a sign that politicians have adjusted to the company's existence and begun viewing it as a potential source of revenue, rather than a mere threat to incumbent interests to be squelched. Airbnb has been an especially instructive example—it practically begs cities to tax its operations, knowing that will be a step on the path to legitimacy.

That's why, on balance, the news that Massachusetts has passed a new 20-cent tax on every trip ordered through ride-hailing services like Uber and Lyft is a good omen for the companies. The new levy was included in a wider regulatory bill that will, among other things, institute a two-step background-check process for drivers and create a new division within the state government to oversee the companies. The tax likely isn't enough to change demand for ride-hailing services. But since it will mostly go toward city and state budgets, it means that elected officials will likely have an incentive to make sure the startups stick around. Between that and the new bureaucratic institutions devoted to overseeing Uber and Lyft, the industry hasn't just finished 12th grade—it has basically wrapped up a bachelor's.

However, the new tax has one quirk: 5 cents of the fee generated by each ride will be used to subsidize the state's licensed taxi industry. As Reuters has reported, it's not entirely clear how this arrangement will work yet, but “the law says the money will help taxi businesses to adopt 'new technologies and advanced service, safety and operational capabilities' and to support workforce development.” A Boston-area industry rep said it might go toward improving the smartphone app taxis there use.

So Uber and Lyft are being asked to fund their industry rivals in return for doing business in the Bay State. (At least until 2021. After 2021, the nickel fee starts going to the state and municipalities, along with the other 15 cents of the tax.) Is this absurd? Maybe a little. But as a price to pay for regulatory normalization, it's relatively light. Mostly, it seems like a modest payoff to help the taxi lobby swallow its own concessions in the bill. For instance, taxi operators argued that Uber and Lyft drivers should have to be fingerprinted, just like their own employees. Uber pushed back, and the requirement is absent from the final legislation. Does the taxi subsidy make economic sense? No, not really. But as a temporary political accommodation, it doesn't seem especially egregious. And now Uber gets to operate 100 percent on the up-and-up. It should be thrilled.

Aug. 22 2016 2:47 PM

Barbra Streisand to Apple CEO Tim Cook: Siri “Pronounces My Name Wrong”

Barbra Streisand, owner of two Oscars, 10 Grammys, five Emmys, and a Tony Award, is—incredibly—still having to deal with mispronunciations of her last name. She’s on a crusade to end it once and for all—and not being human is no excuse.

Aug. 22 2016 11:29 AM

Instagram Accounts May Provide Clues to Depression

Scroll through your friends’ Instagram feeds and you’ll likely see the sunniest, most charmed versions of their lives: days at the beach, elegant meals, and playful pets. Now, however, new research indicates that even the most charmed imagery might tell a different story.

According to a study published on the electronic preprint service arXiv, machine learning analysis of users’ Instagram accounts can “identify markers of depression.” The study’s authors—Andrew G. Reece, a Ph.D. candidate at Harvard, and Christopher M. Danforth, a professor at the University of Vermont—further claim that their “models outperformed general practitioners’ average diagnostic success rates for depression.”

The MIT Technology Review explains that Reece and Danforth’s findings map strikingly onto conventional associations with depression. The two performed their study on a group of about 170 workers from Amazon’s Mechanical Turk, “of whom around 70 were clinically depressed,” according to the Review. Participants completed “a standard clinical depression survey” and also provided other information, including the dates of their diagnoses where applicable. The algorithm looked for patterns in the images posted by depressed individuals prior to their diagnoses, while also evaluating an array of recent photos from individuals who were not depressed.

While the study looked at a variety of features—including the number of people in the images and the language used to describe them—the most immediate flag may well be the color schemes apparently preferred by those who were depressed: “The researchers found that depressed individuals tend to post images that are bluer, grayer, and darker.” That plays out in the filters Instagram users apply to their images. The Review writes that while “healthy individuals preferred a filter called Valencia, which lightens photographs,” depressed participants in the study tended to favor one that converts photographs to grayscale.
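The color cues the Review describes are simple pixel statistics, which is part of what makes this kind of analysis feasible at scale. As a rough illustration (not the authors' actual pipeline; the function name and file name are hypothetical), here is how one might compute a photo's average hue, saturation, and brightness in Python with the Pillow imaging library:

```python
from PIL import Image

def mean_hsv(path):
    """Average hue, saturation, and brightness of an image, each on a
    0-255 scale in Pillow's HSV mode. Averaging hue naively ignores its
    circularity, which is acceptable for a rough sketch like this."""
    pixels = list(Image.open(path).convert("HSV").getdata())
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

# "Bluer, grayer, darker" would register as hue shifted toward blue,
# lower average saturation, and lower average brightness, respectively.
h, s, v = mean_hsv("photo.jpg")
print(f"hue={h:.1f} saturation={s:.1f} brightness={v:.1f}")
```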

While these findings are striking, there may still be reason to hesitate before fully embracing Instagram as a mass diagnostic tool. Perhaps most notably, it’s important to remember that social media rarely serves as an automatic translation of a user’s life and experiences. Given how well the color schemes line up with those stereotypically associated with depression, it’s possible that these images were a conscious form of communication for some users, especially in advance of their diagnoses. In that light, perhaps the algorithm is best at pinpointing those who are already preparing to communicate with others about their mental health.

Aug. 19 2016 3:38 PM

A Password-Strength Meter Doesn’t Really Measure Strength at All

If you’ve ever made an online account, then you’ve come across a password strength meter—the little widget that nudges you to add more complexity to your credentials. And, if you’re like me, you ultimately acquiesce—though maybe with a sigh of annoyance—because it seems to force a little more safety into your online life. But that safety may be an illusion.

On Sophos’ Naked Security, web consultant Mark Stockley writes about his investigation into password strength meters. It was a repeat of an experiment he ran in 2015, and the new results were not encouraging.

To test password strength meters, Stockley chose five passwords that would “fail a genuine cracking attempt instantly” and ran them through five popular password strength meters. If the strength meters were at all up to snuff, they should have rejected every one of his proposed passwords. Simply rejecting all of the passwords wouldn’t actually prove that a meter is good, but Stockley says that accepting any of them would be instantly damning.
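Why do meters pass terrible passwords? Many rely on surface heuristics, counting character classes and length rather than checking common-password lists. The toy meter below (a sketch whose scoring rules are invented for illustration; it implements no meter Stockley actually tested) shows how such rules can rate a notoriously common password above a far stronger passphrase:

```python
import re

def naive_strength(password):
    """A toy rules-based meter: one point per character class present,
    plus a length bonus. Invented for illustration; many real meters
    rely on similar surface heuristics."""
    score = 0
    score += bool(re.search(r"[a-z]", password))          # lowercase letter
    score += bool(re.search(r"[A-Z]", password))          # uppercase letter
    score += bool(re.search(r"[0-9]", password))          # digit
    score += bool(re.search(r"[^a-zA-Z0-9]", password))   # symbol
    score += len(password) >= 8                           # length bonus
    labels = ["very weak", "weak", "fair", "good", "strong", "strong"]
    return labels[score]

# "Password1!" satisfies every rule, so the meter calls it strong, even
# though it appears on common-password lists and would fall to a
# dictionary attack almost instantly.
print(naive_strength("Password1!"))                    # -> strong
print(naive_strength("correct horse battery staple"))  # -> good, despite far more entropy
```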

Aug. 19 2016 2:00 PM

Artificial Intelligence-Enabled Cleaning Machines Might Be the Future, if the Unions Allow It

This post originally appeared on Inc.

In September, San Diego robotics startup Brain Corporation will introduce artificial intelligence software that allows giant commercial floor-cleaning machines to navigate autonomously. The follow-up offering it wants to develop may be even more forward-looking: a training and certification program for janitors to operate the machines.

The program, still in early stages of planning, is aimed at helping janitors maximize efficiency and establishing standards and best practices for the use of robots in janitorial work, according to Brain Corporation. The company says it is not aware of any other such training program.

There’s additional incentive for Brain Corp. to offer training options. Buzz around artificial intelligence and robotics technologies has caused concerns about jobs being automated out of existence. It’s prudent for Brain Corp. to frame its machine as non-threatening in the eyes of organized labor groups.

Aug. 19 2016 12:45 PM

The NSA Hack Shows Why the U.S. Government Shouldn’t Stockpile Software Vulnerabilities

Earlier this week, top-secret code written by one of the NSA’s most clandestine branches was released on the internet. Among other things, the release contains a cache of technologically sophisticated hacking tools. The material dates from 2013, and various experts, including former NSA staff, have confirmed that it looks to be genuine. Much of this advanced technology uses existing vulnerabilities—security flaws in software and hardware—to attack systems, break through firewalls, and gain access to private networks. In this case, the tools target routers made by both U.S. and Chinese companies, including Cisco and Fortinet.

Critics have long alleged that the U.S. government stockpiles too many vulnerabilities. Various branches of the government have responded with claims that they disclose 91 percent of the vulnerabilities they find, and that their alleged stockpile of zero-days (previously unknown vulnerabilities, so called because the vendor has had “zero days” to fix them) is exaggerated. But this release by the “Shadow Brokers” has proven that the NSA does have at least a few vulnerabilities that it has kept to itself.

There is a relatively unknown process that the government uses to evaluate the vulnerabilities it finds or acquires. As far as we know, the Vulnerabilities Equities Process, or VEP, has been in place since 2010 but was not particularly active until 2013. After the Snowden revelations, which included discussion of vulnerabilities, a number of policymakers, advocates, academics, and technologists criticized potential stockpiling. In response, the government “reinvigorated” the VEP. Under the post-Snowden process, vulnerabilities are supposed to be reviewed by a group of representatives from government agencies, who then decide whether the information should be shared with the company that built the product so that it can be patched, or whether the government may keep the information to itself for offensive and defensive purposes. But then, in 2014, the Heartbleed vulnerability threatened two-thirds of the internet, and the NSA was accused of knowing about it beforehand. In response, the White House posted a public list of considerations for when an agency proposes temporarily withholding knowledge of a vulnerability, including rating the risk of leaving it unpatched, identifying the harm that a hostile nation could do with it, and gauging the likelihood that someone else will discover it.

There are still serious questions about the functioning of the VEP, the most serious being that, in fact, it may be holding back, or not reviewing, some potentially dangerous zero-day vulnerabilities—in which case the vendor that maintains the software would not know that they existed. Some have called for a more transparent process with almost automatic disclosure, while others argue we need more information before pushing for reform.

So what does this week’s hack mean for the VEP? We know that as of 2013 these vulnerabilities were in the government’s possession and that some of them were still zero-days until the Shadow Brokers released them. This raises two possible, mutually exclusive scenarios for how the VEP was used. The government may have reviewed the information and decided the vulnerabilities were worth holding onto, which means that we have proof that at least some significant vulnerabilities are being kept secret. The other possibility is that these exploits might not have been reviewed by the VEP agencies—though this wouldn’t necessarily have violated procedure, because the decision to retain the information leaked by the Shadow Brokers may have been made before the VEP was standard practice. We just don’t know enough about how the VEP functioned originally to say—which is itself a problem.

This leak of NSA exploits brings to the forefront many questions that security experts have long been asking about the VEP: Is every single vulnerability reviewed by a broader process? What types of vulnerabilities are exempt or retained? Does the NSA alone get to decide which secrets are worth keeping? If the same data were hacked in 2015, after the VEP was supposed to be fully active, would fewer vulnerabilities show up in the data dump? One of the most common counterarguments to questions about the security and efficacy of the VEP is basically “you would trust our policies if you knew what we knew.” Well, now we know a bit of it, and the information doesn’t inspire confidence. If the Shadow Brokers’ hack is a test of the government’s policies on disclosure of zero-days, those policies are clearly falling short.

The hack also challenges other parts of the government’s argument for vulnerability nondisclosure—first, that its security measures are strong enough that its secret stash of exploits won’t be exposed, and second, that the vulnerabilities it retains don’t need to be patched because they won’t be found by a bad actor who will exploit them. We now know that the first, at least, is false. As for the second, the NSA’s “nobody but us” argument—which it has also used in its fight against encryption—is extremely unrealistic. The very real threat of nondisclosure of vulnerabilities cannot be downplayed with arguments that the NSA is uniquely capable of finding zero-days and impervious to cyberattacks.

So here we stand, with highly dangerous NSA hacking tools available for anyone to download and a cache of others up for sale on the black market. If government hacks are our only window into the transparency and efficacy of how the government deals with vulnerabilities, then the Shadow Brokers are nowhere near the biggest of our cybersecurity problems.

Aug. 18 2016 5:51 PM

In Six Months, Twitter Suspended 235,000 Accounts for Promoting Terrorism

In a Thursday blog post, Twitter announced that it had suspended 235,000 accounts since February for violating its ban on violent threats and for promoting terrorism. The company had announced earlier this year that it shut down 125,000 accounts between mid-2015 and February 2016 in an effort to stifle accounts used to promote terrorism.

Public reproach of Twitter has been particularly harsh, with critics arguing that the company is providing a platform for terror groups to grow—one such claim even made it to court, although it was recently dismissed. In April, Jean-Paul Rouiller, director of the Geneva Centre for Training and Analysis of Terrorism, told CNN that social media is vital to modern terrorist organizations: “They would not have been able to survive, they would not be able to recruit people. The human touch is always needed, but social media is their shop-window.”

Aug. 18 2016 4:01 PM

Here’s How Data From Amazon’s Delivery Algorithm Can Help Reduce Discrimination

Amazon recently began to offer same-day delivery in selected metropolitan areas. This may be good for many customers, but the rollout shows how computerized decision-making can also deliver a strong dose of discrimination.

Sensibly, the company began its service in areas where delivery costs would be lowest, identifying ZIP codes of densely populated places that are home to many existing Amazon customers with income levels high enough to support frequent purchases of products available for same-day delivery. The company provided a web page letting customers enter their ZIP code to see whether same-day delivery was available to them. Investigative journalists at Bloomberg News used that page to create maps of Amazon’s service area for same-day delivery.

The Bloomberg analysis revealed that many poor urban areas were excluded from the service area, while more affluent neighboring areas were included. Many of these excluded poor areas were predominantly inhabited by minorities. For example, all of Boston was covered except for Roxbury; New York City coverage included almost all of four boroughs but completely excluded the Bronx; Chicago coverage left out the impoverished South Side, while extending substantially to affluent northern and western suburbs.
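Bloomberg’s approach generalizes to any service with ZIP-code-level availability: join the coverage map against demographic data and compare coverage rates across groups. Here is a minimal sketch in Python, assuming two hypothetical CSV files (the file names and column names are invented for illustration, not taken from Bloomberg’s analysis):

```python
import csv
from collections import defaultdict

# Hypothetical inputs: covered.csv lists ZIP codes with same-day service;
# demographics.csv maps each ZIP code to its majority demographic group.
with open("covered.csv", newline="") as f:
    covered = {row["zip"] for row in csv.DictReader(f)}

totals, served = defaultdict(int), defaultdict(int)
with open("demographics.csv", newline="") as f:
    for row in csv.DictReader(f):
        group = row["majority_group"]
        totals[group] += 1
        served[group] += row["zip"] in covered  # True counts as 1

# Large gaps in coverage rates across groups are the kind of disparity
# the Bloomberg analysis surfaced.
for group, total in totals.items():
    print(f"{group}: {served[group] / total:.0%} of ZIP codes covered")
```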

Aug. 18 2016 1:19 PM

Yet Another Way the Baltimore Police Unfairly Target Black People

What do you get when you take a deeply segregated city with a history of high racial tension, give it a police department that is demonstrably biased against black residents, and arm those police with secret high-tech surveillance tools? The answer: Baltimore. This is a city where, using suitcase-size fake cellphone towers to track residents, police disrupt the cellphone network on a regular basis, disproportionately—and unfairly—focusing on black neighborhoods.

With all this in mind, on Tuesday I represented three national nonprofits in filing a complaint against the Baltimore City Police Department for its use of certain surveillance equipment that mimics cellular towers in order to track cellphones. That equipment, described as “cell site simulators” and often referred to in shorthand as “stingrays,” emulates real cell towers, sends out counterfeit registration signals to nearby cellphones, forces cellphones in the area to connect with it, and then enables law enforcement agents to catalog or track any cellphones within 200 meters—about a two-block radius. These devices are used by police departments around the country, but quite likely most heavily in Baltimore, and they have real downsides. They can cause disruptions to the cellular phone network, jam 911 and other emergency calls, and directly discourage First Amendment–protected speech and access to information.

Cell site simulators have been challenged in the past, primarily for Fourth Amendment violations when police have failed to get warrants to use the devices. Less has been said, however, about how these devices fare under the Communications Act—which, as we point out in our complaint, is not well. In order to masquerade as cell towers, cell site simulators transmit signals over the air to cellphones, just as real cell towers do. Unfortunately for the Baltimore City Police Department, federal law says you need a license to do that. The law also says you can’t interfere with legitimate communications between someone’s cellphone and the cellular network. Finally, the law directs the Federal Communications Commission to make emergency calling available and to ensure that communications networks are available to all people equally, or at least that they are not available discriminatorily to only some.

The FCC is going to have to do something about all of this, and soon. Interference with the cellular network—especially in an era when more than half of households no longer have landlines—is greatly concerning. But more than that, the FCC has a legal obligation to enforce the Communications Act, and this is a clear violation. We’ve asked the FCC to stop Baltimore police from using these devices unless and until they obtain the licenses the law requires. Given the ways in which surveillance technology enhances police power, and in light of the recent Justice Department report demonstrating that the police are racially biased, that seems like a reasonable thing to ask.

But what then? We’d like to see surveillance equipment specifically addressed in any consent decree reached between DOJ and the police of Baltimore or any other city over racially biased policing. And if police agencies also want the FCC to modify its rules to make licenses easier to obtain, as they might, the burden should be on law enforcement to explain clearly and compellingly to the public why we should support that, and how we can be confident the technology won’t continue to be used in a way that exacerbates racial disparities in society. From the days of slavery, to the Civil Rights Era, to today, communities of color have borne far more than their share of surveillance harms.

Also, future conversations between police and activists over racially biased policing simply must include a surveillance component. And conversations between police and activists about intrusive surveillance need to include a racial justice component. Because surveillance and racial justice are not two separate issues. These issues are one and the same, and we should scrutinize and treat them that way.
