Future Tense
The Citizen's Guide to the Future

April 1, 2015, 4:40 PM

Why Scientists Need to Give Up on the Passive Voice

As a group, scientists are not widely admired for their prose style. To no small extent, this derives from their insistence on the passive voice, that boogeyman of basic composition classes. Nevertheless, the style has its defenders: Two experts in scientific style recently took to Reddit to debate the convention, taking positions for and against the passive voice in scientific writing. Their conversation reveals that quarrels about the active and passive voices have more to do with the way our culture discusses science than they do with arbitrary quirks of style.

Few ostensible rules are more poorly understood than the prohibition against the passive voice, partly because the passive voice itself is poorly understood. In the Reddit AMA, Celia Elliott, a grant writing specialist in the Department of Physics at the University of Illinois, took a stand on behalf of the passive voice, but to do so she first had to explain it. As she wrote, “It’s all about the direction of the action. In the active voice, the subject of the sentence does the action. (‘The pitcher throws the ball.’) In the passive voice, the subject of the sentence receives the action of the verb. (‘The ball was caught by the pitcher.’)” Put more simply, the active voice emphasizes agency, while the passive voice puts the focus on objects themselves. Consequently, the active tends to be associated with subjective experience and the passive with objective facts.


When I teach courses on writing, I try to avoid arbitrary rules. Following the late style expert Joseph Williams, I hold that good writing is basically good storytelling. To tell a story well, we need to clearly identify our characters and then show the reader what those characters do. The passive voice makes storytelling more difficult because it hides the characters deep in the sentence—if it shows them at all. On Reddit, Kristin Sainani, an associate professor of health research and policy at Stanford University, took a similar position, arguing that the passive voice “obscures who is responsible for what.” The passive has its place (I used it to open the prior paragraph), but, more often than not, it disrupts the flow of a narrative, making it difficult for the reader to connect one idea to the next.

By contrast, Elliott argues that scientists should use the passive voice in order to highlight their results. She writes, “The main advantage of the passive voice, in my opinion, is that it allows the writer to put the important concepts, ideas, findings, principles, and conclusions first. ...” In other words, the passive voice allows us to discuss discoveries rather than the scientists who discovered them. In theory, it plays an important rhetorical function, because it insists on the factual truth of discoveries by minimizing the role that fallible human subjects play in the equation.

Ultimately, however, scientists may be doing themselves a disservice by downplaying their place in the scientific process. Sainani holds that there’s something slightly untrustworthy about passive constructions, writing, “It’s more accurate and honest to say, ‘We found that …’ since this emphasizes the role that the experimenters played in designing, conducting, and interpreting the experiments.”

Among other things, the passive voice may make it more difficult to celebrate particular scientific accomplishments. When scientists fight for the passive voice, they’re not fighting for their right to write poorly. They think science should speak for itself. But in a time when climate change deniers blind themselves to hard data and vaccine conspiracy theorists blithely cover their ears to public health risks, it has never been more clear that science doesn’t speak for itself.

The success of charismatic scientists like Neil deGrasse Tyson shows that the public responds better to stories about science than they do to simple scientific facts. So long as scientists insist on writing in the passive voice, they may have a harder time telling those stories well.


April 1, 2015, 11:32 AM

Virtual Reality Simulation Tries to Teach Cops When to Shoot

This post originally appeared in WIRED.

“I gotta get the guns,” Scott Digiralomo tells me over his shoulder as he leads me down the cinder block hallways of the Morris County Public Safety Training Academy in Morristown, New Jersey. Digiralomo, director of the county’s Department of Law and Public Safety, ducks into an empty room and, out of a large black safe, fetches an M4 rifle and a Glock.


At this point you should know that I’m a writer who works in Manhattan, lives in one of the yuppiest neighborhoods in Brooklyn, and gets panicky just passing by armed officers in the New York City subway. This is not how my days typically begin. And yet no more than 30 minutes later, there I am, a Glock tucked into the holster on my right hip and a can of pepper spray in the left, cautiously approaching a woman in a white SUV who is blocking her ex-boyfriend’s driveway, refusing to let him and his new girlfriend leave.

“Get that crazy bitch out of here now!” yells the new girlfriend, standing in front of the house as I wander up the lawn. Before I can take another step, shots ring out from the SUV. I freeze and a beat later clumsily pull the weapon from my hip.

“Uh, put your hands up? And your weapon down? Please?” I say too politely, as if asking a waitress for another basket of bread. But it works. The shooter emerges from the car in a gray hoodie and jeans. She’s still screaming, but she drops the gun and falls to her knees, arms raised. In that instant, I’m pretty sure the situation’s under control, so I take a second to wonder what I’m supposed to do next.

And then she shoots me.

Behind me, Digiralomo is laughing, not because he’s some masochist who’s going to watch me bleed to death but because the entire scenario, as you may have guessed, is a virtual reality simulation, and I—standing in the middle of the darkened room, surrounded by an array of screens, doing what has to be the world’s worst impersonation of a cop—look like a total tool.

But while this may have been little more than an exercise in embarrassment for me, Digiralomo assures me that this system, designed by a company called Virtra, is actually critical in helping police officers hone their skills as decision makers before they’re let out in the real world. Morris County installed the technology last November, smack dab in the middle of one of the most contentious periods between police and the public in recent history. And while Digiralomo says that wasn’t why the academy bought the roughly $300,000 system, it’s hard not to see the connection.

The fatal shooting of Michael Brown in Ferguson, Missouri, last summer cracked open the scab on one of our country’s oldest wounds. It fueled new conversations about centuries-old issues and exposed gaping rifts across the entire country, not only on the subject of whether Officer Darren Wilson was justified in shooting Brown, but on whether or not minorities living in the United States are safe in the hands of the police officers that are hired to protect them. No amount of technology will ever solve these deeply rooted societal issues. Systems like Virtra’s—so-called cave automatic virtual environment, or CAVE, systems—have been around for a while. But as President Obama and others call for more robust police training, training technology that can simulate a world more like the real one takes on an added urgency.

Today, in states like New Jersey where Digiralomo works, officers are required to requalify for the police force twice a year by testing their shooting accuracy on a gun range. While that demonstrates that officers can use their weapons, it doesn’t necessarily help them understand whether they should.

“In a lot of cases like Ferguson, it’s not about whether or not the officer was accurate when they shot,” Digiralomo says. “The question comes down to the decision the officer made and whether the officer should have used deadly force. A lot of that comes down to decision-making.”

Systems like Virtra’s are designed with just that in mind. “We’re finding there’s a need for cities and national agencies to train at above minimum standards,” says Bob Ferris, CEO and founder of Virtra. “With this new technology, they can better prepare officers for use of force and the life-and-death situations that often make the headlines.”

Ferris was early on the virtual reality bandwagon, launching Virtra in 1993 as an entertainment company that would run simulations at theme parks around the country. But after 9/11, Ferris completely overhauled the business to focus on immersive police training, which required a total rethinking of the technology itself.

Virtra started out making virtual reality goggles, not unlike the ones Oculus is now famous for, but when the company began working with law enforcement, Ferris realized this technology could do officers more harm than good.

“You want the officer to learn proper muscle memory, so in order to have the training apply at the highest level of effectiveness to real life encounters, you have to remove the head-mounted display, unless they’d use one in real life,” Ferris says. It’s also critical for officers to practice moving around space and interacting with one another, which would be severely inhibited if everyone were wearing goggles.

Instead, the system I tested out in Morris County, which is now being used at more than 200 training facilities around the world, consists of five large screens that surround a stage and five overhead projectors that cast life-size videos onto the screens, giving users the feeling that they’re standing in the center of a scene. The Glock on my hip was a real gun, but rather than being loaded with bullets, it was loaded with carbon dioxide, causing the gun to recoil each time I pulled the trigger. At the end of the gun is a laser, which interacts with the cameras overhead to detect whether or not a shot is accurate. The system also comes with a wearable device that gives officers a small electrical shock to simulate being shot. “Oh yeah,” Digiralomo says. “It hurts.”

But the most important part of the system is the content. The video of the woman in her ex-boyfriend’s driveway was just one of dozens of different scenarios that Virtra created, ranging from routine traffic stops to school shootings. And like a good choose-your-own-adventure novel, trainers can manipulate what happens next, escalating the tension or defusing the situation on screen as it’s happening based on what the officer in training says and does. In my case, for instance, Digiralomo could have made the shooter calm down, ending the scene right there. But since, of course, I was being a bit of a self-conscious baby about it, he let her shoot me.

Trainers can make a dog bark or a gate close in the distance. They can change the weapon at the last second, so instead of pulling a gun out, the suspect might pull out a bottle or a bat or nothing at all. This, Digiralomo says, is essential. “One of the concerns we had was we don’t want to run every officer through here so that every single scenario they got, it was justified to use deadly force,” he says. “So we’ll run a few with deadly force, a few where they use pepper spray, a few where the person just complies and gives up.”

Trainers can also create scenarios that challenge officers’ unconscious biases. For instance, in one video, a shooter is on the loose in a movie theater. As the officer surveys the scene, a black off-duty cop rushes through a door on the officer’s left with a gun in his hand. The trainer can run a scenario in which the officer’s badge is visible in his other hand or a scenario in which his badge is on his hip and not immediately apparent to the officer in training. According to Digiralomo, when the off-duty officer has the badge on his hip, the trainee kills him 80 percent of the time.

That’s why both Ferris and Digiralomo say having competent trainers operating this system and catching trainees’ mistakes is so important. “The instructor needs to go through what they did right and wrong, and it’s amazing how quickly officers are able to adapt and go from making decisions they regret to decisions they know are the best they’re able to make,” Ferris says.

Of course, virtual reality can never be a true proxy for the real thing. For starters, officers know they’re not going to get shot and that the person on the other end of their trigger is just a projection. Another issue is that, because it’s all shot on video, the camera angle dictates how the officer moves through space, much like a video game does. And while the 300-degree view makes that experience immersive, it’s not completely realistic. As Eugene Fluri, a SWAT team commander, noted after running through one traffic stop scenario for me, “The angle of the video shows the officer right in front of the window, and I wouldn’t have done that. If he moved his hands, I would have moved, but where am I gonna go?”

You can’t call for backup, open doors, or handcuff someone. All that action is trapped on a screen. Still, watching Fluri navigate the movie theater scenario, knees bent, gun drawn, and basically putting me to shame, it’s easy to see the advantages to this method of training. As he moves through the space, Fluri interacts with the video, telling scared moviegoers, “Out this way, out this way!” and asking victims, “Where’s the person who shot you?” He’s rotating left to right and back again, rehearsing for the real thing. And when he stumbles upon the shooter in the parking lot outside, he repeatedly insists that he drop the gun, until finally, the shooter fires, and Fluri fires back, eventually killing him.

Afterward, if the demonstration hadn’t just been for my benefit, a trainer would have reviewed Fluri’s every move to decide whether he’d made the right call. “At what point do you shoot? How many times do you say put the gun down? Because he just killed a bunch of people and is refusing to go, are you justified?” Digiralomo says. “Those are all the questions we review afterward.”

Compare that to the gun range on the lower level of the academy. It’s an expansive concrete void with little numbered corrals, at the end of which are faceless metal targets that have been pummeled with bullets over the years. There’s no space to run around, no judgment calls to be made, no nuance. Officers’ only job when they’re down here is to shoot and shoot and shoot, until they’ve proven they’re a good enough shot to keep their jobs. Given the complex web of historical and societal ills that have contributed to the current lack of faith between police and the public, it’d be unfair to say that this type of training alone is the problem. But it sure doesn’t seem like the solution.


March 31, 2015, 7:08 PM

A Year Later, Americans Have Forgotten About Heartbleed

Do you remember Heartbleed? Yeah? The security bug discovered a year ago in one of the standard cryptographic libraries used across the Web? Are you sure you're not just nodding along? I ask only because a new survey says that 86 percent of Americans either never heard about Heartbleed or have forgotten about it in the year since its discovery. So if you don't actually know what it is, you have plenty of company.

But that doesn't mean it's OK. No matter how many high-profile hacks and vulnerabilities come to light, awareness and action still seem to lag. The poll, conducted by password manager and digital wallet maker Dashlane, surveyed 2,000 adult Americans about their knowledge of Heartbleed and their attitudes toward cybersecurity. Thirty-two percent said they are responsible for protecting themselves online, but 23 percent said tech companies should be responsible, and 24 percent didn't know where the onus should lie. Along with the survey, the company released a video of Heartbleed commentary from cybersecurity experts, including representatives from the Center for Democracy and Technology and the Georgetown University Cyber Project.


“It's clear that a year later the impact of Heartbleed is much less than we would have expected,” said Dashlane CEO Emmanuel Schalit in a video about the survey's findings. “Even if the public wanted to care, it’s very difficult to understand what's going on.” There’s an optimistic perspective.

You can see how Dashlane would have an interest in pointing out how scary the cybersecurity climate is, since the company sells products meant to address the issue. But given the number of people who still rely on good ol' Password123 for their accounts, the findings also seem plausible.

Fifty percent of the survey respondents reported changing at least one password in the wake of the Heartbleed revelation, but when asked which information they were most concerned about protecting, only 1 percent said personal email. Social Security numbers, banking information, and credit card numbers were all much higher on the list, even though most people’s personal email could give hackers valuable clues for ascertaining all of those other pieces of information.

“Now, a year on, I’d love to be able to say that we’ve learned many lessons from Heartbleed and that the web is now a more secure place,” wrote Yuval Ben-Itzhak, the chief technology officer of security company AVG, in a blog post. “Sadly, it’s not as simple as that.”

There’s still no answer to the question of how to get Americans fired up about cybersecurity.

March 31, 2015, 11:45 AM

The Dark, Twisted Tale of How a DEA Agent Became a Paid Mole for Silk Road

This post originally appeared on WIRED.

Nearly 18 months after the Silk Road online drug market was busted by law enforcement, the criminal charges rippling out from the case have now come full circle: back to two of the law enforcement agents involved in the investigation, one of whom is accused of being the Silk Road’s mole inside the Drug Enforcement Administration.


DEA special agent Carl Force and Secret Service special agent Shaun Bridges were arrested Monday and charged with wire fraud and money laundering. Bridges is accused of placing $800,000 of Silk Road bitcoins he obtained in a personal account on the Mt. Gox bitcoin exchange. But Bridges’ charges pale in comparison with the accusations against the DEA’s Force, who is additionally charged with theft of government property and conflict of interest in his investigation of the Silk Road. Force allegedly took hundreds of thousands of dollars worth of bitcoin payments from the Silk Road as part of his undercover investigation and transferred them to a personal account rather than confiscate them as government property. He’s also accused of secretly working for the bitcoin exchange firm CoinMKT, using his DEA powers to seize a customer’s funds from the exchange, and later using a subpoena to the payment firm Venmo to try to unlock his frozen funds there.

But there’s an even more surprising set of accusations against Force: that he acted as a paid informant for Silk Road’s recently convicted administrator Ross Ulbricht, allegedly selling information about the investigation back to Ulbricht under two different pseudonyms. Meanwhile, under a third pseudonym, Force is separately accused of trying to blackmail Ulbricht with law enforcement information he believed revealed Ulbricht’s identity.

Force, a 46-year-old member of Baltimore’s Silk Road task force, began working in 2012 as an undercover agent on the case, communicating directly with Ulbricht, who was allegedly using the pseudonym the Dread Pirate Roberts. In that role, Force even served as a fictitious criminal named “Nob” who helped arrange a murder of Silk Road employee Curtis Clark Green for Ulbricht. That murder-for-hire didn’t happen; the entire killing was staged by the Baltimore task force. But the attempted murder was allegedly paid for by Ulbricht to silence Green as a potential witness, and it represents the first in a series of six killings prosecutors have accused Ulbricht of commissioning.

But allegedly, those anonymous communications with Ulbricht led Force down an even stranger, more corrupt path. “Force then, without authority, developed additional online personas and engaged in a broad range of illegal activities calculated to bring him personal financial gain,” according to a press statement from the Department of Justice.

The criminal complaint against Force and Bridges includes a detailed affidavit written by IRS agent Tigran Gambaryan. In that account, Force is accused of using his Nob persona and possibly others to sell Ulbricht law enforcement information, telling Ulbricht that a corrupt law enforcement official named “Kevin” was feeding him info. A folder on Ulbricht’s laptop at the time of his arrest, Gambaryan points out, was labeled “LE counterintel” and included data that appeared to be based on real internal materials from the federal investigation into Ulbricht’s activities.

At first, Force seems to have given Ulbricht only fraudulent information, according to Gambaryan, and Force kept his superiors aware of the fake informant scheme. But as time passed, more and more of Force’s communications with Ulbricht were encrypted, Gambaryan writes in the affidavit, preventing Force’s superiors and later Gambaryan from determining exactly what Force told Ulbricht. Eventually, Gambaryan writes, Force also asked Ulbricht to send him 525 bitcoins in payment for information about law enforcement’s investigation of the Silk Road—worth about $50,000 at the time—to a secret bitcoin address where he kept personal funds rather than the DEA’s confiscated money.

The revelation of an alleged Silk Road informant inside the DEA follows repeated hints in Ulbricht’s trial of those leaks. Ulbricht’s lawyer Joshua Dratel made multiple references to the Silk Road’s boss paying for counter-intelligence information from law enforcement officials. (He argued, however, that the Silk Road boss wasn’t in fact Ulbricht, but was instead using that leaked information to plan his or her exit from the Silk Road and to frame Ulbricht.) The operators of the Silk Road “had been alerted the walls were closing in,” Dratel said in his opening statement at trial.

Ulbricht’s journal, taken from his seized laptop, also references two pseudonymous individuals named French Maid and Alpacino, whom Ulbricht seems to have used as sources for information about law enforcement activities. At one point Ulbricht writes that he paid French Maid $100,000 for the tip that Mt. Gox CEO Mark Karpeles, who also ran a web-hosting company used by the Silk Road at one point, gave Ulbricht’s name to the Department of Homeland Security.

In his affidavit, Gambaryan writes that he believes Force was in fact French Maid. He points to Force’s knowledge not only of the DHS interview with Mark Karpeles, but also of the versions of PGP both French Maid and Force used and to the financial trail from Ulbricht’s payment to French Maid that eventually ended up in Force’s bitcoin account. He also points to a message that French Maid signed “Carl,” perhaps by accident. (Force allegedly “covered” for that error by explaining that French Maid also went by the name “Carla Sophia.”)

From there, the story gets stranger still: Under yet another pseudonym, “Death From Above,” Force is accused of telling Ulbricht he was a Green Beret and a friend of Curtis Green, whose murder Ulbricht allegedly believed he had paid for. “I know that you had something to do with [Green’s] disappearance and death. Just wanted to let you know that I’m coming for you,” Force allegedly wrote as Death From Above. “You are a dead man. Don’t think you can elude me.”

Death From Above later wrote to the Dread Pirate Roberts again and threatened to reveal his real name if Ulbricht didn’t pay him $250,000. According to Gambaryan, a screen-recording program on Force’s DEA computer captured video of him writing as Death From Above.

However, Gambaryan writes that Force was actually blackmailing Ulbricht by threatening to reveal the wrong suspect’s identity. Trying to show the seriousness of his threat, he sent Ulbricht the identifying details of an earlier suspect he believed to be the Dread Pirate Roberts, rather than Ulbricht himself. (That earlier suspect isn’t named in the affidavit.) Writing in his journal, Ulbricht dismissed the threat as “bogus.”

Bridges, for his part, is accused of a more traditional form of corruption: quietly stealing money by exploiting a suspect’s arrest. Gambaryan describes how Bridges participated in the Baltimore Task Force arrest and questioning of Silk Road employee Curtis Green in Utah, and then used Green’s administrator account on the Silk Road to pull off a “series of sizable thefts” from Silk Road vendors. “The thefts were accomplished through a series of vendor password and pin resets, something that could be accomplished with the administrator access that [Green] had given to the Baltimore Task Force.” Bridges then allegedly moved that money through a series of accounts and eventually into the Fidelity account of a corporation he created as a money laundering vehicle.

It’s not yet clear whether or how all of the alleged corruption in the Baltimore Silk Road investigation might affect Ross Ulbricht’s own legal case. Ulbricht still faces murder-for-hire charges in Maryland as a result of that investigation, a case that could be tainted by the alleged, epic misconduct of these two investigators.


But Ulbricht was already convicted in February of seven felonies including conspiracy to sell drugs and launder money, as well as a “continuing criminal enterprise” charge often known as a kingpin statute. Based on the evidence in Ulbricht’s trial, that case seems to have largely been conducted by the New York division of the FBI and the Chicago Department of Homeland Security.

Given those two separate investigations, Ulbricht’s conviction or upcoming sentencing may not be affected by the charges against Force and Bridges. Instead, those charges merely add two more names to the long list of criminal suspects who allegedly gave in to the temptation of the dark web’s dirty money.

Read the full criminal complaint against both Force and Bridges here.


March 31, 2015, 11:34 AM

Play Pac-Man in Google Maps Right Now. Go.

You can't always get what you want, but today you 100 percent can. As an early April Fools’ present, Google has rolled out Pac-Man Maps so you can gain world dot domination all over the globe.

Google did a Pac-Man Doodle in 2010 that was pretty awesome, but this is on a whole other level. You can toggle between regular maps and Pac-Man mode in the lower left corner of Maps, though I can’t imagine why you would need to turn Pac-Man off. Look out for more Easter eggs on Google and all over the Internet as April 1 approaches.

Go nuts.


March 31, 2015, 10:10 AM

Our Data, Our Health. A Future Tense Event Recap.

When it comes to worries about medical devices, two very different threats are in play. On the one hand, we fear that corporate powers will take data about us from our devices and offer us little in return. On the other hand, we’ve been taught to fear that individual hackers will interfere with the devices themselves, potentially threatening our lives through the very tools designed to preserve them.

At a Future Tense event in Washington, D.C., on April 9, medical device security expert Kevin Fu worked to assuage the latter of these concerns, even as he indirectly approached the former. No matter what, Fu said near the start of his presentation, “Patients are much safer with medical devices than without, even when there are security problems.”


As Fu explained, the real threat with many medical devices can come from the doctors who work on them, especially when those physicians haven’t properly secured their own computers. When doctors use machines infected with malware to service pacemakers and other medical devices, that malware can potentially clog up the workings of those gadgets. Such interference is likely to be entirely accidental, a mere byproduct of the malware’s interaction with the systems it has infected.

Hospitals are especially vulnerable to such threats, partly because they tend to use computers running older operating systems. No longer updated by their manufacturers, these OSes (cough, XP, cough) are often riddled with security vulnerabilities. It therefore becomes all the more important that physicians and medical technicians understand how to properly protect their computers—and thereby their patients. Just as we expect our doctors to scrub in before surgery, so too should we expect them to tidy up their hard drives.

Where poor digital hygiene and nonexistent digital literacy can interfere with medical technologies, manufacturers’ policies, technical complexity, and opacity can keep us from the data that our devices generate. Many of us now produce astonishing amounts of information—whether through dedicated fitness trackers, pedometers in our phones, or more specialized devices like continuous glucose monitors. The trouble is that few of us know what to make of that data, and fewer still know what the companies that collect it are doing with it. Joel Selanikio, a Georgetown University assistant professor of pediatrics and CEO of Magpi, said that we’ve made a deal with the companies that are monitoring us through these devices, but we rarely know what that deal is or what we’ve signed up for.

As Hugo Campos recently explained in Slate, implanted medical device manufacturers rarely grant patients access to data. What’s more, Sara M. Watson, a fellow at the Berkman Center for Internet and Society, pointed out that many data collection companies aren’t currently offering their customers much in return. Fitbit, for example, is doing relatively little to recommend healthy habits, she said. This is partly because our data are too dense to be clear. Producing truly effective results from health data may mean allowing companies to measure our information against that of other people. Our individual data are unlikely to offer truly meaningful insights, and even the medical establishment lacks the resources to sort through the sea of information that we are generating.

As Deborah Estrin, a computer science professor at Cornell Tech and co-founder of Open mHealth, argued, this may mean moving away from a paradigm of data ownership. “From a pragmatic perspective,” she said, “I think a battle we should fight is that people should have access to their data.” That is, individuals should be allowed to examine their own results. Estrin believes they should also be willing to submit those results to others who can help interpret them through large-scale comparative projects and other ventures. Casting a broader analytic net could reveal otherwise unforeseeable connections. Even an individual’s Netflix viewing habits might become a data point if they turned out to link up with other patterns. To yield such results, however, we would also have to cede control of even more of our data.

Naturally, these issues raise important privacy concerns directly related to the questions of cybersecurity. Lucia Savage, chief privacy officer at the Office of the National Coordinator for Health Information Technology, observed that current privacy standards are often ill-equipped to accommodate new forms of medical information. For example, she explained that HIPAA does not require compartmentalization of anything but mental health issues. Meanwhile, Alvaro Bedoya, executive director of the Center on Privacy and Technology, noted that not a single consumer privacy bill has been voted out of committee recently.

Ultimately, concerns about medical devices may be overblown. In a comment that came close to summing up much of the event, Fu said that his biggest concern is the possibility that patients might start to refuse medical care on the basis of sensationalized fears. Instead of wringing our hands, he and his fellow speakers suggested, we would do well to advocate for greater care, clearer policies, and more robust privacy standards.

March 30 2015 2:43 PM

Amazon Is Doing Clandestine Drone Tests in Canada

Amazon has been vocal about its frustration with current FAA restrictions on commercial drones. Now, true to its word, the company has taken its drone research out of the United States and is currently conducting delivery drone tests in Canada. The Guardian visited the company's drone range at an undisclosed location in British Columbia, 2,000 feet from the U.S. border. 

The company wants to use airspace above 200 feet and below 500 feet as a neutral zone for drones. This height range is above most buildings but below planes and helicopters. The Guardian reports that Amazon's drones would weigh less than 55 pounds and carry 5-pound or lighter loads (no lawn chair deliveries yet). The plan is for the drones to fly at 50 miles per hour.


The team running the tests includes aeronautics experts, software developers, a former NASA astronaut, and a former Boeing 787 engineer. The Guardian describes the scene at the test range:

Amazon’s drone visionaries are taking the permissive culture on the Canadian side of the border and using it to fine-tune the essential features of what they hope will become a successful delivery-by-drone system. The Guardian witnessed tests of a hybrid drone that can take off and land vertically as well as fly horizontally.

In December, Amazon’s vice president of global public policy, Paul Misener, wrote to the FAA, “Without the ability to test outdoors in the United States soon, we will have no choice but to divert even more of our [drone] research and development resources abroad.” Welp, here it is. Other companies could easily do the same thing, or may have already.

March 27 2015 6:22 PM

FCC Finally Releases (Heavily Redacted) Manual for Controversial Surveillance Device

Details about StingRays have trickled out slowly, but each new revelation comes with concerning implications for government agencies' ability to access the mobile communications of individuals. The surveillance tools, which pretend to be cell towers so mobile phones will be tricked into connecting to them and revealing their data, are manufactured by the Florida-based Harris Corp. In September, Matthew Keys at the Blot filed a Freedom of Information Act request with the FCC to see the manual for the controversial devices. Six months later, he finally got the documents.

The manual describes two surveillance products, the StingRay and the KingFish (a cheaper and smaller option). Keys writes:

The manual indicates the StingRay and KingFish devices are sold as part of a larger surveillance kit that includes third-party software and laptops. Tables that contain the names of the other equipment is redacted in the copy provided by the FCC, but other records reviewed by TheBlot indicate the laptops are manufactured by Dell and Panasonic, while the software is designed by Pen-Link, a company that makes programs for cellphone forensics.

Redactions in the manual cover things like instructions for operating the devices and diagrams. The manual is loaded with warnings that it contains proprietary information and shouldn't be shared or copied. The document also says that it includes information protected under the International Traffic in Arms Regulation.

The manual is difficult to read, to say the least. One chapter summary says, "This chapter provides a list of features and capabilities of the StingRay II hardware, an equipment inventory, system specifications, and StingRay II setup," followed by near-complete redaction. For example, "The StingRay II chassis REDACTED as shown in Figure 2-5." Figure 2-5 is ... also redacted. Shocking.

As the Blot notes, the manual says that its contents are “associated with the monitoring of cellular transmissions,” even though this phrase seems to be blacked out in other similar parts of the document. The FCC redacted information under the trade secret FOIA Exemption 4.

The most important thing the document reveals is concrete evidence of how StingRays are purchased and distributed, and hints about how they work. Plus, the FCC clearly feels that it and the company that makes these products have something to lose by revealing even just the user manual.

If you’re in the market for a powerful mobile surveillance device, keep in mind that StingRays come with a limited 12-month warranty!

March 27 2015 2:03 PM

Apple Watch Could Make You a Walking Weather Station

Apple’s new watch will come with a suite of health-centric sensors—including, perhaps surprisingly, a barometer intended to track elevation changes during a workout, whether outdoors or inside a building. But for meteorologists, the advent of widespread wearable barometers could be a game-changer for weather forecasting.
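The reason a pressure sensor can count flights of stairs is that air pressure falls predictably with height, so the difference between two readings a few seconds apart reveals how far you've climbed. Here's a rough sketch in Python of how such a conversion might work, using the international barometric formula (the function names are illustrative, and real devices calibrate against local sea-level pressure rather than the fixed standard value used here):

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Approximate altitude in meters via the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def elevation_change(p_start_hpa, p_end_hpa, sea_level_hpa=1013.25):
    """Relative climb (positive) or descent (negative) between two readings."""
    return (pressure_to_altitude(p_end_hpa, sea_level_hpa)
            - pressure_to_altitude(p_start_hpa, sea_level_hpa))
```

Because only the difference between readings matters for tracking a workout, the absolute calibration can be rough, which is part of what makes a wrist-worn sensor practical.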

Last fall, after the announcement of the iPhone 6 and its barometer, meteorologist Cliff Mass wrote a giddy blog post about the promise of smartphone barometers. He said experimental results from his research team show that dense networks of mobile barometers alone can create highly accurate three-dimensional weather maps. That “almost sounds like magic” to Mass.


The implications of highly localized weather forecasts are profound. The Weather Company, which owns the Weather Channel, has used local weather forecasts to drive increased revenue through context-specific advertising for years. But beyond creating clickier ads, dense networks of barometer-enabled smartphones in India and Africa could boost local economies by aiding agriculture and other weather-dependent sectors.

The potential for smartwatches to bring about a new generation of hyper-local forecasts reminds me of the scene in Back to the Future Part II in which Doc Brown waits out a rainstorm down to the second. App developers are already working to make that past future 2015 a reality.

Adam Grossman is the co-founder of Dark Sky, the weather app that Apple featured during its launch event for the watch earlier in March. Though he’s excited about the potential of his new Watch app, he says Apple isn’t letting developers access the on-board barometer sensor yet. Since the watch requires an iPhone to work, its own on-board barometer is essentially redundant—but that may soon change.

“I don’t think Apple wants to require an iPhone with the watch,” said Grossman. “It always takes me longer to fish my phone out of my pocket than it does to check the weather. The watch is the right place for that kind of stuff.”


Grossman hopes that future versions of Dark Sky will collect barometer data in addition to “manual entry” data, like whether it’s currently snowing or raining. After that, he’ll focus on actually using the barometer data to improve the forecast.

Another company is a bit further ahead. Katerina Stroponiati is co-founder of Sunshine, a crowdsourced weather app that’s currently in beta and recently announced a significant new round of funding. Sunshine is planning a public launch in April.

“Sunshine is a weather network based entirely on mobile, which means that instead of just using traditional weather providers like [the National Oceanic and Atmospheric Administration], we use the sensors of the smartphones to collect the data,” Stroponiati said. “The more data, the better.”

Sunshine is planning to launch in cities across the United States once it achieves enough data density to show a measurable improvement over existing forecasts—San Francisco will be one of its initial focus areas. Eventually, the goal is to “build a ground observation network of millions of devices.” Sunshine doesn’t yet have a Watch app but is planning one.

In addition to collecting weather data from phones and wearables, the company plans to use distributed computing on the mobile devices themselves to generate the forecasts. That would help bypass the need for expensive supercomputers.

Though small companies like Dark Sky and Sunshine are promising big results, Mass thinks a true transformation in meteorology will only happen when device makers like Apple and Samsung start to see themselves as weather data providers. Mass currently has access to about 120,000 pressure measurements an hour—enough to improve forecasts in some cases—“but there’s 40 million [mobile barometers] out there. There’s not many people with these apps, that’s the problem.”

March 26 2015 6:50 PM

Instapaper Joins the Slow Creep of Speed Reading

Instapaper, the reader app that lets you save Web pages and read them later, released a new feature on Thursday called Speed Reading. Starting today, users can speed-read 10 articles per month for free, and premium users can do unlimited speed-reading on their mobile devices. The feature joins a growing group of speed-reading software that's pushing the limit of how much content we can consume.

Instapaper offers speed reading as an option within its “action icon” on mobile as well as in its navigation bar on desktop. All you have to do to start blazing through all those articles you (optimistically) saved on the train yesterday is hit the “Speed” button and watch the words fly by. In a blog post, the company explains that this speed-reading approach is called rapid serial visual presentation (RSVP). It's “meant to help you eliminate subvocalization, that voice in the back of your mind repeating words as you read them, and reduce time lost scanning between words. The result is a more focused, faster reading experience,” the company writes.
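Instapaper hasn't published its implementation details beyond that blog post, but the core mechanic of any RSVP reader is easy to sketch: split the text into words and flash each one at a fixed point on screen, pacing the stream by a target words-per-minute rate. A minimal illustration in Python (the extra pause at punctuation is a common RSVP heuristic, not necessarily Instapaper's actual logic):

```python
def rsvp_schedule(text, wpm=400):
    """Yield (word, display_seconds) pairs for a rapid serial visual
    presentation stream at roughly the target words-per-minute rate."""
    base = 60.0 / wpm  # seconds each word stays on screen
    for word in text.split():
        delay = base
        if word.endswith((".", ",", ";", ":", "!", "?")):
            delay *= 2  # linger briefly at clause and sentence boundaries
        yield word, delay
```

A front end would simply draw each word centered on screen for its allotted delay, which is how the technique eliminates the eye movement of ordinary reading.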


Researchers have been studying RSVP for years, and it does seem to significantly increase reading speed by eliminating the need for eye movement. But the research also indicates that comprehension can decline as reading pace increases. At some point there starts to be a tradeoff between speed and understanding.

The Instapaper feature is reminiscent of software from the speed-reading company Spritz, which started licensing its product for inclusion in other apps and services last year. Apps such as Spreeder offer similar functionality, too. Instapaper developed its speed-reading capabilities in-house, but the goal of bringing instant reading-speed improvements to any user who wants them seems similar.

On Slate last year, Jim Pagels wrote of Spreeder, “My dependency on this application is so great that print text now seems difficult to focus on, and I find myself seeking out ebooks rather than print ones so that I can feed them into Spreeder.”

Maybe it's all an elaborate conspiracy to hook us on speed reading so we can ... OK, yeah, it's probably just a cool app feature.