Future Tense
The Citizen's Guide to the Future

Oct. 9 2015 11:42 AM

How Vermont Used Drones After a Train Derailment

When an Amtrak train went off its tracks in the forests outside of Northfield, Vermont, on Oct. 5, state authorities needed pictures of the site to determine how best to respond and to document the damage for later investigations. So they decided to call in the drones, operated by the Spatial Analysis Lab at the University of Vermont. Within two hours, the university’s drone disaster response teams were flying their fixed-wing eBee unmanned aerial vehicle, or UAV, over the crash site.

Within an hour of arriving on the scene, the team’s drone shot 280 images of the derailment, with each high-resolution image linked to a precise location on the ground. By the end of the day, the drone team had processed the images into a seamless orthophotomosaic, a photographic “map” of the crash site that would come in handy for investigators.
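That linkage between pixels and ground coordinates is what makes the mosaic possible: Each geotagged frame carries GPS metadata that photogrammetry software uses to anchor it on the map. As a rough illustration only (not the lab’s actual pipeline), here is a Python sketch of reading those coordinates from a drone JPEG’s EXIF tags; the file name is invented, and the Pillow library is an assumed choice.

```python
# A minimal sketch, assuming geotagged JPEGs straight off the drone.
# Requires Pillow (pip install Pillow); the file name below is made up.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_coordinates(path):
    """Return (lat, lon) in decimal degrees from a JPEG's EXIF, or None."""
    exif = Image.open(path)._getexif() or {}
    # EXIF tag 34853 holds the GPS sub-dictionary.
    gps = {GPSTAGS.get(k, k): v for k, v in exif.get(34853, {}).items()}
    if "GPSLatitude" not in gps:
        return None

    def to_degrees(dms, ref):
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_coordinates("ebee_frame_0001.jpg"))  # e.g., (44.15, -72.66)
```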


As the Vermonter crash response shows, drones have in recent years become an easy and inexpensive way to collect invaluable images of accident and disaster sites: faster than a satellite and cheaper than a manned helicopter or airplane. Disaster response drones have been used in Nepal, Haiti, and Vanuatu and are increasingly finding their way into the toolkit of disaster response teams in the U.S.—including in Vermont, a surprisingly ideal testing ground for disaster drones.

Jarlath O’Neill-Dunne is the director of the Spatial Analysis Lab at the University of Vermont and served as mission commander during the response to the derailed train. A trained forester with military experience, O’Neill-Dunne first became interested in using drones for responding to and documenting disasters after Hurricane Irene swept through Vermont in 2011, destroying homes, roads, and bridges.

“It caused havoc for us,” says O’Neill-Dunne of the hurricane. “We got a foot of rain in about an hour. … We had the ability to use satellites, but we found it wasn’t really that effective.” Clouds regularly covered relevant areas, and getting the right imagery to the right people on time proved to be a huge hassle. O’Neill-Dunne applied for a Department of Transportation grant, intending to study how drone imagery could help document destruction and get transportation going again after a natural disaster. Since then, the lab has acquired two eBee drones from drone-maker senseFly: black-and-yellow foam flying wings that weigh about 1.5 pounds and are equipped with sophisticated automatic flight-planning technology. As the team has gained experience flying the drones and using the data, it has developed a swift but formal protocol for deploying to the scene—with checklists and an established mission structure intended to keep the response organized and safe.

Unlike many other drone pioneers in the U.S., O’Neill-Dunne’s lab has an easy relationship with state and federal regulators: The drone pilots work closely with state airspace authorities, and Vermont’s limited land area and small population have made the state a good testing ground for drone disaster response tactics. The University of Vermont also holds a Section 333 exemption from the Federal Aviation Administration, allowing it to legally use the UAVs under controlled conditions throughout the state.

Even before the derailment, the drones proved their utility for small-town disasters after destructive rainstorms hit central Vermont on July 19. In the riverside town of Plainfield, the team used the eBee to carry out a rapid assessment of the damage to homes and roads and studied the aerial imagery to figure out how much wood and other debris was clogging local streams, potentially making bridges impassable.

In Barre, the team used the aerial imagery to document flooding destruction and to trace the spread of sediment in homes and on roads in exact detail—specific data that would make it easier for homeowners to get disaster relief assistance from FEMA. “Drone imagery didn’t help with the response—everyone knew whose house had flooded. But if you apply for disaster assistance, now you have survey-grade data,” says O’Neill-Dunne.

Will drones soon become a pedestrian part of disaster response, as expected as fire trucks and ambulances? While it’s quite possible, plenty of challenges still stand in the way of drone disaster response spreading to the rest of the country. For one thing, there’s public concern over privacy and how camera-equipped drones might violate it. O’Neill-Dunne is sensitive to these worries but points out that the nadir (straight-down) drone imagery he shoots isn’t much good for spying, considering that it’s hard to positively identify someone by looking at the top of their head. “It’s not a surveillance tool, it’s a mapping tool. We’re trying to use it in that realm, with the hope of alleviating the concerns of a lot of people,” he says.

Regulatory issues remain the biggest hurdle for disaster drones: The technology remains in a tricky legal gray area in the U.S., with final federal regulations set to come out sometime next year. That’s worrisome for drone disaster responders like O’Neill-Dunne. “If we had legislation banning drones flying over private property, especially in midmorning when people aren’t home and can’t give permission, we couldn’t do it,” he says. “We do need to consider privacy concerns, but is there a difference between someone shooting an image of your house from their car, or a drone, or a satellite?”

Ultimately, O’Neill-Dunne believes that the sheer usefulness of drone technology will overcome public trepidation, especially when it comes to small towns that lack the resources to send out helicopters or purchase pricey satellite imagery when disaster strikes. Instead of relying on bigger state or federal agencies, small American towns—already proud of their self-reliance—may soon be able to use drones to deal with both natural and man-made disasters.

This article is part of a Future Tense series on the future of drones and is part of a larger project, supported by a grant from Omidyar Network and Humanity United, that includes a drone primer from New America.

Oct. 9 2015 11:34 AM

California Now Has the Most All-Encompassing Digital Privacy Law in the Country

California continued its long-standing tradition of forward-thinking privacy laws Thursday when Gov. Jerry Brown signed a sweeping law protecting digital privacy rights.

The landmark Electronic Communications Privacy Act bars any state law enforcement agency or other investigative entity from compelling a business to turn over any metadata or digital communications—including emails, texts, and documents stored in the cloud—without a warrant. It also requires a warrant to track the location of electronic devices like mobile phones, or to search them.


The legislation, which easily passed the Legislature last month, is the most comprehensive in the country, says the ACLU.

“This is a landmark win for digital privacy and all Californians,” Nicole Ozer, technology and civil liberties policy director at the ACLU of California, said in a statement. “We hope this is a model for the rest of the nation in protecting our digital privacy rights.”

Five other states have warrant protection for content, and nine others have warrant protection for GPS location tracking. But California is the first to enact a comprehensive law protecting location data, content, metadata, and device searches, Ozer told Wired.

“This is really a comprehensive update for the modern digital age,” she said.

State senators Mark Leno, D-San Francisco, and Joel Anderson, R-Alpine, wrote the legislation earlier this year to give digital data the same kinds of protection that non-digital communications have.

“For what logical reason should a handwritten letter stored in a desk drawer enjoy more protection from warrantless government surveillance than an email sent to a colleague or a text message to a loved one?” Leno said earlier this year. “This is nonsensical and violates the right to liberty and privacy that every Californian expects under the constitution.”

The bill enjoyed widespread support among civil libertarians like the American Civil Liberties Union and the Electronic Frontier Foundation as well as tech companies like Apple, Google, Facebook, Dropbox, LinkedIn, and Twitter, which have headquarters in California. It also had huge bipartisan support among state lawmakers.

“For too long, California’s digital privacy laws have been stuck in the Dark Ages, leaving our personal emails, text messages, photos and smartphones increasingly vulnerable to warrantless searches,” Leno said in a statement on Thursday. “That ends today with the Governor’s signature of CalECPA, a carefully crafted law that protects personal information of all Californians. The bill also ensures that law enforcement officials have the tools they need to continue to fight crime in the digital age.”

The law applies only to California law enforcement entities; law enforcement agencies in other states are bound by the laws of their own jurisdictions, which is why Ozer and others say it’s important to get similar comprehensive laws passed elsewhere.

The law not only places California at the forefront of digital privacy protection among the states; it also outpaces the federal government, where such efforts have stalled.

Civil libertarians and others have long lobbied federal lawmakers to update the Electronic Communications Privacy Act to offer such protection nationwide. An amendment to that law has been wending its way through Capitol Hill, where it has 300 co-sponsors. But the proposal is less comprehensive than the law Brown signed, focusing only on digital content. Currently, the federal ECPA requires a warrant for stored content that is newer than 180 days; the amendment would extend the warrant requirement to all digital content regardless of age.

California has long led the way in privacy protection. Voters amended the state constitution in the 1970s to provide explicit privacy rights far more robust than those guaranteed by the Fourth Amendment of the U.S. Constitution. But while the state amendment ensured a right to privacy for all Californians, lawmakers couldn’t envision the technological advances that would come in the decades to follow. The law that Brown signed today closes surveillance loopholes left by that amendment and “codifies what was intended by that privacy right,” Ozer says.

“We certainly hope that this bill is a clarion call [for the federal amendment],” she told Wired. “This is not only a comprehensive update for all Californians, but hopefully is a model for making sure that all Americans have this kind of digital privacy protection.”

Oct. 8 2015 1:11 PM

Here’s What Facebook Is Giving Us Instead of a Dislike Button

Mark Zuckerberg said last month that Facebook was building something along the lines of a dislike button, but he wasn’t clear on what that meant. Now, we can all stop speculating: Facebook is testing a new set of emoji-based “reactions” to add some nuance to the social network.

In addition to “liking” something with a thumbs up, you can now express sentiments of love, laughter, happiness, surprise, sadness, and anger—a bit like how you can react to messages on the chat platform Slack. The buttons line up with Zuckerberg’s assertion that “what [users] really want is an ability to express empathy.” The emoji rollout is the closest thing we’ll get to the fervently requested dislike button, which was probably never going to happen.


When Zuckerberg alluded to the feature in September, Slate’s Will Oremus prophesied about what it could be:

If I had to guess, I’d say the most likely possibility is this: Facebook will give you the option, when you post something, to enable your friends and followers to respond with a button other than “like,” such as “sympathize,” or “agree,” or, I don’t know, “hug”—but only for that specific post. It’s possible the word “dislike” will be among those options, although I still think that’s unlikely.

We can’t “hug” a post (yet), but Oremus was right about the expanded reactions. They will give users more control over how they interact with friends or news articles. If you really dig your cousin’s funny baby video, “yay” it. Shocked that McDonald’s is serving breakfast all day? You can respond with a “wow.” The options also enable users to express sadness or anger over tragic news events, which don’t beg to be “liked.”

So far, Facebook is only testing the feature in Spain and Ireland, so it can work out the buttons’ usability before their full release. The new reactions will be available on both mobile and desktop.

Not only will the new button additions quench Facebook’s thirst for subtlety, but they will also help the data-hungry company “fine-tune its news-feed algorithms,” as Oremus wrote. Whether that’s good or bad, it’s sure to make news feeds that much more individualized.

Oct. 8 2015 1:09 PM

Why You Should Eat Your Airplane Boarding Pass Once You Take Your Seat

In honor of National Cyber Security Awareness Month, here’s one more thing to worry about: airplane tickets. Earlier this week, Brian Krebs reported on his blog that it’s possible to extract a surprising amount of information from the barcodes and QR codes on boarding passes.

Krebs explains that one of his readers wrote in with this information after realizing that he was able to use an online barcode reader to decode the information in a photo of a boarding pass that a friend had posted to Facebook. In addition to easily accessible data like the ticket holder’s name, the code yielded details such as his frequent flyer number and his travel record locator.
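The format itself is no secret: Most airlines encode these fields using IATA’s Bar Coded Boarding Pass (BCBP) layout, a fixed-width string of mandatory fields followed by airline-specific optional ones (where data like frequent flyer numbers typically lives). As a hypothetical illustration, a decoded barcode can be sliced apart in a few lines of Python; the sample pass below is invented, not a real booking.

```python
# A minimal sketch of the mandatory fields in an IATA BCBP string,
# the fixed-width format most boarding pass barcodes use.

def parse_bcbp(data: str) -> dict:
    return {
        "format_code":    data[0],              # 'M' for a standard pass
        "legs":           int(data[1]),         # number of flight legs
        "passenger_name": data[2:22].strip(),   # SURNAME/FIRSTNAME
        "record_locator": data[23:30].strip(),  # the PNR: often enough to
                                                # pull up the booking online
        "from_airport":   data[30:33],
        "to_airport":     data[33:36],
        "carrier":        data[36:39].strip(),
        "flight_number":  data[39:44].strip(),
        "seat":           data[48:52].strip(),
    }

# An invented example pass:
sample = "M1DOE/JANE            EABC123 BTVORDUA 0123 278Y012A0042 100"
print(parse_bcbp(sample))
# {'format_code': 'M', 'legs': 1, 'passenger_name': 'DOE/JANE',
#  'record_locator': 'ABC123', 'from_airport': 'BTV', 'to_airport': 'ORD',
#  'carrier': 'UA', 'flight_number': '0123', 'seat': '012A'}
```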


If this doesn’t sound like much, consider that Krebs’ reader was able to access his friend’s account with Lufthansa using the data he’d pulled. Coupled with some basic social hacking—using publicly available Facebook information to guess the answers to security questions—this allowed him far-reaching access to, and control over, otherwise private resources. (I reached out to Lufthansa for comment but did not get a response.)

As some commenters on Krebs’ post point out, different airlines package different amounts of information into the scannable portions of their documents. Some apparently include nothing more than what’s already indicated in plain text. Nevertheless, it’s important to recognize that the seemingly illegible sections of our tickets may be disquietingly chatty to those in the know.

Krebs cites one researcher who advises, “I would also recommend not leaving your boarding pass on the aircraft when you disembark.” Meanwhile, Krebs himself writes that you may want to “consider tossing the boarding pass into a document shredder.” Here at Slate, we fear that may not be enough. We suggest that you instead eat your boarding pass as soon as you have found your seat. Chew thoroughly.

Who ever said airplane food was terrible?

Oct. 8 2015 12:06 PM

Netizen Report: Did Egypt Block Voice Calls on WhatsApp and Skype?

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Mary Aviles, Ellery Roberts Biddle, Marianne Diaz, Lisa Ferguson, Weiping Li, and Sarah Myers West contributed to this report.

In Egypt, many users reported this week that they could not use voice-calling features on Skype, WhatsApp, or Viber. Technology consultant and digital activist Amr Gharbeia has been tracking these developments and interviewing users across the country to ascertain what exactly is happening and whether the apparent blocks on certain services came as the result of a government order. On Facebook, Gharbeia wrote that long-distance VoIP calls via Skype have been blocked in Egypt since 2010 but that this appeared to have extended to WhatsApp during the first week of October. He continued:

There are conflicting reports about the state’s policy regarding blocking. Official statements from NTRA [Egypt’s telecommunications regulator] deny blocking, customer service representatives publicly deny blocking, but after pressure they mention to a lot of complaining users that blocking decision is ordered by NTRA. …
Using the Internet for long distance calling is illegal, punishable by jail or fine according to the article 72 of the telecommunications law issued in 2003. This law is one of many flaws in the telecommunications law, but even with that, it is stated that protecting consumers’ interests is part of NTRA’s mission.

Thai netizens stage virtual sit-in
The Thai Ministry of Information and Communications Technology has been ordered to reduce infrastructure connecting Thailand to the global Internet so that all traffic will pass through a single gateway, allowing the government greater capacity to monitor and potentially filter online content. According to the Committee to Protect Journalists, the National Council for Peace and Order (the ruling junta) made establishing a single gateway for Internet traffic an “urgent priority” in the days following the coup in May 2014. In response to the proposal, Thai netizens staged a “virtual sit-in” on Sept. 30 by flooding government websites with traffic that forced them to go offline. The gateway is another in a series of measures the NCPO has taken to curtail free expression since the coup, including passing a draconian security law and cracking down on journalists and independent media.

Two arrested in Lebanon for Facebook posts
Journalist Mohammad Nazzal was sentenced in absentia to six months in jail and a fine of $700 over a post he published on Facebook two years ago that contained the sentence “the judicial system is as low as my shoes,” which authorities say constituted libel and defamation. In a separate case, political activist Michel Douaihy was detained for nine days over a Facebook post in which he criticized authorities’ treatment of radical Sunni cleric Ahmed Al Assir during his arrest last August.

Vietnamese blogger released from prison
Vietnamese blogger Ta Phong Tan was released from prison after serving three years of a 10-year sentence. She was arrested on anti-state charges for her work reporting on corruption and abuse within the police and court systems.

Student detained in Indonesia for publishing evidence of police extortion
An Indonesian university student was detained and charged with defaming a police officer after uploading a video of the officer attempting to extort money from him during a routine traffic stop. He has been charged under Indonesia’s Electronic Transaction and Information Act.

Jailed Syrian developer moved to undisclosed location
Syrian-Palestinian software engineer Bassel Khartabil was moved from his prison to an unknown location, leading to fears that his life may be in danger. Bassel was detained by the Syrian government on March 15, 2012, and tortured for five days before being held incommunicado for nine months. He was finally charged in December 2012 with “spying for an enemy state” and was transferred to Adra Prison, where he remained until Oct. 3, 2015. An online petition calls for his immediate release.

Organization of American States condemns Peru’s “Stalker Law”
The special rapporteur for free expression for the Organization of American States raised concerns about a proposed surveillance law in a letter to the Peruvian government. The law, formally known as Legislative Decree No. 1182 but known among digital activists as “Ley Stalker” (“Stalker Law”), would allow law enforcement authorities to access mobile phone data without a warrant and would require telecommunications companies to retain data for up to three years. The rapporteur emphasized the need for such legislation to undergo public debate and asked for more information about the law’s justification and its proportionality under existing legal standards.

No more ‘Safe Harbor’ for US tech companies, says EU court
The European Court of Justice declared the EU-U.S. Safe Harbor Agreement to be invalid in a landmark decision this week. Under Safe Harbor, the EU allowed the transfer of commercial data between the EU and the U.S. despite the EU’s higher standards for privacy protections, provided that signatory U.S. companies agreed to comply with a set of privacy principles. However, the court found that the revelations of PRISM and U.S. government surveillance of data undermined this agreement and Europeans’ right to privacy.

New Research
“Balancing Act: Press Freedom at Risk as EU Struggles to Match Action With Values”—Committee to Protect Journalists

Oct. 7 2015 1:40 PM

How Can Science Help With Diplomacy—and Diplomacy Help With Science?

Diplomacy is an art, not a science. But science is increasingly playing an important role in diplomacy. Some of our future’s biggest challenges—like climate change—can’t be contained within borders, which means that nations around the world need to get on the same page. Meanwhile, science itself can be used as an olive branch: Even when two countries' political leaders aren’t on good terms, their scientists can exchange ideas, paving the way for more communication down the road. It happened during the Cold War and more recently before U.S.-Cuba relations normalized. So how should science be used in diplomacy?

Join us in Washington, D.C., at 6 p.m. on Wednesday, Oct. 21, for a happy hour event at the Arizona State University Washington Center with Frances Colón, deputy science and technology adviser to the secretary of state, and Marga Gual Soler, project director at the Center for Science Diplomacy and assistant research professor at the School for the Future of Innovation in Society at ASU. While you enjoy drinks and snacks, Slate staff writer Joshua Keating will discuss science as a platform for diplomacy with Colón and Soler.


To attend, please RSVP to futuretensedc@gmail.com with your name, email address, and any affiliation you’d like to share. You may RSVP for yourself and up to one guest, and please include your guest’s name in your response. Unfortunately, only a limited number of seats are available.

Future Tense regularly hosts happy hours and other evening events—like our “My Favorite Movie” series, in which leaders in technology and science host a screening of their favorite film with tech and science themes. So subscribe to our newsletter below and follow us on Twitter to learn about all our upcoming events.

Oct. 7 2015 11:59 AM

Future Tense Newsletter: How to Look Hot on Mars

Hi Slate-liens,

In the past week, Future Tense explored survival in space, the latest cybersecurity developments, and more. Join us, won’t you?


The Martian may be pure hokum, but according to Ellen Stofan, NASA’s chief scientist, if you look at the real science of the movie, it’s not entirely implausible hokum. In fact, as Rachel Gross wrote, the film practically worships science, but it also maintains its sense of wonder. What did I think? [redacted] loved every minute of it.

Maybe we still had space on the brain, because we also checked in with the designers who are trying to make spacesuits sexier. If we’re headed in that direction, though, we may want to rethink the often-sexist language of spaceflight.

Meanwhile, it’s the most wonderful time of the year: National Cyber Security Awareness Month! As part of the festivities, there was a scary new attack on Microsoft Outlook’s Web application. Ross Schulman discussed three laws that Congress should change to let cybersecurity researchers do their jobs. In the U.K., meanwhile, they’re attempting to woo a new generation of hackers with a buggy, terrible game. Good luck with that, folks.

Here are a few other stories that were really having a moment this week:

  • Digital rights: Rebecca Wexler argues that defendants should have the right to inspect code used to convict them.
  • Opting out: Verizon is sharing private, identifying information about its subscribers with advertisers. We explained how you can opt out.
  • Butt dials: Our phones may be smarter than ever, but that doesn’t mean they aren’t calling 911 from our pockets.
  • Smart cities: Streetlamps of the future may be outfitted with sensors, cameras, and more. Does everything have to be connected to the Internet?

En route to the Red Planet,

Jacob Brogan

for Future Tense

Oct. 6 2015 2:54 PM

Court Strikes Down Data-Transfer Pact That Lets Tech Companies Move European User Data

On Tuesday, the highest court in the European Union ended a data-transfer agreement known as the “Safe Harbor” pact after 15 years of use.

The agreement allowed tech companies to move user data from European to United States data centers if the companies offered certain privacy settings and met other minimum requirements. Eliminating the pact is a win for privacy advocates, who criticized it for potentially exposing EU user data to U.S. surveillance. But it could have ramifications for the tech industry, since companies will now have to rely on data centers that are physically in the EU or find other legal justifications to allow data to flow to the United States.


The Wall Street Journal estimates that roughly 4,500 companies, from tiny startups to tech giants, were invoking the pact in their daily operations. Some, like Microsoft and Facebook, have backup plans, but small companies with limited resources may struggle to implement new strategies. The worst-case scenario would be that European customers can't use certain U.S. services, leading to problems for international trade.

Brian Hengesbaugh, a privacy lawyer with Baker & McKenzie in Chicago who worked on the original pact, told the New York Times, “We can’t assume that anything is now safe. ... The ruling is so sweepingly broad that any mechanism used to transfer data from Europe could be under threat.”

The decision also fits into a broader discussion about how to defend users’ privacy rights at the largest scale. “Today’s Judgment puts people’s fundamental right to privacy before profit,” Renata Avila, the global campaign manager of the World Wide Web Foundation, said in a statement. “We hope that this EU ruling will also inspire countries around the world to review their data protection and exchange policies, and enhance the protection of their citizens.”

Oct. 6 2015 2:44 PM

Here’s How To Opt Out of Verizon’s Scary New Privacy Violation

This October just got a little more chilling. On Tuesday, ProPublica’s Julia Angwin and Jeff Larson called attention to an announcement that Verizon had changed its privacy policy, allowing it to share personally identifying information about its subscribers with AOL’s ad network. In the wake of this change, if you’re a Verizon customer, your phone isn’t just tracking information “such as gender, age range, and interests”—it’s also using that information to help people sell you things.

Maddeningly, this intrusive policy shift requires that users actively opt out if they don’t want to be directly monitored. As Angwin notes on Twitter, you can exempt yourself from the initiative here (she and Larson point out that you can also call 866-211-0874), though you’ll still have to manually log into your account or otherwise wrestle with Verizon customer service.


This creepy corporate synergy comes on the heels of Verizon’s $4.4 billion purchase of AOL earlier this year. In its initial Privacy Notice, Verizon coyly suggests that this isn’t that big of a deal. One sentence reads, “We do not share information that identifies you personally as part of these programs other than with vendors and partners who do work for us.” That’s an awfully big “other than.” Per ProPublica, “AOL’s network is on 40 percent of websites,” which should make for quite a few “vendors and partners.”

Given the season, this news has an appropriately haunting character: It appears to be connected to controversial “zombie cookies” that relied on undeletable information buried in Verizon phones and tablets to track customers’ browsing habits, even if the user deleted the cookie. Though the company responsible for those cookies supposedly killed off the program after protests, the technology that empowered it seems to have risen once again. Privacy advocates should have gone for a shot to the head the first time around. When you’re dealing with zombies, it’s the only way to be sure.

Oct. 6 2015 12:28 PM

Convicted by Code

Secret code is everywhere—in elevators, airplanes, medical devices. By refusing to publish the source code for software, companies make it impossible for third parties to inspect it, even when that code has enormous effects on society and policy. Secret code can harbor security flaws that leave us vulnerable to hacks and data leaks. It can threaten privacy by gathering information about us without our knowledge. It may interfere with equal treatment under law if the government relies on it to determine our eligibility for benefits or whether to put us on a no-fly list. And secret code enables cheaters and hides mistakes, as with Volkswagen: The company admitted recently that it used covert software to cheat emissions tests for 11 million diesel cars spewing smog at 40 times the legal limit.

But as shocking as Volkswagen’s fraud may be, it only heralds more of its kind. It’s time to address one of the most urgent if overlooked tech transparency issues—secret code in the criminal justice system. Today, closed, proprietary software can put you in prison or even on death row. And in most U.S. jurisdictions you still wouldn’t have the right to inspect it. In short, prosecutors have a Volkswagen problem.


Take California. Defendant Martell Chubbs currently faces murder charges for a 1977 cold case in which the only evidence against him is a DNA match by a proprietary computer program. Chubbs, who ran a small home-repair business at the time of his arrest, asked to inspect the software’s source code in order to challenge the accuracy of its results. Chubbs sought to determine whether the code properly implements established scientific procedures for DNA matching and if it operates the way its manufacturer claims. But the manufacturer argued that the defense attorney might steal or duplicate the code and cause the company to lose money. The court denied Chubbs’ request, leaving him free to examine the state’s expert witness but not the tool that the witness relied on. Courts in Pennsylvania, North Carolina, Florida, and elsewhere have made similar rulings.

We need to trust new technologies to help us find and convict criminals but also to exonerate the innocent. Proprietary software interferes with that trust in a growing number of investigative and forensic devices, from DNA testing to facial recognition software to algorithms that tell police where to look for future crimes. Inspecting the software isn’t just good for defendants, though—disclosing code to defense experts helped the New Jersey Supreme Court confirm the scientific reliability of a breathalyzer.

Short-circuiting defendants’ ability to cross-examine forensic evidence is not only unjust—it paves the way for bad science. Experts have described cross-examination as “the greatest legal engine ever invented for the discovery of truth.” But recent revelations exposed an epidemic of bad science undermining criminal justice. Studies have disputed the scientific validity of pattern matching in bite marks, arson, hair and fiber, shaken baby syndrome diagnoses, ballistics, dog-scent lineups, blood spatter evidence, and fingerprint matching. Massachusetts is struggling to handle the fallout from a crime laboratory technician’s forgery of results that tainted evidence in tens of thousands of criminal cases. And the Innocence Project reports that bad forensic science contributed to the wrongful convictions of 47 percent of exonerees. The National Academy of Sciences has blamed the crisis in part on a lack of peer review in forensic disciplines.

Nor is software immune. Coding errors have been found to alter DNA likelihood ratios by a factor of 10, causing prosecutors in Australia to replace 24 expert witness statements in criminal cases. When defense experts identified a bug in breathalyzer software, the Minnesota Supreme Court barred the affected test from evidence in all future trials. Three of the state’s highest justices argued to admit evidence of additional alleged code defects so that defendants could challenge the credibility of future tests.
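To see why one bug can move a result tenfold, it helps to look at the arithmetic of a likelihood ratio, which divides the probability of the evidence under the prosecution’s hypothesis by its probability under the defense’s. The numbers below are invented for illustration and are not drawn from the Australian case.

```python
# Invented illustration of how a small coding error shifts a DNA
# likelihood ratio (LR). LR > 1 favors the prosecution hypothesis.

def likelihood_ratio(p_given_suspect: float, random_match_prob: float) -> float:
    return p_given_suspect / random_match_prob

# Correct: random match probability of 1 in 10 million.
correct = likelihood_ratio(1.0, 1e-7)   # LR = 10,000,000

# Buggy: a misplaced exponent on an allele frequency drops one
# factor of 10, overstating the evidence tenfold.
buggy = likelihood_ratio(1.0, 1e-8)     # LR = 100,000,000

print(f"correct LR: {correct:,.0f}")
print(f"buggy LR:   {buggy:,.0f}")      # 10x stronger than warranted
```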

Cross-examination can help to protect against error—and even fraud—in forensic science and tech. But for that “legal engine” to work, defendants need to know the bases of state claims. Indeed, when federal district Judge Jed S. Rakoff of Manhattan resigned in protest from President Obama’s commission on forensic sciences, he warned that if defendants lack access to information for cross-examination, forensic testimony is “nothing more than trial by ambush.”

Rakoff’s warning is particularly relevant for software in forensic devices. Because eliminating errors from code is so hard, experts have endorsed openness to public scrutiny as the surest way to keep software secure. Similarly, requiring the government to rely exclusively on open-source forensic tools would crowd-source cross-examination of forensic device software. Forensic device manufacturers, which sell exclusively to government crime laboratories, may lack incentives to conduct the obsessive quality testing required.

To be sure, government regulators currently conduct independent validation tests for at least some digital forensic tools. But even regulators may be unable to audit the code in the devices they test, instead merely evaluating how these technologies perform in controlled laboratory environments. Such “black box” testing wasn’t enough for the Environmental Protection Agency to catch Volkswagen’s fraud, and it won’t be enough to guarantee the quality of digital forensic technologies, either.

The Supreme Court has long recognized that making criminal trials transparent helps to safeguard public trust in their fairness and legitimacy. Secrecy about what’s under the hood of digital forensic devices casts doubt on this process. Criminal defendants facing incarceration or death should have a right to inspect the secret code in the devices used to convict them.