Maybe It’s Time for Reporters to Start Wearing Body Cameras
Maybe journalists should start wearing body cameras. Or maybe politicians should be required to wear them.
I’m kind of, sort of joking. Or maybe not.
Wednesday’s roughing up of a Guardian reporter by a Montana congressional candidate (Republican, of course) might have ended up as a battle of allegations. The candidate and his staff issued a statement blaming the “liberal journalist” and his “aggressive behavior,” but those claims were flatly contradicted—by eyewitnesses, from Fox News of all places, but most importantly by audio tape. Given that evidence, the local sheriff charged Greg Gianforte, a former technology executive, with assault (albeit only a misdemeanor).
But let’s face it. Video would have been even better.
This logic parallels some of the reasoning people apply in other situations. In one way it’s like recent calls for police officers to wear body cameras that capture their interactions with the public, for the protection of both. But that’s not quite the right analogy, because in some communities police body cameras are seen more as protection for civilians from abuse than for police from false accusations; done right, of course, they protect both sides.
It’s also like the way that motorists in some countries, such as Russia, use dashboard cameras to create a video record of their driving, to help avoid insurance scams and lawsuits. (The world got a bonus from this practice when a dashboard camera captured a meteor searing through the atmosphere over the Ural Mountains in 2013.) This may be a closer analogy because it’s almost entirely about protecting the person doing the recording.
Donald Trump’s personal attacks on journalism, at least reporting that doesn’t praise him and pound his growing opposition, have so far been rhetorical, not physical (though he’s filed a number of libel suits over the years). But his supporters are upping the ante, with threats and arrests. Earlier this month, a journalist in West Virginia was arrested after “yelling” questions at Health and Human Services Secretary Tom Price. “[O]fficials say he was ‘trying aggressively’ to breach Secret Service security,” NPR reported. About a week ago, a CQ Roll Call reporter says he was “pinned” against a wall by security guards while trying to ask an FCC commissioner a question.
Now, in Gianforte’s “body slam” of Ben Jacobs, we have an actual pounding.
As American journalists slowly wake up to the uncomfortable reality that war has been declared on the honest members of the craft, they need to fight back—not against Trump and his acolytes but for freedom of the press and freedom of expression in general. In the process, they need to start taking self-defense a bit more seriously.
If I were a journalist heading into territory where many of the people in the room considered me the enemy, I’d take precautions. One of them might well be a body camera.
Every reporter carries a camera at all times in any case: the one in his or her phone. As electronic gear gets more powerful and shrinks in size, it’ll be trivial to have a body camera, or several, in one’s clothes. Knowing that journalists were wearing body cameras would surely deter some attackers. Not all, of course: Certain folks may well decide it’s fine to assault journalists and take their chances on friendly local law-enforcement or a hung jury in Trump-territory courts.
Of course, Trump might have one reason to favor body cameras on reporters. It would make life harder for anonymous sources. But not even this Constitutionally challenged president could entertain a law requiring such a thing.
While bodycams might be protective for journos, suggesting this kind of thing makes me uncomfortable. The more cameras we encounter in our daily lives, the more we’re being spied on, by governments, corporations, friends, and others. Technology is the modern double-edged sword, and a surveillance society feels like a cost not worth the benefit. We need to have norms, and laws, aimed in this case at boosting public knowledge—but not helping to create a Panopticon in the process.
So maybe the answer isn’t for reporters to wear cameras after all. Maybe—more like the police-worn body cameras—we should require politicians to wear cameras and microphones whenever they’re campaigning or involved in political activities of any kind. (Again, kind of, sort of joking...)
After all, they’re doing the public’s business. We should know what they’re up to, right? Deterring a politician with violent tendencies from slamming a reporter is fine, but deterring a politician from making sleazy deals with colleagues, lobbyists, and campaign funders sounds even finer.
Documents Suggest Vermont’s DMV Illegally Used Facial Recognition Software
The Vermont Department of Motor Vehicles landed in hot water Wednesday after reports surfaced alleging it used biometric facial recognition software to aid law enforcement investigations in defiance of state statutes.
The American Civil Liberties Union of Vermont was the first to sound the alarm. According to a Wednesday press release, the nonpartisan advocacy organization “obtained internal Department of Motor Vehicles records describing a DMV facial recognition program that is banned by Vermont state law and compromises the privacy and security of thousands of Vermonters.” It also delivered a letter to Vermont DMV Commissioner Robert Ide on Tuesday calling for “an immediate end to the program,” which the ACLU of Vermont says has operated since 2012. The organization obtained the documents through a 2016 public records request.
Netizen Report: In India and Jamaica, Women Face Threats for Resisting Misogyny Online
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Leila Nachawati, Inji Pennu, and Sarah Myers West contributed to this report.
This week, the Guardian released a series of leaked documents outlining Facebook’s internal policies and practices for moderating content on its platform. Among the many revelations, the documents make clear that sexual and gender-based harassment—including threats of sexual violence and revenge porn—remain an endemic problem on the platform and in real life.
Facebook has also increasingly become a space where users document their experiences of such incidents firsthand. This too can carry consequences, especially for women.
One recent example surfaced this week when Varsha Dongre, an Indian civil service officer and deputy superintendent of a jail, was suspended and transferred to a remote prison about 200 miles from her current post. Her crime? Speaking about human rights violations against indigenous Adivasi girls on Facebook. In the post she wrote:
I have seen 14-16-year-old Adivasi girls being stripped naked in police stations and tortured. They were given electric shocks on their wrists and breasts. I have seen the marks. It horrified me. Why did they use third degree torture on minors?
The order calling for her suspension alleges that she made irresponsible statements, cited “false facts,” and went off-duty without approval.
The Indian government has come under harsh criticism from international human rights organizations for torture and abuse of Adivasi women by the Indian police. This has come as part of a broader crackdown on indigenous groups and their efforts to resist deforestation of their land.
In another recent example, in March activist Latoya Nugent was charged under Jamaica’s Cybercrime Law, after she publicly named on social media alleged perpetrators of sexual violence. Nugent is part of the Tambourine Army, a group of women and survivors of sexual violence who are using the internet to share their experiences online. The public prosecutor dropped all charges filed against Nugent on May 17.
Ethiopian opposition activist sentenced to six years for Facebook posts
Social media activist and opposition campaigner Yonatan Tesfaye was sentenced to six years in prison on May 24. This comes after he already spent more than a year in jail in Addis Ababa awaiting trial. The primary evidence against Tesfaye, who was convicted of violating the country’s notorious Anti-Terrorism Proclamation, came in the form of several Facebook posts that the court said amounted to incitement to violence. The screenshot below comes from the Ethiopian Human Rights Project’s translation of the charges filed against Tesfaye. The Facebook post quoted here is one of several presented in the filing:
Since November 2015, Ethiopian police have killed several hundred protesters taking part in a land rights movement.
Malaysian news sites face cybercrime charges for publishing video of public event
The heads of two independent media sites in Malaysia, KiniTV and Malaysiakini, are facing cybercrime charges for posting a video of a politician criticizing the attorney general. The video was posted last year and features a politician criticizing the office of the attorney general for failing to identify the role of the prime minister in a corruption scandal around 1MDB, a state-owned investment bank. They face a court hearing on June 15. If found guilty they may face prison terms of up to one year and fines of about $11,500. Malaysian officials suspended the licenses of several news outlets for reporting on the 1MDB scandal, in which the prime minister is accused of embezzling $700 million through the bank.
Azerbaijan censors media for promoting violence (aka covering corruption)
Azerbaijan blocked five independent media outlets, including three online news sites and two satellite TV stations, claiming they “pose a threat” to Azerbaijan’s national security by showing content that purportedly promotes violence and hatred, and violates privacy. Recent coverage by the outlets includes stories about public protests, suicide rates in Azerbaijan, and the financial dealings of Vice President Mehriban Aliyeva’s private foundation. (Aliyeva is also Azerbaijan’s first lady.) The order also establishes grounds to prosecute employees of these news outlets.
Thai police will target viewers of Facebook posts insulting the monarchy
Thai authorities have been at odds with Facebook in recent weeks concerning more than 300 pieces of content on the network that officials say are insulting to the Thai king, and thus violate the country’s notoriously harsh lese-majeste law. (The government is particularly displeased about a widely circulated video that appears to show the king wearing a crop top and eating ice cream in Germany.) Last month, Thai authorities advised Facebook users to unfollow three prominent writers who were known for their critiques of the king. Now, according to the Bangkok Post, they plan to target even those users who merely view content of this nature. Thailand’s Central Investigation Bureau told the Post that the move was “triggered by police limitations in tracking down producers of illegal content posted on social media outlets such as Facebook and YouTube.”
Colombian biologist finally absolved of copyright infringement charges
On May 24, a Colombian court acquitted biologist Diego Gomez of copyright infringement charges in a case that hit a nerve among digital rights advocates across the Americas. A fellow student sued Gomez in 2013 for posting his master’s thesis on the document-sharing website Scribd. Despite the fact that Gomez intended only to share the paper’s findings with his classmates, and that he earned no profit in doing so, the now 29-year-old could have faced a maximum sentence of eight years in prison.
The lawsuit proceeded thanks to minimal legal protections for educational use of copyrighted material in Colombia. Copyright laws in Colombia were reformed in 2007 at the behest of the United States, as part of the two countries’ free trade agreement.
With social media banned, Kashmiris turn to local platform KashBook
A local social network called KashBook has spiked in popularity in the Kashmir Valley where the Indian government has banned dozens of social networks, including Facebook. Developed by 16-year-old Zeyan Shafiq, who described KashBook as “the answer to social media gag,” the website launched in 2013 but is enjoying a renaissance in the absence of other platforms. The website has been blacklisted in Kashmir on a few occasions, prompting Shafiq and his business partner Uzair Jan to transfer the site to a new server, a fix that at least temporarily allows users to regain access.
- “Revealed: Facebook’s Internal Rulebook on Sex, Terrorism and Violence”—The Guardian
- “Tainted Leaks: Disinformation and Phishing With a Russian Nexus”—Citizen Lab
- “The State of Internet Censorship in Indonesia”—Open Observatory of Network Interference
- “Analyzing Accessibility of Wikipedia Projects Around the World”—Berkman Klein Center for Internet and Society
China’s Best Go Player Lost a Game to an A.I. The Chinese Government Censored It.
There are trends, and then there are symbols: narrative inflection points that speak volumes. Tuesday became one of the latter when an A.I. called AlphaGo defeated Chinese national Ke Jie, the world’s top-ranked player in the ancient Chinese strategy game Go. The program narrowly bested Ke in the first of a series of engagements that lasts until Saturday. Held in Wuzhen, China, the matches are taking place under Google’s auspices as part of a conference called the Future of Go Summit.
But the man-vs.-machine battle was barred from view in China. As China Digital Times reported Wednesday, the Chinese government issued censorship directives decreeing that “this match may not be broadcast live in any form and without exception, including text commentary, photography, video streams, self-media accounts and so on. No website (including sports and technology channels) or desktop or mobile apps may issue news alerts or push notifications about the course or result of the match.”
Future Tense Newsletter: Expect More Massive Cyberattacks
Greetings, Future Tensers,
More than 150 countries and 300,000 machines later, many in the digitally connected world breathed a sigh of relief that amateur mistakes kept WannaCry—the ransomware that swept the globe earlier this month—from inflicting its full destructive power. But, as Rob Morgus points out, the digital pandemic shows us that it’s easier than ever to launch a large-scale cyberattack. And, he writes, we need to make serious counterproliferation and contingency plans before the next one strikes.
Part of that involves understanding just how predictable malware like WannaCry was. We’ve got pieces on how hackers schemed about the exploit on the dark web before WannaCry was unleashed, how the National Security Agency’s decision not to disclose it and other vulnerabilities it knew about leaves users at risk, and how software patches are harder to deploy than they seem.
Elsewhere on Future Tense, we’ve been looking at the cruelty and altruism shown by strangers on social media. Molly Olmstead wrote about the Twitter users who heartlessly posted fake pleas for help for fictional missing friends and relatives in the wake of the Manchester bombing. At the other end of the spectrum, Jacob Brogan delved into those viral posts soliciting living organ donations on Facebook. As it turns out, those outpourings of generosity come with complications.
Here are other things we read while scheming about how we’ll use Google’s new vision-based search engine:
- We’ve created a monster: The most resonant science fiction doesn’t predict the future, writes Cory Doctorow. It shows us that though technological change may be inevitable, the ways we build and use it are not. The essay is excerpted from Mary Shelley’s Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds, edited by Arizona State University’s David H. Guston, Ed Finn, and Jason Scott Robert.
- Licensing your DNA: Jacob Brogan explores the thorny consumer data protection issues behind popular home genetic tests.
- Shift on neutral: Despite plans to roll back Obama-era regulations, Mike Godwin and Tom Struble argue that the FCC’s new approach to net neutrality doesn’t necessarily mean an end to an open internet.
- Rouhani revolution?: Iran’s recently re-elected president ran—and won—on a platform of liberalization and openness. But what of his pledges to make the country’s internet access easier, freer, and more affordable for its citizens?
- Robot-free sidewalks: Ian Prasad Philbrick describes one San Francisco representative’s fight against rolling food-delivery robots—and explains how it’s probably too late to put the brakes on the autonomous delivery vehicles.
The Trump Administration Reportedly Wants Authority to Track and Destroy Drones. That Could Be a Problem.
Rightly or wrongly, civilian-operated drones have a bad reputation. Maybe it’s because we conflate them with the military variety or maybe because there are so many myths about them. Whatever the reasons, the Trump administration seems to have bought in to the anti-drone narrative. As the New York Times reported Tuesday, the administration has been circulating a “draft and summary of legislation” that would “give the federal government sweeping powers to track, hack and destroy any type of drone over domestic soil with a new exception to laws governing surveillance, computer privacy and aircraft protection.”
That document—titled “Official Actions to Address Threats Posed by Unmanned Aircraft Systems to Public Safety or Homeland Security”—suggests that these changes are necessary because drone technology threatens a variety of government operations. Among other things, it argues that unmanned aircraft present risks to firefighting, fugitive apprehension, and even “transportation of special nuclear materials.” (Some of those potential dangers, most notably firefighting, appear to be based on real events.)
The draft goes on to say that the legislation will help ensure that personnel involved in such operations will not be held responsible if they take actions to protect missions that are under way. That may be reasonable enough, since, as the Federal Aviation Administration confirmed last year, it’s technically illegal to shoot down a drone. If emergency responders do have to stop a drone that’s actually disrupting their operations, we presumably wouldn’t want them to get into trouble.
Nevertheless, the Times flags some potential issues. For one, it notes, “The government would have to respect ‘privacy, civil rights and civil liberties’ when exercising that power, the draft bill says. But courts would have no jurisdiction to hear lawsuits arising from such activity.” In other words, you’d probably have no real recourse if the military shot down your drone, even if you were flying it innocently. Further, the draft document indicates that any drone brought down by authorities “is subject to forfeiture to the United States,” meaning that you might not even be able to reclaim its battered husk afterward.
There are plenty of other outstanding questions here, including how authorities would stop drones. The draft document calls for “research, testing, training on, and evaluation of any equipment, including any electronic equipment, to determine its capability and utility to enable” the tracking and destruction of drones. It’s not wholly clear what such equipment would entail (presumably something more sophisticated than trained eagles), but the draft suggests that it could involve accessing “radio communications or signals transmitted to or by an unmanned aircraft system.”
It’s worth noting that a system capable of penetrating drone communications might raise serious civil liberty issues, since it would presumably grant law enforcement access to the devices used to control unmanned aircraft, and not just the drones themselves. Given that many drones are controlled by smartphone, the Trump administration may be effectively requesting a back door into our devices, all in the name of protecting us from a risk that may or may not even be real.
Whether that’s even possible is a wholly different story, of course. Still, it may be worth keeping an eye on this legislation, if only to ensure that our fear of minor drone risks doesn’t destroy our digital privacy altogether.
Republicans Want the FCC to Let Them Drop Messages Directly Into Your Voicemail Inbox
The unceasing assault of robocalls makes constantly answering automated calls and deleting voicemail messages annoying enough. But if a Republican-backed proposal before the Federal Communications Commission goes through, you may find that your voicemail inbox has filled up without your phone even ringing.
On Friday, the Republican National Committee, which handles national fundraising and campaigning for the GOP, filed a public comment supporting a proposal currently awaiting judgment by the FCC, Recode’s Tony Romm reported late Tuesday. The petition, filed in March by the marketing firm All About the Message LLC, would permit private companies and political organizations to deposit automated messages into consumers’ voicemail inboxes without causing the cellphones themselves to ring. If the FCC rules in its favor, the proposal would move “ringless voicemail” robocalling technology from a regulatory gray area to legal fair game, potentially opening the floodgates for telemarketers and political organizations to inundate Americans’ voicemails with messages hawking products, services, and candidates for office.
Thanks to the Latest Android Update, Google’s Weird Little Blob Emojis Will Soon Be No More
In perhaps the biggest emoji news since the loss of Apple’s peach butt last November, Google is saying goodbye to the blob. At its annual I/O developers conference Thursday, the company announced what it’s billing as a “full redesign of the Android emoji font,” scrapping its infamous blob-like emoji in favor of more conventional, circular icons. The move coincides with the upcoming release of a new version of Google’s Android operating system, Android O, and will become available across all the company’s platforms this fall.
Unless you’ve seen them in action (or sent them yourself), it’s tough to convey just how bizarre the blob emojis really are. On Thursday, Google’s creative director, Rachel Been, and product manager Agustin Fonts delicately described them as “asymmetric and slightly dimensional shape[s]” in a Medium post. But they’ve also been the targets of less flattering language. “Emoji intended to represent people look more like thumbs,” joshed the Verge’s Chris Welch. The icons “look like someone dropped Bart Simpson in a deep fryer,” groused David Goldman of CNN. UPI’s Eric DuVall dubbed them “a cross between melted lemon drops and the yellow ghost in Pac Man.” And as Shona Ghosh quipped in Business Insider, “Some of them are downright scary, others have a quirky charm to their squished expressions.”
After the Bombing in Manchester, Heartless People Tweeted Fake Stories About Missing Loved Ones
In the chaos following Monday night’s deadly bombing at an Ariana Grande concert in Manchester, England, friends and family of concertgoers took to social media to seek out information about their loved ones. But—out of tasteless humor, a particularly exploitative attempt to gain retweets, likes, and followers on social media, or some combination thereof—some Twitter users sent out pleas for help for fictional missing friends and relatives.
In some of these cases, the photos used in the tweets could be traced back to images elsewhere online, as with one whose reverse image search on Google leads to results for “cute white boys.”
Who Owns Your Genetic Data After a Home DNA Test?
AncestryDNA’s pitch to consumers is simple enough. For $99, the company will analyze a sample of your saliva and then send back information about your “ethnic mix.” While that promise may be scientifically dubious, it’s a relatively clear-cut proposal. Some, however, worry that the service might raise significant privacy concerns.
After surveying AncestryDNA’s terms and conditions, consumer protection attorney Joel Winston found a few issues that troubled him. As he noted in a Medium post last week, the agreement asserts that it grants the company “a perpetual, royalty-free, world-wide, transferable license to use your DNA.” (The actual clause is considerably longer.) According to Winston, “With this single contractual provision, customers are granting Ancestry.com the broadest possible rights to own and exploit their genetic information.”
Winston also noted a handful of other issues that further complicate the question of ownership. Since we share much of our DNA with our relatives, he warned, “Even if you’ve never used Ancestry.com, but one of your genetic relatives has, the company may already own identifiable portions of your DNA.” Theoretically, that means information about your genetic makeup could make its way into the hands of insurers or other interested parties, whether or not you’ve sent the company your spit. (Maryam Zaringhalam explored some related risks in a recent Slate article.) Further, Winston notes that Ancestry’s customers waive their legal rights, meaning that they cannot sue the company if their information gets used against them in some way.
Over the weekend, Eric Heath, Ancestry’s chief privacy officer, responded to these concerns on the company’s own site. He claims that the transferable license is necessary for the company to provide its customers with the service that they’re paying for: “We need that license in order to move your data through our systems, render it around the globe, and to provide you with the results of our analysis work.” In other words, it allows them to send genetic samples to labs (Ancestry uses outside vendors), store the resulting data on servers, and furnish the company’s customers with the results of the study they’ve requested.
Speaking to me over the phone, Heath suggested that this license was akin to the ones that companies such as YouTube employ when users upload original content. It grants them the right to shift that data around and manipulate it in various ways, but isn’t an assertion of ownership. “We have committed to our users that their DNA data is theirs. They own their DNA,” he said.
In his blog post, Heath further insists that the company has “not sold or provided your genetic data to insurers, employers, or third-party marketers.” He does acknowledge that Ancestry could provide information to a law enforcement agency, if “compelled to by a valid legal process.” According to the company’s own transparency report, it received just nine such “valid” requests in 2016, all of them “related to investigations involving credit card misuse and identity theft,” and none for “information related to the health or genetic information of any Ancestry member.”
That said, there are still potential concerns about Ancestry’s handling of customer data. As Heath explains, “Because genetic information is potentially useful to help cure disease, extend life, and improve science, we ask if you want to take part in research that may be conducted by third parties.” When customers consent, the company can send anonymized versions of their genetic data to “research partners” at both academic institutions and “for-profit research companies that are doing things like trying to understand if there are genetic markers related to longevity.” Despite the altruistic framing, the company is compensated for this material in some cases, offering it a source of profit in addition to the fee that it already charges for sample analysis.
Even if Ancestry maintains its current commitment to protecting its customers’ data, its willingness to profit from that information may raise red flags for the future of consumer genetic testing. “Whether or not they’ve sold information in the past, they legally have claimed the right to do almost anything with it,” Winston told me. (He also stressed that he’s not alleging any wrongdoing on Ancestry’s part, only calling attention to the potentially problematic breadth of its terms and conditions.) Down the road, similar licenses could open the path for more pernicious and willful exploitation of the genetically curious.
For now, at least, the Genetic Information Nondiscrimination Act of 2008, which prevents insurers and employers from using our DNA to make many decisions, protects us in most cases. That law is, however, as vulnerable as any other, and could end up on the chopping block under future healthcare legislation, much as consumer data protections have already begun to erode. If it does, genetic tests could become flashpoints for larger privacy debates.
In response to the controversy, Ancestry updated its terms and conditions on Monday afternoon to more clearly indicate that its customers still own their genetic information. The section in question now includes the statement, “AncestryDNA does not claim any ownership rights in the DNA that is submitted for testing.” Privacy advocates may still have concerns, but it’s a start.