Future Tense
The Citizen's Guide to the Future

July 25 2017 12:19 PM

Don’t Blame Online Anonymity for Dark Web Drug Deals

Last Thursday, the Justice Department announced that it had worked with European authorities to shut down AlphaBay and Hansa, two of the dark web’s largest marketplaces for buying and selling illegal drugs.

The shutdown followed reports from earlier in the month that AlphaBay, the larger of the two, had mysteriously stopped working, causing users to flock to Hansa. But it turned out that Hansa had been taken over by the Dutch national police, who were collecting information on people using the site to traffic drugs.


European and American law enforcement collaborated to quietly arrest AlphaBay’s alleged founder, Alexandre Cazes, in Thailand on July 5. The 25-year-old Cazes later committed suicide in a Thai jail, according to the New York Times.

These dark web drug marketplaces are accessed using a service called Tor, which allows users to browse the internet anonymously. With Tor, you can circumvent law enforcement surveillance as well as internet censorship filters, which are often installed by governments or companies to restrict where people go online. Tor also allows for the creation of anonymously hosted websites or servers that can only be accessed via the Tor Browser. AlphaBay and Hansa were both hosted anonymously on Tor.
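(For the technically curious: A running Tor client exposes an ordinary SOCKS5 proxy on the local machine, and any application can route its traffic through it. The Python sketch below is illustrative only—it assumes a Tor daemon listening on its default port, 9050, and the requests library installed with SOCKS support.)

```python
# Minimal sketch: sending a web request through a local Tor client.
# Assumes Tor is running on its default SOCKS port (9050) and that
# requests has SOCKS support installed: pip install requests[socks]
import requests

# The "socks5h" scheme makes the proxy resolve hostnames itself, so DNS
# lookups (and .onion hidden-service addresses) stay inside the Tor network.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether a request arrived via Tor.
response = requests.get(
    "https://check.torproject.org/api/ip", proxies=TOR_PROXY, timeout=30
)
print(response.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}
```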

Though AlphaBay, Hansa, and, most famously, Silk Road depended on Tor to run their illegal operations, the Tor Project, the nonprofit that maintains the anonymous browser and hosting service, says that only 2 percent of Tor traffic has to do with anonymously hosted websites. The vast majority of Tor traffic is used for browsing the web anonymously. More than 1.5 million people use Tor every day, according to a spokesperson.

The U.S. government has a rather complicated relationship with Tor. On the one hand, documents leaked by Edward Snowden showed that the National Security Agency had been trying to break Tor for years, searching for security vulnerabilities in browsers that would allow law enforcement to crack the online anonymity service. The Department of Defense has also invested in trying to crack Tor: During the 2016 trial of one of the administrators of Silk Road 2.0, another shuttered dark web drug-trafficking site, it was revealed that the DoD had hired researchers from Carnegie Mellon University in 2014 to try to break Tor’s encryption.

Yet Tor also wouldn’t exist without the U.S. government—it was originally built as a project out of the U.S. Naval Research Laboratory. The State Department continues to fund Tor (at least someone has told Rex Tillerson about it, presumably) because internet users around the world rely on the anonymity tool to access information and communicate safely online, particularly in countries where the internet is heavily monitored or censored by the government, as in China with its national firewall, or in Thailand, where it’s illegal to criticize the royal family online.

Cazes, the AlphaBay ringleader, was caught thanks to investigative work, not a break in Tor’s encryption. AlphaBay’s password-recovery emails had been sent from Cazes’ personal email address, which investigators used to find his LinkedIn profile and other identifiers. (And no, the FBI did not dig up an email from Cazes asking to join his professional network on LinkedIn. According to The Verge, Cazes used the same address on a French technology troubleshooting website, which listed his full name, leading investigators to a LinkedIn profile where he boasted of cryptography and web hosting skills, as well as involvement in a drug front.)

And that’s good news for the vast majority of Tor users who aren’t interested in scoring molly. In 2015, a report from the U.N. declared that anonymity tools “provide the privacy and security necessary for the exercise of the right to freedom of opinion and expression in the digital age.”

Anonymity tools, like so many technologies, have both good and bad applications. In the same way cellphones aren’t evil just because some people use them to make drug deals, it’s important not to malign anonymity tools just because some people use them to sell drugs. And if the U.S. government is ever successful in finding a way to break Tor’s encryption to catch criminals, it could put the hundreds of thousands of people who depend on Tor at risk, too.

July 25 2017 8:59 AM

Whatever You Do, Do Not Take Your Eye Off Trump’s War on Science

Over the past 50 years, the U.S. standard of living has benefited dramatically from active and open government support of scientific inquiry. But three unrelated recent news stories, two out of Washington and one from China, together sound an ominous warning about a very different future.

In Washington, President Trump nominated talk radio host Sam Clovis to occupy the Department of Agriculture’s top science post—undersecretary for research, education, and economics. The position involves overseeing a budget of $2 billion for research and $1 billion for education. Clovis has no scientific training, but he does have a long history of denying the science demonstrating the reality of climate change.


Meanwhile, Joel Clement—who as director of the Department of the Interior’s Office of Policy Analysis oversaw such things as helping endangered communities in Alaska prepare for and adapt to climate change—blew the whistle in the Washington Post about goings-on in his corner of the government. Clement writes that he was one of 50 senior employees reassigned to unrelated jobs; a scientist and policy expert, he was shuffled into an accounting job in the office that collects royalty checks from fossil fuel companies. Clement had previously spoken publicly about the dangers of climate change in Alaska, and as he discusses in his op-ed, he has filed a whistleblower complaint arguing that the reassignment was retaliation for his speaking out about the possible impacts of global warming.

Finally, out of Beijing comes the story that China has created a development plan to become the world leader in A.I. by 2030. It hopes to “build a domestic industry worth almost $150 billion,” according to the New York Times. This comes just as the Trump administration has proposed severe cutbacks at various government agencies that traditionally support A.I. research, and in the high-performance computing that is an essential ingredient in much A.I. work. While it is impossible to predict what breakthroughs will come from this research, A.I. is likely to have a dramatic impact on the future world economy, much as the internet has had over the past 25 years, and the development of the transistor 25 years before that.

Muting or censoring government scientists, appointing unqualified senior government officials to scientific posts, and underfunding scientific research programs are all part of an insidious and worrisome trend. It is insidious because the impacts of such decisions are not immediate. Rather, they will affect the health and welfare of the country a generation from now.

Government should rely on the best and brightest for advice on and leadership of the nation’s technological base. Scientists who enter public service often forgo the significantly higher incomes they could earn in the private sector. Instead they help develop a national infrastructure that raises the economic tide for everyone, or they provide crucial advice that helps direct government resources toward research that may result in significant breakthroughs, eventually aiding the nation and the world.

Instead, existing government scientists are leaving their posts, and potential young scientists will almost surely become disenchanted. As a result, the country will lose a valuable resource—and the investment in time and education that helped train these individuals.

When government appoints unqualified people to leadership positions in science and technology policy, the end result is bad advice and missed opportunities. The country urgently needs support for new high-tech research in areas such as renewable energy production and storage. In the case of the Department of Agriculture, we need proactive research aimed at mitigating the impacts of climate change in the coming century. The economic cost of ignoring these challenges will far exceed any immediate savings from cutting existing programs.

The U.S. is already retreating from engaging the rest of the world in diplomacy and trade. It cannot afford to cede its leadership role in science and technology, too. The best young minds are attracted to our graduate schools by the opportunities to engage in exciting, cutting-edge research. Many of these individuals will return to their own countries, providing leadership that helps make those nations more economically stable, which itself benefits the international order. Many other students will remain here, making ground-breaking discoveries, training a new generation of young people, and sometimes going out into the private sector to create companies that bring immense wealth to the nation.

Short-sighted attempts to censor and distort science in the public domain, ideologically based efforts to ignore scientific advice, and efforts to cut support for long-term fundamental research for near-term gain all have the same effects. They may not be noticeable today or tomorrow, and they may not stop a misguided administration from carrying out its agenda, or even from getting re-elected. But they mortgage the future for our children. Good science is the basis of good public policy. We ignore that connection at our peril.

July 24 2017 1:13 PM

An Obituary for Microsoft Paint

On Monday morning, the internet was abuzz with sad news: Microsoft, we learned, is finally killing off Paint, the seminal drawing program that has been around since the first release of Windows in 1985. Getting into the mournful spirit, the Guardian asked its readers to submit their memories of the program. On Twitter, at least one user marked the program’s purported passing with a gravestone seemingly drawn—where else?—in Paint itself.

Look into the story a little more deeply, and you’ll realize that things aren’t quite so grim. In a post on changes coming to Windows 10 this fall, Microsoft indicates that it won’t be stripping Paint from its flagship OS—at least not yet. Instead, Paint is one of many features scheduled to be “deprecated,” which means that it will no longer be “in active development and might be removed in future releases.” In other words, Paint isn’t quite dead yet; Microsoft is just sending it off to a retirement home and ignoring it while it lives out its final days.


Nevertheless, many have embraced the sorrowful spirit, offering celebratory remembrances and reminding us how great the program could be. Wired, for example, “tracked down some of the best art made within Microsoft Paint,” noting that the top posts on Reddit’s r/mspaint community will “put your terrible paint creations to shame.”

Many of those images demonstrate a striking willingness to laboriously grapple with, and push back against, the program’s limitations. The creator of a finely rendered version of the “astronaut sloth” meme writes that he or she “spent 45 hours on this guy, only using a mouse.” For such artists, Paint was a challenge to be overcome, a creatively generative form of restraint, not unlike the arbitrary restrictions that writers of the Oulipo school place on themselves.

It’s hard not to be impressed by their work, but dwelling on masterpieces misses what makes Paint worth mourning, even if it’s not actually passing into the digital twilight today. Microsoft Paint was always a clumsy program, but in its clumsiness, it offered a powerful demonstration of what computers could do. It was just sophisticated enough that there was always some new trick or tool to discover, but just simple enough that you could stumble through it on your own, sans tutorials or manuals. A virtual breach in the otherwise baffling circuitry of the machine, it offered a way in: It acknowledged that computers were complicated but told us not to fear them.

Try to recall your first time playing with it: Remember what it was like to hold the mouse’s left button down as you dragged your cursor across the screen. Maybe your hand shook slightly. Maybe you weren’t certain what you were trying to draw. One way or another, the line was jagged and rough, but it was still yours. Here there was evidence, however inexact the execution, that you could leave your own mark on the digital world.

Even if you never mastered the program, you likely found idiosyncratic ways to use it. An open, experimental canvas, it amplified your other interests instead of dictating your activities: I remember mapping out mazes for the Dungeons & Dragons games I would play with my friends—square rooms linked by rectangular corridors. Later, I tried to make my own Magic: The Gathering cards. No one would have confused my bootleg template with the real thing, but it still felt like I was contributing, not just playing a game that others had designed.

Those experiences are distant now, but I found myself thinking back on them recently as I experimented with Spaces, Facebook’s virtual reality meeting room product. Grabbing a blue pencil from the menu, I began to scribble, thick lines appearing in the air before me. As with my first experiences in Paint, my initial attempts were clumsy, the lines appearing and disappearing as I struggled to learn the controls. Trying to sign my name, I was left with something that looked like an imitation of Cy Twombly, and a bad one at that. And yet, as I had been when I first opened Paint decades before, I was still impressed with myself, newly convinced that I might one day master this strange interface.

Once, Paint served a similar function. Its awkwardness was, to some extent, the point: The blank screen was a digital safe space, one in which we could experiment on our own terms without risk. If Paint is fading now, it’s likely because we no longer need the assurances it offered us. Where it once helped us learn to think of the mouse as an extension of the hand, our fingers now slide over trackpads and touchscreens. We have long since internalized the things Paint taught us, allowing us to take up other tools.

Thus, while Microsoft may not actually be killing off Paint, it is, in a manner of speaking, dead already. But insofar as its lessons linger, Paint will never really die.

July 21 2017 3:34 PM

YouTube Starts Redirecting People Who Search for Certain Keywords to Anti-Terrorist Videos

On Thursday, YouTube announced a new effort to push back against terrorist recruitment on the site. As the company explained in a blog post, “[W]hen people search for certain keywords on YouTube, we will display a playlist of videos debunking violent extremist recruiting narratives.” Arising out of partnerships with nongovernmental organizations, the new feature is part of a larger project called the Redirect Method, an effort specifically targeted at those vulnerable to ISIS’s messaging.

It’s also part of a larger YouTube strategy, one that Google (YouTube’s corporate parent) general counsel Kent Walker laid out last month in a blog post. That announcement came partly in response to an advertiser boycott earlier in the year, driven by companies frustrated to find their own clips running in front of terrorist videos. In response, as Variety reported at the time, Google said that it would “be taking new steps to improve controls advertisers have over where their ads appear on YouTube.”


But as Walker explained in his June post, the company was “pledging to take four additional steps” as it worked to actively combat extremism on its platform: stepping up technological identification of terrorist videos, increasing human flagging of such content, taking a tougher line on videos that don’t directly violate the terms of service, and “expand[ing] its role in counter-radicalisation efforts.” The newly announced redirection strategy seems to be a product of that fourth and final prong.

In framing both the problem and its approach to it, Google is careful to avoid rhetoric that would suggest it intends to engage in censorship. That’s less of a concern in Europe, where courts have found that free speech laws do not protect extremist videos. But tech companies walk a finer line in the United States, “where free speech rules are broader,” as the Verge observes in a post on related efforts to rein in terrorist content.

As it grapples with this potential concern, YouTube appears to be stressing that it stands in opposition to those who would silence others. Note, for example, how Walker opens his blog post with the phrase, “Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all.” If terrorists oppose “open societies,” then any attempt to combat them should be in the service of defending openness, a conceit that grows fuzzy if technology companies are seen to be silencing some of their users.

In this sense, YouTube’s embrace of the redirect method looks like a smart strategy. The company is, as it makes clear, actively removing content that violates its terms of service. But redirection also gives the impression of a company more focused on drowning out ugly voices than on eliminating them. Here, there’s a small but potentially important detail in its announcement: As it moves ahead, YouTube hopes to collaborate “with expert NGOs on developing new video content designed to counter violent extremist messaging at different parts of the radicalization funnel.” Significantly, redirection has the potential to reach those who come looking for terrorist videos, whether or not those videos are present on the site.

All that said, it remains to be seen how effective the redirect method will be.

As the Verge reports, “An earlier pilot of the Redirect Method led to 320,000 individuals viewing ‘over half a million minutes of the 116 videos we selected to refute ISIS’s recruiting themes.’ ” While that’s promising, the approach may run up against the ways terrorists evade YouTube’s existing content restrictions. In a long article on the topic, Motherboard writes, “[I]n order to prevent users from flagging explicit or inflammatory extremist videos, terrorist media groups and disseminators like The Upload Knights and AQ’s As-Sahab Media Foundation often label YouTube videos as ‘unlisted,’ meaning that the videos cannot be searched—only accessed if you are given the link.” If potential recruits are finding extremist material by other means, search redirects may not make much of a difference.

July 21 2017 11:01 AM

Twitter Claims Its Changes Have Led to “Significantly Less Abuse.” But Will They Be Enough?

Twitter, like many other social media platforms, can be a cruel place when people choose to make it one. Rude quips abound, as do, more troublingly, threats of assault and death.

For years, users have been calling on the company to make the site safer. And now, at least according to a recent blog post from Twitter’s General Manager Ed Ho, some of their appeals have been answered.


Back in January, Ho tweeted a thread about the social media network’s ramped-up efforts to tackle the issue, writing, “Making Twitter a safer place is our primary focus and we are now moving with more urgency than ever.” He admitted that the company didn’t move fast enough to address abuse in the past and said that his team would start speedily rolling out product changes. That particular week, he tweeted, they were introducing overdue fixes to muting and blocking, and working to prevent repeat offenders from creating new accounts.

On Thursday, Ho’s blog detailed some of the other efforts Twitter has made in the past six months. Among other steps, he wrote, the company convened a Trust and Safety Council that included “safety advocates, academics, researchers, grassroots advocacy organizations and nonprofits focusing on a range of online safety issues—from child protection and media literacy to hate speech and gender-based harassment” to help Twitter tailor new policies and features. The team also conducted research and made algorithmic changes, like producing better search results and collapsing potentially abusive or low-quality tweets.

In the post, Ho seemed confident that the reforms were leading to substantial progress.

“While there is still much work to be done,” he wrote, “people are experiencing significantly less abuse on Twitter today than they were six months ago.”

But we’ll have to trust the company’s word on the metrics. Twitter hasn’t released any internal data yet, though Ho did disclose a few positive measures.

For one, he said, Twitter is taking daily action against abusive accounts at 10 times the rate it did this time last year. It has also begun imposing temporary limits on abusive accounts, which he said resulted in 25 percent fewer abuse reports from those users. Of the accounts put on probation, 65 percent don’t have to be restricted again.

Yet on some other significant measures, the company remains opaque. Twitter hasn’t been as vocal about how other major changes, such as its new “algorithmic timeline,” have changed the nature of abuse, discourse, and engagement on the site. Nor has it addressed why its moderators are still missing some flagrant abusers.

As Slate’s Will Oremus detailed last year, Twitter also hasn’t explained how, exactly, it interprets its “hateful conduct” policy. The site reportedly retrained moderation teams to enforce stricter anti-harassment policies last year. After the November election, it banned several alt-right figures, including Richard Spencer, for espousing racist views—though it declined to say what specific tweets led to the suspensions.

Critics argue that this gives Twitter room for double standards. One prominent beneficiary: President Donald Trump. The commander in chief has posted content on his personal account that some think warrants a ban (does a GIF of him body-slamming CNN ring a bell?).

In a meeting with journalists at Twitter’s San Francisco headquarters in July, a Recode reporter asked Vice President of Trust and Safety Del Harvey whether the company treats Trump’s account like everyone else’s.

“We apply our policies consistently. We have processes in place to deal with whomever the person may be,” Harvey told Recode. “The rules are the rules, we enforce them the same way for everybody.”

In April, Twitter co-founder and CEO Jack Dorsey also told Wired that his company held all users to the same standards, but added that company policy also accounted for “newsworthiness.” He said he thought it was important to “maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.”

Though the post on safety updates this week said that users were experiencing significantly less abuse, it didn’t address whether individuals actually felt safer. Ho wrote that Twitter would continue to solicit feedback. He also said it would remain committed to making the site a safe place for free expression.

Its users will be the judges of that.

July 20 2017 6:02 PM

Netizen Report: Authorities in China and Indonesia Threaten to Ban Messaging Apps

The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in internet rights around the world. It originally appears each week on Global Voices Advocacy. Ellery Roberts Biddle, Angel Carrion, Leila Nachawati, Inji Pennu, and Sarah Myers West contributed to this report.


On July 15, Indonesia’s Ministry of Communications and Information Technology threatened to ban the secure messaging app Telegram, reasoning that it is being used to “recruit Indonesians into militant groups and to spread hate and methods for carrying out attacks. ...”


As a partial measure, the government has already blocked access to 11 URLs offering the web version of Telegram. In response, Telegram has vowed to double its efforts to remove “terrorist” content from the platform by forming a team of moderators tasked with monitoring networks in Indonesia and removing such content as swiftly as possible.

Although Telegram may prefer this solution to being banned altogether, it may also increase the likelihood that the company will overcomply, which could lead to censorship of lawful speech.

On July 18, Facebook’s popular messaging app WhatsApp was blocked in China, following the funeral of Chinese Nobel laureate Liu Xiaobo. The world-renowned democracy advocate was sentenced to 11 years in prison in 2009 for “inciting subversion of state power” for his involvement in Charter 08, a manifesto that called for democratic reforms in China, and died of liver cancer on July 15.

Liu’s passing brought a new wave of censorship on social media, also affecting conversations on WeChat and Sina Weibo. Before his death, discussion of Liu on WeChat was allowed as long as it did not touch on sensitive topics. After his death, any mention of his name has resulted in the blocking of messages, including images sent over one-to-one chat. Until this week, WhatsApp had been the only Facebook product accessible in the country.

Turkey detains human rights defenders with no charges
On July 5, 10 human rights defenders were arrested while attending a digital security and information management workshop in Istanbul. On July 18, they received a preliminary ruling: Four of the defenders were released on bail, and the remaining six will be held in pretrial detention while they are assessed for charges. They have been detained on accusations that they “aided an armed terror group,” though the authorities have cited no evidence to support this accusation, and it is unclear whether they have been formally charged. Protests have been held around the world calling for their release.

Ethiopia’s resistance musicians face arrest, censorship
Ethiopian authorities are now cracking down on musicians. Seven producers of and performers in a popular YouTube video were arrested several weeks ago, and last week they were charged with terrorism for producing music videos and “uploading them on YouTube.” Musicians—such as Seenaa Solomon, a well-known singer who is among those recently charged—became an important source of inspiration and provided the soundtrack to the resistance movement against government plans to expand the capital, Addis Ababa, into the Oromo region. The plan led to wide-scale protests and a violent crackdown between 2014 and 2016. Despite her jailing, Solomon’s music continues to flourish on YouTube.

In UAE, another arrest for “showing sympathy” with Qatar on social media
An Emirati man was arrested for showing sympathy with Qatar on social media. Ghanem Abdullah Mattar was detained after posting a video urging Emirati citizens to show respect for their “Qatari brothers” during the UAE’s blockade of Qatar. The UAE has criminalized any show of sympathy toward Qatar, punishable by a jail sentence of up to 15 years and a fine of up to $13,500. Mattar’s whereabouts since his arrest remain unknown.

Bangladesh’s ICT Act spawns record number of lawsuits against journalists
More than 20 journalists have been sued over the past four months in Bangladesh under the country’s controversial Information and Communication Technology Act, which prohibits digital messages that “deteriorate” law and order, “prejudice the image of the state or person,” or “hurt religious beliefs.” The minister of law, justice, and parliamentary affairs pledged in May to eliminate Section 57 of the law, which has been used to file these lawsuits, but has made no visible progress thus far. Nearly 700 cases have been filed under the law since it was amended in 2013.

China forces citizens in ethnic minority region to install mobile spyware
On July 10, mobile phone users in the Tianshan District of Urumqi City received a notification from the district government instructing them to install a surveillance application called Jingwang (or “Web Cleansing”). The notification from police said the application would locate and track the sources and distribution paths of terrorist content, along with “illegal religious” activity and “harmful information,” including videos, images, ebooks, and documents. Among other things, the application can reportedly bypass the password requirement of a Windows operating system and access the computer’s hard disk without restriction.

Can Australia strong-arm U.S. tech giants into weakening their security standards?
The Australian government has proposed a new cybersecurity law that would force Facebook and Google to give government agencies access to encrypted messages. The law, which Australian Prime Minister Malcolm Turnbull said would be modeled on the U.K.’s Investigatory Powers Act, would grant the Australian government expansive surveillance authority and require companies to provide “appropriate assistance” in investigations. When asked how the government plans to prevent users from simply turning to software beyond the reach of the big tech companies, Turnbull asserted that the laws of Australia would override the laws of mathematics.

July 20 2017 5:52 PM

Congress Is Considering Letting 100,000 Self-Driving Cars Hit the Road

Most people have probably never even seen a self-driving car, but that could soon change.

A House subcommittee voted late Wednesday to allow up to 100,000 self-driving automobiles onto American roadways.


These new robo-cars won’t have to meet existing safety standards for manned automobiles, but manufacturers will have to petition the National Highway Traffic Safety Administration, the federal agency tasked with reducing vehicle-related crashes, for an exemption, explained Bryant Walker Smith, a law professor and self-driving car expert at Stanford University. That means automakers will need to make a clear case that their self-driving technology is safe enough to operate alongside cars with humans at the wheel.

If passed, the bill would permit autonomous cars to drive on U.S. roads before we have established benchmarks for what it means for self-driving technology to be designed safely. Instead of, say, regulators creating some baseline rules for safe self-driving cars, this legislation proposes that automakers self-certify that their autonomous cars are OK to drive on public roads.

The legislation would also bar states from passing rules to regulate self-driving cars, ostensibly to prevent a patchwork of legislation across the country. States can continue to make licensing, registration, and maintenance requirements for self-driving cars, though, which leaves them some room to control how the technology is deployed within their borders.

Ryan Calo, a law professor who specializes in technology policy at the University of Washington, is concerned about how this legislation could play out. He thinks regulatory agencies don’t necessarily have the expertise in robotics and artificial intelligence to determine whether an automaker’s exempted self-driving car will actually be safe when the rubber hits the road. “This is an area where it’s especially important to make sure the technology is safe before it gets deployed,” says Calo.

If the bill passes, automakers could ask, for example, for their self-driving cars not to include a brake pedal, because the vehicle would brake via software or a button rather than the typical pedal currently required.

Rep. Debbie Dingell, a Democrat from Michigan who voted to pass the bill, said in a statement that she backed the proposal because human drivers kill a lot of people. More than 35,000 people died on American roadways in 2015, up nearly 8 percent from 2014, according to federal data. In 2016, traffic deaths rose another 6 percent. In fact, the past two years represent the sharpest rise in automobile-related deaths in more than half a century.

Automakers gunning to bring new self-driving tech to market regularly contend that if the human element is taken out of the equation, thousands of lives could be saved. After all, robots can’t drive drunk, text while driving, or do any of the other idiotic things that humans get up to while behind the wheel. Some 90 percent of vehicle crashes can be traced to human error, says Walker Smith.

But there’s actually no data to back the claim that self-driving cars will lead to fewer vehicle-related deaths—after all, autonomous cars have yet to be deployed at any meaningful scale. And when the high-tech cars do hit the streets, things can go awry.

Take what happened in San Francisco earlier this year, when one of Uber’s self-driving cars ran a red light on its very first day on the road. Then there was the 2016 incident in Florida, when a person behind the wheel of their Tesla in Autopilot driving mode died after ignoring the car’s multiple warnings to take the wheel and crashing into a tractor-trailer.

If this particular legislation doesn’t pass, some other proposal to open roadways to more self-driving cars probably will soon. Even if the technology is ultimately safer than manned cars, when the rubber hits the road, it could get messy.

July 20 2017 3:11 PM

U.S. Customs and Border Protection Says It Doesn’t Look at the Cloud When Searching Digital Devices

Agents on the U.S. border have always had more latitude than other law enforcement when it comes to searching people’s belongings. After Trump’s immigration ban was announced in late January, reports circulated that travelers’ personal digital devices were increasingly being searched when they tried to enter the country.

In response to those stories, Sen. Ron Wyden of Oregon introduced a bill that, among other things, would require Customs and Border Protection to get a warrant before searching travelers’ digital devices.


Back in February, he asked the Department of Homeland Security questions regarding this issue and then followed up with CBP. Now it looks like CBP might be changing some of its ways.

On July 12, NBC published a document—dated June 20—that said CBP looks only at what’s physically on a laptop, smartphone, tablet, or other device. According to the document, agents don’t use travelers’ personal devices to look at information stored on the cloud during checks.

The release appears to be a response to Sen. Wyden’s questions to the agency.

On July 17, the Electronic Frontier Foundation, a digital rights group, published its thoughts on the new information. It noted that this represents a change from CBP’s 2009 policy, which “does not prohibit border agents from using those devices to search travelers’ cloud content” but instead allows agents to search any information they find at the border—rules the EFF interpreted to mean that agents were permitted to look at cloud content.

Border Patrol agents have certainly taken advantage of the vagueness of the previous policy.

In November 2015, BuzzFeed reported the story of a journalist who was detained before a flight to Miami. Police reportedly looked through his phone and data, including emails with sources and intimate photos.

And then came the immigration ban. In January 2017, Megan Yegani, an immigration lawyer, tweeted about Border Patrol checks: “US Border patrol is deciding reentry for green card holders on a case by case basis - questions abt political views, chking facebook, etc.” Her tweet went viral.

In February 2017, the Associated Press reported that the American Civil Liberties Union and the EFF “have noticed an uptick in complaints about searches of digital devices by border agents.” But the AP also said that the numbers were on the rise before Trump was inaugurated: The number of electronic media searches increased to 23,877 in 2016 from 4,764 in 2015.

The EFF seems pleased with this recent announcement, but it’s also being a little cautious.

“EFF will monitor whether actual CBP practice lives up to this salutary new policy. To help ensure that border agents follow it, CBP should publish it,” the organization wrote.

As a next step, the EFF would like CBP to release information about how often it conducts searches for other agencies and to tell the public whether agents actually advise travelers that they have a right not to tell a border agent the passwords to their devices.

July 20 2017 1:29 PM

Two of the Six Missing Members of Burundi’s Robotics Team Spotted Crossing Into Canada

The FIRST Global Challenge robotics competition is making headlines again after six teens from the team representing Burundi disappeared. The mentor and chaperone for the team, Canesius Bindaba, informed FIRST organizers on Tuesday evening that he could not find the two girls and four boys, whose ages range from 16 to 18. They were last seen at 5 p.m. Tuesday, right before the competition’s closing ceremony. FIRST Global President Joe Sestak subsequently called Washington police, who began searching and tweeted out missing persons notices. Police later said that two of the teens had been spotted crossing the border into Canada.

 

July 20 2017 8:47 AM

This Ride-Sharing App Now Offers Matchmaking, Too

Careem, a popular ride-sharing app in the Middle East, North Africa, and South Asia, is introducing a new feature in a bid to attract lovelorn humans in Pakistan. In the wee hours of Wednesday morning, the company sent out text notifications and email alerts to its users in Pakistan, offering them the coveted opportunity to find their “Halal” lover on their next trip.

“Your Rishta (match) has arrived, you are no longer to be alone, from now on your status will be taken,” said the email advertisement. Careem says the feature allows riders to opt for a “rishta aunty”—a matchmaker to accompany them on their rides and connect them with potential mates from her network of friends and family.


While Pakistan is no stranger to Tinder and to nosy relatives engaged in the lucrative business of arranged marriages, this is the first time a ride-sharing app has merged with the matchmaking industry. Hopefully it will be the last, too.

Sanaa Jatoi, a friend of mine and a frequent Careem rider from Pakistan, told me in a Facebook chat that when she saw the notification, “I kind of panicked … I just wanted Careem credits, not an aunty. … I legit thought there was an aunty waiting for me downstairs.”

Careem’s foray into matchmaking also generated bewildered reactions on Twitter. “Careem now offers a ‘rishta aunty’ to accompany you on your ride….because my mom wasn’t enough! #DesiProblems,” one angry customer tweeted.

A staff writer at the Express Tribune, a newspaper in Pakistan, experimented with the feature to see exactly how it works. According to the article, the rishta aunty was already sitting in the car when the ride arrived. She proceeded to interrogate the writer and his accompanying friends to gauge their personalities and preferences:

She spoke fondly about the wonderful world of rishta aunties where the demand and supply of good rishtas are infinite, and all you had to do to meet “the one for you” was to answer her unlimited questions about yourself. So there, she bombarded us with questions about what we did, where we lived, and whether we were actually serious about getting married. When we asked her what the appropriate age to get married was she responded, “There is no right time, marriage can happen anytime.. just look at those in villages.” To this, we replied that early-age marriages were not just a village phenomenon, but they happened quite often in cities as well.

The ride ended with the aunty handing out her WhatsApp number and email address.

While the ride-sharing app may be getting its fair share of laughs here, the company has also recently come under fire over allegations of sexual harassment from its female passengers. A young woman from Lahore, Pakistan, alleged in June that she was harassed by a Careem driver after she requested a ride to work. After the report surfaced, a company spokesman told the Express Tribune that the safety and security of Careem customers is its top priority. While little is known of the Gulf-based company’s operations in Pakistan, Careem’s rival Uber is reportedly offering mandatory seminars on sexual harassment to all of its drivers in Pakistan in the wake of such allegations.

In that context, Careem’s matchmaking stunt feels, well, a little less funny.
