Future Tense
The Citizen's Guide to the Future

Nov. 13 2017 2:46 PM

A Computer Glitch Likely Extended a Man's Jail Sentence by Five Months

David Reyes was set to leave a Louisville prison on Sept. 25, 2016, after serving a nearly year-long sentence.


Yet he remained in the corrections facility until this February, five months after his release date. According to an internal investigation, the jail’s $1.5 million software system had glitches previously known to the facility’s technicians, which likely led to Reyes’s erroneously extended sentence.


Nov. 10 2017 6:23 PM

Keep Twitter Scannable!

It’s been a few days since Twitter upped its character limit to 280, eliminating a feature that has defined the service since its founding, and so far reviews have been mixed. At Slate alone, Will Oremus declared it the right move, while Dan Kois and Greg Lavallee created a Chrome extension to circumvent it.

In rolling out the change, Twitter made the surprising contention that it doesn’t expect the new limit to substantially increase the length of most tweets: Its testing found that “people with the 280-character limit don’t actually tweet much longer, in most cases, than those confined to 140,” Oremus wrote. It’s true that there was an initial deluge of long-on-purpose tweets right out of the gate as users tested the limits, literally, of the new character count. Hence a tweet from the Law & Order: Special Victims Unit account that included the entire opening narration to the show, because why the hell not, or one featuring an even-more-drawn-out-than-usual version of a famously drawn-out TV catchphrase. Also under the heading of innovative uses of the extra characters are the people who have turned Twitter into their own board game cupboards.

Once that’s out of our collective system, though, what will happen to our day-in, day-out tweeting? Will the experience of reading Twitter be, after all this hand-wringing, more or less the same as ever, or will we have to adapt to this brave, new, and potentially chunkier landscape?

One common fear is still that, contrary to Twitter’s prediction, the 280 era will bring a wave of reeeeeally long tweets: huge blocks of text, as far as the eye can see, floating through the service like icebergs. Well, as huge as blocks of text can be while still being fewer than 280 characters long—but when you add in the possibilities of threads and quote-tweeting, that’s pretty long, indeed. Under the 140-character regime, tweets were never more than a few lines long, and this kept the platform scannable: You could glance at a tweet and get the gist in a way that becomes harder when a line stretches into a paragraph.

Indeed, one of the ironies of Twitter is that even though it’s a text-based medium, no one seems to want to look at blocks of text while using it. This is not to say that people don’t want to read on Twitter (though maybe they don’t). It’s more about how our eyes process text. On the internet, as anywhere, the length and shape of the text figure into our comprehension of it.

Maybe the secret to keeping Twitter readable is to reassert some of the physical characteristics that made the platform so pleasantly scannable. Aspiring screenwriters are told to make ample use of “white space” in their scripts. As how-to guide Crafty Screenwriting puts it, “White space is your friend.” It’s become such a mantra that it’s now a cliché that people push back against, but nevertheless, the point of the cult of white space is that choosing your words carefully, keeping them to a minimum, and surrounding them with space (like line breaks) make them easier and faster to read.

Some of that may be more about impressions than actual truth,

but impressions are

hard
to
shake.

How to put this into practice in your tweets? Hard returns. Use Shift+Enter liberally—you can still say more, but do us all the courtesy of breaking up your points with a little space. It’s easier on the eyes. Or take a lesson from NASA and surround your words with a little flourish.

Nov. 10 2017 1:54 PM

British Court Upholds Ruling That Uber Must Give Drivers Benefits

British courts delivered another blow to Uber on Friday, rebuffing the company’s appeal of a 2016 ruling that it must treat drivers as workers rather than contractors.

The original case, from October 2016, involved James Farrar and Yaseen Aslam, two of Uber’s 50,000 drivers in the U.K., who were arguing for worker benefits like holiday pay, paid breaks, and a default minimum wage. The country’s labor system has three main tiers: contractors, workers, and employees. Aslam and Farrar were seeking to be classified as workers, which would give them fewer benefits than employees but more than contractors.

Nov. 10 2017 8:35 AM

This Hilarious Chatbot Messes with Scammers for You

The next time a fake prince emails you asking for money to access his trust fund, you can recruit a chatbot to mess with the scammer. Netsafe, an online safety group from New Zealand, created a program called Re:scam that will engage digital conmen in an interminable conversation.


Nov. 9 2017 5:12 PM

Should We Worry Apple Offered to Assist the FBI with the Texas Gunman's Case?

The FBI may be looking to face off again with Apple over encryption.


In 2016, the FBI and Apple fought a lengthy legal battle over breaking the encryption on an iPhone belonging to Syed Farook, one of the assailants in the 2015 San Bernardino office shooting. This time, the conflict comes in the wake of Sunday’s mass shooting, in which Devin Kelley opened fire in a Texas church and killed 26 people.

Nov. 9 2017 4:02 PM

White Nationalist, Verified

Less than a day after Twitter came under fire for bestowing a coveted blue checkmark on white nationalist Jason Kessler, it has decided to appease angry users by suspending its verification program altogether. Kessler considers himself a “white civil rights” leader, has claimed that the government is pursuing “displacement-level policies which are removing the indigenous people from the country”—by which he means white people—and most famously, organized the alt-right rally in Charlottesville this summer that turned deadly. (After the rally, he tweeted that Heather Heyer, who died in Charlottesville, was “a fat, disgusting Communist.”)

Like the kindergarten teacher who punishes the whole class because one bully said something mean, Twitter is putting everyone else waiting to get verified in timeout, too. It’s good that the company is taking a breath to reassess what the verification status means, and whether a person who organized a deadly rally, which the Daily Stormer billed as an effort “to end Jewish influence in America,” should receive it. At the time of publication, Kessler, whose profile is adorned with a massive Confederate battle flag, is still verified.

He celebrated the initial news with a tweet:

Of course, Kessler isn’t the only questionable figure Twitter has graced with the status symbol. Richard Spencer, who argued in an interview with a British journalist this week that the trans-Atlantic slave trade was good for Africans, is verified. So is Ann Coulter, as well as Mike Cernovich, the alt-right media personality who helped promote the “Pizzagate” conspiracy theory and used his verified Twitter platform last year to proclaim in a since-deleted tweet:

Today we have a moment of silence for Trayvon Martin's rape victims.
Kidding!
He got got before he was able to rape anyone.
— Mike Cernovich (@Cernovich) Feb. 5, 2016

But the checkmark isn’t just a symbol. Being verified is sort of like being in an elite club that guarantees your words are more likely to be read and your videos are more likely to be seen. (I’m not verified, but I recently applied to be.) When accounts are verified, they float to the top of the platform’s search results for “top tweets.” They also appear more likely to show up in Google search’s Twitter bar, which populates at the top of search results for important news events. When verified Twitter users engage with other users on the platform, their engagement is more likely to send a notification, and verified users have the option to only get notifications from other verified users.

Twitter’s decision to verify people who coordinate racist campaigns, like Kessler, is a political one. It is, in effect, saying that his presence is important to boost and protect, even if he uses his Twitter account to promote counterfactual, dangerous narratives. Kessler’s YouTube channel is loaded with videos where Kessler refers to the strength of patriarchal societies and rejects what he describes as an “anti-White movement.” He also says that when alt-right people call one another Nazis it’s like when “black people were called the n-word for so long they just started calling each other that as a term of endearment.”

Twitter CEO Jack Dorsey tweeted on Thursday that the company “realized some time ago the system is broken and needs to be reconsidered. And we failed by not doing anything about it. Working now to fix faster.” And Twitter’s own support team pointed out that verification has been misunderstood. “Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance,” the company tweeted.

But the truth is that giving someone a blue checkmark is by definition an indicator of importance. In order to get one, users are asked to describe “their impact in their field” and to provide URLs “that help express the account holder’s newsworthiness or relevancy in their field.” In other words, if you’re not newsworthy by Twitter’s standards, you may be denied verification.

Twitter’s verification of Kessler comes as the company works to strengthen its protections against hate speech. In October, Twitter went into detail about how it is working to weed out violent groups, hate speech, and nonconsensual explicit photos of women that are shared on the platform. Many of those changes are going into effect this month. If the company is serious about its new policies to demote hate speech, verifying a popular white nationalist and anti-Semite is probably a step in the wrong direction.

In the meantime, those who are actually trying to get verified—journalists like me who need to assure sources that they are not being duped—will have to wait even longer while Twitter cleans up yet another mess.

Nov. 8 2017 5:44 PM

A Quick Refresher on the FBI's Fight With Apple Over Encryption

Round One of the FBI’s battle with Apple over iPhone encryption ended in 2016 when the bureau found a workaround. But we may be about to see Round Two.

This time, the battle is over unlocking the iPhone of Devin Kelley, the shooter at the Sutherland Springs, Texas, church. The Washington Post reports that, unable to get into the shooter’s phone to uncover more about his motive, the FBI flew the device to its forensics lab in Quantico, Virginia, where it is investigating alternate pathways to the phone’s data, such as cloud storage backups or linked devices. Like San Bernardino shooter Syed Farook, Kelley used an iPhone, and if the FBI’s initial efforts to access its contents come up empty-handed, it’s possible the government will re-litigate the 2016 court battle over whether Apple has an obligation to help law enforcement break into Farook’s phone.

Fuzzy on the details of this whole government-Apple faceoff? Here’s a brief refresher on the encryption debate.

First of all, what is encryption? In a 2015 overview written for Slate, Danielle Kehl explained:

Encryption is the process of combining the contents of a message (“plaintext”) with a secret password (the encryption “key”) in such a way that scrambles the content into a totally new form (“ciphertext”) that is unintelligible to unauthorized users. Only someone with the correct key can decrypt the information and convert it back into plaintext.

Using codes to communicate sensitive information is nothing new—it’s been around for millennia—but encryption breakthroughs in the 1970s cleared the way for today’s data protection. Today’s iPhones use 256-bit AES key encryption, which means that each device has a randomly generated, unique key that is one of a nearly unfathomable number (the exact figure is 78 digits long) of possible patterns and therefore virtually impossible to guess. Apple doesn’t keep a copy of this key, so the only way to use it to unscramble the data on your phone is to enter your personal passcode.
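
To make the plaintext/key/ciphertext vocabulary concrete, here is a minimal sketch in Python using the third-party cryptography package’s AES-GCM primitive. It illustrates 256-bit AES in general—it is not a description of how Apple’s implementation works—and the message here is invented for the example.

```python
# Minimal illustration of 256-bit AES (in GCM mode) with the "cryptography"
# package (pip install cryptography). Generic AES, not Apple's implementation;
# the plaintext below is made up for the example.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a randomly generated 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # must be unique per message

plaintext = b"meet me at the usual place"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # unintelligible without the key
recovered = aesgcm.decrypt(nonce, ciphertext, None)  # only the key holder can do this

assert recovered == plaintext
print(ciphertext.hex())  # scrambled bytes, useless to anyone without the key
```

Feed the decrypt step a different key and the library simply refuses; scale that refusal up to a keyspace whose size is 78 digits long, and you have the wall the FBI keeps running into.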

iPhone users are able to set alphanumeric passcodes as well as four- or six-digit numeric ones. Apple has also added Touch ID (starting with the iPhone 5s in 2013) and now facial recognition software, both of which allow users to forgo manually entering a passcode in many situations. The more complex the passcode, the harder it is to break into the phone, and the attacker’s problem is compounded by tiered time delays that kick in after a certain number of incorrect passcode entries: one minute until you can try again after five wrong attempts, a one-hour wait after nine tries, etc. Users can also set their phones to erase all data after 10 consecutive wrong attempts. Together, these security measures present serious obstacles for brute-force attacks—that is, inputting every possible passcode.
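
To put rough numbers on why passcode length matters, here is a back-of-the-envelope sketch. The per-guess delay below is an assumption made for illustration, not Apple’s published figure, and the math ignores the lockout schedule entirely.

```python
# Back-of-the-envelope brute-force estimate. The 80 ms cost per guess is an
# assumed, illustrative figure, not Apple's exact hardware parameter.

SECONDS_PER_ATTEMPT = 0.08  # assumed delay enforced per passcode guess

def worst_case_hours(keyspace: int) -> float:
    """Hours needed to try every possible passcode at the assumed rate."""
    return keyspace * SECONDS_PER_ATTEMPT / 3600

for label, keyspace in [
    ("4-digit PIN", 10**4),
    ("6-digit PIN", 10**6),
    ("8-character alphanumeric (a-z, 0-9)", 36**8),
]:
    print(f"{label:40s} {keyspace:>16,} codes  ~{worst_case_hours(keyspace):,.1f} hours")

# And that is before the tiered delays (one minute after five misses, an hour
# after nine) or the optional wipe-after-10-failures setting enter the picture.
```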

The impressive security of your standard iPhone poses a problem for law enforcement. When the bureau couldn’t crack Syed Farook’s iPhone 5c after he and his wife, Tashfeen Malik, killed 14 in a terrorist attack in 2015, a federal magistrate judge used the All Writs Act of 1789 to order Apple to build software that would make it easier for the FBI to unlock the device without risking erasing its data. But Apple refused. As CEO Tim Cook wrote in a letter, “We fear that this demand would undermine the very freedoms and liberty our government is meant to protect,” because the “backdoor” that could potentially help law enforcement learn about any contact between Farook and ISIS could also, if stolen, be exploited by hackers. Civil rights groups like the ACLU and Amnesty International voiced their support of Apple’s stance.

But before the case could end with a loaded legal precedent, the FBI paid $900,000 to an undisclosed third party that helped them bypass the phone’s iOS 9 security.

Phone encryption has remained a point of frustration for law enforcement, however. In October, FBI Director Christopher Wray said that in an 11-month period, his agency had been unable to access half of the 14,000 devices it had targeted. Deputy Attorney General Rod Rosenstein made similar remarks that month when he called for “responsible encryption” at the U.S. Naval Academy. Now, in the wake of another mass shooting and a so-far-inaccessible phone, it looks like the debate over encryption won’t end anytime soon.

Nov. 8 2017 5:10 PM

Facebook’s Tone-Deaf Plan to Tackle Revenge Porn by Having Victims Upload Nude Photos

In March, a private Facebook group of Marines with nearly 30,000 members was outed for hosting hundreds, potentially thousands, of explicit photos of female Marines and veteran service members without their consent. Only men were invited to join the group, called Marines United, and members were implored to share more photos of women, encouraged by lewd comments. One Marine in the Facebook group suggested under a photo of a woman that the person who took the picture should “take her out back and pound her out,” according to a report from the Center for Investigative Reporting.

Now Facebook wants to make it harder for men to post nude photos of women without their consent—a practice often referred to as revenge porn, since the men who do this are often ex-partners attempting to damage a woman’s reputation or to impress their peers.

Facebook’s latest answer is to ask women to upload naked pictures of themselves to Facebook via Messenger. Facebook’s artificial intelligence software would then read the image and assign it a secret code, the Australian Broadcasting Corporation reported last week.

“They’re not storing the image. They’re storing the link and using artificial intelligence and other photo-matching technologies,” Australian eSafety Commissioner Julie Inman Grant told the Australian Broadcasting Corporation. “So if somebody tried to upload that same image, which would have the same digital footprint or hash value, it will be prevented from being uploaded.”
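
As a rough illustration of the hash-and-match idea Inman Grant describes, the sketch below fingerprints an image file and checks later uploads against a blocklist of fingerprints. It uses an ordinary SHA-256 hash for simplicity; a real photo-matching system would use a perceptual hash that survives resizing and recompression, and Facebook hasn’t detailed exactly which technique it uses. The file paths are hypothetical.

```python
# Rough sketch of hash-based photo blocking. SHA-256 is used for simplicity;
# production systems rely on perceptual hashes so resized or recompressed
# copies still match. File paths here are hypothetical.
import hashlib

def fingerprint(path: str) -> str:
    """Return a hex digest of the image bytes—the photo's 'secret code.'"""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

blocklist: set[str] = set()

def report_image(path: str) -> None:
    """Store only the fingerprint of a reported image, never the image itself."""
    blocklist.add(fingerprint(path))

def allow_upload(path: str) -> bool:
    """Reject any upload whose fingerprint matches a reported image."""
    return fingerprint(path) not in blocklist

report_image("reported_photo.jpg")           # the victim's one-time upload
print(allow_upload("attempted_repost.jpg"))  # False if the bytes match exactly
```

Note that an exact-match hash like this fails the moment the image is cropped or re-saved, which is presumably why Facebook points to “photo-matching technologies” rather than a simple checksum.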

Facebook hasn’t written a public blog post announcing this new pilot program. And it’s not clear if this only works with a single photo at a time, meaning you’d have to upload every photo you are afraid will be leaked in order for it to work. Nor is it clear whether you could upload images after someone has started to share them online to stop them from being spread further. Slate reached out to the company for clarification, and we will update once we hear back. (Update, Nov. 9, 2017: Facebook has now released a blog post describing the pilot program, and Alex Stamos, Facebook's chief security officer, has addressed some concerns about the project on Twitter.)

Facebook is partnering with the Australian eSafety Commissioner, a federal agency for online safety education, to test this new approach. If someone contacts eSafety with a complaint, Inman Grant said, the agency may recommend they try Facebook’s new nude photo–blocking algorithm to prevent future nonconsensual sharing of nudes on the social network. Australia is one of four countries Facebook is working with on this pilot program—the other three countries have not been publicly shared.

Facebook’s goal here is laudable. But there are some clear problems. For one thing, if someone is able to hack into a victim’s Facebook account (say, if the person is already logged in or the password is stored in the browser), could the image be retrieved? Inman Grant says Facebook won’t store the images, but few things online are ever truly permanently deleted.

More fundamentally, though, asking women who have been victims to upload naked photos of themselves is a rather tone-deaf approach, one that’s not particularly trauma-informed. When a naked photo of a person is circulated without her consent, it can be ruinous emotionally and professionally. Requesting that women relive that trauma and trust Facebook, of all companies, to hold that photo in safekeeping is a big ask.

Facebook, after all, is one of the primary places where these images are shared without consent. The company has been sued multiple times by women for hosting revenge porn, including a case that was allowed to move forward last year involving a 14-year-old girl in Belfast, Northern Ireland, who says a naked picture of her was posted on a “shame page” on Facebook. In the United States, 4 percent of internet users—about 10.4 million people—have had explicit images of themselves posted online without consent, or have been threatened with it. For women under the age of 30, that figure reaches 10 percent, according to a 2016 study by Data & Society.

The new pilot in Australia builds on a set of global initiatives rolled out earlier this year that gave users an option to report if a “nude photo of me” is posted. It also launched a system that is supposed to prohibit further sharing of banned photos. Facebook, to its credit, is working with experts in online civil rights and domestic violence to build these tools.

But if Facebook wanted to build a software tool to combat revenge porn without accidentally flagging photos that are in the public domain or are of historic significance—like when it wrongly blocked the iconic photo of the nude girl in the Vietnam War—it's probably going to have to be more complex than this. Perhaps once an algorithm recognizes when a photo of a partially clothed body is uploaded, it should then run facial recognition software on the photo. If it detects that this is possibly another Facebook user, then it should flag the image to be reviewed by a professional who works at Facebook. It might mean forcing a lag time on any photos that appear to include nudes, and users will just have to get used to this new restriction.
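
Purely to illustrate the pipeline that paragraph imagines, here is a short sketch. Every function in it is a hypothetical placeholder, not a real Facebook or vendor API; it exists only to show the ordering of steps—nudity detection, then face matching against known users, then a hold for human review.

```python
# Hypothetical moderation pipeline matching the sequence suggested above.
# None of these helpers are real Facebook APIs; each is a placeholder stub.

def looks_like_nudity(image_bytes: bytes) -> bool:      # placeholder classifier
    raise NotImplementedError

def matches_known_user(image_bytes: bytes) -> bool:     # placeholder face matcher
    raise NotImplementedError

def hold_for_human_review(image_bytes: bytes) -> None:  # placeholder review queue
    raise NotImplementedError

def handle_upload(image_bytes: bytes) -> str:
    """Delay, rather than publish, any nude-looking photo of an identifiable user."""
    if not looks_like_nudity(image_bytes):
        return "publish"
    if matches_known_user(image_bytes):
        hold_for_human_review(image_bytes)
        return "held for review"  # the lag time described above
    return "publish, flagged for later review"
```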

Whatever the solution is, it shouldn’t ask women who are afraid of abuse to make themselves feel even more vulnerable. In theory, Facebook’s experiment here could work. But after dealing with the fallout of learning you’re a victim of revenge porn, the last thing you probably want to do is upload a naked photo of yourself to Facebook.

Nov. 8 2017 4:09 PM

When State Election Boards Try to Increase Voting Machine Security, They Can Run Into Obstacles

Voting in the United States is highly decentralized—and in many ways that’s a good thing when it comes to security. Having different regions operate their own elections and count their own votes makes it harder for someone to forge, compromise, or change a large number of votes all at once. But that decentralization also means that individual states, counties, or districts are also often free to make bad decisions about what kind of voting technology to use—and it’s surprisingly hard to stop them.

Earlier this week, North Carolina’s state elections board made a last-ditch attempt to convince a judge to prohibit counties in the state from using voting software manufactured by VR Systems on the grounds that the board hadn’t officially certified the software since 2009. On Monday—the day before Election Day—that attempt failed when Superior Court Judge Paul Ridgeway declined to intervene.

The situation in North Carolina highlights just how hard it is to make progress securing elections at the state level, even at a moment when there’s more interest in and attention to state election security than ever before. Much of that interest stems from reports of Russian attempts to infiltrate and compromise the voting infrastructure of 21 states in the lead-up to the 2016 election. According to the Intercept, VR Systems—the electronic voting company North Carolina’s election board was concerned about—was the target of a series of phishing attempts that were intended to enable Russian hackers to impersonate a voting software vendor and distribute malware to local election officials. On top of that, five Durham County precincts experienced problems with VR Systems software in 2016 and were ultimately forced to give out paper ballots instead (probably an improvement in terms of security).

It's unclear whether any of Russia's attempts were successful and, if so, what the consequences were. The NSA document obtained by The Intercept indicated that it was "likely" that an employee account had been compromised at an unnamed election software company selling a VR Systems product and that access was probably used to gather information for the next round of phishing, directed at local governments, during which the hackers impersonated VR Systems employees. VR Systems disputes this account and says that no employee credentials were compromised.* And the fact that hackers were targeting the company and impersonating VR Systems vendors in their efforts to distribute malware does not necessarily indicate that the company’s voting software is vulnerable. And it’s possible that the Durham County problems were user error, as VR claims. But even without these red flags, it would be pretty reasonable for North Carolina to do another security audit after an interval of eight years.

But what’s most astonishing about the North Carolina saga is just how little it matters what the state wanted—and just how little power state elections boards appear to have over voting technology. The North Carolina elections board was not even permitted to revoke its own certification of VR Systems software eight years after it initially issued it. It’s hard to imagine how an elections board could ever feel comfortable certifying voting technology under those circumstances.

The lack of supervisory power at the state level is especially striking at a moment when the federal government is pushing to give more support to states to beef up their elections security. In late October, Sens. Martin Heinrich and Susan Collins announced a new bill intended to help strengthen voting security primarily through partnerships and funding provided at the state level. The Securing America’s Voting Equipment Act would give the federal government the ability to share more classified information with state election officials about potential threats to their voting systems. It would also establish a grant program for states to upgrade their election technology subject to recommended best practices for security developed by the Department of Homeland Security, the National Institute of Standards and Technology, the National Association of Secretaries of State, and the National Association of State Election Directors. In the House, on Wednesday, Rep. Debbie Dingell introduced a similar bill, the Safeguarding Election Infrastructure Act of 2017, which would also provide states with additional intelligence and resources to protect voting systems.

Helping states buy more secure election technology that meets baseline security standards and providing them with more information about threats sounds like the sort of legislation state governments would support. But, in fact, previous efforts by the federal government to take similar steps have not been welcomed by all states. For instance, last year, when the Department of Homeland Security offered to help states scan their voting systems for security vulnerabilities, Georgia flatly declined. Georgia Secretary of State Brian Kemp said at the time he thought the government was “federalizing elections under the guise of security.” Georgia, meanwhile, has struggled considerably when it comes to dealing with security threats to its elections, signaling just how much it needs the kind of help it so aggressively refused. (Other states, including Florida and Ohio, were more willing to accept assistance from DHS.)

Trying to prevent meddling in U.S. elections seems like an issue that voters and government offices at every level should be on the same side of—and yet it’s remarkably adversarial. The federal government can’t enforce security standards at the state level, and, at least in North Carolina, the states can’t even necessarily enforce their own security decisions at the county level. This profound lack of coordination and cooperation speaks to the disadvantages of letting everyone run elections in their own way. Yes, the decentralization of voting in the United States makes our elections harder to hack in some ways—but it also makes them harder to secure.

*Update, Nov. 9, 2017: An earlier version of this piece asserted that a VR Systems employee account had been compromised as a result of hacking attempts based on The Intercept's analysis of a classified NSA document about hacking efforts directed at a U.S. elections software company selling a VR Systems product. VR Systems denies that any employee accounts were compromised and the piece has been updated to reflect that.

Nov. 8 2017 3:23 PM

Is Uber Seriously Promising Flying Cars in Three Years?

If you believe the hype, Uber is taking to the skies in three years.
