Future Tense
The Citizen's Guide to the Future

May 19 2015 6:48 PM

Microsoft Solitaire Is 25. Join the Tournament!

The best part of encountering an old PC—whether it's your ancient IBM ThinkPad or your great-aunt's Gateway desktop—is playing Solitaire on a long-obsolete version of Windows. Change the card art to the spooky castle and go nuts. And for the beloved classic's 25th birthday, Microsoft is launching two tournaments to identify the ultimate Solitaire addicts.

Microsoft says that the first competition will be internal at the company this month. Then in June it will publicly release the same challenges it gives its employees for an Internet-wide showdown. As Slate's Josh Levin wrote in 2008, "Though on its face it might seem trivial, pointless, a terrible way to waste a beautiful afternoon, etc., solitaire has unquestionably transformed the way we live and work."

Microsoft offers a whole Solitaire Collection for download now, but there's nothing like the original that first awakened pure digital procrastination in each of us. And by the way, if you haven't played FreeCell in a while, it's still a nightmare. 


May 19 2015 3:12 PM

Innocence of Muslims Can Go Back on YouTube. Good.

On Monday the 9th Circuit Court of Appeals reversed an earlier ruling that had forced YouTube to take down Innocence of Muslims, an inflammatory anti-Islam film that may have helped spark the Benghazi attack. Because this is America, the decision did not deal directly with blasphemy—a constitutionally protected form of expression—but with copyright and intellectual property. Yet lurking just beneath the court’s opinion lay a vigorous defense of free speech, individual liberty, and the right to disseminate even hateful, noxious ideas.

The strange case arose after Cindy Lee Garcia accepted $500 to appear briefly in what she believed was an action-adventure thriller set in ancient Arabia. Garcia’s only line was “Is George crazy? Our daughter is but a child?” In postproduction, however, producers overdubbed her line with the words, “Is your Mohammed a child molester?”

In the final cut of the film, Garcia appeared on screen for five seconds. But after the film premiered and spurred riots in the Middle East—and a fatwa against its actors in Egypt—Garcia sued YouTube and its parent company, Google, demanding they take down the film. Initially, Garcia asserted that the film was hate speech and violated her right to privacy. Eventually she settled on the copyright claim, insisting that she held a copyright over her five-second appearance, which gave her the right to force Web hosts to remove the film.

As the 9th Circuit acknowledged on Monday, Garcia’s copyright claim was, in short, ridiculous. The “author” of a film is usually its director, perhaps jointly with its producer and screenwriter. Individual actors can’t “author” a film for copyright purposes; otherwise, every actor would hold a copyright over her individual scenes, creating what Google called a “Swiss cheese of copyrights.”

It gets worse for Garcia. The Copyright Office registers movies as a single “work” and refuses to splinter every film into smaller copyrightable bits. Pragmatism dictates such a rule—otherwise, the court says, each of the estimated 20,000 extras in Lord of the Rings might assert copyright ownership of their individual scenes. And oddly, Garcia’s copyright claim is even weaker than a Lord of the Rings extra’s: While Frightened Hobbit No. 2 might have actually spoken his lines, Garcia’s one line was overdubbed, meaning she didn’t even utter a single word in the film. By manipulating her role, the movie’s director became the indisputable author of even Garcia’s five-second cameo.

All of this stuff is good law, well applied. But luckily the court recognized that there’s more going on here than just a dry intellectual property dispute. At the outset the majority wrote that the appeal “teaches a simple lesson—a weak copyright claim cannot justify censorship in the guise of authorship.” Later on it reprimanded a panel of judges who had previously ordered YouTube and Google to remove the video:

The takedown order was unwarranted and incorrect as a matter of law, as we have explained above. It also gave short shrift to the First Amendment values at stake. The mandatory injunction censored and suppressed a politically significant film—based upon a dubious and unprecedented theory of copyright. In so doing, the panel deprived the public of the ability to view firsthand, and judge for themselves, a film at the center of an international uproar.

In a separate opinion the 9th Circuit’s liberal lion Judge Stephen Reinhardt benchslapped the panel once again, sternly noting, “This is a case in which our court not only tolerated the infringement of fundamental First Amendment rights but was the architect of that infringement”:

[W]e issued an order that prohibited the public from seeing a highly controversial film that pertained to an ongoing global news story of immense public interest. … By suppressing protected speech in response to such a threat, we imposed a prior restraint on speech in violation of the First Amendment and undermined the free exchange of ideas that is central to our democracy and that separates us from those who condone violence in response to offensive speech.

Intellectual property experts generally agreed that the copyright ruling was correct. But you don’t have to be an IP professor to know that the Constitution does not permit courts to censor expression through the vehicle of a thinly veiled copyright claim. Innocence of Muslims may be blasphemous, hateful, and inane, but it’s also a textbook example of highly political speech on a matter of fierce public debate. Its controversy demonstrates precisely why it needs constitutional protection. Free speech is a very nice idea for a democracy. But it means nothing when judges can toss it out the window under the pretext of a laughable copyright suit.

May 18 2015 6:31 PM

Gorgeous, Algorithmically Generated Time-Lapses of the World’s Most Popular Landmarks

Time-lapse photography is fascinating because it can reveal changes that transpire too gradually to observe in real time. The problem is that, well, it takes a long time.

Researchers from Google and the University of Washington have found an elegant way around that, at least for some of the world’s most-photographed landmarks and scenes. In a paper published online, the researchers show how publicly available images shot by countless amateur photographers over a period of years can be algorithmically transformed into beautiful time-lapse videos. They call the process “time-lapse mining.”


The researchers started by gathering 86 million time-stamped images publicly uploaded by various users of photo-sharing sites such as Google’s own Picasa and Panoramio. They used image-recognition software to automatically pick out thousands of “clusters” of photographs that all showed the same landmark, such as the Salute in Venice or the Mammoth Hot Springs at Yellowstone National Park. Then they developed algorithms to warp a subset of photos in each cluster to a common viewpoint and scale, and ordered those by time stamp. 

Throw in a few image-stabilization techniques, correct for lighting differences, and voilà: an automatically generated time-lapse video of each landmark that looks almost as if it were shot with a single camera.
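The post doesn’t spell out the compositing math, but the basic move—order strangers’ photos of one scene by timestamp, then filter out transient differences—can be sketched with a toy sliding median over a single pixel’s brightness samples. Everything here (the function name, the brightness representation, the window size) is illustrative, not taken from the researchers’ paper:

```python
from statistics import median

def timelapse_pixels(photos, window=5):
    """Toy sketch of time-lapse mining at one pixel: given (timestamp,
    brightness) samples pulled from many different tourists' photos,
    sort them into chronological order, then smooth with a sliding
    median to suppress transient outliers (a passerby, a flash, an odd
    exposure) while keeping slow changes that are really in the scene."""
    samples = sorted(photos)            # order by timestamp
    values = [v for _, v in samples]
    out = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)    # clamp the window at the edges
        hi = min(len(values), i + window // 2 + 1)
        out.append(median(values[lo:hi]))
    return out

# A lone outlier (one photo with a person blocking the pixel) vanishes:
# timelapse_pixels([(1, 10), (2, 10), (3, 200), (4, 10), (5, 10)], window=3)
```

A median, unlike a mean, discards a lone outlier rather than blending it into the result, which is one reason median-style filters are a natural fit for crowd-sourced imagery.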


“Whereas before it took months or years to create one such time-lapse, we can now almost instantly create thousands of time-lapses covering the most popular places on earth,” the researchers wrote in their paper. (Here is the PDF.) “The challenge now is to find the interesting ones, from all of the public photos in the world.”

Figuring out what’s interesting, you see, is a task that’s still beyond the ken of machine-learning algorithms. The Google and UW researchers had to go through the time-lapse videos themselves to determine which were worth highlighting in their paper. They homed in on several categories of subject, including waterfalls, seasonal changes in vegetation, geological changes, construction projects, and city scenes. Sprinkled through this post are a few of our favorites, in GIF form, including the one of Las Vegas' changing skyline below.

[GIF: Las Vegas skyline time-lapse]

And here is the full video that the researchers published in conjunction with their paper, including a slew of other impressive time-lapses. It's very much worth watching.

May 18 2015 5:07 PM

United Offers Reward for Spotting Security Flaws in Its Website, Not Its Planes

United Airlines is offering 1 million reward miles to hackers who report vulnerabilities in the company's website or apps. The company claims it's the first cybersecurity incentive program in the industry. But notably, the "bug bounty program" does not apply to "bugs on onboard Wi-Fi, entertainment systems or avionics." Basically, the company doesn't want independent researchers vetting the systems that actually make planes fly.

The program debuted last week, less than a month after security researcher Chris Roberts was banned from flying on United after he tweeted about onboard Wi-Fi security vulnerabilities while on one of the company's flights. Roberts implied in a tweet that he could access navigation systems and control passenger oxygen masks. In response, the FBI met him at the gate in Syracuse, New York, when he landed. Later, in conjunction with the TSA, the agency issued a warning to airlines to be on alert for hackers.

The bug bounty program could have been in the works before this incident, but either way, it certainly speaks to the importance of engaging security researchers, sometimes called white-hat hackers, instead of alienating them. As Dan Gillmor wrote on Slate, "If United and the aviation industry as a whole want to earn customers’ confidence in this situation, they should put Roberts and a bunch of other white-hat hackers on retainer."

That's not exactly what United is doing, though. The company has made good security updates to its website, and a beta version that includes default page encryption ("https" at the beginning of the URLs) launched last week. Testing this and other new security features is important, but if bug hunters are discouraged from testing their ability to access flight systems, the bounty won't help with the most crucial (and dangerous) vulnerabilities.

To be fair, it's probably not safe for security researchers to mess around with critical systems while flying to win miles. You can see how that could lead to a tragic accident. Maybe airlines should run a few controlled hacking flights every year and voluntarily give researchers the opportunity to look for dangerous bugs. We all know how much companies love scrutiny of their proprietary systems!

May 18 2015 12:31 PM

Obama Gets His Very Own Twitter Account

The official (verified!) Barack Obama Twitter account has more than 59 million followers. The handle is run by Obama's aides, so tweets personally written by the president are signed "-bo." It seems like a reasonable arrangement for someone who's, well, pretty busy. But Obama apparently wanted an account all for himself, and with @POTUS, which sent its first tweet this morning, he just got one.

Something so public can't just be personal, though. The account bio warns that "Tweets may be archived: http://wh.gov/privacy," referring to the White House's data privacy policy and, presumably, the scandal over Hillary Clinton's use of a personal email account while secretary of state without adequate public records archiving. The White House wrote in a blog post that "President Obama is committed to making his Administration the most open and participatory in history."

It's not clear yet whether Obama has snapped up @POTUS for himself, or whether it will be handed down like the official Pope account, @Pontifex.

Since the account is new, there was a brief period where the Twitter-famous could honestly boast that they had more followers than the leader of the free world.

But the account is rapidly gaining followers. Duh.

It seems that it wasn't until the home stretch of Obama's presidency that anyone (perhaps himself included) trusted him with a personal account. I think I speak for everyone when I say: What could possibly go wrong?

Update, May 18, 2015, 1:45 p.m.: It seems that Obama sent his first @POTUS tweet from an iPhone, which is odd given that he has always been vocally frustrated about having to use BlackBerrys as part of the White House's cybersecurity approach.

May 15 2015 12:47 PM

Google’s Fully Driverless Cars Are Ready for the Road. Well, Some Roads.

Google has been testing self-driving car technology on American roads for six years now, with mostly encouraging results. Until now, however, it has used specially outfitted versions of mass-production cars like the Toyota Prius and Lexus RX 450h.

On Friday, the company announced that it is ready to put its own custom-built, fully self-driving cars on public streets for the first time. Yes, those cute little Koala-mobiles are apparently road-ready, just a year after Google introduced them to the world. They’ll start out by tooling around the intimately familiar Mountain View, California, roadways that Google’s self-driving Lexuses have been cruising—and painstakingly mapping—for the past couple of years.

Google’s prototypes are designed to be not just self-driving, but fully driverless: They have no steering wheels, brakes, or gas pedals, just buttons you push to start the ride. As you can see in the video below, they’re designed to be capable of completing their journey without anyone in the driver’s seat at all.

Unfortunately for Google, California’s self-driving car law doesn’t allow that. It requires all autonomous vehicles to be street legal and to have a human behind the wheel. So for the time being, Google says it will outfit its robotic Totoros-on-wheels with removable steering wheels, accelerators, and brake pedals so that the person in the driver’s seat can take over at a moment’s notice. It will also cap their speed at 25 mph, making it unlikely that anyone will die even on the off chance that they accidentally mow someone down. (For the record, while Google’s self-driving cars have been involved in a handful of minor accidents over the years, Google says not once has its autonomous driving system been at fault.)

What’s interesting about all this is that Google doesn’t really believe that putting a human behind the wheel makes its self-driving cars safer. It’s happy to comply with California’s law for now, of course, while it’s still developing the technology and mapping the terrain. But in the long run the Googlers behind the self-driving car project are convinced that driving will be safer once humans are removed from the equation altogether.

That’s a view not widely shared—or, at least, not widely voiced—among mainstream automakers, which is part of why Google had to build its own steering wheel–free prototypes in the first place. But at least a few manufacturers may be starting to come around: At the Consumer Electronics Show in Las Vegas in January, for instance, Mercedes rolled out a futuristic concept car in which four passengers face each other rather than the road ahead. Whether we’ll see anything like that on public streets in our lifetimes will depend in part on the success of the types of tests that Google is doing today. It certainly won’t happen anytime soon.

One other thing that isn’t likely to happen anytime soon: Sighting a Google self-driving car prototype on a random street in Anytown, U.S.A. As the Atlantic’s Alexis Madrigal explained last year, near-perfect information about a given roadway has been crucial to self-driving cars’ success so far. Google has turned parts of Mountain View into a virtual test track by mapping literally every speed bump and stop sign. Google said in a statement Friday that it is starting to send some of its self-driving Lexuses into new territory, including San Francisco, where they’ll encounter fresh challenges like hills and fog. Until further notice, however, the driverless prototypes will stick to the cozy confines of Google’s hometown.


May 14 2015 5:25 PM

Reddit Decides It Might Be Time to Crack Down on Harassment

On Thursday, Reddit announced that it wants to "curb harassment" among users. The company says it has "improved our practices" and that users will be able to report problematic private messages, posts, and comments using Reddit's internal messaging system or through contact@reddit.com.

The site has been going through some changes over the last six months. In February it banned revenge porn, and co-founder Alexis Ohanian said in a statement, “We also recognize that violent personalized images are a form of harassment that we do not tolerate and we will remove them when notified.” At the time, Slate's Amanda Hess noted that though the change was positive, it was not clear how Reddit would authenticate requests or how it would address nuanced fringe cases, like photos of public nudity.

Reddit's new initiative is equally vague. It presents a definition of harassment:

Systematic and/or continued actions to torment or demean someone in a way that would make a reasonable person (1) conclude that reddit is not a safe platform to express their ideas or participate in the conversation, or (2) fear for their safety or the safety of those around them.

And then it explains how people can bring unacceptable behavior to the attention of Reddit employees so they can, presumably, take action. It doesn't say anything, though, about how Reddit will approach or deal with these submissions. Of course these are hard topics to present succinctly in a blog post, but a little specificity would be useful. When asked about how complaints will be evaluated and how harassers will be reprimanded, a Reddit representative referred me to the announcement.

When Twitter expanded its attempts to combat harassment in April, the company talked about how it had created a temporary lock function for user accounts under investigation. Examples of this additional granularity for Twitter included: "An account may be locked for a pre-defined time period. A user may be asked to verify their phone number. A user may be asked to delete certain Tweets. After completing the requested actions, their account is unlocked." There are problems with Twitter's plan, as some of my Slate colleagues pointed out, but at least we know what the plan is.

In its statement, Reddit says, "One of our basic rules is 'Keep everyone safe'. Being safe from threat enables people to express very personal views and experiences—and to help inform and change other people’s views." The link goes to five "Reddit Rules," which overlap with, but are not the same as, the classic five. A Reddit representative said that the company requires users to follow both sets of rules and that the two are complementary.

This subtle adjustment to the rules, plus the broader approach in Thursday's announcement, seems aimed at reducing harassment without enraging a certain population of Reddit users who don't want any company intervention and prefer to rely on appointed moderators. "This change will have no immediately noticeable impact on more than 99.99% of our users," the statement says.

Reddit points to a survey it conducted last month of 15,000 users to explain its decision to make changes.* The poll "showed negative responses to comments have made people uncomfortable contributing. ... The number one reason redditors do not recommend the site—even though they use it themselves—is because they want to avoid exposing friends to hate and offensive content." The survey revealed that 20 percent of women versus 12 percent of men would describe themselves as unhappy with the Reddit community.

Reddit's example carries a lot of weight, so the decision to publicly combat harassment is certainly a positive one. The question now is just whether these changes will actually help.

*Correction, May 15, 2015: This post originally misstated the number of users Reddit surveyed. It was 15,000.

May 14 2015 4:49 PM

This AI Engine Promises to Identify Your Photos. It Often Fails Spectacularly.

Here's a riddle: When is a goat a dog?

Answer: When you run a picture of it through the Image Identification Project.

Released to the public yesterday, Wolfram Research’s ImageIdentify promises to accurately describe the contents of any picture that you show it. The setup couldn’t be simpler: Upload a picture and it’ll tell you what it sees. In a lengthy blog post, computer scientist and Wolfram Research CEO Stephen Wolfram describes this program as “a nice practical example of artificial intelligence.” He suggests that it might be used to automatically classify the contents of albums, offering “statistics on the different kinds of animals, or planes, or devices, or whatever, that appear in the photographs.”

The trouble, unsurprisingly, is that ImageIdentify appears to go wrong more often than it goes right. Wolfram acknowledges this difficulty, and gamely offers a handful of interesting errors in his post. Given an image of Indiana Jones, for example, “the system was blind to the presence of his face, and just identified the picture as a hat.” It’s certainly impressive that it recognized and correctly labeled a hat. But such mistakes would seem to constrain the project’s usefulness, at least for the time being.

Like Microsoft’s How-Old.net, ImageIdentify is most interesting when it gets things wrong in spectacular ways. As Wolfram notes, many of its errors make sense. It confused my bike with a bicycle rack, presumably because it saw the primary object correctly but assumed that it was somehow attached to the old-fashioned radiator behind it. Here, the system’s error likely derives from its effort to identify a single subject in each image, a propensity that sometimes leads it to ignore key details (as in the Indiana Jones example) and sometimes leads it to conflate distinct elements (as in the case of the bike).

wolfram image identifier: bike rack

Screencapture of ImageIdentify. Photo by Jacob Brogan.

Sometimes, however, ImageIdentify is just plain weird. When we fed it a picture of a croissant, it told us that we were looking at shellfish. Wolfram claims the system’s mistakes “mostly seem remarkably human.” But that pastry-mollusk confusion feels uncanny—more like a metaphor than an ordinary misapprehension.

ImageIdentify will no doubt improve in time—it can be trained to better understand what it’s looking at—but for now it’s at its best when it’s at its strangest. Here are a few of our favorites:

pollock chicory root

Screencapture of ImageIdentify. Image of Autumn Rhythm by Jackson Pollock.

Feline English setter

Screencapture of ImageIdentify. Photo by Heidi Strom Moon.

Oscar Meyer Bobsled

Screencapture of ImageIdentify. Photo by June Thomas.

lemon shark cat

Screencapture of ImageIdentify. Photo by Abby McIntyre.

wolfram person

Screencapture of ImageIdentify. Photo by Joi Ito/Flickr.

May 14 2015 4:00 PM

Code-Breaking Robot Proves Your Combo Locks Are No Good 


Careful what you leave in your lockers, high school students and gym-goers. An invasion of 3-D–printed robots may be coming, capable of popping one of the world’s most ubiquitous brands of combination locks in as little as half a minute.

On Thursday well-known hacker Samy Kamkar published on his website the blueprint and software code for a 3-D–printable Arduino-based lock-opening robot he calls the “Combo Breaker.” Attach it to any of millions of Master Lock combination locks, turn it on, and it can take advantage of a Master Lock security vulnerability Kamkar recently discovered to open the lock in a maximum of five minutes with no human interaction. “The machine pretty much brute-forces the lock for you,” says Kamkar. “You attach it, leave it, and it does its thing.”

In fact, the Combo Breaker is programmed to do far better than a mere brute-force attack. It takes advantage of a mathematical trick Kamkar revealed last month that allows anyone—with a little practice—to find the combination of a low-end Master Lock combination lock in only eight tries. That technique takes advantage of a manufacturing flaw: When the U-shaped shackle of one of those combination locks is pulled while its rotor is turned, the cracker can feel resistance on certain numbers that help to reveal the position of the “combination disks” that determine the combination that opens the lock. Combined with some restrictions on possible combinations that Kamkar mathematically deciphered and encoded in a Web-based tool, that information leak let him cut out all but a few possible combinations. The resulting manual technique is easy enough—writers at Ars Technica who tested it, for instance, were mostly able to pull it off after a couple of tries.

The Combo Breaker goes even further, automating the process with zero skill or practice required from the user. But a Master Lock cracker willing to learn just one step in the process can also give the Combo Breaker a manual head start by merely turning a target lock’s rotor while tugging the shackle to find the first number that offers resistance and starting the robot at that position. Doing that, Kamkar says, enables his device to then crack a Master Lock combination in just 30 seconds. “Without doing any work, this can open the lock entirely automatically in 80 combinations,” Kamkar explains. “If you do that one little test first, it can crack the lock in eight combinations or less.”

Kamkar’s robot consists of little more than a stepper motor, an Arduino chip that runs his cracking algorithm, a lever to pull the shackle, a rotor with a 3-D–printed attachment to the lock’s face, and an optical sensor that tracks the location of the lock’s dial as it turns. All together, he says, he built his prototype for less than $100. Here’s Kamkar’s video breakdown of the robot’s creation.

Master Lock didn’t immediately respond to Wired’s request for comment. But Kamkar says his cracking technique is likely no major surprise to the lock maker, nor should it necessarily register as a serious security crisis. Master Lock gives its locks a 1-to-10 security rating displayed on its packaging, and the locks he tested were all rated 3. “The moral is pretty simple,” he says. “If you’re trying to protect valuables in a storage locker, you should probably be using a better lock.”

In fact, Kamkar’s method builds off a trick that’s been known for years that reduces the number of possible combinations of those cheap Master Lock locks from 64,000 to just 100. Kamkar’s original goal was to build his robot to automate that tedious 100-combination guessing. But when he drilled off the back of the locks to learn more about how they work, he soon discovered his own additional trick that further honed the attack, vastly reducing his robot’s cracking time. (Watch Kamkar explain the technical details of that technique here.)
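The arithmetic behind that long-known reduction is easy to check. A 40-position dial allows 40³ = 64,000 three-number combinations, but once shackle tension has revealed the last number, the published trick constrains the other two. The congruences below are the commonly cited form of that older attack—an assumption for illustration, not code taken from Kamkar—and they leave exactly 10 × 10 = 100 candidates:

```python
def candidate_combos(third, dial=40):
    """Enumerate remaining candidates for a 40-position combination lock,
    assuming the widely published congruence trick: the first number is
    congruent to the third mod 4, and the second is congruent to the
    third plus two mod 4. This sketches the older 100-combination attack
    the article mentions, not Kamkar's refined eight-try algorithm."""
    firsts = [n for n in range(dial) if n % 4 == third % 4]          # 10 values
    seconds = [n for n in range(dial) if n % 4 == (third + 2) % 4]   # 10 values
    return [(a, b, third) for a in firsts for b in seconds]

# 100 candidates instead of 40**3 = 64,000 -- a 640x reduction,
# which is what made tedious-but-feasible manual guessing possible
# even before a robot automated the dial-turning.
```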

The Combo Breaker robot is only the latest in a long career of clever hacks for Kamkar, who works as an independent software developer and consultant. Kamkar gained fame in 2005 for creating the “Samy worm,” an attack that spread virally across Myspace, adding over a million friends to Samy’s Myspace account in less than 24 hours. Kamkar’s more recent work has included a drone designed to seek out and wirelessly hijack other drones and “evercookie,” a browser-tracking cookie designed to be nearly impossible to remove.

Kamkar says his goal in freely releasing the plans for the Combo Breaker was mostly to foster hacker experimentation and share his own enjoyment of what he describes as “James Bond”-style gadgetry. But he also hopes to teach the public that their low-end combination locks are laughably insecure. “Security people know about this, but the general public doesn’t,” Kamkar says. “I try to build things that are interesting to a general audience. And I hope getting this out there helps people make better decisions about the locks they use.”


May 14 2015 11:47 AM

Thanks to a Security Flaw, Apple Watches Are Really Easy to Steal

The basic security measure on the Apple Watch makes a lot of sense. When you put the watch on, you enter a passcode (or unlock your iPhone) and it senses contact with your wrist. As long as you keep the device on and maintain that contact, the watch assumes that it's still safe with its trusted BFF. But if you take the watch off, it starts demanding the passcode again. It's simple and smart, but there's a problem.

As iDownloadBlog discovered, Apple Watches have a bug that allows someone (like a thief, for example) to do a hard reset on a unit without knowing its passcode (see video below). Pressing and holding the Apple Watch's Contacts button initiates a sequence that includes the option to erase everything on the watch, including settings like the passcode. And you don't need the passcode to do it. Once the unit has been wiped, it's essentially brand new again, and ready to be paired with a new iPhone.

Apple has made progress closing this type of loophole on iPhones themselves, which has successfully reduced theft rates. Apple introduced Activation Lock in 2013 as part of iOS 7, and the feature is turned on by default in iOS 8. Activation Lock demands the owner's Apple ID and password to turn Find My iPhone off, no matter how many times someone hard resets a device. It also asks for the Apple ID and password if the owner remotely wipes the device.

The Apple Watch is in its early days, so it's not surprising that there are issues Apple needs to work out. But given that this is a particular security area where Apple has done a lot of mobile development, it seems like a strange and problematic oversight. The company needs to release a fix quickly.
