Future Tense
The Citizen's Guide to the Future

June 23 2016 9:02 AM

Why It Seems Like Everyone Is Getting Hacked Lately

I have long made it a policy not to pay attention to news about data breaches. I'm starting to think that was a bad idea.

Well, it's understandable. In the past five years, high-profile hacks and data breaches have gotten a lot more common. But lately the frequency has really ramped up. Data from old breaches has been resurfacing, new breaches have been occurring, and, in particular, there have been a number of political hacks related to the U.S. presidential race. At a certain point you probably just started tuning it out, but the general apprehension remains.

Can you catch me up on all the stuff I've been avoiding?

Yeah, let's do it. In May, data from old breaches of LinkedIn, Tumblr, Myspace, and the dating service Fling resurfaced and wreaked havoc on users who hadn't changed their passwords, or who had reused those old username/password combinations on other sites. Fling's original breach was in 2011, LinkedIn's occurred in 2012, and Tumblr's happened in 2013. It's not clear when the Myspace breach took place, but it was almost certainly before 2012. All that data came pouring back out and was being sold by the same hacker, known as Peace or peace_of_mind. Ars Technica estimated that these four data troves together comprised 642 million passwords.

There was a lot of prominent fallout from these four data dumps. A credit monitoring firm wrongly reported that Dropbox had been hacked. And 32 million Twitter logins leaked online. Twitter firmly denied that it had been hacked; the data may have come from credential reuse or from data-collecting browser malware. Additionally, hackers infiltrated some celebrity social media accounts—like Mark Zuckerberg's Instagram, LinkedIn, Pinterest, and Twitter—using credentials from the breaches. Keith Richards, Kylie Jenner, and Tenacious D were also hacked, among others.

Geez, that's a lot.

Actually, I wasn't done.

While all of that was going on, the political breaches were also gearing up. Throughout March and April the hacking collective Anonymous talked a big game about attacking Donald Trump. It leaked some of his voicemails and eventually revealed his supposed Social Security number and cellphone number.

Last week the Democratic National Committee announced that its networks had been breached by two hacking groups. One lurked on the network for about a year to surveil communication like email, and the other infiltrated in the last few months and took the DNC's file on Donald Trump. The DNC said it suspected that Russian hackers were behind the attacks.

Finally, on Tuesday, Bloomberg reported that the Bill, Hillary, and Chelsea Clinton Foundation was breached as part of the same Russian hackers' "dragnet of the U.S. political apparatus." The Clinton Foundation said it didn't know about the hack and wouldn't comment, but Bloomberg claims that government investigators identified the intrusion in the last week or so.

So, are things permanently worse than they were before?

Well, who can say, but probably! One concerning aspect of the LinkedIn/Tumblr/Myspace/Fling situation was that it really highlighted how much we don't know about what gets stolen during breaches. In 2012, hackers only (only!) released about 6.5 million LinkedIn passwords, and the company didn't indicate that more had been stolen. Four years later more than 100 million other credentials popped up from the same breach. Not ideal. The series of breaches was also a reminder that even old data can be valuable. As security researcher Troy Hunt wrote in a blog post about the series of data dumps, "If this indeed is a trend, where does it end? What more is in store that we haven't already seen?"

Since people are so inconsistent about using strong passwords, changing their passwords frequently, and using two-factor authentication when available, old data is still sought after on the black market. Even if someone has updated a lot of their passwords, hackers can still find valuable information like a credit card number in old accounts the person forgot about. Outdated credentials can also be used for phishing scams. For example, a hacker can try to make her con seem more legitimate by referring to old but accurate information about her targets.

Meanwhile, the political hacks say something about what it means to be controversial and/or part of power structures in the United States today. If politicians, celebrities, and other public figures weren't worried about being hacked before, they should definitely be worried about it and changing their passwords now. Some have even added cybersecurity consultants to their entourages.

Is there any good news in all of this?

Actually, there is one thing! The cybersecurity firm FireEye published research Monday indicating that since 2014 China has been reducing its cyberattacks against the United States. This may be partly because so many of its initiatives have been exposed, and partly because the U.S. prepared possible trade sanctions against China in the weeks before Chinese President Xi Jinping visited the United States.

Of course, there is also bad news in the report. It concludes, "The landscape we confront today is far more complex and diverse, less dominated by Chinese activity, and increasingly populated by a range of other criminal and state actors." Great. Also, maybe China is just taking a few years off now that it has a Facebook-like database of Americans.

I think I will probably go back to repressing all of this, because it's too creepy. But first, is there anything I should be doing to protect myself?

Yes, good question. The common thread in the recent hacks that have affected consumers is definitely reused credentials and credentials that remain unchanged for years. If you're going to keep your passwords forever, at least have a different one for every site. But ideally you would use strong, unique passwords that you also change periodically. Maybe you've heard this so often that you tune it out, too, but using a password manager is the easiest way to accomplish all of this. Setting it up definitely requires an up-front investment of time, but once you have it going it's a very solid solution that doesn't really take any more work than forgetting your passwords all the time and having to reset them. Enabling two-factor authentication whenever possible is the other easy step you can take to secure your accounts. Taking these steps won't make you impervious to cybercrime, but it will help a lot.
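If you're curious what "strong and unique" actually looks like, here's a minimal sketch of generating a different random password for every site using Python's standard library. It's purely illustrative (the site names and length are made-up assumptions), and a real password manager also encrypts and stores the results for you:

```python
# A minimal, illustrative sketch of strong, unique per-site passwords.
# A real password manager also encrypts and stores these for you.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=20):
    """Return a random password drawn from letters, digits, and symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # The site names here are arbitrary examples. A different credential
    # for every site means one breach can't be replayed against the rest.
    for site in ["linkedin.example", "tumblr.example", "myspace.example"]:
        print(site, generate_password())
```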

June 22 2016 4:02 PM

Mark Zuckerberg Is Right to Tape His Webcam. But He Shouldn’t Have To.

A photo that Facebook chief Mark Zuckerberg posted Tuesday has inadvertently sparked a welcome discussion about hardware security.

Meant to celebrate an Instagram milestone—the Facebook-owned photo-sharing app topped 500 million active users—the image’s background shows a desk with a laptop that sports a couple of non-standard features. As one Twitter user noticed and Gizmodo highlighted, the MacBook in question has small pieces of tape over both its webcam and the audio input area. (“Wow, Mark Zuckerberg is Paranoid as Fuck,” read the Gizmodo headline.)

There’s some sweet irony in the apparent discovery that the founder and CEO of Facebook—a company famous for inhaling and capitalizing on its users’ personal data—guards his own computer so carefully. But Gizmodo’s conclusion requires a couple of jumps.

First, it isn’t entirely clear that the computer belongs to Zuckerberg, although Gizmodo provides some anecdotal evidence that it does.

Second, and more importantly, putting tape over your webcam is not really such a paranoid thing to do. My laptop has tape over its webcam as I type this. So does the laptop of my colleague Lily Hay Newman, who covers cybersecurity. Neither of us is in the habit of wearing tinfoil hats. But unlike alien mind control, webcam hacking is a demonstrably real phenomenon, so much so that the Atlantic once dubbed it an “epidemic.” It isn’t just criminal hackers and voyeurs who are breaking into people’s webcams. The Snowden leaks revealed that the NSA does it, too.

You and I are probably relatively unlikely to be targeted in such an attack, although you never know. Zuckerberg, in contrast, is highly likely to be the subject of various attempted hacks, to the point that it would be rash of him not to take extra security measures. FBI Director James Comey tapes his webcam, too.

But even if you aren’t Zuckerberg or Comey, putting tape (or a cute cat sticker, as one of my editors does) over your webcam is a pretty simple step you can take to provide yourself a little peace of mind at very little cost. When you need to use the webcam, just move the tape or sticker an inch to the side. When you’re finished, put it back. As Tyler Lopez wrote in Slate in 2013: “You Should Never, Ever Leave Your Webcam Uncovered When You Aren’t Using It.”

The practical effect of webcam tape is not only to thwart would-be hackers, but to ensure that you aren’t unwittingly broadcasting yourself—for instance, by forgetting to close a videoconferencing app. And the psychological effect is to make you more aware of the sensors that could potentially be monitoring you in your own home, including smart speakers such as Amazon’s Echo. Both effects are salutary.

The value of that second piece of tape on the computer that might be Zuckerberg’s is a little less clear-cut. Some initially assumed it was covering the computer’s headphone jack, but as Fusion’s Kashmir Hill points out, all that would accomplish is to make it hard to plug your headphones in. More likely, it’s covering the tiny dual microphones built into the side of the machine. In an informal test, Hill found that would muffle the audio signal, but wouldn’t block it out.

When you think about it, the surprise here is not that Zuckerberg, or anyone else, would want to cover their webcam when it isn’t in use. It’s that he has to use a piece of tape to do it. As privacy expert Adam Harvey points out, in an era when we’re increasingly aware of all the threats to our personal security, it would make a lot of sense for Apple and other computer makers to simply build webcam covers into their machines.

June 22 2016 3:43 PM

The California Drought’s Lessons for Food Security

Despite the arrival of increased rain and snow from El Niño this winter, California enters the fifth straight year of its worst drought in 1,200 years. The drought has been especially acute in the state’s Central Valley, where conditions range from extreme to exceptional drought.

With its fertile soil, moderate climate, and unparalleled irrigation system, the Central Valley is one of the most productive agricultural regions on the planet, producing nearly all of America’s almonds, olives, walnuts, and pistachios; the vast majority of its grapes, strawberries, avocados, carrots, tomatoes, and lettuce; and $13.1 billion worth of milk and cattle. As a result of the drought, California’s 39 million residents are competing for fewer available water resources. Water prices have spiked, increasing tenfold in some areas.

The drought and the subsequent increased cost of water have led to declines in agricultural production across the state. According to the U.S. Department of Agriculture, production of oranges is down 9 percent, avocados down 13 percent, garlic down 6 percent, romaine lettuce down 15 percent, and olives down 29 percent in 2014—the most recent year on record—compared with the previous three-year average. The drought cost California’s farmers $1.5 billion in 2014 alone due to the combination of revenue losses from lower production and additional pumping costs.

The decline in California agricultural supply has resulted in higher prices for some fresh fruits and vegetables on supermarket shelves, but so far, the drought hasn’t led to a significant increase in food shortages or insecurity here or abroad. In fact, world food prices are currently at a seven-year low, according to the U.N.’s Food and Agriculture Organization. Two factors have helped mitigate the price changes caused by decreased agricultural production in California: substitution between goods and global markets.

First, unlike staple crops like wheat and rice, consumers can easily swap many of the products grown largely in California, like almonds and strawberries, for other nuts, fruits, and vegetables grown in areas not being hit by drought. Substitution isn’t just happening at the consumer level either. Producers in California are substituting away from traditional staples like oranges and avocados and towards fruits like grapes, pomegranates, and dragonfruit that use less water and are more economically productive. This is part of a longer-term trend, according to Daniel Sumner at the University of California–Davis, who notes that over the last two centuries, California has shifted away from wheat and cotton production to its current mix of crops.

Second, though California may be the biggest American producer of many crops, it is far from the only source of avocados, olives, or grapes internationally. As the USDA notes, though California produces 86 percent of American-grown avocados, 82 percent of the avocados consumed in the U.S. are imported from other countries like Mexico. Indeed, the U.S.’s imported share of fruits and vegetables such as olives, peaches, beans, and lettuce has grown with the drought. Though production costs may be increasing for California farmers, they haven’t changed elsewhere around the world. As the price of produce goes up due to the decreased supply of California goods, competing farmers from Latin America to Southeast Asia may now find it worthwhile to increase their production, even if their marginal costs are slightly higher, which helps limit the potential for shortages and price increases.

The reactions of California farmers, foreign producers, and domestic consumers to the drought highlight a few key lessons for food security moving forward. First, substitution and diversification are critical. Both producers and consumers have been able to shift to other agricultural products that cost less, use less water, and are more economically efficient than those hit hardest by the drought. Just like a stock portfolio, a more diversified agricultural portfolio and diet help limit vulnerability to major price fluctuations by any individual good in the basket.

While these two principles of substitution and diversification may work well for the nuts, fruits, and vegetables of California’s Central Valley, they also highlight the risk of supply disruptions to crops with few options for substitution, such as the three mega-crops of rice, wheat, and maize, which are responsible for half of the world’s calories. Wheat alone accounts for more calories and protein than any other crop on the planet, and as a result the demand for wheat is highly inelastic, so slight changes in its supply and price are felt strongly throughout the world.

Relatedly, the global food supply chain can handle disruptions in one major breadbasket like California’s Central Valley, but food insecurity becomes a real challenge when drought, floods, or extreme weather events affect multiple major food producers at once. A major food security simulation run last fall by the Center for American Progress and the World Wildlife Fund highlighted how climate change is increasing the likelihood of simultaneous disruptions to major food producers. Part of the reason why wheat prices spiked so dramatically in 2010 was because three major wheat producers—Russia, Kazakhstan, and Ukraine—were all hit with extreme weather that reduced production at the same time.

Most Americans live in a state of blissful ignorance about the food that appears on our grocery store shelves. At best, we might think about which fruits and vegetables are in season, but rarely do we consider the global supply chain required to bring the food to market, the equilibrium of supply and demand that influence the prices we pay, or the delicate balance of heat, soil nutrients, and water required to grow each crop—and how those pieces of the puzzle might be changing forever.

We take our food security for granted until it’s not so secure anymore. The California drought didn’t cause any food riots, but it provides a window into the future of many other breadbaskets around the world. The best practices in conservation, substitution, and diversification we’re learning from California are the things we’ll increasingly need at a global scale to keep affordable food on the table.

June 22 2016 1:33 PM

Like Auto Racing Before It, Drone Racing Could Spur Innovation

Over the past 15 years, drones have progressed from laboratory demonstrations to widely available toys. Technological improvements have brought ever-smaller components required for flight stabilization and control, as well as significant improvements in battery technology. Capabilities once restricted to military vehicles are now found on toys that can be purchased at Wal-Mart.

Small cameras and transmitters mounted on a drone even allow real-time video to be sent back to the pilot. For a few hundred dollars, anyone can buy a “first person view” (FPV) system that puts the pilot of a small drone in a virtual cockpit. The result is an immersive experience: Flying an FPV drone is like being Luke Skywalker or Princess Leia flying a speeder bike through the forests of Endor.

Perhaps inevitably, hobbyists started racing drones soon after FPV rigs became available. Now several drone racing leagues have begun, both in the U.S. and internationally. If, like auto racing, drone racing becomes a long-lasting sport yielding financial rewards for backers of winning teams, might technologies developed in the new sport of drone racing find their way into commercial and consumer products?

June 22 2016 10:41 AM

Future Tense Newsletter: Why Does the Government Take So Long to Regulate Emerging Technologies?

Greetings, Future Tensers,

If you’ve been following along with this month’s Futurography course, you’ll probably agree that self-driving cars are well on their way. The trouble, as Kevin C. Desouza shows in a recent article, is that they aren’t going to show up all at once, which means we’ll need regulations and protocols for the time period in which operator-controlled vehicles share the road with fully autonomous ones. We may, for example, have to develop new licensing standards for drivers, but we’ll also have to rethink much of the received wisdom about urban planning and street design. As Desouza suggests, working through such challenges means acknowledging that we’re now living in an “era of human-machine relations.”

Accepting that truth doesn’t always come easily, as the vagaries of many recent attempts to regulate technology show. The Federal Election Commission, for example, has struggled to figure out how it should best regulate paid political speech online. Similarly, Eric Null writes that the Federal Communications Commission has yet to successfully impose strictures that would keep internet service providers from abusing their customers’ data. And it took years of pressure to get the Federal Aviation Administration to issue rules for commercial drone pilots, a decision that finally went through this week.

It’s tempting to blame all these difficulties on governmental incompetence or partisan gridlock, but even in the best of circumstances it’s hard to make clear, broadly beneficial decisions about such issues. That’s clearly not going to change as we cede more control of our lives to automated systems.

Here are some of the other stories that we read while contemplating jobs on Mars:

  • Cybernetics: While prosthetics are getting more and more advanced, neuroscientist Patrick McGurrin argues that it’s important to keep more traditional functionality in mind as well.
  • Photoshop: ModCloth has thrown its support behind an anti-Photoshopping bill, but who would the legislation really benefit?
  • Environmentalism: Six Flags is cutting down a whole forest at a New Jersey park to put up some solar panels. Is that really environmentalist?
  • Intellectual property: Weak IP protections may actually help countries develop. Are we holding other nations back when we try to impose our own standards on them?

Jacob Brogan

for Future Tense

June 22 2016 9:47 AM

Tell Netflix What TV Shows and Movies You’d Like to Watch With This Cool Feature

Photo by Pascal Le Segretain/Getty Images

Netflix’s streaming services have always operated according to the illusion of choice. Though its inventory is enormous, the specifics of its listings are controlled by licensing agreements and audience analysis that can feel obscure, if not downright alienating, from the outside. Titles appear and disappear so arbitrarily that Slate maintains a monthly column rounding up the films and shows you should watch before they’re gone. In fact, Netflix’s sophisticated recommendation algorithms arguably exist in part to paper over its finite selection. Never have we had more power over our media consumption and less control over what we consume.

Now, the company is promising to allay that dilemma, if only slightly: It’s created a page that allows users to suggest TV shows or movies they’d like to watch. The page’s interface offers options for three proposals, letting visitors type in whatever they want. Refresh after submitting them, and you’re welcome to propose a few more. Since the submissions appear to be tied to individual user accounts, however, it seems as though it would be difficult to individually game the system.

There are plenty of caveats about this feature, not least of which is that it may not be all that helpful. Users on Reddit’s r/Netflix subreddit have acknowledged as much, even as they’ve enthused over the page’s existence. “Fat chance [Game of Thrones] will ever come to Netflix since HBO is rushing to make their own streaming service better,” one complains. Even Netflix itself points to similar limitations, encouraging its customers to familiarize themselves with the complexities of its licensing deals—or at least with the mere fact that those deals are complex—before they dive in. But you can imagine groups of fans creating campaigns for small films that might not have been on the site’s radar before.

This isn’t the first time Netflix has incorporated an option of this nature. Almost a decade ago, the site Hacking Netflix identified a similar suggestion box buried within the company’s “Contact Us” interface. According to comments on Hacking Netflix, however, that feature was later removed. Other, less formal, channels—including customer support live chat—have remained, and at least one commenter on r/Netflix claims that those suggestions really do work. This newer system, with its multiple suggestion boxes, appears to be much more robust.

As always, though, there’s reason to be skeptical here.  Netflix did not immediately respond to a request for more details, and there’s little information on the page about how—and if—user suggestions will be put to work. Its primary function may well be to free up customer service representatives, who apparently spend a lot of time fielding requests, if Reddit’s users are any indication.

Much like complaining to a representative who may or may not have any power to help, the suggestion page helps to give back some semblance—if only a semblance—of control to the company’s customers. Whatever practical functions it serves, it’s also akin to the company’s other PR efforts, including its in-house anthropologist, who—as I’ve argued in the past—helps to put a human face on the big data-driven company without really changing its operational strategies.

In truth, Netflix probably already knows a great deal about what you want to watch. It was, for example, probably aware that I had repeatedly searched for the Claire Denis classic Beau Travail long before I submitted a formal request. And it probably has exacting data on how many customers who wanted to watch The French Connection (not available) settled for that film’s 1975 sequel (available, inexplicably) instead. Those metrics, which map onto actual viewing and engagement patterns, are likely far more useful to the company than whatever its customers claim they want to watch.

Nevertheless, in an increasingly algorithmic world, it’s nice to be given some sliver of agency. If you like watching movies on Netflix, you might also like telling Netflix what to stream.

June 21 2016 7:00 PM

We’ve Needed Commercial Drone Rules for Years. The FAA Just Released a First Step.

On Tuesday, the Federal Aviation Administration finally announced rules for small drones (under 55 pounds) flown by commercial operators. “Part 107” allows commercial drone use under 400 feet without a pilot’s license. Instead, operators will take a test and get certified to fly particular types of drones, which should streamline the process significantly.

The new rules will spur drone proliferation and innovation. The FAA says they could stimulate $82 billion in industry and create upward of 100,000 jobs within 10 years.

“With this new rule, we are taking a careful and deliberate approach that balances the need to deploy this new technology with the FAA’s mission to protect public safety,” said FAA Administrator Michael Huerta in a statement. “But this is just our first step. We’re already working on additional rules that will expand the range of operations.”

Indeed, the new rules do not allow for one major commercial use: drone delivery. Operators can only fly drones during daylight, can’t fly them over bystanders, and must keep their drones in their line of sight at all times. Amazon Prime Air will have to wait.
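If it helps to keep the new limits straight, here’s a toy checklist based only on the restrictions described above. The function and field names are invented for illustration, and the actual Part 107 rule contains many more conditions than this:

```python
# A toy checklist for the Part 107 limits described in this post. The
# names here are invented for illustration; the real rule also covers
# airspace classes, speed limits, waivers, and much more.
def meets_basic_part107_limits(weight_lbs, altitude_ft, is_daylight,
                               in_line_of_sight, over_bystanders):
    return (weight_lbs < 55          # small-drone weight cap
            and altitude_ft < 400    # fly under the 400-foot ceiling
            and is_daylight          # daylight operations only
            and in_line_of_sight     # keep the drone in sight
            and not over_bystanders) # no flights over uninvolved people

# A hypothetical delivery flight beyond the operator's line of sight fails:
print(meets_basic_part107_limits(10, 300, True, False, False))  # False
```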

The Federal Aviation Administration has taken its time deciding on commercial drone rules. Congress began seriously demanding guidelines in 2012, and the agency set a September 2015 deadline in response. By the end of 2014, though, it was clear that the agency wouldn’t meet this goal. Nine months later we still only have a “first step.” This measured approach, rightly, has a lot to do with safety, and could have had benefits from a privacy standpoint as well. (Safety and privacy are quite intertwined when it comes to drones.)

But privacy advocates still feel that their concerns are going unacknowledged. Camera-equipped drones can fly over or hover near buildings, photograph things without people’s consent, or even stalk people. As the Intercept points out, the FAA doesn’t address privacy concerns in detail. “The FAA has repeatedly acknowledged the privacy risks of drone deployment, but has so far refused to adopt any privacy safeguards,” the Electronic Privacy Information Center wrote in a statement.

While the FAA was figuring out what it wanted to do, time was passing and the drumbeat of capitalism pulsed on. People found ways around the FAA-imposed limitations, and companies like Amazon got frustrated. Now we’re finally getting regulation, when the rules could have been on their second or third revision.

At least the manufacturers are excited. DJI spokesman Adam Lisberg told the Verge, “This is a major development for the future of drones in America.” They better be happy—they’re about to sell a lot of drones.

June 21 2016 4:13 PM

The Future of Social Media Is Video. Tweet “Goodbye” to Text.

Social media is on the brink of another fundamental shift. Over the past five years—counting roughly from when Instagram was founded—photos have overtaken text as the dominant mode of communication on the major social platforms, and mobile apps have surpassed websites. Now comes the next era. Like it or not, the future of social media is mobile video.

Facebook, the largest of the social networks, has already made its intentions clear. Threatened by Snapchat and other video-centric upstarts, it has spent the past two years encouraging people to create and post videos, starting with recorded videos and now pushing into live streams and 360-degree video. A Facebook executive said last week that she believes the news feed in five years will be all mobile, and “probably all video.” That may be hyperbole, but there’s no evidence that she intended it as such. Already the company is seeing “a year-on-year decline of text,” she added.

Twitter, for its part, has been gradually moving away from the 140-character text-based update—too gradually, no doubt, for the taste of investors. The common assumption has been that, when and if Twitter does remake itself, it will do so via a Facebook-style algorithm that decides which tweets you see in your feed. But while Twitter has toyed in a limited way with this sort of automated personalization, CEO Jack Dorsey recently clarified the company’s focus on “live,” which he has managed to turn into a noun. Semantics aside, it’s a smart move, because the real-time feel of Twitter’s chronological timeline is what sets it apart.

Instead, a series of moves announced this week suggests that Twitter’s real makeover won’t involve the order in which it shows us content, but that content’s very nature. It’s the same change Facebook is beginning to undergo: the move from text and still images to video, and live video in particular. It's a transition for which Twitter has long been preparing, beginning with its acquisition of Vine in 2012 and continuing with its launch of Periscope in 2015. This week, it shifted into a higher gear.

First, Twitter announced on Monday that it’s buying Magic Pony, which is neither a children’s show nor the street name for an illicit drug, but a London-based machine-learning startup. Yes, machine learning is fast becoming a business cliché, which might be why commentators like Fortune’s Mathew Ingram were quick to dismiss the acquisition. But in Magic Pony’s case, the term machine learning holds a specific meaning. And contrary to Ingram’s take, it has little to do with “curating content.”

Rather, the startup, which counts 11 Ph.D.’s among its ranks, has published papers and filed patents that center on the application of neural networks to video encoding. In English, that means they’ve developed fancy new ways to make sure videos look good on your phone, even if you don’t have a great data connection. The “magic” here is that they do this by compressing videos to relatively poor quality in transmission, then running software on your phone to reconstruct a higher-quality picture. Clever, right?
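To give a flavor of that idea (and only a flavor, since this is not Magic Pony’s actual architecture, which hasn’t been published in detail), here’s a minimal PyTorch sketch of the generic super-resolution pattern: a few convolutions followed by a pixel-shuffle layer that turns a low-resolution frame into one twice the size.

```python
# A minimal sketch of the super-resolution idea described above: send a
# small, heavily compressed frame and let a neural network on the device
# reconstruct a larger, sharper one. This toy model is NOT Magic Pony's
# architecture; it's the generic conv + pixel-shuffle pattern from
# super-resolution research.
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    def __init__(self, upscale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Produce upscale**2 * 3 channels, then rearrange them into a
            # frame that is `upscale` times wider and taller.
            nn.Conv2d(32, 3 * upscale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(upscale),
        )

    def forward(self, low_res_frame):
        return self.features(low_res_frame)

# A fake 180x320 video frame is upscaled to 360x640.
frame = torch.rand(1, 3, 180, 320)
print(TinySuperRes()(frame).shape)  # torch.Size([1, 3, 360, 640])
```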

Twitter may have paid in the neighborhood of $150 million for the company, according to TechCrunch’s sources, which sounds like sort of a lot for 11 Ph.D.’s and a product that sounds strikingly similar to that of Silicon Valley’s Pied Piper. But remember that Twitter also recently signed a $450 million-per-year deal with the NFL to stream live football games. Streaming live games to your phone has historically been a pretty dicey proposition, not to mention an expensive one. If Magic Pony’s technology significantly improves the mobile viewing experience while trimming the data load, it could pay dividends. In the best-case scenario, Twitter could establish itself as a place to watch not only the NFL, but also all sorts of other live video.

Of course, it will face tough competition from Facebook, which has its own big plans for live video and far more resources with which to pursue them. To that end, Twitter launched a set of new features and tweaks on Tuesday, just a day after the Magic Pony announcement.

First, it will allow users to tweet videos as long as 2 minutes 20 seconds, almost five times the current 30-second limit. (Yes, that works out to 140 seconds.) As BuzzFeed’s Alex Kantrowitz points out, the likely effect will be not only to encourage people to create original videos for Twitter, but to allow them to post full videos to Twitter that were created elsewhere, rather than linking out to them.

Vines, previously capped at six seconds, will now in some cases include a “Watch More” button, which will lead to a longer video—again, up to 140 seconds. This will effectively turn the looping six-second Vine into a trailer for the full clip.

In a related and equally important move, Twitter is launching a new, dedicated video-watching page in its mobile apps. Tap any video in your timeline, and the app will switch to a full-screen viewing mode that recommends a series of related videos to watch after you finish the first one. As with Facebook’s “Suggested Videos” feature, the rationale is that if you’re willing to watch one video, there’s a good chance you’d be willing to watch more.

That sounds obvious, but the more interesting flipside is a recognition that a lot of social media users still don’t want to watch videos in their feeds. Maybe they’re worried about data overages. Maybe they don’t have headphones on. Maybe they just prefer to take in information at a glance, via text or still image. Facebook, Twitter, and others have to be careful not to alienate these users in the pursuit of video views.

I’ve argued in the past that live video on social media could be a fad, since most people are unlikely to embrace the idea of self-broadcasting. Yet it’s increasingly clear that the big social media companies aren’t going to let some users’ aversion to the format stand in the way of the social-media video revolution.

The motivation is twofold. First, a significant and growing segment of social media users really do like watching videos on their phones, even if they don’t necessarily want to create them. Second, and perhaps more importantly, mobile video is where the ad money is going. And where the ad money goes, content inexorably follows. Snapchat now reportedly boasts 7 billion video views a day, approaching Facebook’s numbers despite a much smaller user base. Even Tumblr is getting into live video, it announced on Tuesday.

Twitter will not insert commercials into its video-viewing page for now, as Facebook does. But that’s probably just a matter of time. Meanwhile, it’s already showing ads before some Twitter videos via its Amplify program, and it announced Tuesday that it will begin offering pre-roll ads on some Vine videos as well. Vine creators will keep 70 percent of the revenue, while Twitter will take 30.

Twitter’s “Engage” app is a Trojan horse for video.

Screenshot courtesy of Twitter

Finally, Twitter on Tuesday announced a new companion app called Engage. Aimed primarily at celebrities and video creators, it will filter out mentions and replies to your tweets from the riffraff, in favor of interactions with people whose accounts are Twitter-verified or who have large followings of their own. (Twitter already offers some filtering options in its main app, but on Engage they’ll be the default.) It also includes an analytics dashboard from which to assess your fans’ adoration of various tweets and videos. And, notably, it will feature a third tab dedicated to encouraging you to post new videos.

Is Engage elitist? No doubt, although in a nod to its democratic roots, Twitter will make the app available to anyone who wants it. It turns out that elitism is a tradeoff social media companies are happy to make in the name of mobile video growth. (Facebook, for its part, has been paying handpicked partners to produce live videos, just in case making them go viral via its News Feed algorithm wasn’t enough incentive.) The average user may not have the time or inclination to post videos or broadcast themselves, let alone the know-how to do it well. But getting them to watch high-quality videos produced by professionals should be an easier sell.

Engage has been glossed by tech blogs as a new tool for filtering the noise from your timeline and mentions. But I think it’s also a Trojan horse. The filtering features are meant to lure celebrities and social-media pros into an app that practically begs them to post videos, or even live videos, rather than text updates. Even the section that lets you analyze engagement post-by-post is divided into three sections: “videos,” “photos + gifs,” and “other.”

For better or worse, that appears to be the future of social media, in a nutshell: videos, photos, gifs, and … other. And if you don’t like it, you’re always free to tweet your displeasure—especially now that all the important people can easily tune you out.

June 21 2016 1:24 PM

Google Wants to Help You Research Medical Symptoms Without Activating Your Hypochondria

Sometimes when you’re feeling ill, the only thing worse than uncertainty is a little bit of knowledge. Start searching for more information about your symptoms, and you’ll be sent spiraling, each twitch and twinge a signifier of some unfolding tragedy. A source of anxiety for patients, the practice is also predictably maddening for physicians, forcing them to reckon with often-dubious internet-enabled portents before they can issue their own findings.

Now, Google is trying to do something about this informational impasse, tweaking the way it presents its search results in an attempt to provide better information. In an official blog post, cleverly titled “I’m Feeling Yucky,” the company explained that it would be incorporating symptom searches into its Knowledge Graph feature.

Instead of simply providing you with a list of possible responses to your query, Knowledge Graph offers an array of options across the top of the screen. Search for “Stephen King novels,” for example, and you’ll see a neatly ordered lineup of covers. If you click on one of those covers, Google will direct you to a search page for that book. In theory, this system allows you to start broad and subsequently delve deeper, helping you settle on a path when you have only the vaguest sense of your destination.

Now, Google will use that same setup to provide more information about the possible causes of symptoms. In its blog post, the company gives the example of a search for “headache on one side,” claiming that Knowledge Graph will soon furnish an array of “related conditions,” such as “migraine,” “sinusitis,” and “common cold.” (No word yet on whether they’ll add accompanying images, which hopefully won’t be as scary as those King covers.) Though it’s not yet widely available, Google says it is “rolling this update out on mobile over the next few days, in English in the U.S. to start.”
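If you want a concrete picture of that “related conditions” behavior, here’s a toy sketch that simply hard-codes the example from Google’s blog post; needless to say, the real Knowledge Graph is built from vetted medical sources, not a hand-written lookup table:

```python
# A toy illustration of the "related conditions" behavior described above,
# hard-coding the single example from Google's blog post. The real
# Knowledge Graph is not a hand-written dictionary.
RELATED_CONDITIONS = {
    "headache on one side": ["migraine", "sinusitis", "common cold"],
}

def related_conditions(query):
    """Return candidate conditions for a symptom query, broad-first."""
    return RELATED_CONDITIONS.get(query.lower().strip(), [])

print(related_conditions("Headache on one side"))
# ['migraine', 'sinusitis', 'common cold']
```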

Symptoms Search (Google)

The good news here is that by showing off an array of possibilities, Google will now be less likely to lead patients directly to cancer—or whatever other worst-case scenario its algorithms might have prodded them toward in the past. With Knowledge Graph, you should see a wider range of possibilities right away, making it a little harder to immediately assume the worst.

It’s also a promising development for beleaguered physicians tired of panicked patients bringing reams of inaccurate information into the consultation room. Google is working to help users figure out when and how to get more information, offering details about “self-treatment options and what might warrant a doctor’s visit,” meaning it might stave off a few unnecessary emergency-room visits. It also claims, “We worked with a team of medical doctors to carefully review the individual symptom information, and experts at Harvard Medical School and Mayo Clinic evaluated related conditions for a representative sample of searches to help improve the lists we show.”

That’s helpful to know, but it’s also a reminder that Google relies less on autonomous algorithmic calculations than some assume. That may be for the best: Researchers have repeatedly shown that Google’s automated search results are often strangely biased—especially where political queries are concerned—and sometimes downright bizarre. As Mark Graham has argued in Future Tense, this is at least partly because the web is now set up to meet the needs of machines. If nothing else, Google’s new medical search system aims to put some of the focus back on the very real concerns of human users by providing them with results vetted by actual physicians.

There is, of course, still reason to be skeptical here. As it’s currently laid out, the Knowledge Graph system appears to be set up to provide users with more information about single data points. Accurate diagnosis, however, still typically requires symptomatological thinking—that is, a reflection on the full array of a patient’s symptoms. In that sense, it’s entirely possible that Google will continue to lead the sick (and the merely anxious) astray, but at least it promises to direct them to some more accurate information in the process.

June 21 2016 11:02 AM

The FCC Wants Your Mobile Data to Get a Lot Faster

News about building 5G wireless broadband in the United States has been ramping up for months. Both AT&T and Verizon have promised to offer customers these ultra-fast networks within a few years, and now the Federal Communications Commission is setting the stage.

It’s still unclear what 5G will really deliver at scale. Verizon’s early 5G testing is promising gigabit speeds, and CNET reporter Roger Cheng watched Verizon produce a 3.77 gigabits-per-second download in a demo. In general, the vague range people talk about is a 10 to 100 times speed improvement over 4G. Any improvement would be useful, but that order of magnitude makes all the difference.

On Monday, FCC Chairman Tom Wheeler announced that the agency will vote in July on a proposal called Spectrum Frontiers that would open up new portions of the radio spectrum to 5G. He explained in a speech that the FCC plans to make high-frequency bands available on the radio spectrum for 5G. (Refresh your memory of how the radio spectrum is allocated here.) “Current blocks of licensed low-band spectrum are usually 5 to 10 MHz in width. With 5G, however, we are looking at blocks of at least 200 MHz in width. This will allow networks to carry much more traffic per user,” Wheeler said. “Opening up spectrum and offering flexibility to operators and innovators is the most important thing we can do to enable the 5G revolution.”
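To see why channel width matters so much, here’s a back-of-the-envelope Shannon capacity calculation. The 20-decibel signal-to-noise ratio is an assumption chosen purely for illustration, but it shows throughput scaling roughly linearly with bandwidth:

```python
# Back-of-the-envelope illustration of why wider channels carry more
# traffic: Shannon capacity C = B * log2(1 + SNR) grows linearly with
# bandwidth B. The 20 dB signal-to-noise ratio is an illustrative
# assumption, not a measured figure.
from math import log2

def shannon_capacity_mbps(bandwidth_mhz, snr_db=20):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * log2(1 + snr_linear)  # MHz * bits/Hz = Mbps

for b in (10, 200):  # today's 10 MHz blocks vs. the proposed 200 MHz blocks
    print(f"{b} MHz -> about {shannon_capacity_mbps(b):.0f} Mbps")
# 10 MHz -> about 67 Mbps; 200 MHz -> about 1332 Mbps (over a gigabit)
```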

Telecom lobbying groups agree. The CTIA Wireless Association published an extensive report last week encouraging the FCC to open up high-band spectrum for 5G. And individual companies that are invested in 5G technology are happy, too. “Qualcomm strongly supports FCC Chairman Tom Wheeler’s 5G initiative,” senior vice president Dean Brenner told Broadcasting and Cable. “Qualcomm has been developing the building blocks for 5G for years. ... While 3G and 4G connected people, 5G will connect everything.”

This is the larger promise of 5G that everyone is touting. With lower latency and faster speeds, the Internet of Things could not only be continuously connected, but could be processing and sharing an almost unimaginable amount of data in very small amounts of time. This would be especially useful for things like medical devices and self-driving cars where speedy algorithmic decision-making and implementation can avert catastrophe.

Crucially, a lot of fundamental 5G technology is still not in place, though. As my colleague Will Oremus wrote on Slate in February, “The industry has yet to agree on the metrics that will define 5G, a standards-setting process that is expected to take until 2018.” Four months later, that is still the status of things. Wheeler said in his Monday speech that the FCC “won’t wait for the standards to be first developed in the sometimes arduous standards-setting process or in a government-led activity. Instead, we will make ample spectrum available and then rely on a private sector-led process.”

5G holds enormous promise, not just for faster browsing on your smartphone but as potential competition for the gridlocked high-speed broadband market, which is currently rigged by cable companies with natural monopolies in too many U.S. regions. For 5G to bring change, though, it has to first exist as a working technology. As Wheeler said on Monday, “If anyone tells you they know the details of what 5G will deliver, walk the other way.”
