U.S. Government to Labs: Take an Inventory of Your Pathogens
Correction, Aug. 28, 2014: This post originally quoted Science Insider's report that the White House planned to request all federally funded labs suspend work for 24 hours. Science Insider later clarified that while the White House will request an inventory of pathogens, it will not ask labs to "suspend" their work. The headline on this post as well as the text have been corrected.
On Wednesday afternoon, Science Insider reported that the U.S. Government was planning to request that all federally funded laboratories working with “high-consequence” pathogens suspend work for 24 hours so that personnel may take stock of what they have stored. However, on Thursday the White House released a statement explaining that this was not the case. Instead, the White House is asking these laboratories to “conduct a ‘Safety Stand-Down,’ ” so that laboratory safety and security, as well as practices and protocols, may be reviewed. This near-term solution is to accompany the longer-term establishment of parallel processes for federal and non-federal review and recommendations. However, contrary to the original report, the administration has not requested that work cease. (Science Insider cleared things up in a follow-up post.)
Manmade pandemics have indeed occurred before, caused by pathogens that were being studied in laboratories precisely to prevent the outbreaks they ended up creating, as was the case with the H1N1 human influenza pandemic of 1977.
The governmental request follows the potential exposure of workers to anthrax after the inadequate inactivation of samples, a mix-up involving a fatal flu strain that could put the global population at risk, and the discovery of smallpox in an unsecured government lab. (The six vials of smallpox were found along with 321 other vials, some of which were infectious pathogens that are “serious enough to be considered potential bioterror agents.”) However, the inventory is not expected to result in new policy or regulations.
No, Out-of-Control Groundwater Pumping in California Won’t Cause the “Big One”
Californians have enough to worry about these days, what with the historic drought. Are they also unwittingly ice-bucket-challenging their way to an earthquake disaster? Probably not.
Sunday’s magnitude 6.0 earthquake was the Bay Area’s largest since 1989, when a magnitude 6.9 famously hit during the World Series. A new estimate by the U.S. Geological Survey shows the weekend quake dealt a billion-dollar blow to the state’s economy. The damage was concentrated in the wine region of Napa and Sonoma Counties, where the value of individual bottles can run into the thousands (and also made for some impressive post-quake photos).
The House and Senate Intelligence Committees Need Privacy Advocates, Too
“It’s called protecting America,” Sen. Dianne Feinstein, chair of the U.S. Senate Select Committee on Intelligence, asserted in June 2013. In the aftermath of the Snowden leaks, she has defended the domestic surveillance conducted by the NSA as something that has “not been abused or misused” and is “essential,” “necessary and must be preserved.”
The chair of the U.S. Senate Committee on the Judiciary offers a sharply divergent view. We “have to have some checks and balances before [we] have a government that can run amok,” Sen. Patrick Leahy said in January. He has warned that the NSA’s domestic surveillance could lead to “the government controlling us instead of us controlling the government.”
Nixing Net Neutrality Would Produce More Healthcare.govs
Last week, the White House hired a head of the U.S. Digital Service to get the whole government to adopt technology processes like the ones that saved Healthcare.gov after its disastrous launch. But what the White House giveth, the Federal Communications Commission taketh away. The FCC is an independent agency outside the White House, and its chairman, Tom Wheeler, is proposing to adopt an online discrimination rule that will result in more disastrous websites from federal agencies, and from cities and states, despite the White House’s new service. Worse, we taxpayers will have to pay through the nose for these unworkable government sites.
Back in May, the FCC chairman proposed a rule that would permit cable and phone companies to create slow and fast lanes on the Internet by giving them “substantial room for … discrimination,” including cutting exclusive deals, and the power to impose new tolls on websites. Three million people, hundreds of businesses, and dozens of civil liberties groups have already filed comments in nearly unanimous opposition to Wheeler’s surprising proposal. But it is not just the private sector that will feel the pain.
Survey: More Than One-Third of Young Workers OK With Bosses Monitoring Their Tweets
Will employers in the future watch what their staffers get up to on social media? Allowing bosses or would-be employers a snoop around social media pages is a growing trend in the United States, and now a new report from PricewaterhouseCoopers and the Saïd Business School at Oxford University suggests it may well become the norm.
Drawing on a global survey of 10,000 workers and 500 human resources staff, the report predicts that employers’ monitoring of workers’ lives on social media will increase as they “strive to understand what motivates their workforce, why people might move jobs and to improve employee wellbeing.”
More than one-third of the young workers surveyed said they were happy for their employer to monitor their status updates and tweets in return for greater job security.
Facebook’s Not-So-Evil Crusade Against Clickbait
Facebook brought clickbait into this world, and now it’s trying to take it out.
In a blog post Monday, the company announced a change to the algorithms that govern what you see in your Facebook news feed. The change is aimed at filtering out “click-baiting headlines”—that is, headlines that entice people to click on them, but lead to stories that fail to satisfy. The goal, Facebook says, is “to help people find the posts and links from publishers that are most interesting and relevant, and to continue to weed out stories that people frequently tell us are spammy and that they don’t want to see.”
This should come as a welcome change for just about everyone. One of the loudest complaints about Facebook in recent years has been the profusion of viral junk that is carefully designed to game the site’s algorithms by attracting cheap clicks and likes. (See the post below for an example.)
It would be bad enough if this sort of content were confined to Facebook itself. Unfortunately, it has also infected the wider Web due to Facebook’s outsize influence on other media organizations’ fortunes. You can now find headlines that oversell their corresponding stories just about everywhere, from Upworthy to Business Insider to the Atlantic. Yes, Slate too has been guilty of this on plenty of occasions, despite our writers’ and editors’ genuine efforts to walk the fine line between entertaining headlines and sensational ones.
The fact is that most journalists don’t want to oversell their stories. But Internet advertising and social media have ushered in a free-for-all marketplace in which the grabbiest headlines tend to win the readers—even if the ensuing content doesn’t deliver on that promise.
Some of the most irksome excesses have been driven by Facebook’s news feed algorithms, which have historically rewarded stories that get clicks and likes, regardless of whether those stories are actually any good. Sites that don’t attempt to game those algorithms risk irrelevance or extinction at the hands of those that do. So if any single entity has the power to tilt the incentives back in the direction of headlines that actually tell readers what a story is about, it’s Facebook.
Ah, but how can Facebook know whether a story is any good? That is, how does it define clickbait? Those are important questions—and Facebook has surprisingly good answers.
Clickbait, says Facebook, is “when a publisher posts a link with a headline that encourages people to click to see more, without telling them much information about what they will see.” That’s a fair, if subjective, definition of the term. As BuzzFeed’s Matt Lynley explains:
This is not to suggest that all stories that have clickable headlines will be penalized. While the term “clickbait” is often a placeholder to describe undesirable internet content, the clickbait that Facebook will look to eradicate is made up of posts that often fail to deliver on the headline’s promise or posts that leave readers feeling tricked.
OK, so how can Facebook’s algorithms recognize clickbait when they see it? They do it by looking beyond the standard metrics—total likes and clicks—to focus on what happens after a user clicks on a story. Do people actually spend some time reading the post once they’ve clicked through? Do they go on to like it, comment on it, or share it with their friends? If so, Facebook assumes that they got some real value out of it.
If, on the other hand, people click on a story only to end up right back on Facebook moments later, that raises the probability that it was clickbait. Likewise, if most people are liking a story before they’ve read it rather than after, that’s an indication that they’re responding to the headline and/or the photo rather than the substance of the story.
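The post-click signals described above lend themselves to a simple scoring sketch. To be clear, this is a toy illustration, not Facebook's actual ranking model: the function, weights, and thresholds below are all invented for the sake of the example.

```python
def clickbait_score(time_on_page_sec, liked_before_reading,
                    bounced_back, shared_or_commented):
    """Toy heuristic combining the post-click signals described above.

    All weights and thresholds here are hypothetical; Facebook's real
    ranking system is not public.
    """
    score = 0.0
    if bounced_back and time_on_page_sec < 15:
        score += 0.5   # a quick return to the feed suggests a letdown
    if liked_before_reading:
        score += 0.25  # reacting to the headline/photo, not the story
    if shared_or_commented:
        score -= 0.5   # post-read engagement signals real value
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# A story people abandon in seconds looks bait-y...
print(clickbait_score(8, True, True, False))     # 0.75
# ...while one people read, share, and discuss does not.
print(clickbait_score(120, False, False, True))  # 0.0
```

The point of the sketch is only that these are behavioral signals measured after the click, which makes them much harder for a publisher to game than raw click counts.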
As with any change to Facebook’s algorithms, this one has sparked its share of carping and conspiracy-mongering despite its apparent good intentions. Who is Facebook, critics demand to know, to tell us what to read and what not to read? If people like clickbait headlines, why should Facebook withhold them from us? What’s the secret agenda here?
These questions rest on flawed premises.
First, Facebook is not telling people what to read. Like any media company, from CNN to the New York Times, Facebook aims to present its users/readers with a selection of content that it thinks will interest and inform them. If it fails in that task—if readers don’t like what they see—they’ll go elsewhere. Schoolteachers tell people what to read. The Chinese government tells people what not to read. Media organizations in a competitive marketplace—including social-media sites—simply do not have that power.
Second, if people really liked clickbait headlines, Facebook probably would keep showing them to us. Facebook isn’t waging war on clickbait out of some paternalistic sense of responsibility. It’s doing it because Facebook’s own users have explicitly told Facebook in surveys that they don’t like clickbait. Yes, they may succumb to teaser headlines, but they usually end up feeling cheated and annoyed. That feeling, in turn, makes them less likely to spend time on Facebook in the long run. And that is the worst thing that could happen to Facebook’s business.
Whether this strategy will work as intended is another question. It’s quite possible that Facebook’s implementation of this change will backfire somehow, or open up new ways for publishers to game the system. No single metric, including “attention-minutes,” can fully capture the value of a given story to readers.
Facebook understands that, and is likely to keep tweaking its algorithm to respond to new traffic-grubbing tactics as they emerge. This is exactly what Google has been doing for years to combat shady search-engine optimization strategies that skew its search results.
Facebook’s rise as a portal for news has profoundly changed journalism in just the past few years. Some of those changes are welcome, like the way the social network can deliver a great story—or even a life-saving one—to a far wider audience than it would have reached otherwise. Others are insidious, like the way it can deliver wildly sensationalized or inaccurate stories to a wide audience at the expense of more nuanced ones.
Fortunately for all of us, Facebook is beginning to realize that those skewed incentives risk harming its own brand in the long term. The better Facebook gets at understanding what its users actually like, as opposed to what they just Facebook-like, the more its positive effects on journalism will balance out the insidious ones.
Previously in Slate:
All of Your Facebook Friends Already Agree With You
How do most of the people on your Twitter timeline feel about Ferguson? About foreign affairs? About the latest pop culture scandal? Except for that one weirdly conservative uncle or those random people to whom you never really spoke in high school, the answer is probably: a lot like you do. And if not, you aren’t going to tell them so.
A new study by the Pew Research Internet Project found that social media sites like Facebook and Twitter do not offer a platform for those hesitant to speak up in public on policy issues when they feel their views are in the minority. On the contrary, a survey of 1,801 adults focused on the divisive public issue of Edward Snowden’s NSA revelations revealed that people were even less willing to discuss the surveillance issue online than they were in person (42 percent compared with 86 percent, respectively). And online, as in person, people were more willing to speak up if they thought others agreed with them.
Furthermore, those who went on Facebook and Twitter a few times a day were less likely to share their opinions offline. For example, a person who checks Facebook multiple times a day is half as likely to share his opinion offline as someone who does not go on the site as frequently. Those regular users who felt themselves in the majority on Facebook were “still only .74 times as likely to voice their opinion” offline as those who did not go on Facebook as frequently.
The report offers a few theories as to why social media perpetuates what it refers to as the “spiral of silence.” (It also acknowledges that there are limitations to a study that focuses on but one policy, and that other factors, like confidence in one’s knowledge and opinions, matter, too.) Perhaps people do not speak out of fear of isolation. Perhaps they do not want to lose friends and alienate people. And “as to why the absence of agreement on social media platforms spills over into a spiral of silence in physical settings,” the report ventures that “social media users may have witnessed those with minority opinions experiencing ostracism … this might increase the perceived risk of opinion sharing in other settings.”
That could certainly be. But maybe, since regular users of Facebook and Twitter who feel themselves to be in the majority are still less likely to express their opinions offline, the problem isn’t the spiral of silence. Maybe it’s all the noise.
Maybe those who spend time online reading the same views, over and over again, on the same topic, are tired of hearing it. Maybe they don’t want to contribute to the cacophonic chorus telling itself, over and over again, how right it is. Online or off.
Disney Applies for Three Drone-Related Patents for Theme Parks
If the screaming tots, steep prices, and humid Orlando weather weren’t enough to deter you from a trip to Disney World, then perhaps the prospect of a giant Cruella de Vil marionette being wielded by flying drones may do it.
According to the Wall Street Journal’s MarketWatch, Disney has applied for three drone- (or unmanned aerial vehicle-) related patents to be used in its theme parks. The first describes an aerial display system, with drones choreographed to form displays of floating pixels (or "flixels," as the application calls them), in a manner potentially similar to fireworks. The second uses drones to fly and position flexible projection screens on which light could be reflected above a crowd. And the third, of course, uses drones to manipulate the appendages of “blimp-sized” Disney character string puppets. (You really must check out the patent illustration for that one.)
The patent applications suggest that the use of drones could eliminate some major issues associated with outdoor events. “Presently, aerial displays have been limited in how easy it has been to alter the choreography and to provide a repeatable show,” the application reads. Further down, it says, “Other aerial shows rely on fireworks, which can be dangerous to implement and often provide a different show result with each use. Other displays may use aircraft such as blimps dragging banners or even large display screens. While useful in some settings, these aircraft-based displays typically have been limited in size and use only a small number of aircraft and display devices.”
Even friendly-sounding drones performing aerial spectaculars above “Typhoon Lagoon” may seem like a dubious idea to some observers, given how drones loom in popular imagination. But Disney has not shied away from using controversial new technologies in the past. Its RFID-equipped “MagicBands,” announced in 2013, can act as ticket, credit card, and even hotel room key for Disney World visitors, and also use sensors to track their movements and personalize park activities. Concerns were immediately raised about the bands’ potential use as data trackers that could compromise patron privacy, especially that of young children.
If Disney starts using drones in its theme parks, you can be sure that other entertainment venues will follow suit. It can’t be long until the Macy's Thanksgiving Day Parade features a giant drone-flown turkey floating high above Central Park West.
An Easy Tool for Turning YouTube Scenes Into GIFs
If you've been following Slate's guide to loving and sharing GIFs, but what you really want is an easy way to make your own, GIF YouTube has you covered. The service has a quick trick for converting any clip.
All you have to do is add “gif” to any YouTube URL between “www.” and “youtube.com.” So when you're on a YouTube video and you have the sudden urge to make it a GIF, you make your URL look like www.gifyoutube.com/watch?v=xyz and you'll be transported to another world.*
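The rewrite described above is simple enough to express in a couple of lines. A minimal sketch (the video ID below is just a placeholder):

```python
def to_gifyoutube(url):
    """Insert 'gif' between 'www.' and 'youtube.com' in a YouTube URL,
    as the service's trick describes. Replaces only the first match."""
    return url.replace("www.youtube.com", "www.gifyoutube.com", 1)

print(to_gifyoutube("https://www.youtube.com/watch?v=VIDEO_ID"))
# https://www.gifyoutube.com/watch?v=VIDEO_ID
```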
TechCrunch reports that GIF YouTube is made by the developers of the all-GIF messaging app Glyphic, so you know you're in the hands of true GIF devotees. But it also points out that since GIF YouTube isn't sanctioned by real YouTube, the service could hear from YouTube legal any time and disappear. Better enjoy it now.
GIF YouTube has a simple interface. You click on the point in the video where you want your GIF to start (you can't control it down to the frame), and then you adjust the length of your GIF in one-second increments from one to 10—the default is five. Then you name your GIF and hit “Create GIF.” You approve a proof and then the magic happens. Intuitive!
Once your GIF is ready (they can take a few seconds to load), you can share it on social media or download it for future use. GIF YouTube is nice because it doesn’t add a watermark or any type of branding to the GIFs it spits out.
For now, you wield all the GIF power. Share responsibly. Or recklessly.
*Correction, Aug. 26, 2014: This blog post originally included an incorrect sample URL for using GIF YouTube.
An Open Letter to the Director of the National Weather Service
Dear Dr. Louis Uccellini:
Before we get started, let me just say I’m a big fan of your work. Your book on snowstorms was pretty much the best thing ever, and I think you’ve done an excellent job over the last year or so as the leader of the National Weather Service, especially when it comes to upgrading American weather models after Superstorm Sandy.
Which is why it bothers me that major glitches like this keep happening. Pretty soon, your IT people may want to get in touch with the Healthcare.gov people for help. Today, a single Android app that kept pinging your servers prevented lots of people from being able to access the main NWS website. It never went down for me, but it seems like I was one of the lucky ones:
@EricHolthaus Been having trouble for the last 2-3 hours.— Eric Berger (@chronsciguy) August 25, 2014
As you know, this is a pretty big deal, since your supercomputers power pretty much everyone’s forecasts—from the Weather Channel to my local TV station.
Your spokesman Chris Vaccaro emailed me with this status update a little bit ago:
An outside app that relies on our data has a programming error that is causing it to request data from us too frequently. We are experiencing occasional outages and are actively working with the developer to resolve their programming errors.
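The “programming error” described in that statement, a client requesting data too frequently, is exactly the kind of bug a polite polling loop avoids. Here is a minimal sketch of polling with exponential backoff; the function, intervals, and stub below are illustrative, not the actual app's code or the NWS API.

```python
import time


def poll_with_backoff(fetch, attempts, base=60.0, cap=3600.0, sleep=time.sleep):
    """Poll a data source politely.

    Waits `base` seconds between successful requests and doubles the wait
    (up to `cap`) after each failure, so the client slows down under server
    trouble instead of hammering it. All values here are illustrative.
    """
    interval = base
    for _ in range(attempts):
        try:
            fetch()
            interval = base                    # success: resume normal pace
        except Exception:
            interval = min(interval * 2, cap)  # failure: back off
        sleep(interval)


def always_fail():
    raise RuntimeError("simulated outage")

# Demo with a fake sleep that just records the intervals,
# showing the waits doubling after repeated failures.
waits = []
poll_with_backoff(always_fail, attempts=4, sleep=waits.append)
print(waits)  # [120.0, 240.0, 480.0, 960.0]
```

A client written this way degrades gracefully when the server struggles, rather than amplifying the outage.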
Now, I’m no tech expert, but if this was just an accident, I’d hate to see what would happen if someone deliberately tried to shut you guys down, say, during the middle of a tornado outbreak or a major hurricane strike on the East Coast. To be clear, since the current problem is affecting only one part of the National Weather Service system—forecast.weather.gov, the part that holds all the local forecast information—chances are that warnings and forecast information are still getting out in other ways. But these days, as you know, the Internet is a pretty big deal.
For example, right now there’s a big heatwave hitting half the country, there’s flooding in Arizona, and we’re just days away from the peak of hurricane season. Last time something like this happened, an EF-3 tornado went almost totally unwarned in your home state of New York. It’s probably just good luck that no one died then.
The latest glitch was first reported by Gawker’s Dennis Mersereau, my nominee for your replacement, should things like this keep happening. Looks like you’re going to be on the Weather Channel’s new weather talk show this Sunday, where hopefully you’ll address exactly what happened today. (Update, Aug. 26, 2014: According to the NWS Telecommunications Operations Center Status, it seems like the problem has been fixed. Your spokesperson, Chris Vaccaro, also just sent me another email saying you would be talking about this issue this weekend on the Weather Channel. He also directed me to an independent report mandated by Congress last year that recommended changes to the National Weather Service's data dissemination structure.)
But, hey, it’s probably nothing. Let’s hope it just works itself out.