If You Can’t Control It, You Can’t Own It
When Green Mountain Coffee Roasters announced its quarterly financial results last week, the news was bad for the company—but good for people who believe that when they buy something, they should decide how to use it.
Here's what happened. A year ago, Green Mountain announced it would be selling a new version of its Keurig coffee machine, a single-cup brewer, that would work only with coffee sold by, you guessed it, Green Mountain. This would be accomplished through the use of digital rights management—technology that restricts how customers can use their machines.
Customers rebelled, I'm glad to say. They broke the DRM and then voted with their wallets. And the company's CEO, Brian Kelley, found himself telling journalists and analysts last week that he'd made a mistake with the DRM. Even though his mea culpa sounded more grudging than heartfelt, he said his company would henceforth allow its customers to use whatever coffee they liked.
This was a rare but important victory for you and me in an ongoing war. At stake is who will control the devices we use, and our communications. Increasingly, it's not us.
Device makers have been taking liberties for years. Apple, for example, lets you decide what apps you want to load on an iPhone or iPad, but you can choose only from apps Apple allows in its online store. Amazon has actually removed book files (including, oh the irony, 1984) from its customers' Kindles. The list is long and getting longer.
The war for control will expand in coming years, because computers and software are becoming part of almost everything we touch. The people who sell us things are embedding them with programmable chips, then connecting them to digital networks—and making choices for their customers about what's allowed after we buy them.
Now, adding connected intelligence and memory to our everyday environment can have real benefits. We can understand our world better. If you've used traffic maps generated by monitoring the locations and velocities of countless cars, you know how helpful embedded, connected intelligence can be.
We hear a lot about the surveillance implications of this trend, and we're right to worry. But we don't talk enough about the outright control this gives others in a world where we truly own nothing. If you need permission from a third party to use something, that permission can be revoked.
The Keurig saga is, by itself, relatively trivial, in part because we have so many other options for coffee. But DRM and other third-party control mechanisms have had an enormous impact in areas such as the arts. The copyright cartel has squashed (or tried to) any number of useful innovations in recent decades and works constantly to torpedo or control any technology that might possibly be used for copyright infringement, never mind how much it might advance artistic creativity or spread useful information.
This is why the Big Sports branch of the entertainment business is going berserk over several new mobile apps, notably Meerkat and Twitter's Periscope, that record video and simultaneously stream it to the Internet. An amazingly large number of people paid good money to watch the recent megabucks boxing match via live TV, but there were plenty of live streams available from folks who just pointed their phones at the TV screen. In a truly bizarre development, people are actually watching Game of Thrones via Periscope.
Hollywood and its allies have been in a minipanic for years about what they call the “analog hole”—another expression for our eyes and ears, since at some point audio and video have to be made available in a format that our analog selves can watch and hear. The copyright profiteers have periodically pined for new kinds of DRM to blind and deafen devices by requiring camera makers to build in technology that refuses to capture another video if that video says “don't record me.”
If our cameras can be ordered not to record one kind of thing, they can surely be told to stay off in other circumstances. Want to take a picture of a public building deemed “sensitive” by some paranoid government agency? Nope. Those videos of police misbehavior that are, at long last, shining a light on a national disgrace? Sorry, no longer allowed.
The implications of this go much further. Consider this example, less about DRM and more about ultimate control. Tesla, the innovative electric-car maker, periodically sends software updates to its customers' Model S cars. One recent update improved the car's performance, including faster speeds.
But Tesla's cars, and increasingly all cars, are computer networks on wheels. And if a car company can remotely give vehicles a way to go faster, it can also remotely take that away. Maybe Tesla won't do that of its own accord, but you can bet it will someday when ordered to do so by a bureaucrat or judge.
Tesla's software and hardware updates are also pointing toward the driverless car era, when algorithms make the key decisions about how we'll get where we want to go. Companies that issue subprime auto loans, for instance, have installed starter interrupt devices in cars, so they can remotely disable a vehicle if someone misses a payment. Now consider what happens when a government decides to shut down auto access to a certain geography, or decides you shouldn't be allowed to drive, period. It'll issue an order to the companies that control the vehicles, and that will be that.
We've barely begun to think through these issues, unfortunately. We prefer to deploy and react, because we love our new gadgets and capabilities. Can we afford to do that now?
The Agony of Taking a Standardized Test on a Computer
Remember the days when computers were a passing fad and information was derived from dusty encyclopedias after hours of searching? Me neither. I was born in 1998. By the time I was 14, my teachers stopped asking if we had a computer and Internet available because they had become a necessity. By now, at age 16, I can code a website, use Photoshop, and do more with an iPhone than the folks at the Apple Genius Bar.
But this spring I, like 11th-graders around the state, had to take the new California State Testing exclusively on computers. And my feelings about the testing, which started just this spring, are decidedly mixed.
To the good: Having a computer was much handier for writing essays on the English section of the test, since I could type faster than I could write longhand. Editing text on a computer was so much easier. Of course, the questions felt dated—one part of our test examined the pros and cons of newspapers and blogs using social media, a decade-old question. And the software was definitely outdated—one classmate lost part of her essay when her computer crashed. But overall, English went smoothly. Most people managed to finish in the time allotted. I didn’t feel as nervous as I would have if I had to do the entire test by hand.
But testing math on computers? “Horrible and ridiculously hard,” in the words of my friend and classmate Caleigh Zwahlen. The problem goes beyond computers. The new Common Core Standards make the questions more confusing and difficult than they need to be.
For example, we students could not respond to the geometry questions by drawing out geometric figures—because the computer did not permit it. Instead, we had to write out our answers in words, and then explain, also in words (as opposed to graphs or figures) how we got the answers. This felt like testing a contestant’s eye-crossing skills on the show So You Think You Can Dance. It missed the whole point of the exercise.
Computer testing posed other challenges for my school in the San Gabriel Valley. Our campus had only a limited number of computers, fewer than the number of students being tested. For months, we heard rumors that we would switch to a block schedule so all students would have the time necessary to test. Typically, classes are 50 minutes long with a 25-minute study hall, with seven periods in the course of the day; under a block schedule, each class is two hours long—but we only have four classes a day. The rumors proved true—and when testing time came in April, the changes to the school schedule affected my life schedule as well.
The testing took place over 16 days; during each two-hour testing period, a new class would come into the computer lab and test. The test consisted of two sections, which were spread out over three or four days. Sometimes students wouldn’t finish, so they would have to be pulled out of another class later on to finish. So depending on how quickly a student could write essays, completing the entire test could take anywhere from four to seven hours.
I normally got out of school at 1:45 p.m. Unfortunately, the new schedule had me getting out at 2:40 p.m. almost every single day. And that wasn’t the worst of it. The schedule took away our study period, when we can reach teachers outside of class and complete various other assignments. There were weird holes in the new schedule—some days, I went to my first class, then had two hours until my next class. The administration suggested we go to the library and do homework. I have a lot of homework, but some of my peers, as you might imagine, were not that happy about it.
The last straw for me was when they eliminated the late start for school that we had on Wednesday, forcing me to be at school even earlier. Because of testing, I spent more time at school last month than I ever had in three years of high school—even though I was spending less time in actual classes.
When you consider all the impacts, the cons of online testing far outweigh the pros. Yes, I love technology. Yes, my older brother is a Web designer, and my 12-year-old brother is already working on taking apart and putting together computers. And yes, my generation will bring forth a multitude of Web designers, software developers, and mechanical engineers.
But that doesn’t mean we’re dependent on technology or unable to live without it. For all of technology’s uses, there are times when it is better to honor time-tested traditions. And when it comes to testing, I think it’s best if we stick to the good ol’ pencil and paper. If the state continues with the Common Core Standards and online math testing, it can expect scores to plummet as fast as the iPhone 3’s popularity.
Self-Driving Cars Have Been Getting in Accidents. Is That a Problem?
First, the bad news: Self-driving cars have been getting in accidents. An exclusive Associated Press report Monday revealed that four self-driving cars have been involved in fender benders on California’s streets just since September. That’s out of a total of fewer than 50 legally permitted self-driving cars on the state’s public roadways.
Three of the cars were Google’s; the fourth was a self-driving Audi owned by Delphi Automotive. Two of the four were under human control when they wrecked; the other two were in autonomous mode. All of the accidents were minor, with no injuries reported. The AP obtained the data from an unnamed source who wasn’t allowed to talk about accident reports publicly.
On its face, that sounds pretty bad for self-driving cars. Google told the AP its autonomous vehicles have covered a total of roughly 140,000 miles in the state since September. As the AP points out, three wrecks per 140,000 miles is actually significantly worse than the national average of 0.3 “property-damage-only” accidents reported per 100,000 miles driven. Hey, wasn’t the whole point of these things to make us safer? And what happened to all those triumphant media reports about how far Google’s self-driving car had gone without a single accident in autonomous mode?
Well, here's the good news. Shifting its PR team into overdrive, Google responded to the AP’s report within hours in the form of a detailed Medium post by the self-driving car project’s director, Chris Urmson. And he said not one of the three accidents was caused by the self-driving car in question.
In fact, Urmson writes, Google’s self-driving cars have logged some 1.7 million miles over the six-year life of the project, and they’ve been involved in a grand total of 11 accidents. Not one has been serious. More importantly, if you believe Google, not a single one of those accidents was caused by the self-driving car. Eleven minor accidents in 1.7 million miles is a much less alarming ratio. And, Google noted, comparing it to the national average may be misleading, since a great many minor accidents go unreported.
Oh, and zero accidents caused in 1.7 million miles is obviously a strong record by any standard.
So what’s the takeaway here? Are Google’s robot Lexuses supersafe or superscary? The answer, actually, is the same as it’s been all along: As far as we know, they’re really quite safe—but to render a verdict would still be premature.
For one thing, this conclusion assumes Google is telling the truth and not fudging its figures. But it would be nice, if we’re going to trust these things with our lives in the foreseeable future, to be able to verify such assertions with something other than the occasional anonymously sourced investigative AP story.
Meanwhile, there’s another confounding factor in these numbers that’s often overlooked. When a Google employee behind the wheel of a self-driving car sees a risky situation developing, her instructions are to take the wheel herself. That means Google’s cars don’t even have a chance to cause an accident unless the person behind the wheel fails at her job first. No wonder crashes in autonomous mode are rare!
Now, whenever this happens, Google takes the car back to its shop and simulates what would have happened if the driver hadn’t taken over. In every case, Google says, the simulations show that the car would have automatically avoided causing the accident.
That’s reassuring if true, but there also seems to be some circularity at work here. Remember, these are Google’s own computer simulations we’re talking about—presumably using the same assumptions that are built into the car itself. How can we be sure the simulations aren’t mistaken?
To recap, here’s what we can say with confidence: Self-driving cars are not running amok and causing accidents at an alarming rate. If anything, they appear to be excellent drivers based on the limited evidence we have so far. By the same token, they will not save you from all the other idiots who populate our nation’s fair roadways.
What we cannot say with confidence yet is whether self-driving cars will in practice substantially reduce the rate of traffic accidents in a world where they share the road—and sometimes, the wheel—with human drivers. Which is a very good argument for exactly the kind of road testing that California and several other states are now permitting on their public roads. It’s also a good argument for a little more transparency as to how those self-driving cars are performing.
Sales Exec Says She Was Fired for Uninstalling GPS App That Tracked Her Constantly
It might sometimes feel like your boss is always on your case, but this is a whole other level. A former sales executive from the wire-transfer company Intermex alleges in a lawsuit filed May 5 that she was fired for uninstalling an app that tracked her whereabouts 24/7 and sent the data to her supervisor.
According to the suit, which was spotted by Ars Technica, Intermex made employees install the Xora GPS app so the company could track them at all times. Myrna Arias claims that she told her boss, John Stubits, that she was fine with the tracking while she was on duty, but opposed to it during her off hours and weekends. The suit alleges that a group of co-workers agreed with this position. After doing some research about Xora, Arias uninstalled the app in April 2014.
The suit, filed in Kern County Superior Court, claims privacy violations, wrongful termination, and other labor infractions. It seeks damages of more than $500,000 for lost wages. “What we have here is a really egregious situation,” Arias’s attorney Gail Glick told Courthouse News Service. The suit says:
After researching the app and speaking with a trainer from Xora, Plaintiff and her co-workers asked whether Intermex would be monitoring their movements while off duty. Stubits admitted that employees would be monitored while off duty and bragged that he knew how fast she was driving at specific moments ever since she installed the app on her phone.
ClickSoftware, which makes the product Xora StreetSmart, seems to envision its product as a 9-to-5 tool, not a full-time surveillance service. The company has not responded to a request for comment. Its website says, “When your field employees start their day, they simply launch the application on their mobile devices.” But the potential for invasive abuse of the product is there. “See the location of every mobile worker on a Google Map. You can drill down on an individual worker to see where they have been, the route they have driven and where they are now,” the site explains.
Mobile phones have made it cheap, easy, and appealing for employers, insurance companies, and other groups to track their affiliates. But concerns about these programs are extensive and include the danger of unforeseen privacy violations, in addition to the obvious ones like your boss finding out that you ran an errand during work or your insurance finding out that you visit a smoke shop a few times a week. As one of my Slate colleagues said, “If that [lawsuit] is even close to what actually happened, it is terrifying.”
HillaryClinton.net Redirects to Carly Fiorina’s Site. That’s Bad News for Carly Fiorina.
Type HillaryClinton.net into your address bar today and you’ll be redirected—not, as you might expect, to the official presidential campaign page of the former secretary of state, but to that of Carly Fiorina. If this oddity has attracted attention, it’s thanks to some conveniently partisan symmetry: Last week, Fiorina’s budding campaign for the White House faced widespread mockery after it failed to secure the domain name CarlyFiorina.org. In the place of an ordinary campaign site, a mysterious prankster had posted a stark political statement: 30,000 frowning emoticons. A note on the page explained that there was one for each of the people she laid off during her tenure as CEO of Hewlett-Packard.
From the outside, today’s Clinton-to-Fiorina redirect looks an awful lot like revenge. Breitbart suggests that the candidate is “seemingly buying up celebrity domains to push her campaign message.” Fiorina has indeed purchased the .org domains of Seth Meyers and Chuck Todd before going on their shows. But it’s not actually clear that she’s responsible for HillaryClinton.net redirecting to carlyforpresident.com. According to MSNBC, a representative for Fiorina said the campaign had not purchased the page. What’s more, the page’s registration was updated on May 2, while the CarlyFiorina.org prank didn’t begin making news until two days later, suggesting that the domain was secured earlier and for different reasons.
That’s probably for the best. By pushing back in this manner, the Fiorina campaign would only prolong its own embarrassment, something it’s nevertheless managed to do on its own. Puzzlingly, her campaign has given this nonstarter of a story its own scandalous moniker. In a tweet about her appearance with Chuck Todd on Meet the Press, Fiorina refers to it as #domaingate. Though she’s attempted to show how trivially easy it is to purchase a domain (as of this posting, I could buy ChuckTodd.us for less than $5), she’s only succeeded in extending the life of a story she should probably let die.
Some have suggested that the original registration debacle was newsworthy because Fiorina had positioned herself as the tech candidate. From this perspective, her failure to defend herself against cybersquatters speaks to her lack of savvy. It is presumably this point that Fiorina seeks to contest when her camp showily buys up the domains of talk show hosts. Anyone can pull this off, she suggests. It’s not my fault. (And, to be sure, her failure to register CarlyFiorina.org doesn’t constitute a “gaffe” in any meaningful sense.)
Ultimately, however, the original prank achieved traction because it calls attention to Fiorina’s more profound failures as a leader. On Meet the Press, Fiorina complained that CarlyFiorina.org “leaves out a whole bunch of other facts.” But that may be the point. In its simplicity, the troll site illuminates a single issue in the clearest possible way, and that light will stay lit as long as we keep paying attention to “#domaingate.” Tellingly, the attention paid to CarlyFiorina.org has inspired serious reflections about her record as CEO, potentially turning one of her ostensible strengths into a liability.
Cybersquatting is almost as old as the Internet domain system itself. Taking over some version of a candidate’s name will always be easy, and it will keep happening as long as we keep paying attention to it. By and large, these are nonstories, no more meaningful than graffiti hastily scrawled on a wall. They matter only so long as we keep talking about them. Fiorina would do well to stop lengthening the conversation.
NPR Initiates Phase Two of Plan to Become the Pandora of News
The announcement by National Public Radio that it’s opening access to an application programming interface, or API, seemed like it should be of interest only to a relative handful of tech developers.* In fact, it is a significant and smart next step in NPR’s strategy to become the Pandora of news.
In 2014, NPR introduced a streaming player, NPR One; podcasts and NPR One are NPR's fastest-growing digital properties. Demian Perry, NPR's director of mobile, now says, "We have tested and refined the experience to the point that it is ready to scale to multiple platforms," in addition to its current roster of heavy hitters: iOS, Android, Windows Phone, etc. But NPR is a lean organization—how can it create versions of the player for the hundreds of new versions of smartphones, tablets, wearables, and who-knows-what's-next else without setting its pledge drive to stun?
Answer: You open up an API to let other developers do it for you.
You can think of an API as a website designed to be accessed by applications requesting data from that site.* Some APIs are intended to enable developers to come up with their own novel ways of using an organization's data, but as the new NPR One Developer Center makes clear, this API is solely intended to be used by authorized developers who are creating device-specific versions of the NPR One player.
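To make the idea concrete, here is a minimal sketch of what "an application requesting data from that site" looks like in practice. The endpoint URL, parameter names, and response fields below are invented for illustration; they are not NPR's actual API:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint -- illustrative only, not NPR's real API.
BASE_URL = "https://api.example.org/listening/v2/recommendations"

def build_request_url(user_token, channel="npr-one"):
    """Construct the URL an authorized app might use to ask for a user's next stories."""
    query = urlencode({"channel": channel, "token": user_token})
    return f"{BASE_URL}?{query}"

def parse_response(raw_json):
    """Pull story titles out of a JSON response shaped like {'items': [{'title': ...}]}."""
    data = json.loads(raw_json)
    return [item["title"] for item in data.get("items", [])]

# A mocked JSON string stands in for the body of a live HTTP response.
mock_response = '{"items": [{"title": "Morning Newscast"}, {"title": "StoryCorps"}]}'
url = build_request_url("abc123")
titles = parse_response(mock_response)
```

The player on each device would make requests like this and render whatever stories come back, which is why one API can feed hundreds of device-specific players.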
Like Pandora, NPR One fetches content from the cloud—in this case, NPR's—based on each individual user’s pattern of behavior. Also like Pandora, the NPR stream continuously adjusts to listeners' preferences as expressed by their behavior. "If they mark a story as interesting or skip it, we gather that data," says Perry, "and over time we’re able to use that listening history to predict what their rating will be on every news story that’s being released today.” As a result, “we can put the story you’re most likely to enjoy as the next one in your queue.” (But unlike Pandora: unlimited skips.)
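Perry's description—gather "interesting" and "skip" signals, then predict a rating for every new story—can be sketched as a simple preference model. This is a toy illustration of the general idea only, not NPR's actual algorithm; the topic tags and scoring rule are invented:

```python
from collections import defaultdict

class ListenerModel:
    """Toy preference model: accumulate +1/-1 signals per topic tag,
    then score a new story by the average signal across its tags."""

    def __init__(self):
        self.scores = defaultdict(float)  # sum of signals per tag
        self.counts = defaultdict(int)    # number of signals per tag

    def record(self, story_tags, liked):
        # "Mark as interesting" => +1, "skip" => -1
        signal = 1.0 if liked else -1.0
        for tag in story_tags:
            self.scores[tag] += signal
            self.counts[tag] += 1

    def predict(self, story_tags):
        # Mean signal across the story's tags; unseen tags count as neutral (0).
        if not story_tags:
            return 0.0
        per_tag = [self.scores[t] / self.counts[t] if self.counts[t] else 0.0
                   for t in story_tags]
        return sum(per_tag) / len(per_tag)

model = ListenerModel()
model.record(["politics", "economy"], liked=True)
model.record(["sports"], liked=False)
```

A real system would rank the day's stories by `predict` and queue the highest scorer next, which matches Perry's "the story you're most likely to enjoy as the next one in your queue."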
As the players proliferate across platforms, NPR will know what you have already heard across whatever set of devices you listen on. You won’t get the same story twice, even if you switch devices. NPR might, however, serve you a Cinco de Mayo story from a few years ago on some future May 5 if your behavior indicated that you liked this year’s story, and if you haven't already heard the archived story. (Perry points out that this is another difference from Pandora: Users of the music streaming service like to hear content more than once.)
Developers have to be authorized by NPR because, says Perry, “We have to balance the need to keep our user data secure with our public media dedication to openness.” For that reason, the “first tier” of developers will be “partners we’ve worked with in the past or have a prior relationship with.”
The news comes a week after NPR’s April 29 announcement that virtually any piece of NPR audio content—more than 800,000 items—now can be embedded in a Web page by anyone who knows how to copy and paste HTML.
These easy-bake widgets make it easier for sites to include particular NPR stories. The One Developer Center, on the other hand, speeds the deployment of NPR players that stream personalized news.
“The primary benefit [for NPR] is helping to create ubiquity for the NPR One listening experience,” says Perry. “We’re trying to take this experience and have it be accessible from every platform. Users can turn it on and get the same experience in their iOS app or their Web app or any device.”
NPR being NPR, Perry says their recommendation algorithms will steer listeners away from comfortable filter bubbles. So no matter how hard you press the “interesting” button for the Miley Cyrus think piece you just heard, NPR is still going to stream you that story about events in Ukraine.
Correction, May 13, 12:30 p.m.: This post originally misstated that API stands for application program interface. It stands for application programming interface.
Update, May 15, 11:15 a.m.: This post was updated to clarify that not all APIs are websites.
Why Google’s Hometown Said “No” to a Massive New Googleplex
Google had a bold, futuristic vision for a gleaming new campus adjacent to its current headquarters in Mountain View, California.
Mountain View, California, had other ideas.
This week, the city council rejected the bulk of Google’s plans by a 4-3 vote. Instead, it set aside the majority of the developable office space in question for a more modest project proposed by another local tech company: LinkedIn.
The move will give LinkedIn a chance to build its own new headquarters as part of a mixed-use development that will also include a movie theater, fitness club, shops, and restaurants, all open to the public. And it will leave Google with rights to less than one-fourth of the commercial square footage it had hoped to build—about enough for one of the four main buildings it had planned.
“To have one building—it’s a significant blow,” a Google vice president told the council just before its late-night vote, according to the Silicon Valley Business Journal. “I’m not sure how I make any of this economically viable with one building.”
As I detailed a few months ago, Google had brought together a pair of celebrity architect-designers to realize its high-concept dream of a fancy new Googleplex beside the San Francisco Bay. You can see their plans and watch a gauzy promotional video for them here. The complex was to include tree-lined public hiking and biking paths meandering past a series of glass-wrapped modular buildings shaped like circus tents.
Alas, it was not to be. Google still has the right to develop land it owns elsewhere in Mountain View, albeit not as much as it had hoped to build on its North Bayshore property. If it wants to build a dramatic new ’plex, it may have to look beyond its hometown.
Mountain View, population 75,000-ish, has been central to Silicon Valley since the days of William Shockley. But over the past decade, it has become nearly synonymous in the popular mind with the Internet search giant. So it might come as a surprise to outsiders that the city would thwart Google’s ambitious growth plan in favor of a less dazzling proposal from a less dazzling local Internet company.
Why did the city risk souring relations with its largest taxpayer and employer?
Part of it has to do with the specifics of the two companies’ plans. Backed by well-to-do residents who don’t want their multimillion-dollar backyards blighted, local elected officials on the Peninsula tend to err on the cautious side when it comes to new development. As the New York Times’ Conor Dougherty points out, LinkedIn’s plans required no special exceptions to the city’s height or density limits and were touted as more shovel-ready than Google’s far-out designs.
But there was also a deeper motive underlying the council’s decision. As Google has grown, some in Mountain View worry it’s becoming a de facto company town, reliant on the fortunes of a single massive corporation. (The Verge had a good piece last year on how Google is “taking over Mountain View.”)
Google owns so much of the city’s property that other local tech companies, including LinkedIn, felt hemmed in. And whereas Google is entrenched locally, there was concern that LinkedIn might leave the city altogether if it didn’t win this battle.
“I don’t think the decision was a reflection of any dissatisfaction with Google’s proposal,” Randy Tsuda, Mountain View’s community development director, told me in a phone interview Wednesday. “But several council members expressed the desire to really accommodate LinkedIn’s growth and allow them to remain in North Bayshore.” The council also wanted to reserve some room for new residential development in the neighborhood, and it appeared to view Google’s plans as less compatible with that goal, even though Google itself has long called for the same. Regardless, that would be a welcome move in a city whose job growth has drastically outpaced its housing supply.
The question now is, what will become of Google’s grand plans? The company hasn’t said, but it did indicate that it will continue to expand both in Mountain View and elsewhere in the Bay Area in the years to come. It has office projects in the works everywhere from downtown San Francisco to Redwood City and Sunnyvale to Moffett Field, which is situated on the border of Mountain View but is owned by NASA.
There is also some speculation that the company could look to build in Oakland, whose historically blue-collar downtown has seen an influx of young professionals priced out of San Francisco and the Peninsula. There would be some irony in that, as those exorbitant prices are driven by the vast wealth that tech companies like Google have minted.
A Google office in Oakland would likely be a magnet for the city’s famously frisky protesters, including the contingent that has coalesced to counter the techie invasion (and its buses) in recent years. The company would probably prefer to build its next homes in environs more compatible with its sunny, utopian vibe. But it’s telling that even Mountain View, one of the more business-friendly jurisdictions in the region, has come to regard Google’s presence with some ambivalence. If Google can’t build its Shangri-La there, what’s left—Cleveland?
Has Facebook Seemed Really Aggressive About Birthdays Recently? You Aren’t Imagining Things.
Maybe you’ve noticed it, too: Facebook has been unusually pushy about birthdays lately. For me, it started near the end of April with an unexpected and unrequested email early in the morning. It was a dictatorial imperative, telling me that I should wish a happy birthday to a beloved professor from my undergraduate years. Later, when I logged onto the site itself, a notification popped up, repeating this command. I dismissed it, only to find that the Facebook app on my phone was providing the same information. Everywhere, insistently, the same demand. Celebrate!
This pattern held throughout the week that followed. Emails, push notifications, and pop-ups, oh my! While I was accustomed to occasional, unobtrusive reminders, I felt as if birthdays were being forced on me. Taking a page from Slate’s Dan Kois (though I didn’t go quite as far as he did), I began to unfollow ostensible “friends” whose “special days” didn’t seem particularly special. And yet they kept coming.
I initially assumed this torrent was intentional. This week, however, a Facebook representative informed me that it had been a bug in the site’s notification system. Under ordinary circumstances, the representative explained, a user should receive only one notification. You can subscribe to email birthday notifications, but people who don’t have that enabled were incorrectly receiving emails. It was not immediately clear whether the other, similarly persistent, notifications were part of the same problem.
But if these additional notifications were a glitch—and there’s no clear reason to doubt the claim—they were a glitch that served Facebook’s interests. Birthdays have been a central part of Facebook for years, partly because they encourage engagement with the site. They inspire you to click through to the profile of your newly agèd “friend.” Once you’re there, social pressure (“55 people have congratulated your casual acquaintance on her birthday!” it tells us) encourages you to take the time to write a note—even if it’s a glib one—thereby making you spend a little more time on the site than you otherwise might. In the process, you’re exposed to more ads, and sometimes to more content—the profiles of other well-wishers, for example—that might hold your attention longer still. From Facebook’s perspective, birthday notifications are a lure, one that lets the company reel us in slowly and often.
Facebook’s company line is, of course, different. The representative I spoke to told me Facebook had originally incorporated birthday notifications because users requested them. It upset those users to miss their friends’ birthdays, and a gentle nudge helped save them from embarrassment. Unsurprisingly, though the alerts are optional, most users apparently keep them turned on.
Some of my real-world friends tell me that birthday notifications are the part of Facebook they like best. Having occasionally forgotten the birthdays of everyone I have ever loved (most of them more than once), I can understand why. I missed my father’s birthday this year, calling him only after my sister posted a picture of their celebration. Later, I learned that he had hidden the date from his timeline, feeling that the notes he normally received on the occasion were facile. Though I share his opinion, I couldn’t help but realize that I might have avoided some embarrassment if he’d left it on.
Ultimately, the power of the birthday notification speaks to the ways Facebook is transforming our ordinary experience of intimacy. Knowing that it’s someone’s birthday has traditionally meant being near them, either literally or figuratively. Proximity, in its own turn, is the starting point of intimacy. That’s why it often upsets us when people forget our birthdays. It’s not that we think they love us less, but that we fear they’re drifting away.
Even when it’s minor, that intimacy can be pleasurable. Facebook plays on that sense of pleasure, extending it indefinitely by letting us know the birthdays of almost everyone we’ve ever met. It doesn’t necessarily take away from the intimate quality of birthdays—as some have suggested—but it does expand the intimacy’s scope in a way that makes it unsustainable without the aid of Facebook itself.
For many of us, birthday alerts have woven themselves into the landscape of our lives. In this regard, at least, those additional notifications seemed almost natural. Still, I won’t miss them.
Netizen Report: Slovakia Says Mass Surveillance Is Unconstitutional
The Netizen Report offers an international snapshot of challenges, victories, and emerging trends in Internet rights around the world. It originally appears each week on Global Voices Advocacy. Renata Avila, Ellery Roberts Biddle, Hae-in Lim, Bojan Perkov, and Sarah Myers West contributed to this report.
It has been a year since the EU Court of Justice found the European Union’s Data Retention Directive to be “invalid” due in part to its infringement on user privacy. While stories of mass surveillance continue to dominate headlines in France, Germany, and the United Kingdom, some European countries actually have sought to build stronger protections for user privacy. Last week, mass surveillance was officially ruled “unconstitutional” by the Constitutional Court of the Slovak Republic in a case brought by a coalition of parliament members working in cooperation with the European Information Society Institute, a think tank. The decision will dismantle key elements of Slovakia’s 2011 Electronic Communications Act, which required mass metadata collection and storage by telcos.
Iraqi media freedom advocate killed in Baghdad
Iraqi journalist Ammar Al Shahbander was killed in a car bomb explosion in Baghdad carried out by ISIS. Al Shahbander worked to defend free speech in Iraq as the Chief of Mission in Iraq for the Institute for War and Peace Reporting.
Message apps blocked in Burundi
In the midst of protests surrounding recent presidential elections, the government of Burundi blocked WhatsApp and Viber, despite fewer than 2 percent of the country’s residents having access to the Internet. The move is the latest in a trend of increasingly restrictive measures used to quell dissent in Burundi—phone lines to private radio stations have also been cut, and the government passed a law in 2013 requiring journalists to become accredited and reveal their confidential sources under certain circumstances.
Hong Kongers call for stronger privacy protections
Hong Kong civic groups are calling for accountability in surveillance practices, following suspected widespread monitoring of mobile and Internet-based instant messaging apps during the Occupy Central movement. Civic groups including Hong Kong’s InMedia have called upon the government to ensure all forms of interpersonal communication are covered under Hong Kong’s surveillance law. They are also asking that the law ensure that police not surveil protest organizers, leaders of civic groups, or political dissidents on the grounds of public security.
“Great Cannon” blast on Facebook login service
According to the Verge, China’s Great Firewall has been used to attack the Facebook Login service, inserting code that redirects users to a third-party web page—a capability the Citizen Lab has labeled “the Great Cannon.” Because this attack was performed through the Chinese national telecom infrastructure, only users located in China who were not using a virtual private network were affected. Similar tactics were previously used for a denial-of-service attack on GitHub and GreatFire.org in March.
An Enormous Game of Capture the Flag Could Change Cybersecurity Forever
The Defense Advanced Research Projects Agency (DARPA) announced in 2013 that it was launching a “Cyber Grand Challenge” to explore the idea of automating cybersecurity. If computers themselves could identify vulnerabilities and create working patches, the timeline for dealing with bugs (like Heartbleed) could shrink from days or weeks to seconds. But it was a pretty radical idea, and it wasn’t clear what would come out of it. Now, more than a year later, the project is headed into its first major competition, and the preliminary results are promising.
The Grand Challenge is modeled on a type of hacking competition called Capture the Flag, but instead of being played by humans, the DARPA version will be played by autonomous computers created by human teams. So far, teams have been able to voluntarily participate in practice sessions called scored events. DARPA program manager Mike Walker says that the first scored event in December (after the teams had been working for seven months) was a “rough experimental prototype start” that produced about three functional patches.
Things changed at the second scored event in April, though. The competing teams were able to definitively confirm bugs in 23 out of 24 pieces of test software, and they produced patches for all of the software. Walker isn’t getting cocky—he knows that the work is far from over. But he says, “We think we’re doing well.”
On June 3, 104 teams will compete in the Grand Challenge qualifying round to identify seven finalists for the summer 2016 final showdown. In digital Capture the Flag, all teams are given the same software at once (this is like the field in traditional Capture the Flag), and the software contains data that needs to be defended (like a flag in the physical game). When the teams discover vulnerabilities in the code, they have to decide whether to use the weakness to attack their opponents, begin working on a patch to defend their own data, or find a way to do both simultaneously. The game organizers feed the teams more data throughout the game. The scoring is based on how many “flags” the teams take from others and how many of their own they can defend to the end.
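The scoring described above—points for flags captured from opponents plus points for flags defended to the end—can be sketched in a few lines. This is our own illustration of the general attack/defense idea, not DARPA’s actual scoring rules; the round structure and point values here are hypothetical.

```python
# Each round records which opponents' flags each team captured,
# and which teams lost their own flag that round.
def score_team(name, rounds):
    captured = sum(len(r["captures"].get(name, [])) for r in rounds)
    defended = sum(1 for r in rounds if name not in r["lost_flags"])
    return captured + defended  # one point per capture, one per round survived

rounds = [
    # Round 1: team A captures B's flag; B fails to defend.
    {"captures": {"A": ["B"], "B": []}, "lost_flags": {"B"}},
    # Round 2: team B strikes back; A fails to defend.
    {"captures": {"A": [], "B": ["A"]}, "lost_flags": {"A"}},
]
# Team A: 1 capture + 1 defended round = 2 points.
```

The interesting strategic tension in the real game comes from the fact that capturing and defending draw on the same discovery: a vulnerability you find can be weaponized, patched, or both.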
The cybergame is tough for humans, but designing computer programs that can play autonomously is a whole other level of difficulty. “There are three incrementally harder things,” for computers to do autonomously, Walker says. “One is ‘I think there’s a bug and I tell a human and a human figures out if it’s true.’ The other is ‘I’m certain there’s a bug and I can prove it.’ ... And even harder than that is ‘I know how to fix the problem without breaking the software.’ That’s hardest of all.”
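Walker’s three levels can be made concrete with a toy example—this is our own hypothetical illustration, not anything from DARPA’s systems, using a deliberately buggy Python function in place of real binary software:

```python
def read_item(buffer, index):
    # Level 0, the buggy software: no bounds check, so a bad index
    # crashes with IndexError.
    return buffer[index]

def suspect_bug(fn, candidates=range(0, 10)):
    # Level 1: "I think there's a bug" -- flag any crashing input
    # for a human (or a later stage) to confirm.
    for i in candidates:
        try:
            fn([1, 2, 3], i)
        except IndexError:
            return True
    return False

# Level 2: "I'm certain there's a bug and I can prove it" -- a concrete
# input that demonstrably crashes the function.
proof_input = ([1, 2, 3], 5)

def read_item_patched(buffer, index):
    # Level 3: fix the problem without breaking the software -- reject
    # out-of-range indices while preserving all valid reads.
    if not 0 <= index < len(buffer):
        return None
    return buffer[index]
```

Even in this toy, the levels get harder in the way Walker describes: suspicion only requires noticing a crash, proof requires a reproducible input, and a patch requires understanding which behavior is a bug and which is the software working as intended.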
The winners of the Cyber Grand Challenge will take home $2 million for first place, $1 million for second place, and $750,000 for third place. That’s a lot of money at stake for something that was barely conceivable a few years ago. “When [DARPA’s director of the Information Innovation Office Dan Kaufman] asked me … could machines play this game? I just said I don’t know. It was a completely new thought,” Walker said. “Everyone I asked, no one knew. And when no one knows, it’s an interesting problem.”