Future Tense
The Citizen's Guide to the Future

July 22 2015 3:24 PM

This Data-Protection Company Once Again Failed at Its One Job: Protecting Data

Customers who hired the infamous ID theft-protection firm Lifelock to monitor their identities after their data was stolen in a breach were in for a surprise. It turns out Lifelock failed to properly secure their data.

According to a complaint filed in court today by the Federal Trade Commission, Lifelock has failed to adhere to a 2010 order and settlement that required the company to establish and maintain a comprehensive security program to protect sensitive personal data users entrust to the company as part of its identity-theft protection service.

This is ironic, of course, because Lifelock promotes its services to companies that experience data breaches and urges them to offer a complimentary Lifelock subscription to people whose data has been compromised in a breach. To properly monitor victims’ credit accounts to protect them against ID theft, Lifelock requires a wealth of sensitive data, including names and addresses, birth dates, Social Security numbers, and bank card information.

Protecting that data should be a primary concern for Lifelock, particularly in light of the fact that many of its customers have already been victims of a breach. But the FTC found in 2010 that the company had failed to provide “reasonable and appropriate security to prevent unauthorized access to personal information stored on its corporate network,” whether that information was in transit through its network, stored in a database, or transmitted over the internet.

Lifelock had been ordered to remedy that situation, but according to the complaint filed today, it has failed to do so. The complaint is currently sealed, but the previous finding from 2010 provides insight into the company’s security failures.

The CEO of Lifelock, Todd Davis, became famous for advertising his Social Security number in television ads and on billboards, offering a $1 million guarantee to compensate customers for losses incurred if they became victims of identity theft after signing up for the company’s services.

For an annual subscription fee, Lifelock promised customers that it would place fraud alerts on their credit accounts with the three credit reporting agencies. As a result, the company said, thieves would not be able to open unauthorized credit or bank accounts in their name.

“In truth, the protection they provided left such a large hole … that you could drive that truck through it,” FTC Chairman Jon Leibowitz said in 2010, referring to a Lifelock TV ad showing a truck painted with the CEO’s Social Security number driving around city streets.

Leibowitz said the promises were deceptive because thieves could still rack up unauthorized charges on existing accounts—the most common type of identity theft. Nor could the service prevent thieves from obtaining a loan in a Lifelock customer’s name.

In fact, Lifelock CEO Davis was the victim of identity theft in 2007 when a thief used his widely advertised Social Security number to obtain a $500 loan in Davis’ name.

Lifelock also promised customers that sensitive data they provided the company to perform its protection services would be encrypted and protected in other ways on Lifelock’s servers and accessed only by authorized employees on a need-to-know basis.

“Your documents, while in our care, will be treated as if they were cash,” the company promised.

But it turned out that none of that data was encrypted. The company also had poor password management practices for employees and vendors who accessed the information, and Lifelock failed to limit access to sensitive data to only people who needed access.

What’s more, the company failed to apply critical security patches and updates to its network and “failed to employ sufficient measures” to detect and prevent unauthorized access to its network, “such as by installing antivirus or antispyware programs on computers used by employees to remotely access the network or regularly recording and reviewing activity on the network,” the FTC found.

“As a result of these practices, an unauthorized person could obtain access to personal information stored on defendants’ corporate network, in transit through defendants’ corporate network or over the internet, or maintained in defendants’ offices,” the FTC said in 2010.

Lifelock’s stock price dropped 50 percent, from $16 to $8, following news of the FTC’s new complaint against the company.

July 22 2015 1:18 PM

Let Cats Guide You Through Art History With This New Chrome Extension

It is a tired maxim that the Internet basically exists to help us find pictures of cats. That particular search just got a lot easier—and a lot classier—thanks to Meow Met, a new Chrome extension. Designed by Emily McAllister for the Metropolitan Museum of Art’s Media Lab, Meow Met shows you a cat-related picture from the museum’s collection every time you open up a new tab. As Hyperallergic’s Claire Voon writes, the extension makes ordinary browser usage “into an enjoyable learning experience.”

Among other things, Meow Met offers an important reminder that our contemporary passion for depictions of our feline friends has deep historical origins. Most of the pictures it presents derive from the 19th century, but a few are much, much older. Those of more recent provenance vary delightfully in style and approach, from a charming Qing dynasty scroll of a cat pawing at butterflies to a more sinister oil painting by Gwen John.

As Voon notes, the extension doesn’t resize the artworks to fit within the browser window. It does, however, crop them in a way that almost always spotlights the cat (or cats! or kittens!), ensuring that users never come away from a new tab disappointed. What’s more, clicking on the work’s title pulls up its entry in the Met’s digital catalog, offering a fuller view for those unsatisfied with adorable fragments.

As Olivia B. Waxman notes in an article on Meow Met for Time, “The plug-in is the latest example of how museums have taken to curating cat art to attract snake person visitors.” Wait, snake person? That can’t be right. I think I may have too many Chrome extensions installed.

July 22 2015 11:22 AM

This Computer Program Says It Can Decode Your Emotions by Reading Your Emails. Is It Right?

IBM Watson—AI extraordinaire, Jeopardy world champion, student of hedonic psychophysics—may not have the warm corporeality of his crime-solving namesake, but he’s working to acquire the social intuition. Last week the computing company rolled out its Tone Analyzer tool, which harnesses “cloud-based linguistic analysis” to decode the feels roiling beneath your email correspondence or any other text you want to input. The program interprets the writing sample on three levels: emotional tone (angry, cheerful, or negative); social tone (agreeable, conscientious, or open); and writing tone (analytical, confident, or tentative). It assigns to every word it recognizes a color based on that word’s affective tenor. If you click on a particular word, Watson offers up synonyms that might increase agreeability, openness, conscientiousness, or cheer. Meanwhile, a rainbow-hued bar at the top of the page tells you what percentage of the sample language contributes to the overall emotion, social persona, or writerly disposition.
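IBM hasn't published the Tone Analyzer's internals, but the word-by-word highlighting it displays resembles classic lexicon-based sentiment analysis: look each word up in a hand-built dictionary of tone categories and tally the hits. Here is a minimal sketch of that general approach; the lexicon entries and category names below are invented for illustration, not Watson's actual data.

```python
import re
from collections import Counter

# Toy lexicon mapping words to tone categories. Real systems use large,
# empirically derived lexicons; these entries are invented examples
# echoing the words discussed in the article.
LEXICON = {
    "punish": "angry", "stupid": "angry", "disaster": "angry",
    "exciting": "cheerful", "super-duper": "cheerful",
    "worry": "negative", "fail": "negative", "decay": "negative",
    "maybe": "tentative", "some": "tentative",
    "exactly": "confident", "any": "confident",
}

def analyze_tone(text):
    """Tag each recognized word with a tone, then report, for each tone,
    the share of all words that contributed to it (akin to the
    percentage bar at the top of the Tone Analyzer page)."""
    words = re.findall(r"[a-z'-]+", text.lower())
    hits = Counter(LEXICON[w] for w in words if w in LEXICON)
    total = len(words) or 1  # avoid dividing by zero on empty input
    return {tone: count / total for tone, count in hits.items()}

# e.g. flags "disaster" as angry and "maybe" as tentative
scores = analyze_tone("Your presentation was a disaster, but maybe we can fix it.")
```

A lookup table like this also explains the failure modes described below: a bare word list has no notion of context, so negation, sarcasm, or a homograph like "like" will score the same as their sincere uses.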

I can’t predict how useful the Tone Analyzer will prove in a business setting—I’d guess that only a small number of managers don’t realize whence the vitriol comes in a sentence like Your presentation was a disaster—but it’s fun to play with. You can reverse-engineer Watson’s color-coded verdicts, using words like punish or stupid to envelop your text in an angry red, or opting for super-duper exciting to sound pink and cheerful. Unpleasant words—worry, fail, decay—boost your negativity score, while neutral nouns and adjectives (project, lunch, timely) weirdly get an “agreeable” label, “conscientiousness” is mostly measured in conjunctions and other syntactical helpmates, and “open” words are … to be honest, I’m not sure. (They include this, away, and murdered.) There’s the “analytical” category, which latches onto thinking verbs like wonder and decide, and the “confident” one, which encompasses emphatic descriptors like any and exactly, and the “tentative” one, which hedges with terms like some and maybe. It all seems a bit scattershot—either Watson’s cloud-based exegesis has a few kinks to work out, or it runs on logical rails too baroque and ethereal for this lowly meat sack. Oh well. I was pleased, at least, to feed the program some work emails and learn that my colleagues and I are all, in Watson’s estimation, agreeable mensches. “You’re no Sherlock, but I like you,” I typed into the feedbox afterward. It replied that I was cheerful and conscientious.

The Tone Analyzer’s a tool, not an English professor, so unsurprisingly it feels less suited to revealing all the emotional subtleties in a piece of writing and more helpful as a kind of spellcheck for being an asshole. Wondering how to make your memo to staff sound less angry? Watson will trace that nebulous rage vibe to a few problem words and suggest gentler replacements. Hoping to strike the perfect chord of confidence and humility in your cover letter? Watson will ferret out your overweening nevers, your diffident sort ofs. True, homographs occasionally baffle the supercomputer. I served it one of the ghastliest passages I could think of from Cormac McCarthy’s The Road—“People sitting on the sidewalk in the dawn half immolate smoking in their clothes. Like failed sectarian suicides … The screams of the murdered. By day the dead impaled on spikes”—and it approved of the happy word like. So too context: I told it I was “obsessed” with hound dogs and it chided me for negativity (probably picturing an anguished basset-stalking scenario). Also, I submitted the last page of The Great Gatsby, one of the most emotionally soaring blocks of prose-poetry ever written in English, and Watson gave it a 0 percent emotion tone. “Let’s agree to disagree!” I wrote. “Differ,” the computer corrected gently.

Watson-baiting will only get you so far. By the time I was inputting, at various co-workers’ suggestions, passages from Fifty Shades of Grey and Naked Lunch, the novelty of the exercise had worn off. (Fun fact: Watson prefers peter to penis.) Agreeability, conscientiousness, and anger are just not very revelatory dimensions along which to assess most pieces of writing, it turns out. That’s because, in an ideal world, all office communications sound vaguely alike: congenial, competent, engaged, and helpful. But beyond the cubicle, so much of our language use expresses singularity rather than convention, treading into other affective realms entirely.

That’s obvious, as is the maxim that there’s no science—no specific goals, no rules, and certainly no shortcuts—to conjuring emotions out of articulated noise. Yet sentiment analysis continues to entrance linguists and computer developers. In the early aughts, the Eudora email client came with an automated function that assessed the various feelings reflected in each message. Like the Tone Analyzer, the software was rudimentary and easily misled. (Jokes circulated about a math teaching assistant who got dinged for negativity after repeatedly referencing his students’ “problems.”) Academic studies also make use of “opinion mining” computer programs to “identify and extract subjective information from source materials.” The Cyberemotions project, from 2013, for instance, tried to understand how angry-, happy-, or sadness-tinged language drove the formation of online communities. The new iteration of sentiment analysis with IBM raises the question: Why do we keep tilting at this particular windmill?

I’d argue that people interested in artificial intelligence might also be interested in the proposition that the consciousness embedded in and delivered by a passage of writing can be broken down into discrete, understandable parts. Sentiment analysis enacts the mind-body problem, but for texts. Is the tone of a sentence some eerie, soul-like emergent property, or just a sum of processes you can ask a computer to model? I actually posed that question to Watson and was unsurprised when he told me I sounded “tentative.” The human race gets the last laugh, however. He didn’t even recognize the word “computer.”

July 21 2015 7:05 PM

Lindsey Graham Uses a Flip Phone and Memorizes Phone Numbers. That’s a Great Way to Live.

On Tuesday, Donald Trump gratuitously revealed Sen. Lindsey Graham’s cell phone number to an audience in South Carolina. He did this hours after Graham implored Trump to “stop being a jackass”; Graham, himself a Republican candidate for president, responded to the campaign-trail doxing by tweeting, “Probably getting a new phone. iPhone or Android?” I thought this was sort of funny, but when I read that Graham still uses a flip phone—and, moreover, that he chooses to memorize phone numbers rather than store them in his phone—I was shocked. I still use a flip phone, too—and, what’s more, I also choose to memorize key phone numbers rather than store them in my phone. Am I actually Lindsey Graham? It’s very possible. What I can say for sure is that flip phones and memorized phone numbers are the best, and unless the day comes when I literally have no choice but to do so, I will never, ever change my ways. I hope Graham doesn’t, either.

My willful Luddism may not come as a surprise, given that I’ve previously blogged about how I still use Winamp, and how I like to write out my blog posts longhand in a notebook. My insistence on using a 12-year-old flip phone might be the apotheosis of this antediluvian tendency. I use a Motorola v60s flip phone that dates back to at least 2003. It’s a great phone: It both sends and receives phone calls and text messages, and it makes a very satisfying “click” when I shut it. (Seriously, you can’t put a price on that click.) Like cockroaches, plastic six-pack rings, and the Canyonero, my phone is virtually indestructible. You cannot break it—and I have tried. What’s more, it’s a conversation starter nonpareil. The following scenario plays out about once a week: I’ll be sitting at a bar, fiddling with my phone, and some talkative lush will see me and say something like, “Wow, I had that phone back in 2004.” And then I say something like, “Ha ha, yeah, I still do,” and inevitably there’s a weird pause as my interlocutor tries to decide whether my telephonic primitivism is interesting or just plain weird.

As an icebreaker, my phone is great. As a telephone, the v60s leaves much to be desired. It doesn’t really get reception indoors, which means I have to stand on the street outside my apartment if I want to talk on the phone. It only stores 200 text messages at a time, and tends to freeze up whenever I receive several texts in rapid succession. Sending texts is a chore, too; for one thing, I have to press the number 1 exactly 17 times in order to get an apostrophe. My phone doesn’t have any games. It can’t connect to the Internet. It loses its charge after like 45 minutes of use—and now that RadioShack is out of business I can’t easily buy a replacement battery.

And yet it’s among my most cherished possessions. I am an easily distracted person who spends about 12 hours per day in front of his computer, and usually wastes about 10 of those hours frantically refreshing his email or looking up meaningless baseball statistics—Did you know that 36-year-old Eric Davis had a surprisingly good year for the Orioles in 1998? Neither did I until this morning!—or otherwise drowning in the digital deluge. I love the Internet very much, but I’m well aware that 20 years of prolonged exposure to it has decimated my attention span and my capacity for sustained contemplation. The only opportunities that I actually have to think are when I’m walking around in public or taking a shower, and if I could bring my laptop into the shower with me, I probably would.

Having an extremely dumb phone allows me to walk around in public and think without feeling compelled to check my email or keep up with sports scores that I don’t actually care about. If I had a smartphone, I’d lose that built-in respite from the state of perpetual connectivity in which we are all encouraged to live. Sticking with my old-ass flip phone is a means of mental self-preservation. The same goes for my insistence on memorizing important phone numbers. I’ve got about 15 or 20 of my most-called phone numbers committed to memory, which isn’t very many in absolute terms, but which makes me a regular Kevin Trudeau compared to most of you chumps. Doing so isn’t hard, and it makes me feel slightly less reliant on technology, and slightly more able to manage and control my daily life.

In a very minor way, it also makes me feel good about myself, much like figuring out directions based on instinct and memory rather than relying on GPS does. Figuring out street directions isn’t hard, people! Most street systems are grids! And don’t get me started on Venmo. Whatever happened to good, old-fashioned checkbooks?

I don’t know what Lindsey Graham thinks about street grids or checkbooks. (The Lindsey Graham for President 2016 website is surprisingly devoid of information about his policies on those issues—and definitely does not accept Bitcoin donations.) But I truly hope he doesn’t actually upgrade to an iPhone or Android. Being a senator and presidential candidate is probably even more stressful than being an online journalist—and, even more than I am, Graham is probably overwhelmed by the myriad pieces of information competing for his attention. If anything, more public servants ought to create opportunities for themselves to take periodic mental breaks. Plus, I’m just saying, Graham’s flip phone will make a really great conversation starter on the ol’ campaign trail.

July 21 2015 6:45 PM

Are Current Cybersecurity Measures Enough? Professionals Can’t Agree.

With all the high-profile hacks being disclosed lately, it certainly seems like both public and private cybersecurity protections are lacking. But two surveys of security professionals reveal widely varied views on whether companies and networks are prepared to deal with digital attacks.

In the "Critical Infrastructure Readiness Report" from McAfee, the Aspen Institute, and Intel, almost 75 percent of the 625 respondents said they were confident or extremely confident in their organization's framework for identifying intrusions. Sixty-eight percent said they were confident that they could deal with attacks. Sounds great, let's all go home.

Seventy percent of the same survey respondents, though, said that there were more and more threats out there. And a vast majority reported at least one cyberattack on their organization's system, with the median number of attacks at 20 per year. Respondents said that these hacks resulted in service interruptions, data breaches, and even physical damage.

The survey notes:

Those who have endured a higher number of successful attacks and confirmed damage feel more vulnerable than the rest; this suggests that as the number of attacks on all organizations continues to increase, the confidence levels reported in the survey may erode.

The most incredible and concerning stat from the report is probably that 48 percent of the cybersecurity professionals surveyed said that they think it's likely that a hack will compromise critical infrastructure "with potential loss of life." These are the same people who feel confident that their organizations are secure!

Released last week, the 2015 Black Hat Attendee Survey polled a more pessimistic group of 460 security professionals. Seventy-three percent said they thought their organizations would suffer a data breach at some point in the next 12 months, but only 27 percent said that the group would be able to handle it. Similarly, just 27 percent said they had enough people working on security to address everything. "The survey indicates that most enterprises are not spending their time, budget, and staffing resources on the problems that most security-savvy professionals consider to be the greatest threats," the report said.

July 21 2015 11:46 AM

Senators Introduce Legislation to Protect Your Car From Being Hacked

A few years ago, the notion of hacking a car or truck over the Internet to control steering and brakes seemed like a bad plot point from CSI: Cyber. Today, the security research community has proven it to be a real possibility, and it’s one that at least two U.S. senators won’t wait to see play out with real victims.

On Tuesday morning, Sens. Ed Markey and Richard Blumenthal plan to introduce new legislation designed to require cars sold in the U.S. to meet certain standards of protection against both digital attacks and privacy invasions. The legislation, as described to Wired by a Markey staffer, would call on the National Highway Traffic Safety Administration and the Federal Trade Commission to together create new standards that automakers would be required to meet in terms of both their vehicles’ defenses from hackers and how the companies safeguard any personal information, such as location records, collected from the vehicles they sell.

Until now, car hacking has remained a largely theoretical threat, despite some instances when thieves have disabled cars’ door locks with wireless attacks or when a disgruntled dealership employee used a tool designed to enforce timely car payments to remotely brick more than one hundred vehicles.

But the security industry has demonstrated that vehicles’ increasing connections to the internet create new avenues for attack. Earlier Tuesday morning, in fact, Wired revealed that two security researchers have developed and plan to partially release a new attack against hundreds of thousands of Chrysler vehicles that could allow hackers to gain access to their internal networks. As part of the same demo, those researchers, Charlie Miller and Chris Valasek, also demonstrated to Wired that they could use the attack to wirelessly control the steering, brakes, and transmission of a 2014 Jeep Cherokee over the Internet. (A Markey spokesperson insists that the bill’s release wasn’t timed to Wired’s story.)

“Drivers shouldn’t have to choose between being connected and being protected,” Markey wrote in a statement shared with Wired. “Controlled demonstrations show how frightening it would be to have a hacker take over controls of a car. We need clear rules of the road that protect cars from hackers and American families from data trackers.”

Markey and Blumenthal’s bill will have three major points, according to a spokesperson’s description. First, it will require the NHTSA and the FTC to set security standards for cars, including isolating critical software systems from the rest of a vehicle’s internal network, penetration testing by security analysts, and the addition of on-board systems to detect and respond to malicious commands on the car’s network. Second, it will ask those same agencies to set privacy standards, requiring carmakers to inform people of how they collect data from vehicles they sell, letting drivers opt out of that data collection and restricting how the information can be used for marketing. And finally, it will require manufacturers to display window stickers on new cars that rank their security and privacy protections.

Automakers have gotten hints for months that legislation was in the works. In February, Markey’s office released the results of a series of questions it had sent to 20 carmakers, quizzing them on their handling of digital security and privacy. The 16 companies that responded gave answers that weren’t reassuring. Nearly all of them said their vehicles now include wireless connections like cellular service, Bluetooth, and Wi-Fi, the very channels by which remote hacking can occur. Only seven said they used independent security testing to check their vehicles’ security. Only two said they had tools in place to stop a hacker intrusion. And an “overwhelming majority” collected location information about their customers’ vehicles, in many cases offering only ambiguous claims about encrypting the collected data.

In May, members of the House of Representatives’ Energy and Commerce Committee followed up with their own set of even more detailed questions for 17 automakers and the National Highway Traffic Safety Administration. “While threats to vehicle technology currently appear isolated and disparate, as the technology becomes more prevalent, so too will the risks associated with it,” read the letter.

Car hacking has emerged as an increasingly crowded field of study for digital security researchers. In 2011, academic researchers from the University of Washington and the University of California, San Diego, published a study in which they remotely hijacked an unnamed sedan via its wireless connections to disable its door locks and brakes. In 2013 the same security researchers Miller and Valasek who hacked the Jeep pulled off a series of similar attacks against a Toyota Prius and a Ford Escape (also with me behind the wheel), though their laptops were wired at the time into the vehicles’ dashboards via their OBD2 ports. At the Black Hat hacker conference in August Miller and Valasek plan to reveal the full details of their latest car attack, the over-the-internet compromise of a Jeep Cherokee.

Despite that growing drumbeat of warnings about digital attacks on cars, however, not everyone in the security community is so excited about legislation. Josh Corman, one of the co-founders of the security industry group I Am the Cavalry, which is focused on protecting things like medical devices and automobiles, was wary of a possible bill when he spoke with Wired earlier this month.

Corman worried that the ensuing law could be comparable to payment card industry rules that are widely seen as outmoded and ineffective. Instead, he said he hoped the auto industry could be nudged into innovating security features on its own in the same sort of competition that currently exists for traditional safety features.

“Laws are ill-suited for a dynamic space like this,” Corman said at the time. “If this can catalyze [the industry] standing up straighter and getting a plan in place, that’s great. If it makes them less responsive in the face of new adversaries, that could be very bad.”

Whether through legislation or industry competition, however, the pressure on carmakers to protect vehicles from hackers is growing. “If consumers don’t realize this is an issue, they should, and they should start complaining to carmakers,” says Miller. “Cars should be secure.”

July 21 2015 11:18 AM

Download This Windows Patch Right Now

On Wednesdays we wear pink, and on Tuesdays Microsoft pushes big patch packages to correct problems. But last Thursday the company disclosed a vulnerability in its system for displaying custom fonts, and on Monday it released a patch in a security bulletin. Since neither of those days was a Tuesday, you know that this is serious. Also, Microsoft is calling the update "critical," so that might also be a tipoff.

Researchers looking through documents leaked in the breach of Hacking Team, an Italian company that sells surveillance technology, discovered a vulnerability in the Windows Adobe Type Manager Library. Basically, if you open a document or Web page that has custom fonts built to exploit the flaw, a bad actor could run code of their choosing on your computer. That would be bad!

A remote code execution vulnerability exists in Microsoft Windows when the Windows Adobe Type Manager Library improperly handles specially crafted OpenType fonts. An attacker who successfully exploited this vulnerability could take complete control of the affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

The patch applies to all supported versions of Windows (Vista on), plus the as-yet-unreleased Windows 10. If you have automatic updates set up on your Windows machine, the patch has probably already been applied without you noticing, especially because it doesn't require a restart. But if you keep automatic updating off or you want to be sure, you can download the patch here. Do it.

July 20 2015 6:28 PM

Drones Keep Getting in the Way of Firefighting in California. That Seems Bad.

On Monday, two California lawmakers introduced a bill that would allow officials to immobilize drones that get in the way of emergency responses like firefighting. The bill comes in response to multiple incidents this summer and assorted others over the past few years.

Assemblyman Mike Gatto, Democrat of Glendale, and Sen. Ted Gaines, Republican of El Dorado, want Senate Bill 168 to offer protection to emergency responders so they can disable drones that are hindering operations without the possibility of being charged with destruction of property later. Gatto said in a statement, “Drone operators are risking lives when they fly over an emergency situation. Just because you have access to an expensive toy that can fly in a dangerous area doesn’t mean you should do it.” It's actually a great point—and one some people apparently have trouble grasping.

On Friday, a large wildfire burned cars and houses on Interstate 15, but during the firefighting effort five drones delayed air units by 15 to 20 minutes. NBC Los Angeles reported that "two drones actually gave chase to air units." (Seriously?) On July 12, a drone impeded firefighters for about 20 minutes while they were attempting to address a brush fire that was threatening four houses. On June 24, a firefighting aircraft working on controlling a brush fire in the San Bernardino Mountains was grounded to comply with FAA regulations after officials spotted a hobby drone flying in the area. And last summer there was a series of similar incidents.

Gatto and Gaines are also working on another piece of legislation, SB 167, that would raise fines and potentially include jail time as a punishment for drone operators who impede emergency response efforts. “People can replace drones, but we can’t replace a life. When our rescuers are risking their own lives to protect us, I want them thinking about safety, not liability,” Gaines said.

Drones are supposed to be helping with disaster response, not making things worse.

July 20 2015 1:30 PM

The New Approach to Fighting Wildfires

Fire season has so far mostly meant Alaska, which has racked up 1.8 million burned acres and counting. But fires are also moving down the West Coast, with a record burn on the Olympic Peninsula and houses again burning in central Washington. Flames are moving into drought-blasted California a couple of months early. The Forest Service estimates it will need an additional $800 million to $1.7 billion to pay for the season's expected costs.

But wildfire statistics are a poor proxy for what is happening. Last year Florida prescribe-burned 2.5 million acres—two-thirds as much acreage as burned by wildfire throughout the country. And this year's largest fire to date in the Lower 48 is actually a managed wildfire. The Whitetail and Sawmill fires on the San Carlos Apache Reservation are being controlled through a confine-and-contain (or box-and-burn) strategy. The complex is 35,000 acres and growing, and doing what neither prescribed fire nor suppressed wildfire could. Last year San Carlos similarly managed two fires that together topped out at 84,000 acres. America's fire scene is more complex than the usual media and political obsession with burned houses, dead people, and celebrity landscapes like Yosemite suggests. So are the strategies to cope with it.

Three strategies are now in play, each the product of a particular era and its peculiar challenges.

July 20 2015 12:39 PM

Your Very Own Data Privacy Policy

Dear Corporation,

You have expressed an interest in collecting personal information about me. (This interest may have been expressed by implication, in case you were attempting to collect such data without notifying me first.) Since you have told me repeatedly that personalization is a great benefit, and that advertising, search results, news, and other services should be tailored to my individual needs and desires, I’ve decided that I should also have my own personalized, targeted privacy policy. Here it is.

While I am glad that (as you stated) my privacy is very important to you, it’s even more important to me. The intent of this policy is to inform you how you may collect, use, and dispose of personal information about me.

By collecting any such information about me, you are agreeing to the terms below. These terms may change from time to time, especially as I find out more about ways in which personal information about me is actually used and I think more about the implications of those uses.

Note: You will be asked to provide some information about yourself. Providing false information will constitute a violation of this agreement.

Scope: This policy covers only me. It does not apply to related entities that I do not own or control, such as my friends, my children, or my husband.

Age restriction and parental participation: Please specify if you are a startup; if so, note how long you’ve been in business. Please include the ages of the founders/innovators who came up with your product and your business model. Please also include the ages of any investors who have asserted, through their investment in your company, that they thought this product or service was a good idea.

Information about you. For each piece of personal information about me that you wish to collect, analyze, and store, you must first disclose the following: a) Do you need this particular piece of information in order for your product/service to work for me? If not, you are not authorized to collect it. If yes, please explain how this piece of information is necessary for your product to work for me. b) What types of analytics do you intend to perform with this information? c) Will you share this piece of information with anyone outside your company? If so, list each entity with which you intend to share it, and for what purpose; you must update this disclosure every time you add a new third party with which you’d like to share. d) Will you make efforts to anonymize the personal information that you’re collecting? e) Are you aware of the research that shows that anonymization doesn’t really work because it’s easy to put together information from several categories and/or several databases and so figure out the identity of an “anonymous” source of data? f) How long will you retain this particular piece of information about me? g) If I ask you to delete it, will you, and if so, how quickly? Note: by “delete” I don’t mean “make it invisible to others”—I mean “get it out of your system entirely.”

Please be advised that, like these terms, the information I’ve provided to you may change, too: I may switch electronic devices; change my legal name; have more children; move to a different town; experiment with various political or religious affiliations; buy products that I may or may not like, just to try something new or to give to someone else; etc. These terms (as amended as needed) will apply to any new data that you may collect about me in the future: your continued use of personal information about me constitutes your acceptance of this.

And, of course, I reserve all rights not expressly granted to you.