Future Tense
The Citizen's Guide to the Future

Aug. 4 2015 11:58 AM

In Just Three Days, Pilots Spotted Three Drones Flying Illegally by JFK Airport

The Federal Aviation Administration reported Monday that there were three drone sightings in as many days at John F. Kennedy International Airport in New York, one of the nation’s most heavily trafficked transportation hubs.

Following two sightings Friday, by pilots for JetBlue and Delta flights, a Shuttle America crew reported Sunday seeing a drone fly within 25 feet of the aircraft. When the pilot spotted the “black, four-rotor ‘quadcopter’ off his left wing,” the jet was only 15 feet from touching down on the runway.


While all three flights were able to land without incident, the damage could have been serious. An aviation expert told the New York Post, “If the drone were to hit the cockpit window and blind the pilot, or if it hit the plane, hit an engine, it could unbalance the plane, and the pilot could lose control.” Because of those risks, FAA policy bans drones from flying within five miles of airports unless the operator has coordinated in advance with the airport and control tower. It's not clear whether the drone operator (or operators) in this case was simply unaware of the FAA policy, was disregarding it, or actually intended to cause harm.

Sen. Chuck Schumer called for new protocols to protect commercial airplanes, including the implementation of geo-fencing technology in unmanned aircraft, which could prevent them from flying in prohibited areas. This technology is already in effect in the nation's capital—one major manufacturer's drones can't take off in or near Washington, D.C.
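For a rough sense of how simple the core of a geofence check can be, here is a minimal sketch in Python. The airport coordinates, the five-mile radius test, and the function names are illustrative assumptions, not any manufacturer's actual firmware; real geofencing relies on much larger, regularly updated no-fly databases and also considers altitude and temporary flight restrictions.

```python
from math import radians, sin, cos, asin, sqrt

# Illustrative restricted sites (JFK and a Washington, D.C.-area airport).
# A real geofence database would be far larger and kept current by the vendor.
RESTRICTED_SITES = {
    "JFK": (40.6413, -73.7781),
    "DCA": (38.8512, -77.0402),
}

FIVE_MILES = 5.0

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3958.8 * 2 * asin(sqrt(a))  # mean Earth radius in miles

def takeoff_allowed(lat, lon):
    """Refuse takeoff if the drone is within five miles of any listed site."""
    return all(miles_between(lat, lon, s_lat, s_lon) > FIVE_MILES
               for s_lat, s_lon in RESTRICTED_SITES.values())

print(takeoff_allowed(40.66, -73.80))  # about 1.7 miles from JFK -> False
print(takeoff_allowed(41.50, -74.50))  # well clear of both sites -> True
```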

According to CBS New York, Schumer urged, “The FAA has to act and toughen up the rules before a tragedy occurs because if a drone were sucked into a jet engine of a plane filled with passengers untold tragedy could result and we do not, do not, do not want that to happen.”

Both privately and publicly, the FAA has been struggling for some time with unmanned aircraft regulation. After previously acknowledging only one near-collision, the administration revealed how widespread this problem really is. Its report from November 2014 detailed 25 incidents, only a sampling of the 175 reported by pilots in a six-month period. With this weekend of near misses and a renewed call for regulation, the FAA is again faced with the unenviable question of how best to keep drones out of no-fly zones. 


Aug. 4 2015 11:55 AM

Court Says It’s Not Netflix’s Fault When You Let Someone See Your Embarrassing Watch List

For years I shared a Netflix account with a friend and her brother. One day, my friend confronted me about some, ahem, amusing activity on the account. “Lily, did you watch Grease twice and then start The Aristocats?” But it actually wasn’t me! Her brother's secret (or that of whoever he had shared the account with) was out.

In a class-action lawsuit, plaintiffs alleged that Netflix was violating the 1988 Video Privacy Protection Act and a California civil code by allowing "family, friends, and guests" to see their viewing histories. The idea was that even if you gave someone your streaming password, it was still a violation of your privacy for Netflix to show them what you had been watching.


A district court held that Netflix had not violated users' privacy when they voluntarily shared access to their accounts, and Friday, as TechDirt reports, the 9th U.S. Circuit Court of Appeals upheld this decision. "The fact that a subscriber may permit third parties to access her account, thereby allowing third parties to view Netflix's disclosures, does not alter the legal status of these disclosures," Judge Edward J. Davila wrote. "No matter the particular circumstances at a subscriber's residence, Netflix's actions remain the same."

With so much bulk data collection going on everywhere, it's kind of refreshing to hear that a digital service actually did something right. But we all know the pain of having a houseguest discover that we've been watching nothing but Good Eats for the past three weeks.

Aug. 3 2015 1:07 PM

Friendly Canadian Hitchhiking Robot Decapitated While Visiting Philadelphia

HitchBot is a robot built for studying human interactions. It hitchhikes through different countries examining the goodwill of humans, and so far it has had successful trips around Canada, the Netherlands, and Germany. But it took only two weeks of traveling in the United States for something to go wrong.

HitchBot got off to a good start on July 17, traveling around Massachusetts and New York City before heading to Philadelphia. But by Saturday, the Canadian researchers who run HitchBot had lost its signal. Surely it couldn't be that bad, right? Probably just a dead battery!


Oof. HitchBot was designed to be a good travel buddy. It could make (limited) conversation and share local trivia, plus it had an onboard camera that took a photo every 20 minutes to capture precious road trip memories. Why would anyone destroy such a delightful companion?

HitchBot made a statement on its website after the incident: “I guess sometimes bad things happen to good robots! My trip must come to an end for now, but my love for humans will never fade. Thank you to all my friends.”

The Canadian researchers behind HitchBot, Frauke Zeller of Ryerson University and David Harris Smith of McMaster University, talked to BuzzFeed about the tragedy. “We are still trying to make sense of everything that happened, and are trying to put together the bits of information that we can find,” Zeller said. “It’s upsetting—you can see how it has been taken apart and left in the street.”

Humans may find robots creepy at times, but at the International Conference on Human-Robot Interaction in March, Japanese researchers presented a paper titled "Why Do Children Abuse Robots?" about scenarios where the tables are turned. The field study noted incidents of children hitting, kicking, punching, detaining, and physically manipulating a human-sized humanoid social robot at a mall in Japan. The robot encountered many children, and the researchers did follow-up interviews with 28 of them.

They wrote, "We found that the majority of [the children] did not regard the robot as just a machine, but a human-like entity ... yet [they] engaged in the abuse, mentioning the reason as curiosity or enjoyment." The researchers hypothesized that the children may not have felt empathy for the robots, even when they thought the robots were "upset" by being abused.

Maybe it's just bad luck that HitchBot had its first-ever vandalism problem in the City of Brotherly Love, but let's not forget that it was Eagles fans who booed a Santa Claus and threw snowballs at him while "Here Comes Santa Claus" played at halftime in December 1968. Maybe some things never change.

July 31 2015 6:00 PM

Meet the Drone Facebook Wants to Use to Bring the Internet to Poor and Rural Areas

In its ongoing quest to conquer—err, make the world a better place, Facebook is building an air force.

On Thursday, the company released a video of Aquila, an ultralight solar-powered drone designed to beam Wi-Fi to poor and rural regions via lasers. Made of carbon fiber, Aquila has the wingspan of a 737 but weighs less than a Prius. It flies at upwards of 60,000 feet, high above commercial flights, and Facebook claims it will stay aloft for three months at a time. The company is developing the drones under the auspices of its Internet.org project.


As Facebook explains it, a station on the ground transmits radio Internet to one drone, which then beams it to a network of other drones via a new laser technology that Facebook’s engineers invented. The laser system can transmit data at tens of gigabits per second, Facebook says. Those other aircraft then beam the signal to villages on the ground below them.

Facebook says it has completed the prototype and will begin testing Aquila soon. “If we can get the aircraft to fly reliably, then we’re well on the road to being able to deliver the Internet to a lot of people,” said Andy Cox, engineering lead for Facebook’s aviation team. (Cox was the CEO and chief engineer of Ascenta, the U.K.-based solar-powered drone startup that Facebook acquired last year.)

Here’s the video. It’s quite something:

Should Facebook’s drones take off, they’ll share the skies with Google’s Wi-Fi balloons, which are also racing to provide Internet to some of the estimated 2.5 billion people around the world who lack access. Meanwhile, SpaceX and Virgin Galactic–backed OneWeb are working on plans to beam Internet from satellites arrayed in low-Earth orbit, much higher than the balloons or drones but lower than the geostationary satellites that provide Internet service today.

Facebook’s stated aims for Internet.org are altruistic: delivering certain basic Internet services for free, without ads, to people who can’t afford a high-speed connection. But critics charge that the concept violates the principle of net neutrality and would amount to a second-class Internet for the poor. Facebook CEO Mark Zuckerberg has responded to the complaints, arguing, “It’s not sustainable to offer the whole Internet for free.”

July 31 2015 4:54 PM

France Wants EU’s Right to Be Forgotten to Apply in Global Search Results

The European Union's "right to be forgotten" has been around for more than a year now. As of December there's even a framework for standardizing how search engine companies should evaluate and carry out right-to-be-forgotten requests. But some regulators want to take the controversial idea a step further.

In a blog post on Thursday, Google's global privacy counsel Peter Fleischer laid out the company's response to an order from the CNIL, France’s data protection agency. In June, the CNIL ordered that under right to be forgotten, Google should remove links in search results worldwide.


The agency said in a statement, "The CNIL considers that in order to be effective, delisting must be carried out on all extensions of the search engine and that the service provided by Google search constitutes a single processing." The idea is that results shouldn't just be removed from www.google.fr and other European Google versions; they should be removed from every version of the search engine, worldwide.

Google strongly disagrees. Fleischer wrote:

While the right to be forgotten may now be the law in Europe, it is not the law globally. Moreover, there are innumerable examples around the world where content that is declared illegal under the laws of one country, would be deemed legal in others ... If the CNIL’s proposed approach were to be embraced as the standard for Internet regulation, we would find ourselves in a race to the bottom. In the end, the Internet would only be as free as the world’s least free place.

In responding to the CNIL, Fleischer also pointed out that 97 percent of French Google users access the company's services through www.google.fr, so it's not as if French users are frequently encountering links they shouldn't see under the right to be forgotten.

The CNIL said that Google had 15 days to begin removing links from its global search engine system before the agency began drafting a report to recommend sanctions. Google said Thursday, "we respectfully disagree with the CNIL’s assertion of global authority on this issue and we have asked the CNIL to withdraw its Formal Notice."

If the CNIL attempts to move forward with this global version of right to be forgotten, its impact will stretch far beyond Google. Hopefully it won't get that far.

July 31 2015 10:20 AM

Court Rules Police Need a Warrant to Access Location Data From Your Cellphone

Take a moment and try to remember where you were 24 hours ago. Maybe you’re a creature of habit and it’s easy to guess. Or maybe, like me, you can’t quite recall whether you were at work, at home, or somewhere in between. Either way, if you had your cellphone with you, it would be astonishingly easy for someone with the right access to pin your location down. Thanks to a recent court decision, however, that information just got a lot harder to examine for many in the United States.

In an order released Thursday by the U.S. District Court for the Northern District of California, Judge Lucy Koh found that Fourth Amendment protections extend to location data generated by cellphones. Ruling against the federal government, Koh affirmed that law enforcement agencies must seek a warrant before acquiring historical location data produced by a cellphone.


As Koh explains, modern phones constantly ping cellular towers, even when they’re not actively in use. Thanks to these regular connections, they generate a steady stream of data about their physical location—sometimes even when the user turns off location services, a fact that the ACLU stressed in an amicus brief. Koh notes that many users may be unaware of how much information they’re giving up as they move through the world. This data, which is known as cell site location information (or CSLI), can be important to legal investigations.
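To see why CSLI is so revealing, here is a toy sketch of how even a handful of records reads once it's joined against tower locations. The tower registry, labels, and pings below are invented for illustration; real CSLI involves thousands of records and messier tower geometry.

```python
from datetime import datetime

# Hypothetical tower registry: tower ID -> (label, latitude, longitude).
TOWERS = {
    "T1": ("near home", 37.7689, -122.4330),
    "T2": ("downtown office", 37.7897, -122.4011),
    "T3": ("gym", 37.7766, -122.4243),
}

# Hypothetical CSLI: (timestamp, tower ID) logged each time the phone pinged
# a tower, whether or not its owner was actively using it.
CSLI = [
    ("2015-07-30 07:42", "T1"),
    ("2015-07-30 09:05", "T2"),
    ("2015-07-30 18:31", "T3"),
    ("2015-07-30 21:10", "T1"),
]

for timestamp, tower_id in CSLI:
    label, lat, lon = TOWERS[tower_id]
    when = datetime.strptime(timestamp, "%Y-%m-%d %H:%M")
    print(f"{when:%a %H:%M}  ~{label}  ({lat:.4f}, {lon:.4f})")
```

Even this tiny, fabricated log reads like a daily itinerary.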

In the past, courts have largely avoided the issue of whether CSLI should be readily available. Koh writes, “Neither the U.S. Supreme Court nor the Ninth Circuit has squarely addressed whether cell phone users possess a reasonable expectation of privacy in the CSLI, historical or otherwise, associated with their cell phones.”

Previous relevant cases were mostly built around more basic technologies. In 1983, for example, the Supreme Court held that an individual’s movements along public thoroughfares could be tracked via his or her beeper. A year later, the court clarified and restricted this decision, stressing that it did not apply when a user was within his or her private home.

Koh’s decision ultimately turns on the increasingly central role that cellphones play in almost all of our lives. “For many,” she writes, “cell phones are not a luxury good; they are an essential part of living in modern society.” That’s in keeping with recent case law, which increasingly holds that we shouldn’t have to choose between participating in the contemporary moment and maintaining our privacy. For instance, in 2014 the Supreme Court ruled, in Riley v. California, that law enforcement needs a warrant to search a person’s cellphone as part of an arrest.

Of course, not everyone agrees. Earlier this week, a Cincinnati appeals court found that you have no reasonable expectation of privacy if you accidentally butt dial someone. As Slate’s Lily Hay Newman explained, the judge in that case held that being overheard during a butt dial is a bit like having an argument near an open window.

That may be, but as cellular technologies grow more and more sophisticated, they offer an increasingly complex picture of our lives, furnishing what Justice Sonia Sotomayor calls “a wealth of detail about [a person’s] familial, political, professional, religious, and sexual associations.” (Koh cited that line in her ruling.) Because it can paint a picture of “the sum of one’s public movements,” CSLI makes it difficult to clearly distinguish between public and private experience. As these ambiguities multiply, powerful, clear decisions like Koh’s will become all the more important.

July 31 2015 10:18 AM

Hackers Could Heist Semis by Exploiting This Satellite Flaw

This story originally appeared on Wired.

Remember the opening scene of the first Fast and Furious film when bandits hijacked a truck to steal its cargo? Or consider the recent real-life theft of $4 million in gold from a truck transiting from Miami to Massachusetts. Heists like these could become easier to pull off thanks to security flaws in systems used for tracking valuable shipments and assets.

Vulnerabilities in asset-tracking systems made by Globalstar and its subsidiaries would allow a hijacker to track valuable and sensitive cargo—such as electronics, gas and volatile chemicals, military supplies, or possibly even nuclear materials—disable the location-tracking device used to monitor it, then spoof the coordinates to make it appear as if a hijacked shipment were still traveling its intended route. Alternatively, a hacker who just wanted to cause chaos and confusion could feed false coordinates to companies and militaries monitoring their assets and shipments to make them think they’d been hijacked, according to Colby Moore, a researcher with the security firm Synack, who plans to discuss the vulnerabilities next week at the Black Hat and Def Con security conferences in Las Vegas.


The same vulnerable technology isn’t used just for tracking cargo and assets, however. It’s also used in people-tracking systems for search-and-rescue missions and in SCADA environments to monitor high-tech engineering projects like pipelines and oil rigs to determine, for example, if valves are open or closed in areas where phone, cellular, and Internet service don’t exist. Hackers could exploit the same vulnerabilities to interfere with these systems as well, Moore says.

The tracking systems consist of devices about the size of a hand that are attached to a shipping container, vehicle or equipment and communicate with Globalstar’s low Earth-orbiting satellites by sending them latitude and longitude coordinates or, in the case of SCADA systems, information about their operation. A 2003 article about the technology, for example, indicated that the asset trackers could be configured to monitor and trigger an alert when certain events occurred such as the temperature rising above a safe level in a container or the lock on a container being opened. The satellites relay this information to ground stations, which in turn transmit the data via the Internet or phone networks to the customer’s computers.

According to Moore, the Simplex data network that Globalstar uses for its satellites doesn’t encrypt communication between the tracking devices, orbiting satellites, and ground stations, nor does it require the communication be authenticated so that only legitimate data gets sent. As a result, someone can intercept the communication, spoof it or jam it.
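The core problem is easier to see in miniature. The sketch below is not the real Simplex frame format, which is proprietary; it is a schematic position report that shows why an unauthenticated, unencrypted frame can be forged by anyone with a transmitter, and how appending even a simple keyed authentication tag (a hypothetical fix, not anything Globalstar has implemented) would let a ground station reject forgeries.

```python
import hashlib
import hmac
import struct

# Schematic position report: device ID, latitude, longitude. This is NOT the
# actual Simplex frame layout; it only illustrates the missing protections.
def make_report(device_id: int, lat: float, lon: float) -> bytes:
    return struct.pack(">I2f", device_id, lat, lon)

# With no authentication, a ground station cannot tell this forged frame from
# a genuine one: anyone with a transmitter can claim any position.
forged = make_report(device_id=123456, lat=25.77, lon=-80.19)  # "still in Miami"

# One conventional fix: append an HMAC computed with a per-device secret key
# and drop frames whose tag doesn't verify. (Hypothetical, for illustration.)
DEVICE_KEY = b"per-device-secret-key"

def sign(report: bytes) -> bytes:
    return report + hmac.new(DEVICE_KEY, report, hashlib.sha256).digest()

def verify(frame: bytes) -> bool:
    report, tag = frame[:-32], frame[-32:]
    expected = hmac.new(DEVICE_KEY, report, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

print(verify(sign(forged)))           # True: signed with the device key
print(verify(forged + b"\x00" * 32))  # False: a forged tag is rejected
```

Distributing and protecting per-device keys is its own problem, of course, which is part of why retrofitting security onto an already deployed protocol is so hard.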

“The integrity of the whole system is relying on a hacker not being able to clone or tamper with a device,” says Moore. “The way Globalstar engineered the platform leaves security up to the end integrator, and so far, no one has implemented security.”

Simplex data transmissions are also one-way from device to satellite to ground station, which means there is no way to ping back to a device to verify that the data transmitted was accurate if the device has only satellite capability. (Some of the more expensive Globalstar tracking devices combine satellite and cell network communication for communicating in areas where network coverage is available.)

Moore says he notified Globalstar about the vulnerabilities about six months ago, but the company was noncommittal about fixing them. The problems, in fact, cannot be fixed with simple software patches. Instead, to add encryption and authentication, the communication protocol would have to be re-architected.

Globalstar did not respond to a request from Wired for comment.

Top Companies Rely on Globalstar Satellites

Globalstar has more than four dozen satellites in space, and it’s considered one of the largest providers of satellite voice and data communications in the world. Additionally, its satellite asset-tracking systems—such as the SmartOne, SmartOne B, and SmartOne C—provide service to a wide swath of industry, including oil and gas, mining, forestry, commercial fishing, utilities, and the military. Asset-tracking systems made by Globalstar and its subsidiaries Geforce and Axon can be used to track fleets of armored cars, cargo-shipping containers, maritime vessels, and military equipment or simply expensive construction equipment. Geforce’s customers include such bigwigs as BP, Halliburton, GE Oil and Gas, Chevron, and Conoco Phillips. Geforce markets its trackers for use with things like acid and fuel tanks, railway cars, and so-called frac tanks used in fracking operations.

The company noted in a press release this year that since the launch of its initial SmartOne asset-tracking system in 2012, more than 150,000 units were being used in multiple industries, including aviation, alternative energy, and the military.

In addition to asset-tracking, Globalstar produces a personal tracking system known as the SPOT Satellite Messenger for hikers, sailors, pilots and others who travel in remote areas where cell coverage might not be available so that emergency service personnel can find them if they become lost or separated from their vehicle.

Moore tested three Globalstar devices that he bought for tracking assets and people, but he says all systems that communicate with the Globalstar satellites use the same Simplex protocol and would therefore be vulnerable to interference. He also thinks the problem may not be unique to Globalstar trackers. “I would expect to see similar vulnerabilities in other systems if we were to look at them further,” he says.

The Simplex network uses a secret code to encode all data sent through it, but Moore was able to easily reverse-engineer it to determine how messages get encoded in order to craft his own. “The secret codes are not generated on the fly and are not unique. Instead, the same code is used for all the devices,” he says.
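As a toy illustration of why a fixed, shared encoding is not encryption, suppose the "secret code" were just a repeating XOR with a constant key. That transform is purely an assumption for the example (the real encoding isn't described in that detail), but the lesson generalizes: one message whose contents an attacker can guess, such as their own device's report, is enough to recover the shared key and decode everyone else's traffic.

```python
from itertools import cycle

def xor_encode(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for a fixed, shared 'secret code' (illustrative only)."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

SHARED_KEY = b"\x5a\xc3\x19\x7e"  # the same key baked into every device

# The attacker observes one encoded frame whose plaintext they can guess
# (say, their own tracker's report) and XORs the two to recover the key...
known_plain = b"ID=000001;LAT=40.64;LON=-73.78"
observed = xor_encode(known_plain, SHARED_KEY)
recovered_key = bytes(c ^ p for c, p in zip(observed, known_plain))[:len(SHARED_KEY)]

# ...which then decodes (and lets them forge) every other device's traffic.
someone_else = xor_encode(b"ID=999999;LAT=25.77;LON=-80.19", SHARED_KEY)
print(recovered_key == SHARED_KEY)              # True
print(xor_encode(someone_else, recovered_key))  # plaintext recovered
```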

Moore spent about $1,000 in hardware to build a transceiver to intercept data from the tracking devices he purchased and an additional $300 in software and hardware for analyzing the data and mimicking a tracking device. Although he built his own transceiver, thieves would really only need a proper antenna and a universal software radio peripheral. With these, they could intercept satellite signals to identify a shipment of valuable cargo, track its movement and transmit spoofed data. While seizing the goods, they could disable the vehicle’s tracking device physically or jam the signals while sending spoofed location data from a laptop to make it appear that the vehicle or shipment was traveling in one location when it’s actually in another.

Each device has a unique ID that’s printed on its outer casing. The devices also transmit their unique IDs when communicating with satellites, so an attacker targeting a specific shipment could intercept and spoof the communication.

In most cases, attackers would want to know in advance, before hijacking a truck or shipment, what’s being transported. But an attacker could also just set up a receiver in an area where valuable shipments are expected to pass and track the assets as they move.

“I put this on a tower on a large building and all the locations of devices [in the area] are being monitored,” Moore says. “Can I find a diamond shipment or a nuclear shipment that it can track?”

It’s unclear how the military is using Globalstar’s asset-tracking devices, but conceivably if they’re being used in war zones, the vulnerabilities Moore uncovered could be used by adversaries to track supplies and convoys and aim missiles at them.

Often the unique IDs on devices are sequential, so if a commercial or military customer owns numerous devices for tracking assets, an attacker would be able to determine other device IDs, and assets, that belong to the same company or military based on similar ID numbers.

Moore says security problems like this are endemic when technologies that were designed years ago, when security protocols were lax, haven’t been re-architected to account for today’s threats.

“We rely on these systems that were architected long ago with no security in mind, and these bugs persist for years and years,” he says. “We need to be very mindful in designing satellite systems and critical infrastructure, otherwise we’re going to be stuck with these broken systems for years to come.”


July 31 2015 8:35 AM

The Art of Artificially Throwing Shade

As today’s artificial intelligence grows more and more capable of natural language interaction with humans, it will need to master a peculiar yet highly important design need: ready-made snarky responses for when its human owners troll it with science fiction movie A.I. references. As you can see in a video I recorded of myself playing with the Amazon Echo, its intelligent assistant, Alexa, got sassy when I repeated a famous line from 2001: A Space Odyssey.

In the movie, the astronaut Dave Bowman asks the homicidal supercomputer HAL to let him back inside the spacecraft, and HAL responds with a curt “I’m sorry, Dave. I’m afraid I can’t do that.” When you say, “Alexa, open the pod bay doors,” Alexa responds by not only mimicking the first part of HAL’s response—she also reminds you that she is not HAL and we’re not in space.
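Under the hood, easter eggs like this are presumably little more than a lookup table mapping a recognized phrase to a canned reply. Here is a minimal sketch of the idea; the matching logic, phrases, and wording are invented for illustration and are not Amazon's actual code or responses.

```python
# Invented easter-egg table for a hypothetical voice assistant.
EASTER_EGGS = {
    "open the pod bay doors": (
        "I'm sorry, Dave, I can't do that. I'm not HAL, and we're not in space."
    ),
    "do you want to play a game": "I'd rather not. Global thermonuclear war never ends well.",
}

def respond(utterance: str) -> str:
    """Return a canned reply if the phrase is a known easter egg."""
    key = utterance.lower().strip().rstrip("?.!")
    if key in EASTER_EGGS:
        return EASTER_EGGS[key]
    return "Sorry, I don't know that one."  # fall through to normal handling

print(respond("Open the pod bay doors"))
```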


Granted, Alexa’s shade-throwing is really that of the team of programmers that built her. But that’s also the point. There are many ways of building human connection to machines, and Alexa reflects many of them. For example, by assuming a human female’s name and taking on a vaguely female voice, Alexa encourages you to regard it using terminology such as “her” or “she.” And whenever I call an “it” a “she”, I linguistically imbue a cloud-based computer program speaking through a faceless black cylinder with a socially constructed marker of human identity: gender.

But, as my video demonstrates, another component of feeling connected to a machine could also be the machine faking a form of self-awareness. Alexa “knows” that she is an A.I., at least enough to understand what it means when I tease her by asking her to open the pod bay doors. And Alexa responds by effectively rolling her eyes at me. The fact that Alexa seems unhappy and even passive-aggressive when you troll her with HAL jokes makes it easier for us to assume that “she” has beliefs, desires, and intentions.

Small touches like this will help people adapt to a world in which they will live and work alongside machines like Alexa—as well as tease them in the hope of getting a “What, this joke again?” reaction.

July 30 2015 6:39 PM

A Look at the Awesome but Ridiculously Old Technology That Runs the NYC Subway System

Vintage technology is fun and fascinating. It feels new all over again to see how old devices made modern concepts possible. But buying LPs again is different from finding out that missile silos in the United States still rely on floppy disks. And this video of the old tech still in use in the New York City subway system feels more like the latter. It’s delightful, sure, but also deeply baffling.

The main point of the 9-minute video, released by New York City’s Metropolitan Transportation Authority, is to talk about how the subway system is modernizing. The agency has been working for years to implement “communications-based train control” on every line. It’s a system that tracks each train’s position, automates speed control, and calculates safe distances between trains. Compared with the current manual system of “fixed block signaling,” CBTC allows for more trains per hour, better precision, and less infrastructure maintenance. But first the MTA has to finish implementing it. (The automated system is only in use on one out of the system’s 34 lines so far, with another transition almost complete.)
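A rough back-of-envelope comparison shows where the extra capacity comes from: under fixed-block signaling a following train must stay at least a block or two behind the train ahead, while a moving-block, CBTC-style system can shrink the required gap to roughly braking distance plus a safety margin. Every number below is illustrative rather than an MTA figure, and the model ignores station dwell times and terminal turnarounds, which dominate real-world throughput.

```python
# Toy comparison of minimum train separation at line speed. All values are
# illustrative assumptions, not MTA specifications.
SPEED = 15.0       # train speed, m/s (roughly 34 mph)
TRAIN_LEN = 150.0  # train length, meters (roughly a 10-car subway train)
BLOCK_LEN = 300.0  # length of one fixed signal block, meters
DECEL = 1.0        # service braking rate, m/s^2
MARGIN = 50.0      # extra safety buffer for the moving-block case, meters

def min_headway_seconds(separation_m: float) -> float:
    """Time between trains given the required nose-to-tail separation."""
    return (separation_m + TRAIN_LEN) / SPEED

# Fixed block: the follower must keep roughly two empty blocks behind the leader.
fixed_block_sep = 2 * BLOCK_LEN
# Moving block (CBTC-style): the follower needs braking distance plus a margin.
braking_distance = SPEED ** 2 / (2 * DECEL)
cbtc_sep = braking_distance + MARGIN

for name, sep in [("fixed block", fixed_block_sep), ("CBTC moving block", cbtc_sep)]:
    print(f"{name:18s} separation {sep:6.1f} m -> minimum headway {min_headway_seconds(sep):5.1f} s")
```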


The most captivating part of the video, though, is the opening section showing the devices that control trains in and around the West 4th Street stop in Manhattan. “What our riders don’t realize ... is that in our system it’s not just the architecture that’s 100 years old,” the narrator says. “It’s a lot of the basic technology as well. The infrastructure is old.” And the MTA is not joking around. The video shows 1930s devices, dispatchers filling out handwritten call sheets, and levers for manually operating signals and moving track switches.

In the relay room, MTA vice president and chief officer of service delivery Wynton Habersham talks about how difficult it is to maintain the aging technology.

This equipment is not supported at all by the railroad industry. We are fully self-sufficient and self-sustaining. We have a signal shop that can replace the parts, they rebuild these relays. And then when any modernization is going on we scavenge to retain the parts so we can provide replacement for those that remain in service.

Holy. Crap. This is a 24/7 subway system we’re talking about. Habersham goes on to say that the cables connecting many of the electromechanical relays throughout the system—meaning in control rooms but also on the tracks—are the original cloth-covered cables. And then Habersham talks about what would happen if there were a fire. (Bad things. Bad things would happen.) Vintage tech, so much nostalgia!

The video is fascinating, but Rebecca Fishbein put it best on Gothamist: “This shit is OLD, like grizzled dude who won’t stop stabbing at the back of your plane seat because he can’t figure out the TV touchscreen old. It’s a miracle the F train even runs at all.”

July 30 2015 4:05 PM

Tech Companies, Carriers Should Be Required to Issue Updates to Fix Security Flaws

No, it's not your imagination: You're hearing a spate of news about security flaws in the products you use every day. Two big annual hacker conferences are coming up in Las Vegas, and many of the people giving talks there are telling the world now what they've uncovered.

As usual, the news is grim, if not downright terrifying—and it's especially bad this year if you own a mobile phone using Google's Android operating system. The “Stagefright” vulnerability, revealed this week, suggests that a hacker could remotely take control of another person's phone simply by sending a specially crafted multimedia message, such as a text with a video attached. In other cases the user would have to open the message. (The company that found the flaw, Zimperium, has posted instructions on how to prevent this with some newer phones.)


Naturally, the people who sell Android phones are racing to install software patches that will fix this potentially catastrophic flaw, right? Wrong. There's a chance—a near-certainty in many cases—that you'll never get a fix for your phone, because the companies that sell you phones and service care much more about their bottom lines than about your security. The situation has gotten so bad that it’s time to turn to government intervention, much as it pains me to say.

We need a law, with teeth. Sellers of phones and many other connected consumer devices should be required to provide timely security updates for a minimum of three years after a device goes on the market. Regulation should be done with the lightest possible touch, and it should steer clear of interfering with the technology itself. Enforcing such a law would not be simple, to put it mildly. But the current situation has to change.

The Android ecosystem is a freewheeling mess. This is good in many situations, because it spurs innovation and competition. Google, which created the operating system, made it mostly open source—free to download and modify—and gives it away to hardware manufacturers. They modify it before installing it on their phones, most of which are sold by telecommunications carriers such as Verizon, AT&T, Sprint, and T-Mobile. So when Google issues updates to Android, which it does on a regular basis, owners have to wait for the manufacturer and the carrier to a) test the update with their own modified versions of Android, and b) send over-the-air updates to users. If they ever do.

Apple's iOS devices, of course, are part of a tightly controlled ecosystem, and while Apple is far from perfect on security, it does update iPhones. But we shouldn't be required to turn over our computing and communications to control-freak companies in order to get necessary security updates.

Now, if you have a Google-branded phone such as a recent Nexus, you're safer than most, because Google sells them directly and updates them. (I use a phone running an Android variant called CyanogenMod, which is community-based and gets timely updates.)

If you're running an older Android phone, however, I have bad news: There's almost no chance that your device maker and/or carrier will send you an operating system update that repairs the Stagefright vulnerability. This isn't because they couldn't. The reality is that once they sold you the phone, anything they have to do to improve it is added cost; they would much rather have you buy a new one as soon as possible.

When businesses refuse to do what's necessary to provide customers even minimal safety, government has to step in. This is why regulators sometimes insist that car manufacturers recall their vehicles when flaws emerge.

The tech industry has been given a pass on all of this, in part because software is always a work in progress and is always going to have flaws. But once a flaw is identified, with code ready for updates, the updates should be made available, period.

It's not just phones where we need this. The home-router industry—companies making the devices that broadcast Wi-Fi signals throughout our homes—is notorious for its lax security practices and diffidence when it comes to fixing known flaws. Meanwhile, the Chrysler hack revealed last week should tell us that Internet-connected cars are, at this stage, an absolutely terrible idea; at least Chrysler is doing a (flawed) recall.

So far, the government has shown absolutely no interest in this issue. An ACLU security expert, Chris Soghoian, filed a complaint with the Federal Trade Commission more than two years ago, asking the consumer-protection agency to require Android updates. He got nowhere.

It's time for the FTC and others in Washington—hello, Congress—to pay attention. The technology and communications industries have made a deliberate decision to be neglectful of their customers' security. That doesn't mean government should be derelict, too.
