Websites Ending In “.AS” Have Been Vulnerable to Takeovers Since the '90s
Top-level domains like ".com" have been joined over the years by country-specific options—for instance, ".ly" for Libya. But these country-code domains are known for security issues. The latest is a problem with American Samoa's ".as" domain registry that for years has left sites using the suffix vulnerable to takeovers.
On Monday, British security blogger Infosec Guy published evidence of a vulnerability in the ".as" top-level domain that allows anyone to view and alter domain records, which include things like administrator contact information and plaintext passwords.
The flaw is a bug in how users can access site details through the registry. Users are only meant to look up details they are "allowed" to see, but the registry actually allows anyone to see anything. "A malicious attacker could quite easily modify any domain information—such as Nameservers—allowing them to take control of websites by redirecting their traffic to servers they control," Infosec Guy wrote.
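The bug Infosec Guy describes falls into a well-known class: an endpoint that trusts the caller to ask only for records they're "allowed" to see, without the server ever enforcing that check. A minimal sketch of the pattern—with entirely invented domains and users, since the registry's actual code is not public:

```python
# Hypothetical illustration of the class of flaw described above: a
# lookup endpoint that returns (and could modify) any domain record,
# regardless of who is asking. All names here are invented.

RECORDS = {
    "example.as": {"owner": "alice", "nameservers": ["ns1.example.net"]},
    "other.as":   {"owner": "bob",   "nameservers": ["ns1.other.net"]},
}

def get_record_vulnerable(domain, requesting_user):
    # BUG: returns the full record for ANY domain; the caller's
    # identity is accepted but never checked.
    return RECORDS.get(domain)

def get_record_fixed(domain, requesting_user):
    # FIX: only return the record if the caller actually owns it.
    record = RECORDS.get(domain)
    if record is None or record["owner"] != requesting_user:
        return None
    return record

# A malicious user can read alice's record through the vulnerable path...
assert get_record_vulnerable("example.as", "mallory") is not None
# ...but not through the fixed one.
assert get_record_fixed("example.as", "mallory") is None
```

With the vulnerable version, anyone who can reach the endpoint can read—and, in the write-path equivalent, rewrite—nameserver entries, which is exactly the traffic-redirection attack Infosec Guy warned about.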
Infosec Guy writes that he contacted the AS Domain Registry in January about the bug. Though he encountered some resistance, the group eventually patched it in February. A statement from the American Samoa Domain Registry says, "We fixed the potential issue back in February with the legacy Registrar system before any problems arose. There was never any potential for unauthorized changes to domain name information, as the Legacy Registrar system is a manual system."
One comment in the statement describes Infosec Guy's report as "inaccurate, misleading and sexed-up to the max." Naked Security, the blog by cybersecurity company Sophos, described the statement as "a belligerent press release."
American Samoa's top-level domain may seem somewhat obscure, but it's used by lots of big companies for URLs like a.did.as and twitter.as. If it had this vulnerability for almost two decades, a lot of data was at risk for a long time.
Trust but Verify: A Future Tense Event Recap
A year ago, researchers from the University of Virginia published the findings from their attempt to repeat 100 published psychological experiments to see whether they could get the same results. They concluded that only one-third to one-half of the original findings yielded similar outcomes. This study, known as the Reproducibility Project, drew attention to the fact that a great deal of scientific research produced today is not reproducible. Even the Reproducibility Project’s own results were challenged when put to the test by other researchers. The ability to reaffirm findings is an essential part of the scientific method. Yet the reality is that billions of dollars fund studies whose results don’t hold up when put under the microscope again.
On April 21, Future Tense—a partnership of Slate, New America, and Arizona State University—hosted an event in Washington, D.C., to discuss this crisis in biomedical research. The two conversations featured scientists, researchers, and journalists who are addressing the problem in diverse ways. It quickly became clear that this crisis is not isolated to any one field of research—and despite increased attention to the issue this past year, it’s not new.
Lawrence Tabak, principal deputy director of the National Institutes of Health, noted that he first became aware of this problem more than 35 years ago as an assistant professor working on a major grant application. After failed attempts to replicate a key study done by a prominent scientist, Tabak realized the dilemma he encountered went beyond the replication of data and results; rather, it hinted at a larger cultural problem in the scientific enterprise. According to Tabak, perverse incentives continue to allow, if not encourage, scientists to get away with publishing invalid research. In the publish-or-perish world of research institutions, professional development often depends on a scientist’s ability to produce exciting results at a rapid pace. Furthermore, Brian Nosek, executive director and co-founder of the Center for Open Science, which coordinated the Reproducibility Project, pointed out that reproducing studies will create transparency and higher scrutiny of the scientific process, which could encourage researchers to value quality of publication, rather than quantity.
It’s difficult to change one’s approach to the scientific process and incentives midcareer—which is why it’s important to think about students in these conversations. Emma Frow, assistant professor in the School of Biological and Health Systems Engineering and the School for the Future of Innovation in Society at Arizona State, believes that science education can instill new values in future generations who are not yet under the pressure to publish research. The goal is to create a community-oriented culture and an incentive structure that emphasizes process, not prestige.
But the current reproducibility problem isn’t just about researchers scrambling to get ahead; other variables can affect the reliability of results. For instance, Carolyn Compton, professor of life science at ASU and former pathologist, discussed how the specimens used in labs are often unregulated, leaving their consistency and quality unreliable. According to Compton, we often know more about the beef in our supermarkets—thanks to regulation by the FDA—than we know about the samples that are at the heart of bioresearch. In studies that require precise measurements and astute attention to detail, this kind of inconsistency influences not only the results of a study but also its ability to be reproduced.
Yet Arturo Casadevall, who is the Alfred & Jill Sommer professor and chair of the W. Harry Feinstone Department of Microbiology and Immunology at Johns Hopkins University, argued that uncertainties are just par for the course. Casadevall reminded the audience that “all science is provisional,” and as scientists continue to investigate and learn, their approaches change, as do their results. Therefore, we should not search for the “perfect” research studies as much as we should bring critical attention to all of them, with emphasis on method and standards. Richard Harris, visiting scholar at ASU’s Consortium for Science, Policy & Outcomes and science correspondent on leave from National Public Radio to write a book on the crisis in reproducibility, encourages us to embrace doubt and understand that unknowns are part of the scientific process. The infrastructure surrounding scientific inquiry may be flawed, but it’s our search for the truth that directs our ever-evolving inquiry into the known and unknown.
You can watch the full event on New America's website. Also in Slate:
- "The Unintended Consequences of Trying to Replicate Research," by Ivan Oransky and Adam Marcus
- "The Reproducibility Crisis Is Good for Science," by Monya Baker
- "Cancer Research Is Broken," by Daniel Engber
Why Does It Still Take Five Hours to Fly Cross-Country?! A Future Tense Event.
In the early 1960s, President John F. Kennedy pledged that Americans would go to the moon and develop a supersonic commercial airliner. By the end of that decade, the country witnessed in awe Neil Armstrong’s “small step for man.” It was the idea of supersonic intercity travel that proved the unattainable “moonshot.” A half-century after Kennedy’s promise, with the European Concorde in retirement and no American supersonic plane ever cleared for takeoff, airliners still travel at the same speed as did President Kennedy’s 707 Air Force One.
We like to talk about the dizzying rate of technological change these days, but when it comes to intercity travel, we’re stuck back in 1959, when the 707 made its inaugural transcontinental flight. Why is that? And are we now on the eve of startling innovations in flying, or will it still take five hours to fly across the country in 2059? Join Future Tense for lunch in Washington, D.C., on Wednesday, May 11, to discuss these questions, and the future of aviation. The agenda is below; for more information and to RSVP, visit the New America website.
Lunch will be served.
National correspondent, the Atlantic
Author, China Airborne
Author, Free Flight: From Airline Hell to a New Age of Travel
12 p.m.: Intro: Living Large and Flying High in the Jet Age, Circa 1959
Editorial director, Future Tense
12:10 p.m.: What a Drag That Supersonic Boom: The Impediments to Supersonic Flight for the Rest of Us
Deputy administrator, NASA
Apollo Program professor of aeronautics and astronautics and engineering systems, Massachusetts Institute of Technology
Chief scientist of the U.S. Air Force
12:40 p.m.: I’ll Sell You a Futuristic Plane, if You’re Willing to Pay for It
Vice president, analysis, Teal Group Corp.
Vice president, head of research and technology for North America, Airbus Group Innovations
Founder, Lightcraft Technologies Inc.
Founder and CEO, Boom Technology Inc.
1:40 p.m.: Cleared for Takeoff? Our Strained Aviation System’s Capacity and Design
CTO, Resilient Ops Inc.
Associate principal, Arup Group
Chief of staff, Federal Aviation Administration
2:30 p.m.: Parting Thoughts: Air Travel, Circa 2059
House Unanimously Passes Bill to Safeguard Email Privacy
The Electronic Communications Privacy Act of 1986 is from, well, the '80s. So naturally it wasn't written with the current digital climate in mind. As a result, it allows for some things that have become problematic, like warrantless government access to old emails stored on a server. Not good.
On Wednesday, though, the House indicated that it understands the ECPA's current Fourth Amendment issues by voting unanimously to amend the law. The new Email Privacy Act would also protect other types of digital communications stored in the cloud like instant messages and texts.
With Congress, change often takes years, and many states—like California, Texas, Utah, and Maine—already have digital privacy laws that have essentially been acting as stopgaps. The bill must still pass the Senate and reach the president before it can be enacted, but the House's 419-0 vote will hopefully give it some sway.
The bill had bipartisan support, championed by Sens. Patrick Leahy, D-Vermont, and Mike Lee, R-Utah, who told Reuters on Wednesday that the vote was "an historic step toward updating our privacy laws for the digital age."
Some don't think the bill goes far enough, though. Mike Godwin wrote in Slate in September that "If federal legislators truly want to bring the ECPA up to date, they shouldn’t stop with updating a statute that was flawed even at its passage in 1986. Instead, they should revisit the ECPA’s constitutional roots."
Hopefully small victories are still better than nothing.
Encryption Technology Could Help Corporate Fraudsters. We Still Need to Fight for It.
Early this week, James Clapper, the head of U.S. intelligence, complained to journalists that Edward Snowden’s whistleblowing (my word, not Clapper’s) had sped up wider use of encryption by seven years. That’s great. Now let’s speed it up even more.
Not so fast, of course. Lots of powerful people don’t just want to hinder the adoption of strong encryption. They actively want to derail it.
Which is why, in several recent cases involving Apple phones, the FBI appealed not just to a judge but to public opinion. It warned that national security depended on being able to crack the late San Bernardino killer’s employer-supplied phone. Then it paid hackers more than $1 million to find a flaw in Apple’s operating system. It demanded access to another phone in New York state but ultimately obtained the passcode by other means. These cases went away, but not the issues they raised.
Law enforcement has only escalated its fear-mongering about going dark—being unable, due to strong encryption, to understand what surveillance targets are saying to one another or decipher the data on their devices. Members of Congress are proposing new legislation that would require tech companies to compromise the security of devices and software. The 1990s “Crypto War” over mobile phones is being replayed with new ferocity, this time in a much larger arena. The rhetoric has inflated commensurately.
The specter of terrorists communicating freely may have been the government’s ace in the Apple case. But it’s hardly the only card that people in authority can play. And we’re seeing the outlines of their strategy as the nation finally starts to debate what is truly a binary issue.
Encryption is binary because it’s either strong or it isn’t. A backdoor makes it weak. The experts in the field are clear on this: Weak or crippled encryption is the same thing as no encryption. So the choice we’re going to make as a society is whether we can securely communicate and store our information, or whether we cannot.
Those of us who believe we must protect strong encryption have to acknowledge some potentially unpleasant realities if our side wins this debate. If you and I can protect our conversations and data, so can some very bad people.
President Obama foreshadowed the next phase of the crypto fight in Austin last month. Speaking at the South by Southwest conference, he talked about, among other horribles, a “Swiss bank account in your phone” that strong encryption might create.
Consider the banksters who rigged interest rates, stealing billions from the financial system and eroding yet more trust in our vital institutions. They were caught because they were brazen and stupid in their communications, which provided evidence of their conspiracy. Do we want to make life easier for corporate criminals? (Of course, given our government’s notoriously soft-on-corporate-crime stance in recent years, at some level impunity already is the default.)
Do we want public officials to have easy ways to violate open-government laws? In many states, for example, members of city councils are required in most circumstances to communicate in ways that will leave a public record. Should we effectively invite the venal ones to cheat?
That latter scenario resonates with me for many reasons, including this: I'm on the board of the California-based First Amendment Coalition, which fights for free speech and open records. I can’t be specific, but we're currently involved in a case featuring members of a city council whose email communications are being withheld from the public. What happens when—not if—they take these communications to Signal, a text-messaging and voice system that encrypts everything, or to PGP-encrypted email, where even if we get the records we won't be able to read them?
Given these potential problems, it’s tempting to be sympathetic with the law enforcement position on encryption—but history is clear that we can’t trust the government in this arena. As Harvard law professor Yochai Benkler wrote recently, our law enforcement and national security institutions have routinely—and with the impunity so routinely assumed by the rich and powerful—lied, broken laws, and thwarted oversight. “Without commitment by the federal government to be transparent and accountable under institutions that function effectively, users will escape to technology,” he wrote, and as a result we are learning to depend on technology.
Which also means, in the absence of remotely trustworthy government, we’re going to have to be honest with ourselves about the potential harms to transparency and accountability in a strong-crypto world. Yes, some tools make it easier to commit crimes. Yes, the Bill of Rights (or what’s left of it) makes things less convenient for the police. Yes, we take more risks to have more liberty; that’s the crucial bargain we struck with ourselves at the nation’s founding.
As Oregon Sen. Ron Wyden explained at a recent gathering of technology and human rights activists, it would be much more dangerous to force backdoors or other crippling of technology. And as he said in response to truly idiotic anti-security legislation from two powerful senators, “it will not make us safer from terrorists or other threats. Bad actors will continue to have access to encryption, from hundreds of sources overseas. [And it] will empower repressive regimes to enact similar laws and crack down on persecuted minorities around the world.”
As so many others have said, this is not about security versus privacy. It is about security versus a lack of security—for all of us. That’s going to cause some discomfort, but liberty has a way of doing that.
Hundreds of Active Spotify Credentials Showed Up Online. Here's How to Protect Yourself.
On Monday, TechCrunch reported on a list of working Spotify credentials that had shown up on the text hosting site Pastebin. The dump contained email addresses, usernames, current passwords, and other information like account type.
When TechCrunch reached out to Spotify users on the list, they confirmed that the information about them and their accounts was accurate. Many noticed strange activity on their Spotify accounts, and some even had to contact Spotify customer service when they were locked out by someone changing their account email address.
The list's origins remain unknown. It could have been put together based on old Spotify hacks, or there could be a new breach in Spotify's network. The company denies this, though. When asked on Wednesday whether there was any update about the situation, Spotify provided the same statement it has been circulating since Monday:
Spotify has not been hacked and our user records are secure. We monitor Pastebin and other sites regularly. When we find Spotify credentials, we first verify that they are authentic, and if they are, we immediately notify affected users to change their passwords.
One possible explanation is that hackers acquired login data from other companies' data breaches, and tried them against Spotify's login portal until they found ones that worked (meaning credentials that customers had reused on multiple services). "It looks like a leak that used stolen credentials from another breach—people tend to reuse the same passwords. With that said, there's no way anybody can really know unless Spotify confirms it," said Michael Borohovski, the co-founder of Web security company Tinfoil Security. "It's fairly common. Attackers seek out services that don't support 2-factor [authentication] so that they can run lists against them."
Though we can't know for sure that this strategy is the cause of the problem, it's a likely candidate if Spotify is adamant that it didn't have an internal breach. Regardless, Spotify users would be much better protected if the company offered two-factor authentication.
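Borohovski's theory—replaying username/password pairs leaked from one breach against another service's login—can be sketched in a few lines. Every account and password below is invented; this is only an illustration of why reuse is the weak link:

```python
# Hypothetical sketch of "credential stuffing": pairs leaked from one
# service's breach are replayed against a second, unbreached service.
# Only users who reused the same password are compromised.
import hashlib

def _hash(pw):
    # Stand-in for a real slow password hash (e.g., bcrypt); services
    # store hashes, not plaintext passwords.
    return hashlib.sha256(pw.encode()).hexdigest()

# The target service's stored credentials (invented users).
SERVICE_DB = {
    "carol@example.com": _hash("hunter2"),
    "dave@example.com": _hash("correct-horse"),
}

# Pairs leaked in some *other* company's breach.
LEAKED = [
    ("carol@example.com", "hunter2"),      # carol reused her password
    ("dave@example.com", "password123"),   # dave did not
]

def login(email, password):
    stored = SERVICE_DB.get(email)
    return stored is not None and stored == _hash(password)

# The attacker simply replays every leaked pair.
compromised = [email for email, pw in LEAKED if login(email, pw)]
print(compromised)  # only the reused password succeeds
```

This is also why two-factor authentication matters: even when the replayed password is correct, the attacker still lacks the second factor, so the stuffed credential alone doesn't open the account.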
Some companies seem to be taking proactive steps to discourage their users from reusing passwords. On Monday, a Slate employee (who had been using the same password for Spotify and Amazon) received a security email from Amazon:
As part of our routine monitoring, we discovered a list of email addresses and passwords posted online. While the list was not Amazon-related, we know that many customers reuse their passwords on multiple websites. Since we believe your email addresses and passwords were on the list, we have assigned a temporary password to your Amazon.com account out of an abundance of caution.
Amazon has not yet responded to a request for clarification on whether the list mentioned in the email is in fact the Spotify list, but it's a positive practice either way. You've surely heard the mantra by now: Use strong, unique passwords for all of your accounts, consider a password manager, and enable two-factor authentication everywhere you can. Luckily it was only hundreds of users this time—we know it can be far worse.
Future Tense Newsletter: Responsible Robots, Mechanical Doping, and Educational Technology
Greetings, Future Tensers,
Who’s responsible if a robot murders its owner? That’s the central question posed by “Mika Model,” a new short story by Paolo Bacigalupi that we’re excited to have published for our new Future Tense Fiction project, a joint effort with Arizona State University’s Center for Science and the Imagination. Also part of our Futurography unit on killer artificial intelligence, Bacigalupi’s story begins when an advanced sex robot turns itself over to the police for decapitating its owner. From there, a complex set of issues emerges that suggests the real danger of A.I. may not be what it will do to us but the ways we’ll relate to it.
Ryan Calo, an expert in robotics law, begins to unpack some of those questions in an essay responding to Bacigalupi’s story. Calo notes that it’s best not to anthropomorphize robots, even as he acknowledges that it’s sometimes impossible not to. Such slippages can only make it more difficult to assign blame. While interviewing A.I. researcher Stuart Russell last week, I learned that living with the computers of the future may mean living with such uncertainties. As Russell suggested to me, struggling with the values we impose on computers may mean coming to terms with what we value ourselves, a premise that’s also at the heart of Bacigalupi’s tale.
Here are some of the other stories that we read while contemplating how much the FBI paid to hack a phone:
- Mechanical doping: Apparently some professional cyclists are installing tiny motors in their bikes to win races. Because performance-enhancing drugs aren’t enough.
- Educational technology: Phones certainly seem like a distraction in the classroom, but one app may actually help students stay focused and meet their goals.
- Social networking: Has Facebook peaked? Will Oremus explores how the site is changing, becoming a platform for news and other information, rather than one for personal details.
- Cybersecurity: A database containing personal records for 87 million Mexican voters found its way online. That’s a lot of personal records!
Updating my firmware,
for Future Tense
87 Million Mexican Voter Records Discovered in Unprotected Online Database
Hacks and data breaches are a ubiquitous threat these days, but malicious actors don't always need to put in a lot of work to mine valuable personal data. Sometimes they can go right in the front door of an unprotected database. The latest example is a trove of Mexican voter registrations discovered by a security researcher a few weeks ago. And it wasn't a minor list. The database had personal information for 87 million Mexicans—out of a population of more than 120 million.
Security researcher Chris Vickery, of the software company MacKeeper, discovered the database on April 14. Vickery is the researcher who discovered the Hello Kitty Sanrio database leak in December. He followed that up about 10 days later with the discovery of an unprotected database that contained records for 191 million U.S. voters.
As with the latest Mexican leak, voter data generally doesn't contain citizen IDs (like social security numbers) or credit card numbers, but it does often have addresses, birthdays, voter ID numbers, and other personal information that could help bad actors construct phishing schemes or do other social hacking.
The Mexican database was taken down over the weekend, but Vickery had to work for a few days to notify the correct Mexican authorities. The Mexican National Electoral Institute released a statement on Friday noting that it has launched an internal investigation and notified the prosecutor for electoral crimes. Amazon Web Services, which was hosting the database, told BBC News that "As of 1:00 am on April 22, this database was no longer publicly accessible."
Vickery told Ars Technica UK, "The Mexican government says that when they give out these data sets, each set is 'watermarked.' ... That makes it possible to determine who was responsible for the set that got leaked. So, soon enough we'll at least know which non-governmental authority was responsible for the particular data that was leaked."
Deploying strong cybersecurity measures for sensitive personal data is clearly necessary as hacks and breaches ramp up. These unprotected databases, though, don't even put a password between valuable data and potential bad actors. As awareness about data security grows, even small protective steps are important.
The FBI Paid More Than $1.3 Million to Unlock the San Bernardino iPhone. Is That a Good Deal?
After spending months in court attempting to compel Apple to unlock the iPhone used by one of the San Bernardino shooters, the FBI eventually paid a third party to do it instead. The most important part of the saga is probably the ideological questions it raised about privacy and security ... but let’s be real, we’re all curious about how much the FBI spent to solve the problem.
At the Aspen Security Forum in London on Thursday, FBI director James Comey hinted at the amount, saying the bureau paid more to have the phone unlocked than he will make during the seven years and four months he has left in his 10-year term leading the bureau. Reuters estimates Comey’s remaining earning potential at the FBI is $1.34 million. Comey characterized the sum of money the bureau paid as “a lot,” but added, “It was, in my view, worth it.”
To put that in perspective, startup Zerodium offered a $1 million bounty to anyone who could hack iOS 9; that bounty was claimed back in November. Zooming out, Bank of America Merrill Lynch estimates that the cybersecurity defense market was $75 billion in 2015 and will grow to $170 billion by 2020. And banks like J.P. Morgan Chase, Citibank, and Wells Fargo are spending hundreds of millions of dollars per year on digital security.
The enacted 2016 FBI budget is about $8.8 billion. If the bureau spent roughly $1.3 million to hack the iPhone, that would account for only about 0.015 percent of its annual spending.
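The back-of-the-envelope math can be checked directly (treating both figures as the rough approximations they are):

```python
# Rough share of the FBI's annual budget spent on the iPhone hack.
payment = 1.3e6   # reported ~$1.3 million paid to unlock the phone
budget = 8.8e9    # enacted 2016 FBI budget, ~$8.8 billion

share = payment / budget * 100  # as a percentage
print(round(share, 3))  # ≈ 0.015 percent
```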
Did the FBI get a good deal? CNN Money speculated in February that it would have only cost Apple $101,000 to crack the phone, but added that the company would have needed to spend millions of dollars to protect the tool it created.
Also, keep in mind that the Navy paid Microsoft $9 million last year to continue supporting Windows XP on its networks. We don’t know whether there was any worthwhile information on the San Bernardino iPhone, but the bar for justifying government tech spending seems to be pretty low.
EU Brings More Antitrust Charges Against Google for Pushing Its Mobile Services
A year ago, EU competition commissioner Margrethe Vestager announced antitrust charges against Google related to the ubiquity of the company’s web products like search. On Wednesday, Vestager announced similar charges related to Google’s Android mobile operating system, which dominates the international smartphone market.
Vestager presented a “statement of objections,” which outlines Android’s power to steer users toward Google services like search and the Chrome browser, allegedly making it difficult for competing services to gain any traction. The commission estimates that 80 percent of internet-connected mobile devices in Europe run Android, which Google licenses to manufacturers. The licensing agreement requires pre-installation of Google Search and the Chrome browser and includes financial incentives for doing so.
Vestager said in a statement on Wednesday, “Based on our investigation thus far, we believe that Google’s behaviour denies consumers a wider choice of mobile apps and services and stands in the way of innovation by other players, in breach of EU antitrust rules.”
Google will have 12 weeks to prepare an official response, and the New York Times reports that a decision about the separate but related antitrust charges from last year should also come out in the next few months. Google senior vice president and general counsel Kent Walker wrote in a statement on Wednesday, “We take these concerns seriously, but we also believe that our business model keeps manufacturers’ costs low and their flexibility high, while giving consumers unprecedented control of their mobile devices.”
The outcomes from these and other complaints brought by the EU competition commission may be influential in other markets, as tech giants continue to expand on mobile. For example, in the United States, the Federal Trade Commission conducted a significant antitrust probe of Google after other large companies complained that Google was skewing its search results to favor its own products and services. The FTC eventually settled with Google in 2013, but did have findings about the company’s anticompetitive behavior that have been slowly leaking.
Precedent from other countries could reopen or ignite controversies outside the EU. And decisions that aren’t in Google’s favor could threaten its business model for Android, which has made $31 billion in revenue for Google, according to Oracle Corp. Google makes the vast majority of its money from ads—for example, $19.1 billion of $21.2 billion in the last quarter of 2015—so tighter controls on how the company presents its services or weights its search algorithm could cut into the company’s advertising reach, significantly affecting revenue.