In Praise of Email Debt Forgiveness Day
We live in an age of Days.
On the internet, invented occasions proliferate, some inviting awareness, others memorializing events that never interested us in the first place. The most important will get Google doodles, and even the dopiest sometimes merit recognition on Wikipedia. This May alone, we’ll be asked to honor Star Wars Day (the 4th), Towel Day (the 25th), and many more. Our trajectory is clear enough: Before long we’ll be celebrating Day Awareness Day, dutifully familiarizing ourselves with under-recognized annual occurrences.
But if the calendar is cluttered, my email inbox is more so, a thorny thicket of unanswered missives buried beneath unwanted offers. I know I am not alone in this: Though some of my friends—and some Slate editors—brag of their adherence to the Cult of the Inbox Zero, others are drowning. When I inquired about the topic on Facebook, one posted a queasifying picture of his primary Gmail inbox, letting the number—87,946 unread messages—speak for itself. I count only 68 (read and unread) in my personal account, but each is a lodestone.
It’s the consciousness of this burden that leads me to reluctantly endorse—praise, even!—Email Debt Forgiveness Day, which makes its second appearance on April 30. As Reeves Wiedeman explains in The New Yorker, Email Debt Forgiveness Day is the creation of Alex Goldman and P.J. Vogt, hosts of the internet culture podcast Reply All. On their show’s website, Goldman and Vogt write, “If there’s an email response you’ve wanted to send but been too anxious to send, you can send it on April 30th, without any apologies or explanations for all the time that has lapsed.” In lieu of further details about the delay, they invite you to simply link to their own explainer of the holiday.
If this feels necessary, it may be because the emails we fail to send often matter to us most. “The emails that I really want to respond to in a thoughtful way—put some time and heart into—are the ones I leave the longest, or in many cases, don’t end up answering,” my friend Mindi tells me. Like her, by the time I’ve called up an adequate response, I find that I’m paralyzed by the time that’s passed. A recent episode of Reply All goes deep into some such stories, tales of unresolved heartbreak and unexpected connection.
To be sure, this guilt isn’t the invention of email. Tucked away in a drawer somewhere, I probably still have a letter from one Brenna P.—a letter that she sent when we were 9, one that overwhelmed me even then, so much so that I never wrote back. Shame has always clung to human connection, a constant reminder that we’re never as good to others as we’d like to be. Digital communication hasn’t created this problem, then, but it may have intensified it, confronting us with the fact of our failures and assigning it a numerical value.
Email Debt Forgiveness Day isn’t right for everyone, of course. One Slate employee told me that she tried it last year, only to receive baffled responses to her belated replies. An ex mistakenly thought she was trying to get back together with him, she said, while other correspondents were just offended. Lesson learned: If you plan to celebrate the holiday this year, don’t expect everyone to understand.
Still, as a reminder that we’re not alone in our guilt, Email Debt Forgiveness Day can provide a much-needed push. It might not be as socially important as, say, World Turtle Day, but in encouraging us to clear up some of the emotional clutter in our lives, Email Debt Forgiveness Day might free us up to be a little more conscientious about everything else.
This Website Is a Rube Goldberg Machine Made of HTML Components
To differentiate a Rube Goldberg machine these days, it usually needs to be big and flashy. You have to set something on fire. But the beauty of this digital Rube Goldberg by Sebastian Ly Serena is its subtle humor. You set it off and it very quietly races toward a satisfying conclusion.
The cascade happens within HTML components that are usually used in forms—check boxes, dropdown menus, sliders, text fields. With one click you start a chain of blue checks that eventually leads to a button for emailing Serena. The forward momentum feels real.
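The mechanics are easy to sketch. Below is a hypothetical miniature in plain JavaScript (not Serena's actual code, which wires up real HTML form elements in the browser): a chain of objects stands in for the checkboxes, and checking one triggers the next.

```javascript
// Hypothetical sketch of the cascade idea, not Serena's actual code.
// Each "checkbox" checks itself, then triggers its neighbor; the last
// one fires the onDone callback, standing in for the email button.
function makeChain(length, onDone) {
  const boxes = Array.from({ length }, () => ({ checked: false }));
  boxes.forEach((box, i) => {
    box.check = () => {
      if (box.checked) return;    // already fired; stop
      box.checked = true;         // the blue check appears
      if (i + 1 < boxes.length) {
        boxes[i + 1].check();     // knock over the next domino
      } else {
        onDone();                 // end of the line
      }
    };
  });
  return boxes;
}

let reachedEnd = false;
const chain = makeChain(5, () => { reachedEnd = true; });
chain[0].check(); // one click sets off the whole cascade
console.log(chain.every(b => b.checked), reachedEnd); // true true
```

The real page presumably does something similar with DOM change events, just with far more components and far more charm.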
As BoingBoing points out, some commenters on Hacker News criticized the project because the code that underlies it doesn't mirror a cascade effect in itself. Commenter tonyle said, "This is really cool, though I really wish I didn't look at the source code."
So maybe just take this Rube Goldberg machine at face value. It's a pretty clever way to give people your contact information.
Singapore Wants to Use Satellites to Collect Drivers’ Tolls. Will the U.S. Follow?
Traffic is an abomination. About 1.3 million people die in motor vehicle accidents each year worldwide, psychology studies repeatedly point to commute times as among the most significant predictors of unhappiness, and transportation is a significant driver (no pun intended) of all sorts of air pollution—indeed, it accounts for 70 percent of American oil consumption. The scourge of traffic explains at least half the boosterism surrounding Tesla and Google’s autonomous cars—who needs to honk horns if cars can just bicker in binary?
But important high-tech transportation developments are often more humdrum than driverless vehicles. For example, Singapore is moving forward with a plan to implement satellite road pricing, a wireless tolling scheme that charges vehicles for distance traveled rather than for passing through a toll booth. Singapore plans to launch its satellite road pricing system in 2020 at a cost of $395 million. The technology requires vehicles to carry tracking devices that charge tolls directly to drivers’ accounts, rendering physical toll booths unnecessary and freeing up both road space and maintenance costs. The tracking devices can easily be configured to support pay-for-parking systems, as well as to beam traffic updates or route guidance to drivers via a mobile app or vehicle console. E-ZPass, the widely used American electronic toll-payment device you might have on your dashboard, is the germ of satellite road pricing: E-ZPass removed the need to drop coins in a toll booth, while satellite road pricing makes it unnecessary to pass through a booth at all.
Road pricing, as a general concept, is ostensibly about reducing traffic congestion: Setting a market price for public roads ensures demand (cars on the road) is responsive to the set supply (lanes on the highway). Citizens will drive only if the toll matches or is less than their willingness to pay—otherwise, they’ll turn to public transportation or teleworking, or they’ll put off their trip until the price falls, thereby reducing congestion. Road pricing also redirects motorists to longer but cheaper and less congested routes, and some models give motorists discounts when they choose such alternatives. Tolls can adjust by the hour to traffic patterns, which reduces congestion more efficiently than other methods, like carpool lanes, which can’t easily and quickly adapt. Road pricing has successfully reduced congestion in cities like London, and environmentalists often like the policy because it discourages driving and offsets public transportation costs.
With satellites, however, road pricing becomes far more efficient, since the system can track congestion in real time. But while satellite road pricing has the potential to dramatically reduce traffic congestion and raise revenue, road pricing has proven politically infeasible in many cities, like New York and Edinburgh—it’s “the idea that economists love, but that ordinary people hate.” Politicians often pilot and launch road pricing incrementally and quietly, “for fear of being labelled anti-car.” Now, with satellites in the mix, privacy implications have left many municipalities even warier of adoption.
Satellite road pricing has been successfully trialed in the European Union, a region especially sensitive to privacy concerns. But in a post-Snowden and post-OPM-hack world, public confidence in the government’s ability to protect data has eroded. The transition from physical booths to connected devices opens satellite road pricing systems to hacking: by external attackers, perhaps seeking to follow a vehicle; by drivers themselves, who can jam tracking devices with tools that cost about $200; and by governments, which might use the system for unwarranted surveillance, a concern publicly expressed by Singapore’s political opposition. The city has responded to these concerns with assurances that the data will be “aggregated and anonymized,” a reply that oversimplifies the nature of data protection. Who or what will anonymize the data, and when will that occur in the process? How can motorists be charged for their driving without their data being tagged to their identity? What policies will be put in place to ensure data is protected? Will other government agencies have access to the back end?
If satellite road pricing is to come to the United States, it will have to reckon with the many privacy concerns raised over the years about E-ZPass. On television, law enforcement characters in Law & Order: Special Victims Unit have used E-ZPass records to corroborate suspects’ stories. In real life, E-ZPass records are used to check the claims of individuals who deny New York City residency to avoid paying the city’s steep income tax. And in 2015, the New York Civil Liberties Union, in partnership with a privacy activist known as “Puking Monkey,” discovered that E-ZPasses were tracked at places without visible toll cameras across New York City. The city responded that the additional E-ZPass readers were used in traffic management studies, but at the very least, motorists had no idea they were participating in those studies. Reservations about data collected by E-ZPass would surely be exacerbated in a satellite road pricing context—E-ZPass documents vehicles only when they’re in the presence of readers, while satellites could presumably track a vehicle at all times, anywhere on the road.
The privacy debate over satellite road pricing overlaps with similar concerns about cars connected to any online network, especially driverless cars that rely on the network to, well, drive. Of course, there will be little need for congestion reduction in a world where driverless cars can automatically reroute. But it’s worth noting that much of the debate around transportation innovation is, at heart, about one question: How does one cede control of an activity, like driving, that has always been micromanaged by the self? Satellite road pricing won’t reduce commuting stress if tolls only make us angrier.
Websites Ending In “.AS” Have Been Vulnerable to Takeovers Since the '90s
Over the years, familiar top-level domains like ".com" have been joined by country-code domains—for instance, ".ly" for Libya—that are often repurposed for catchy URLs. But these domains are known for security issues. The latest is a problem with American Samoa's ".as" domain registry that for years has left sites using the suffix vulnerable to takeovers.
On Monday, British security blogger Infosec Guy published evidence of a vulnerability in the ".as" top-level domain that allows anyone to view and alter domain records, which include things like administrator contact information and plaintext passwords.
The flaw is a bug in how users can access site details through the registry. Users are only meant to look up details they are "allowed" to see, but the registry actually allows anyone to see anything. "A malicious attacker could quite easily modify any domain information—such as Nameservers—allowing them to take control of websites by redirecting their traffic to servers they control," Infosec Guy wrote.
Infosec Guy writes that he contacted the AS Domain Registry in January about the bug. Though he encountered some resistance, the group eventually patched it in February. A statement from the American Samoa Domain Registry says, "We fixed the potential issue back in February with the legacy Registrar system before any problems arose. There was never any potential for unauthorized changes to domain name information, as the Legacy Registrar system is a manual system."
One comment in the statement describes Infosec Guy's report as "inaccurate, misleading and sexed-up to the max." Naked Security, the blog by cybersecurity company Sophos, described the statement as "a belligerent press release."
American Samoa's top-level domain may seem somewhat obscure, but it's used by lots of big companies for URLs like a.did.as and twitter.as. If it had this vulnerability for almost two decades, a lot of data was at risk for a long time.
Trust but Verify: A Future Tense Event Recap
A year ago, researchers from the University of Virginia published the findings from their attempt to repeat 100 published psychological experiments to see whether they could get the same results. Only one-third to one-half of the original findings held up. This study, known as the Reproducibility Project, drew attention to the fact that a great deal of scientific research produced today is not reproducible. Even the Reproducibility Project’s own results were challenged when put to the test by other researchers. The ability to reaffirm findings is an essential part of the scientific method. Yet the reality is that billions of dollars fund studies whose results don’t hold up when put under the microscope again.
On April 21, Future Tense—a partnership of Slate, New America, and Arizona State University—hosted an event in Washington, D.C., to discuss this crisis in biomedical research. The two conversations featured scientists, researchers, and journalists who are addressing the problem in diverse ways. It quickly became clear that this crisis is not isolated to any one field of research—and despite increased attention to the issue this past year, it’s not new.
Lawrence Tabak, principal deputy director of the National Institutes of Health, noted that he first became aware of this problem more than 35 years ago as an assistant professor working on a major grant application. After failed attempts to replicate a key study done by a prominent scientist, Tabak realized the dilemma he encountered went beyond the replication of data and results; rather, it hinted at a larger cultural problem in the scientific enterprise. According to Tabak, perverse incentives continue to allow, if not encourage, scientists to get away with publishing invalid research. In the publish-or-perish world of research institutions, professional advancement often depends on a scientist’s ability to produce exciting results at a rapid pace. Meanwhile, Brian Nosek, executive director and co-founder of the Center for Open Science, which coordinated the Reproducibility Project, pointed out that routinely reproducing studies would bring greater transparency and scrutiny to the scientific process, which could encourage researchers to value the quality of their publications over the quantity.
It’s difficult to change one’s approach to the scientific process and incentives midcareer—which is why it’s important to think about students in these conversations. Emma Frow, assistant professor in the School of Biological and Health Systems Engineering and the School for the Future of Innovation in Society at Arizona State, believes that science education can instill new values in future generations who are not yet under the pressure to publish research. The goal is to create a community-oriented culture and an incentive structure that emphasizes process, not prestige.
But the current reproducibility problem isn’t just about researchers scrambling to get ahead; other variables can affect the reliability of results. For instance, Carolyn Compton, professor of life science at ASU and former pathologist, discussed how the specimens used in labs are often unregulated, leaving their consistency and quality unreliable. According to Compton, we often know more about the beef in our supermarkets—thanks to federal regulation—than we know about the samples that are at the heart of bioresearch. In studies that require precise measurements and astute attention to detail, this kind of inconsistency influences not only the results of a study but also its ability to be reproduced.
Yet Arturo Casadevall, the Alfred & Jill Sommer professor and chair of the W. Harry Feinstone Department of Molecular Microbiology and Immunology at Johns Hopkins University, argued that uncertainties are just par for the course. Casadevall reminded the audience that “all science is provisional,” and as scientists continue to investigate and learn, their approaches change, as do their results. We should therefore not search for “perfect” research studies so much as bring critical attention to all of them, with an emphasis on methods and standards. Richard Harris, a visiting scholar at the Consortium for Science, Policy & Outcomes at ASU and a science correspondent on leave from National Public Radio to write a book on the reproducibility crisis, encourages us to embrace doubt and understand that unknowns are part of the scientific process. The infrastructure surrounding scientific inquiry may be flawed, but it’s our search for truth that directs our ever-evolving inquiry into the known and unknown.
You can watch the full event on New America's website. Also in Slate:
- "The Unintended Consequences of Trying to Replicate Research," by Ivan Oransky and Adam Marcus
- "The Reproducibility Crisis Is Good for Science," by Monya Baker
- "Cancer Research Is Broken," by Daniel Engber
Why Does It Still Take Five Hours to Fly Cross-Country?! A Future Tense Event.
In the early 1960s, President John F. Kennedy pledged that Americans would go to the moon and develop a supersonic commercial airliner. By the end of that decade, the country watched in awe as Neil Armstrong took his “small step for man.” It was the idea of supersonic intercity travel that proved the unattainable “moonshot.” A half-century after Kennedy’s promise, with the European Concorde in retirement and no American supersonic plane ever cleared for takeoff, airliners still travel at the same speed as President Kennedy’s 707 Air Force One did.
We like to talk about the dizzying rate of technological change these days, but when it comes to intercity travel, we’re stuck back in 1959, when the 707 made its inaugural transcontinental flight. Why is that? And are we now on the eve of startling innovations in flying, or will it still take five hours to fly across the country in 2059? Join Future Tense for lunch in Washington, D.C., on Wednesday, May 11, to discuss these questions, and the future of aviation. The agenda is below; for more information and to RSVP, visit the New America website.
Lunch will be served.
National correspondent, the Atlantic
Author, China Airborne
Author, Free Flight: From Airline Hell to a New Age of Travel
12 p.m.: Intro: Living Large and Flying High in the Jet Age, Circa 1959
Editorial director, Future Tense
12:10 p.m.: What a Drag That Supersonic Boom: The Impediments to Supersonic Flight for the Rest of Us
Deputy administrator, NASA
Apollo Program professor of aeronautics and astronautics and engineering systems, Massachusetts Institute of Technology
Chief scientist of the U.S. Air Force
12:40 p.m.: I’ll Sell You a Futuristic Plane, if You’re Willing to Pay for It
Vice president, analysis, Teal Group Corp.
Vice president, head of research and technology for North America, Airbus Group Innovations
Founder, Lightcraft Technologies Inc.
Founder and CEO, Boom Technology Inc.
1:40 p.m.: Cleared for Takeoff? Our Strained Aviation System’s Capacity and Design
CTO, Resilient Ops Inc.
Associate principal, Arup Group
Chief of staff, Federal Aviation Administration
2:30 p.m.: Parting Thoughts: Air Travel, Circa 2059
House Unanimously Passes Bill to Safeguard Email Privacy
The Electronic Communications Privacy Act of 1986 is from, well, the '80s. So naturally it wasn't written with the current digital climate in mind. As a result it allows for some things that have become problematic, like warrantless government access to old emails stored on a server. Not good.
On Wednesday, though, the House indicated that it understands the ECPA's current Fourth Amendment issues by voting unanimously to amend the law. The new Email Privacy Act would also protect other types of digital communications stored in the cloud like instant messages and texts.
With Congress, change often takes years, and many states—including California, Texas, Utah, and Maine—already have digital privacy laws that have essentially been acting as stopgaps. The bill must still pass the Senate and reach the president before it can be enacted, but the House's 419-0 vote will hopefully give it some sway.
The bill had bipartisan support in both chambers. In an effort to bolster its momentum, Sens. Patrick Leahy, D-Vermont, and Mike Lee, R-Utah, who sponsored companion legislation in the Senate, told Reuters on Wednesday that the vote was "an historic step toward updating our privacy laws for the digital age."
Some don't think the bill goes far enough, though. Mike Godwin wrote on Slate in September that, "If federal legislators truly want to bring the ECPA up to date, they shouldn’t stop with updating a statute that was flawed even at its passage in 1986. Instead, they should revisit the ECPA’s constitutional roots."
Hopefully small victories are still better than nothing.
Encryption Technology Could Help Corporate Fraudsters. We Still Need to Fight for It.
Early this week, James Clapper, the head of U.S. intelligence, complained to journalists that Edward Snowden’s whistleblowing (my word, not Clapper’s) had sped up wider use of encryption by seven years. That’s great. Now let’s speed it up even more.
Not so fast, of course. Lots of powerful people don’t just want to hinder the adoption of strong encryption. They actively want to derail it.
Which is why, in several recent cases involving Apple phones, the FBI appealed not just to a judge but to public opinion. It warned that national security depended on being able to crack the late San Bernardino killer’s employer-supplied phone. Then it paid hackers more than $1 million to find a flaw in Apple’s operating system. It demanded access to another phone in New York state but ultimately obtained the passcode by other means. These cases went away, but not the issues they raised.
Law enforcement has only escalated its fear-mongering about going dark—being unable, due to strong encryption, to understand what surveillance targets are saying to one another or decipher the data on their devices. Members of Congress are proposing new legislation that would require tech companies to compromise the security of devices and software. The 1990s “Crypto War” over mobile phones is being replayed with new ferocity, this time in a much larger arena. The rhetoric has inflated commensurately.
The specter of terrorists communicating freely may have been the government’s ace in the Apple case. But it’s hardly the only card that people in authority can play. And we’re seeing the outlines of their strategy as the nation finally starts to debate what is truly a binary issue.
Encryption is binary because it’s either strong or it isn’t. A backdoor makes it weak. The experts in the field are clear on this: Weak or crippled encryption is the same thing as no encryption. So the choice we’re going to make as a society is whether we can securely communicate and store our information, or whether we cannot.
Those of us who believe we must protect strong encryption have to acknowledge some potentially unpleasant realities if our side wins this debate. If you and I can protect our conversations and data, so can some very bad people.
President Obama foreshadowed the next phase of the crypto fight in Austin last month. Speaking at the South by Southwest conference, he talked about, among other horribles, a “Swiss bank account in your phone” that strong encryption might create.
Consider the banksters who rigged interest rates, stealing billions from the financial system and eroding yet more trust in our vital institutions. They were caught because they were brazen and stupid in their communications, which provided evidence of their conspiracy. Do we want to make life easier for corporate criminals? (Of course, given our government’s notoriously soft-on-corporate-crime stance in recent years, at some level impunity already is the default.)
Do we want public officials to have easy ways to violate open-government laws? In many states, for example, members of city councils are required in most circumstances to communicate in ways that will leave a public record. Should we effectively invite the venal ones to cheat?
That latter scenario resonates with me for many reasons, including this: I'm on the board of the California-based First Amendment Coalition, which fights for free speech and open records. I can’t be specific, but we're currently involved in a case featuring members of a city council whose email communications are being withheld from the public. What happens when—not if—they take these conversations to Signal, a text-messaging and voice system that encrypts everything, or to PGP-encrypted email, where even if we get the records we won't be able to read them?
Given these potential problems, it’s tempting to be sympathetic with the law enforcement position on encryption—but history is clear that we can’t trust the government in this arena. As Harvard law professor Yochai Benkler wrote recently, our law enforcement and national security institutions have routinely—and with the impunity so routinely assumed by the rich and powerful—lied, broken laws, and thwarted oversight. “Without commitment by the federal government to be transparent and accountable under institutions that function effectively, users will escape to technology,” he wrote, and as a result we are learning to depend on technology.
Which also means, in the absence of remotely trustworthy government, we’re going to have to be honest with ourselves about the potential harms to transparency and accountability in a strong-crypto world. Yes, some tools make it easier to commit crimes. Yes, the Bill of Rights (or what’s left of it) makes things less convenient for the police. Yes, we take more risks to have more liberty; that’s the crucial bargain we struck with ourselves at the nation’s founding.
As Oregon Sen. Ron Wyden explained at a recent gathering of technology and human rights activists, it would be much more dangerous to force backdoors or other crippling of technology. And as he said in response to truly idiotic anti-security legislation from two powerful senators, “it will not make us safer from terrorists or other threats. Bad actors will continue to have access to encryption, from hundreds of sources overseas. [And it] will empower repressive regimes to enact similar laws and crack down on persecuted minorities around the world.”
As so many others have said, this is not about security vs. privacy. It is about security versus a lack of security—for all of us. That’s going to cause some discomfort, but liberty has a way of doing that.
Hundreds of Active Spotify Credentials Showed Up Online. Here's How to Protect Yourself.
On Monday, TechCrunch reported on a list of working Spotify credentials that had shown up on the text hosting site Pastebin. The dump contained email addresses, usernames, current passwords, and other information like account type.
When TechCrunch reached out to Spotify users on the list, they confirmed that the information about them and their accounts was accurate. Many noticed strange activity on their Spotify accounts, and some even had to contact Spotify customer service when they were locked out by someone changing their account email address.
The list's origins remain unknown. It could have been put together based on old Spotify hacks, or there could be a new breach in Spotify's network. The company denies this, though. When asked on Wednesday whether there was any update about the situation, Spotify provided the same statement it has been circulating since Monday:
Spotify has not been hacked and our user records are secure. We monitor Pastebin and other sites regularly. When we find Spotify credentials, we first verify that they are authentic, and if they are, we immediately notify affected users to change their passwords.
One possible explanation is that hackers acquired login data from other companies' data breaches, and tried them against Spotify's login portal until they found ones that worked (meaning credentials that customers had reused on multiple services). "It looks like a leak that used stolen credentials from another breach—people tend to reuse the same passwords. With that said, there's no way anybody can really know unless Spotify confirms it," said Michael Borohovski, the co-founder of Web security company Tinfoil Security. "It's fairly common. Attackers seek out services that don't support 2-factor [authentication] so that they can run lists against them."
Though we can't know for sure that this strategy is the cause of the problem, it's a likely candidate if Spotify is adamant that it didn't have an internal breach. Regardless, Spotify users would be much better protected if the company offered two-factor authentication.
Some companies seem to be taking proactive steps to discourage their users from reusing passwords. On Monday, a Slate employee (who had been using the same password for Spotify and Amazon) received a security email from Amazon:
As part of our routine monitoring, we discovered a list of email addresses and passwords posted online. While the list was not Amazon-related, we know that many customers reuse their passwords on multiple websites. Since we believe your email addresses and passwords were on the list, we have assigned a temporary password to your Amazon.com account out of an abundance of caution.
Amazon has not yet responded to a request for clarification on whether the list mentioned in the email is in fact the Spotify list, but it's a positive practice either way. You've heard the mantra by now: Use strong, unique passwords for all of your accounts, consider a password manager, and enable two-factor authentication everywhere you can. Luckily it was only hundreds of users this time—we know it can be far worse.
Future Tense Newsletter: Responsible Robots, Mechanical Doping, and Educational Technology
Greetings, Future Tensers,
Who’s responsible if a robot murders its owner? That’s the central question posed by “Mika Model,” a new short story by Paolo Bacigalupi that we’re excited to have published for our new Future Tense Fiction project, a joint effort with Arizona State University’s Center for Science and the Imagination. Also part of our Futurography unit on killer artificial intelligence, Bacigalupi’s story begins when an advanced sex robot turns itself over to the police for decapitating its owner. From there, a complex set of issues emerges, suggesting that the real danger of A.I. may lie not in what machines will do to us but in the ways we’ll relate to them.
Ryan Calo, an expert in robotics law, begins to unpack some of those questions in an essay responding to Bacigalupi’s story. Calo notes that it’s best not to anthropomorphize robots, even as he acknowledges that it’s sometimes impossible not to. Such slippages can only make it more difficult to assign blame. While interviewing A.I. researcher Stuart Russell last week, I learned that living with the computers of the future may mean living with such uncertainties. As Russell suggested to me, struggling with the values we impose on computers may mean coming to terms with what we value ourselves, a premise that’s also at the heart of Bacigalupi’s tale.
Here are some of the other stories that we read while contemplating how much the FBI paid to hack a phone:
- Mechanical doping: Apparently some professional cyclists are installing tiny motors in their bikes to win races. Because performance-enhancing drugs aren’t enough.
- Educational technology: Phones certainly seem like a distraction in the classroom, but one app may actually help students stay focused and meet their goals.
- Social networking: Has Facebook peaked? Will Oremus explores how the site is changing, becoming a platform for news and other information, rather than one for personal details.
- Cybersecurity: A database containing personal records for 87 million Mexican voters found its way online. That’s a lot of personal records!
Updating my firmware,
for Future Tense