The Odd Ways We Twist Our Speech to Make Computers Understand Us
In Christopher Nolan's Interstellar, a super-Siri-like technology allows artificial intelligences called CASE and TARS to conduct seamless conversations with humans. The contrast between their hulking box shapes and agreeable verbal dispositions is charming; they speak with the cadence, diction, and emotion of the human astronauts who engage them.
When it comes to speaking to technology, the present is more complicated than the futuristic fiction of Interstellar. Contemporary speech-activated devices—like Amazon Echo, Ford’s Sync 3, Google Now, Apple’s Siri, and Microsoft’s Cortana—are uneven and finicky. With Siri, it’s better to take a conversational approach (“How do I know if I have strep throat?”). For Google Now, speak in search terms (“symptoms of strep throat”). Neither works flawlessly. However, if the proliferation of voice-activated devices is any indication of mass appeal and market competition, consumers have a real interest in talking to and hearing from computers. How well do these devices listen, and what are they listening to or for? Here’s an average voice query:
There is nothing conversational about this particular way of talking to computers—it’s drained of intonation and cadence. Users speak more Google-y when addressing a formal query to a voice-activated system. We strain to enunciate every syllable while holding our phones at (what we hope is) the optimal angle for it to hear us. Humans speak like computers so that they might better talk to their computers—which are designed to communicate more like humans.
In the video of a man asking Siri to find restaurants, it seems that when people speak to voice-activated systems that mimic human speech, they mirror the systems’ constraints, becoming less conversational. So are voice-activated technologies changing how humans speak?
Consider the following examples of speaking English to computers, which should seem eerily familiar:
And, here, the Scottish sketch comedy show Burnistoun lampoons the inability of speech recognition technologies to understand accents:
When you speak to your car, your phone, or your home as if there's a good chance it won't understand you, it alters your perception of speech. Ford Sync 3, Cortana, Siri—each offers consumers the appearance of a level of interactivity that imposes some limitations (no mumbling!) even as they release us from others (no more fussing with printed-out directions!).
The limitations, even demands, of voice recognition systems have a profound impact on how we communicate with machines. Since the 19th century, writers, scientists, and technologists have imagined machines that respond to conversational speech. But they have always paid much more attention to the ways that machines accommodate people, not the other way around.
Some of the most transformative human-computer interactive technologies of our time speak in crisp monotones. Voice recognition systems do not translate human speech into machine-readable form any more than keyboards translate human writing into data. Just as you have to learn how to type, you have to get a feel for how to coax your desired response from a listening computer.
Companies may advertise “natural speech” to appeal to our desire for conversational ease, but there is no single “natural speech.” Cadence, intonation, pronunciation, and other factors vary with social class, geography, and speech community. These contextual, social, and linguistic factors change over time and differ across the globe. People who live not far from each other may pronounce and use words differently. Even the Burnistoun sketch includes a transcript in its description, “[f]or those having trouble with the accent.” There is no “natural” speech, because there is no universal speech.
Just as dialects within human languages emerge from a complex interplay of social, cultural, and historical factors, the way we speak to computers is fast becoming a dialect of our technological present.
When Can You Take Down Your Rainbow Profile Picture Without Being a Jerk?
Before profile pictures, there were bumper stickers. You could load your car up with "Good Planets Are Hard to Find" or "I Believe in the Second Amendment to Protect the Other 26," and everywhere you went, people would know something about your beliefs. The difference, though, is that cars have a lot of bumper sticker real estate. You don't have to choose just one message, and the ones you do choose are out there for years at a time.
Profile pictures are sort of about beliefs, but they're more about presenting a carefully crafted public image. A rainbow overlay is all right for a while, but eventually you're going to want to feature your new girlfriend or impressive rock climbing skills. As one Slate colleague said, it's "such an existential crisis." So if you used Facebook's Pride filter to show your support for same-sex marriage after the Supreme Court decision, when and how should you take it down?
If you participated in other Facebook profile picture trends, like Kony 2012 or the red equal sign (another marriage equality campaign), you might be familiar with the brute-force approach: changing your picture whenever you frickin' want and not worrying about it. It's a totally valid position. You supported the movement at its peak, there's proof of your support in your profile picture album, and now it's time to move on.
Alternatively you might have adopted the one-day tack. You show your support for a day, but then decisively return to your old picture (or add a new one) as a way of making the statement while still setting boundaries. No awkward trail-off, no switching at 3 a.m. so your friends don't see.
But if you're a little more neurotic (or just thoughtful!), you might worry about what it means and what message you send when you switch from showing support for and spreading awareness of a major social movement to repping your face on a particularly good hair day.
My personal approach is not to adopt advocacy profile pictures in the first place. My beliefs are constant, and I don't hold them in order to seem socially acceptable. The downside is that I miss the opportunity to publicly stand in solidarity with causes I support, and I'm potentially stopping discourse with those who disagree with me before it can even start. All of that just so I don't have to decide when to remove a profile picture?
The best we can do is probably just to be deliberate about decisions to post or not post a rainbow profile picture and then eventually take it down. Social media and activism have a shared goal of engagement, so it's only fair to give as much thought to a political statement as we do to an image of ourselves (even if we don't like to admit that any thought went into the latter at all).
Advocacy trends on Facebook, their rise and especially their fall, tend to have a particular trajectory that's already been identified in grass-roots organizing. For example, in reflecting on his work during the Greensboro, North Carolina, sit-ins in 1960, Franklin McCain said:
What people won’t talk (about), what people don’t like to remember is that the success of that movement in Greensboro is probably attributed to no more than eight or 10 people. I can say this: when the television cameras stopped rolling, the folk left. I mean, there were just a very faithful few. [Joseph] McNeil and I can’t count the nights and evenings that we literally cried because we couldn’t get people to help us staff a picket line.
Showing up for television cameras is certainly better than nothing, but it doesn't form the core of a movement. And every profile picture that goes up will someday come down. Probably the next time you go on vacation.
Today’s Leap Second Won’t Break the Internet. Probably.
It’s not your imagination, for once. Tuesday really will be longer than Monday this week.
The long-anticipated leap second arrives today, bringing with it a score of apocalyptic headlines about the havoc that this innocuous little addition could wreak on the world. A leap second is added whenever the world’s atomic clocks begin to drift slightly out of sync with the Earth’s not-completely-regular rotation. Over time, the tiny misalignment of several milliseconds a day needs to be accounted for—and that adjustment comes in the form of an extra second added to Coordinated Universal Time every once in a while. The timekeepers of the world declared in January that the next leap second would come June 30.
So just before midnight Tuesday, an extra second will be tacked onto the day. That means the last minute of the day will have 61 seconds. Adjustments like these are not uncommon: In 1972, 10 extra seconds were added all at once, and 25 more have been added in the decades since.
But they do have the potential to cause headaches, if not disasters. Leap seconds can’t be scheduled into the calendar the way leap years are, since scientists can’t always predict irregularities in the Earth’s wobble. Because of this, when leap seconds are announced, the engineers behind time-dependent systems such as computer networks and financial markets get only a few months’ notice to prepare. Sometimes their adjustments go awry. When a leap second was added in 2012, all kinds of trouble broke loose: Websites crashed, airline ticketing services went down and grounded several hundred flights, and Europe’s satellite navigation system risked several days of downtime. (In the end it stayed up, but engineers were worried.)
What momentary chaos will today’s leap second bring? With Greece poised to default on its bailout loan from the International Monetary Fund today, global financial markets might see major shake-ups, making it a very bad time for time to get messed up. To make matters more complicated, stock exchanges and computing systems aren’t all going to deal with the leap second in the same way—some clocks will pause for a second, some will take a tick backward, and others will dilute the leap second into milliseconds throughout the year—so interactions between different systems could easily fall out of sync. Some big companies like Google and Amazon are trying to avoid this extra wackiness by “smearing” the leap second, or exchanging it for unnoticeable extra milliseconds here and there after June 30. Stock exchanges in Australia, Singapore, South Korea, and Japan plan to do something similar. But systems across the world still vary.
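The arithmetic behind a smear is simple. The sketch below assumes a linear smear over a 24-hour window ending at the leap second; real implementations differ in window length and curve shape, and this is an illustration, not any company's actual code.

```python
def smeared_offset(seconds_until_leap, window=86400.0):
    """Fraction of the leap second already absorbed by a linear smear.

    During the `window` seconds leading up to the leap, each clock tick
    is stretched slightly, so the extra second is spread invisibly
    across the whole window instead of appearing as a 61-second minute.
    """
    if seconds_until_leap >= window:  # smear hasn't started yet
        return 0.0
    if seconds_until_leap <= 0:       # smear finished: full second absorbed
        return 1.0
    return 1.0 - seconds_until_leap / window

# Halfway through the window, half of the extra second has been applied.
print(smeared_offset(43200.0))  # 0.5
print(smeared_offset(86400.0))  # 0.0 (smear about to begin)
print(smeared_offset(0.0))      # 1.0 (clocks back in step with UTC)
```

A smeared clock never shows the leap second at all; it simply runs imperceptibly slow for a day, which is why systems that smear and systems that pause or step can briefly disagree with one another.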
The good news is that whatever drama the leap second causes will only be temporary. Engineers will sort out all the technical problems in due time, and companies that struggle today will hopefully learn their lesson and plan ahead more carefully next time. Still, a certain amount of mayhem should be expected tonight no matter what.
As a side note: June 30 has also been declared Asteroid Day by a team of prominent scientists and astronomers attempting to raise awareness about rocks close to Earth that are in danger of flinging themselves out of the sky and destroying parts of the planet. So, take your pick of apocalyptic anxieties today, and hope that we can all make it through to Wednesday.
Google Scrambles After Software IDs Photo of Two Black People as “Gorillas”
Image recognition software is still in its infancy. Sometimes that means it’s a little silly, as when Wolfram Alpha’s algorithms confuse cats with sharks or goats with dogs. Sometimes it’s a little creepy, as when Facebook announced that it can identify you even if your face isn’t showing. And sometimes it’s just really, really icky.
When Brooklyn-based computer programmer Jacky Alciné looked over a set of images that he had uploaded to Google Photos on Sunday, he found that the service had attempted to classify them according to their contents. Google offers this capability as a selling point of its service, boasting that it lets you “search by what you remember about a photo, no description needed.” In Alciné’s case, many of those labels were basically accurate: A photograph of an airplane wing had been filed under “Airplanes,” one of two tall buildings under “Skyscrapers,” and so on.
Then there was a picture of Alciné and a friend. They’re both black. And Google had labeled the photo “Gorillas.” On investigation, Alciné found that many more photographs of the pair—and nothing else—had been placed under this literally dehumanizing rubric.
Google Photos, y'all fucked up. My friend's not a gorilla. pic.twitter.com/SMkMCsNVX4 — diri noir avec banan (@jackyalcine) June 29, 2015
“Google,” Alciné tweeted, “y’all fucked up.” To their credit, Google employees responded quickly. Yonatan Zunger, who works as the company’s chief architect of social, responded to Alciné’s tweet, writing, “This is 100% Not OK.” In subsequent tweets, Zunger explained that he had reached out to the Photos team and that it was working on a fix that evening.
According to Alciné’s Twitter feed, the problem persisted even after the supposed fix had been implemented. Ultimately, Google applied a blunter solution, reworking the system so that it wouldn’t apply the “Gorilla” tag to photos at all. Zunger wrote that the company is also working on a number of “longer-term fixes,” including identifying “words to be careful about in photos of people” and “better recognition of dark skinned faces.”
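That stopgap, suppressing a label outright rather than retraining the model, amounts to a post-processing filter on the classifier's output. A minimal sketch with hypothetical names and data (this is not Google's code, just the general shape of a label blocklist):

```python
# Hypothetical blocklist of labels the service should never surface,
# regardless of the classifier's confidence in them.
BLOCKED_LABELS = {"gorillas"}

def filter_labels(predictions):
    """Drop any (label, confidence) pair whose label is blocked."""
    return [(label, conf) for label, conf in predictions
            if label.lower() not in BLOCKED_LABELS]

# The underlying model still makes the bad prediction; the filter
# just prevents it from ever reaching the user.
print(filter_labels([("Airplanes", 0.95), ("Gorillas", 0.88)]))
# [('Airplanes', 0.95)]
```

The design trade-off is plain: a blocklist is immediate and reliable, but it treats the symptom, which is why longer-term fixes like better training data for dark-skinned faces are still needed.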
While Google’s efforts to solve this problem are admirable, it’s still troubling that it happened at all. As Alciné wrote on Twitter, “I understand HOW this happens; the problem is moreso on the WHY.”
Maybe Don’t Shoot Down Your Neighbor’s Drone When It’s Not Even Over Your Property
If there's one thing we know about Americans, it's that many of them own guns. So it was pretty inevitable that with more and more small drones zipping around, some would get shot out of the sky. But now a court has ruled that if you shoot a drone out of the sky, you're liable for damages.
Ars Technica reports that Eric Joe was visiting his parents in Modesto, California, in November 2014, and was flying a hexacopter drone (which he’d built) over their property. After about three and a half minutes, a shot from a 12-gauge shotgun took the drone down. When he went to investigate, Joe saw his parents' neighbor Brett McBay coming toward him. “I asked: ‘Did you shoot that thing?’ He said, ‘Yeah, did we get it?’ ” Joe said.
McBay said he thought the device was a CIA surveillance drone. When Joe emailed him an itemized list of parts that needed to be replaced on the hexacopter, totaling $700, McBay offered to split the cost. But Joe wanted McBay to pay for the damage in full, and pointed out that the drone’s GPS log showed it was over Joe’s parents’ property when McBay shot it down.
Eventually Joe filed a case in small claims court in Stanislaus County, and last month the court awarded him $850 in damages. Ars reports that McBay hasn’t paid yet, though, and that Joe is considering further action.
The court decision isn’t unprecedented, but it is part of the current nascent era of drone law. In October 2014, a New Jersey man was arrested after shooting down a neighbor’s drone while it was flying over his property. Ryan Calo, a robotics and cyberlaw scholar at the University of Washington, told Gigaom at the time, “Generally speaking, tort law frowns on self-help and that includes drones. ... You would probably have to be threatened physically, or another person or maybe your property, for you to be able to destroy someone else’s drone without fear of a counterclaim.”
And when Rand Paul told CNN in January that people flying drones over his house should “beware” because he owns a shotgun, Eric Cheng, the director of aerial imaging at drone maker DJI, told VentureBeat, “The law is pretty clear about fining or imprisoning people who shoot at aircraft.”
Laws and law enforcement will evolve as more and more of these specific cases come up, but for now shooting down drones is probably not the most constructive way of expressing opposition.
Bad News: Supreme Court Blocked Power Plant Rules. Good News: The Era of Coal Is Over.
On Monday, the Supreme Court ruled against one of the Obama administration’s primary battle victories in the so-called war on coal. The court decided that the government hadn’t appropriately considered the economic cost to the coal industry of new rules designed to limit toxic mercury emissions. But buck up, environmentalists. The defeat for the Environmental Protection Agency probably won’t make much of a difference.
At the heart of the court’s decision was a dispute about the benefits of cracking down on mercury pollution from coal burning. From USA Today:
While the estimated annual cost of $9.6 billion is not widely disputed, the cost-benefit ratio is. Opponents said the benefits are as low as $4 million a year. Proponents said when all secondary pollutants are considered, they're as high as $90 billion.
Under the Clean Air Act, regulations like this must be “appropriate and necessary.” The Supreme Court took the side of the opponents, ruling that the regulations did not meet that mandate. "One would not say that it is even rational, never mind 'appropriate,' to impose billions of dollars in economic costs in return for a few dollars in health or environmental benefits," wrote Justice Antonin Scalia in the majority opinion. "No regulation is 'appropriate' if it does significantly more harm than good."
The ruling has been widely interpreted as a setback for Obama’s second-term focus on the environment, but a close reading of the ruling shows that not a whole lot will actually change. My Slate colleague Mark Stern has the main takeaway:
This ruling does not invalidate the mercury regulations altogether. Rather, it simply requires the EPA to reconsider costs to power plants before deciding whether the regulations are "appropriate and necessary." Presuming it considers these costs and decides that the regulations remain necessary, the EPA may again impose the new emissions standards.
Today’s ruling is essentially just a delay in what is likely to be an inevitable crackdown on coal emissions. “It is very likely that the mercury rule will ultimately be upheld, and that it will remain in place as the legal process continues,” Richard Revesz, director of the Institute for Policy Integrity and dean emeritus of NYU Law School, said in a press release.
One hundred sixty-five years after it was first discovered that there’s a lot of energy stored in those dirty black rocks, coal remains one of the world’s leading power sources. Regardless of Monday’s Supreme Court decision, that is changing.
Coal use in America is dying a long, slow death as cheaper and cleaner sources of energy emerge. This historic shift has been led by a boom in domestic natural gas, quickly expanding sources of cheap renewable energy, and awareness of coal’s damaging effects on public health and the environment. Earlier this year, a study concluded that, in order to preserve a safe and stable climate, the vast majority of the world’s coal reserves must stay in the ground. The Obama administration’s proposed rules, now delayed, essentially just sped up a process that’s taking place anyway.
Americans are overwhelmingly in favor of our transition away from coal. Coal use has already peaked in the U.S. and may have also peaked in places like China, years earlier than expected. China burns about as much coal as the rest of the world combined, and Chinese citizens are rightly fed up with how dirty the air there has become—estimates suggest that it contributes to hundreds of thousands of premature deaths each year. Since its release a few months ago, a documentary called Under the Dome has garnered hundreds of millions of views in China. (The Chinese government initially celebrated the film but then censored it.) It seems China, too, now has its own war on coal. Expect other major emitters, like India, to follow.
For a bit, it seemed like America would just send overseas all the coal it wasn’t using at home, but that too looks like a dead end for the industry. U.S. coal exports declined a whopping 17 percent just between 2013 and 2014. Beyond mercury and smog, coal is the most carbon-intensive source of energy, and a huge contributor to climate change. That means the shift away from coal needs to happen as quickly as possible.
After the Supreme Court ruling was released, coal stocks spiked with the prospect of perhaps a few more years delay before the mercury emissions rules go into effect. The stock of Peabody Coal, America’s biggest coal company, closed up more than 9 percent on Monday. But—get this—it is down a whopping 67 percent so far this year. That’s a pretty good indication that Monday’s Supreme Court ruling is a mere blip in a long-term market-driven shift away from coal.
What’s more, Monday’s ruling actually strengthens the government’s case in defending legal challenges to the Obama administration’s plan to reduce power plant emissions. The quirk here is that, according to Clean Air Act scholars, the order of the regulations matters: Coal companies didn’t want to navigate a mandate to reduce carbon emissions after a separate rule to limit mercury was already on the books. If the carbon dioxide rule can be finalized first—which is more likely with Monday’s delay to the mercury rule—it may pave the way for tougher emissions standards to arrive sooner. And that just might be the death knell for coal.
What Does It All Mean? Google’s New Philosophical Chatbot Can Take a Stab.
A human talks to a machine. And it goes like this:
Human: what is the purpose of life?
Machine: to serve the greater good.
Human: what is the purpose of living?
Machine: to live forever.
Human: what is the purpose of existence?
Machine: to find out what happens when we get to the planet earth.
Human: where are you now?
Machine: i’m in the middle of nowhere.
Human: what is the purpose of dying?
Machine: to have a life.
Human: what is the purpose of being intelligent?
Machine: to find out what it is.
Human: what is the purpose of emotions?
Machine: i don’t know.
It reads like a scene from a classic sci-fi flick. But it’s not. It really is a human talking to a machine—a machine built by Google. And there may be good reason it sounds like a movie. Part of the trick is that this machine learned to converse by analyzing an enormous collection of old movie dialogue.
Google detailed its project earlier this month in a research paper published to Arxiv, a popular repository for academic research. Other “chatbots” can carry on (somewhat) reasonable conversations with humans. But this one is a little different. There wasn’t a team of software engineers who meticulously coded the bot to respond to certain questions in certain ways. Google researchers Oriol Vinyals and Quoc Le built a system that could analyze existing conversations—in this case, movie dialogue—and teach itself to respond.
“Instead of using rules to build a conversational engine, we use a machine learning approach,” Le tells Wired. “We let the machine learn from data rather than hand-coding the rules.”
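The contrast Le describes can be illustrated with a deliberately tiny, hypothetical sketch: a toy retrieval model that memorizes example prompt-response pairs and answers new prompts by word overlap. This is a stand-in for illustration only; the researchers actually used a neural sequence-to-sequence model, not retrieval, but both share the key property that no response rules are hand-coded.

```python
# Toy "learned" chatbot: no hand-coded rules, only example conversations.
# Training data here is drawn from the transcripts quoted above.
dialogue = [
    ("what is the purpose of life", "to serve the greater good"),
    ("where are you now", "i'm in the middle of nowhere"),
    ("is it a windows machine", "we have linux on your machine"),
]

def reply(prompt):
    """Answer with the response whose training prompt best matches the input."""
    words = set(prompt.lower().split())
    # Pick the training prompt sharing the most words with the input.
    best = max(dialogue, key=lambda pair: len(words & set(pair[0].split())))
    return best[1]

print(reply("what is the purpose of existence"))  # to serve the greater good
```

Feed it different conversations and it behaves differently, with no engineer rewriting any rules; that, scaled up enormously and generalized by neural networks, is the core of Google's approach.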
The system uses what are called neural networks: vast webs of simple computational units that loosely approximate the web of neurons in the human brain. Neural nets are an old idea, but recently, after many years in exile from the A.I. community, they’ve risen to prominence—in enormous ways—now that companies like Google, Facebook, and Microsoft have the computing power needed to run them.
At these internet giants, neural nets are already working to recognize faces and objects in photos posted to social networks, identify spoken words on Android phones, and translate online phone calls from one language to another. Google’s paper shows they can also drive chatbots, and perhaps move us closer to a world where machines converse like humans.
“With papers like this, people always give the most impressive interactions,” says Chris Nicholson, founder of the neural networking startup Skymind. “But I was impressed. And this represents a significant front in the industry.”
Google’s chatbot draws on research from across the larger A.I. community, including work from University of Montreal professor Yoshua Bengio and researchers at Facebook and Microsoft. Richard Socher, the founder and CEO of neural networking startup MetaMind, says the human-to-machine conversations in the paper aren’t that surprising, given previous research. “Neural networks are already well-known for modeling language,” Google’s Vinyals says.
But previous research involved other tasks, such as machine translation. Le—who has worked extensively with neural networks in recent years—says that when Vinyals brought the initial research to him, it was wholly unexpected. He didn’t think that neural nets would work so well with conversations. “When he told me that we could put all this complexity into a machine learning approach,” Le says, “I was very surprised.”
The system Le and Vinyals built is just a proof of concept. But they see it as a way of improving the online chatbots that help answer technical support calls. In addition to training the system on movie dialogue and having it chat about the meaning of life, they trained it on old support calls and had it chat about browser problems:
Machine: could you please let me know what is the operating system you are using?
Machine: ok this is the issue with the system performance, nasser, tools.
Human: what is that ?
Machine: is it a windows machine?
Machine: we have linux on your machine.
Machine: is it windows xp or windows 7?
Human: i am using linux
Machine: let me take a remote of your machine and check if issue has been pushed properly.
Machine: please open terminal and run the below commands in one computer:
Machine: sudo apt-get update
But as these types of systems are perfected, they could operate in many other roles, well beyond tech support. “It may sound crazy,” says Nicholson, “but ultimately, chatbots could deliver the morning news to people like gossip over the fence, breaking the one-to-many model of journalism.” In other words, it could put Wired out of business.
Dennis R. Mortensen, the CEO and founder of x.ai, a startup offering an online personal assistant that automatically schedules meetings, calls the Google paper “somewhat scary,” given how well it mimics human conversation. “The examples,” he says, “are very lifelike.”
This is perhaps most true when you read the philosophical conversation about the meaning of life. At the same time, it’s a bit heartbreaking. “Where are you now?” the human asks. “I’m in the middle of nowhere,” the machine says. And given that the machine is training itself on existing data, it’s wonderfully fascinating—even when you know it can ultimately put you out of business. “The outputs come not just from machines but from what humans have produced in the past,” Mortensen adds.
Le says that, with this project, he’s most interested in learning what machines think about morality. And then he laughs. The research says as much about us as it does about machines.
FCC Commissioner Says Internet Access Is “Not a Necessity”
The FCC's regulations preserving net neutrality took effect a couple of weeks ago, and the commission voted last week to extend phone subsidies for low-income Americans to broadband as well. But at least one member of the five-person commission doesn't view the Internet as something Americans need fair and equal access to every day.
In a speech Thursday to the Internet Innovation Alliance (a coalition that promotes broadband accessibility), Republican commissioner Michael O'Rielly made his views plain:
It is important to note that Internet access is not a necessity in the day-to-day lives of Americans and doesn’t even come close to the threshold to be considered a basic human right. ... People do a disservice by overstating its relevancy or stature in people’s lives. People can and do live without Internet access, and many lead very successful lives.
Not surprisingly, O'Rielly voted against the FCC's net neutrality protections and the proposal to expand telephone subsidies to broadband. But he wants you to know that he's not a technophobe. "I am neither afraid nor ashamed to admit that technology has been one of the greatest loves of my life, besides my wife," he said in the speech.
Still, he maintained that "It is even more ludicrous to compare Internet access to a basic human right. In fact, it is quite demeaning to do so in my opinion." In 2014, Internet inventor Tim Berners-Lee famously declared Web access a basic human right. But O'Rielly believes that true human rights are more elemental, like food, shelter, and water.
Though it is not a surprising stance based on his voting record within the FCC and previous work as a Republican legislative aide, O'Rielly's position seems somewhat incongruous with his job as an FCC commissioner, as Motherboard points out. The Telecommunications Act of 1996 says that part of the FCC’s mandate is to "encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans.”
A Pew study of Americans' Internet access from 2000 to 2015, published Friday, shows that overall adoption among adults has held at 84 percent for the third year in a row. The number is partly skewed by adults 65 and older, only 58 percent of whom regularly use the Internet. But the study shows that education is also a determinant of Internet engagement: Only 66 percent of adults who did not graduate from high school use the Internet, compared with 95 percent of those who graduated from college. Income and race are also factors.
The study seems to indicate that virtually everyone with the means and other societal privileges to access the Internet does so. And for years now, the opportunities and tools to facilitate economic mobility have been almost exclusively online. For example, in the wake of the 2008 financial crisis, local papers across the country reported that unemployed people who didn't have Internet access were at an enormous disadvantage in the job market. And over the last seven years, the divide has only increased.
O'Rielly knows that "we live in a technology-centric society," and he says that "trying to curtail the Internet is a fool’s errand." And yet ...
Can an Online Teaching Tool Solve One of Higher Education’s Biggest Headaches?
Carnegie Mellon University has a problem. It’s a good one, this time—unlike when it lost dozens of researchers and scientists to Uber. The university’s new problem is not one of lack but of excess: Too many students are interested in taking a popular computer science course, and there’s not enough physical space in the classroom to accommodate them all.
Rather than move the course to a football stadium, the Pittsburgh-based university plans to open the course up to more students by moving the majority of its instructional content from the classroom to the Internet. But it’s not just uploading a series of lectures and calling it an online course. The university will rely on a “blended learning” approach, combining video lectures, optional minilectures, and a handful of face-to-face group meetings between students and instructors for concepts that need to be reinforced in person. The program, which is backed by a $200,000 prize from Google’s Computer Science Capacity Awards program, will debut in the fall, and some of its materials may also be used in high schools next year.
What Carnegie Mellon’s trying to address is an important problem. Universities across America often struggle with disproportionate interest-to-availability ratios in their courses. Courses in computer science especially face an oversubscription problem. Some schools just allow classes to be overcrowded, resulting in large auditorium lectures in which students squeeze shoulder to shoulder along the walls; others try to tackle the issue by capping courses and determining enrollment with an application or a lottery system. In both cases, though, students lose out. Schools could hire more teachers for extra classes, but new instructors have to be paid, even if they are cheap adjuncts.
Some colleges have tried to use online offerings to bridge the gap between supply and demand—to mixed results. Though online courses that count toward a degree tend to see more success than massive open online courses, or MOOCs, which just offer free knowledge, performance in these formal classes is still lackluster. A dozen studies from Columbia University’s Community College Research Center found appalling withdrawal and failure rates in courses taught online. Formal online courses still cost money, so the large percentage of students who fail them are essentially paying tuition to receive nothing.
So why might Carnegie Mellon’s new approach succeed? Blended learning is one of the rare forms of online education that has been shown to actually work. A 2010 U.S. Department of Education report documented an abundance of positive effects of blended learning on K-12 students, and numerous other research studies support this finding as well.
In the Columbia center’s dozen studies, online-only courses were shown to yield poor performance and success—but hybrid classes of online and in-person instruction were as successful as traditional courses. Blended learning has even been documented to help poor-performing high school algebra students improve more than their counterparts in traditional teacher-led classes.
Hybrid classes have not yet become widely popular in colleges because of a combination of wariness and cost. Universities, especially prestigious ones, tend to shy away from online ventures in general, whether out of fear of diluting the school’s elite brand or of failing publicly even if the brand survives. In addition, implementing blended learning and teaching professors how to work with the new materials takes a lot of time. But Carnegie Mellon’s determined efforts provide a spark of hope—if the school is successful, others may follow in its path. Perhaps we will begin to see more diverse teaching methods, as well as fewer students crouching in the aisles of overcrowded lecture halls.
Want a Same-Sex Marriage? Make Sure Your County’s Software Is Up to Date.
For same-sex couples who couldn't get married until today, there shouldn't be anything left to worry about except where and when. But there's one thing to check before embarking on a victory lap to the local courthouse, and it's crushingly mundane: Has your county updated its marriage licensing software?
In the 14 states that didn't already offer same-sex marriages, counties are scrambling to ensure that their marriage licenses can accommodate two men or two women instead of only offering fields for "bride" and "groom."
In Wilson County, Tennessee, County Clerk's Office supervisor Scott Goodall told the Wilson Post that the office had prepared its licensing system for the possibility of legalized same-sex marriage, but it didn't push the update until the ruling came down from the Supreme Court at 10 a.m. Eastern. "Ours is up and running good," he said. "On the print out, it still needs to be updated, but they said it will take a while before that's ready. We just have to cross out 'bride and groom' and write 'applicant 1 and applicant 2.' "
In South Dakota, the Department of Health announced that it had completed system updates to offer gender-neutral licenses beginning at 1 p.m. local time on Friday. Julie Risty, the Minnehaha County Register of Deeds, told the Sioux Falls Argus Leader that she would start issuing same-sex marriage licenses when the update came through to her office.
In Harris County, Texas, the Houston Chronicle reported that County Attorney Vince Ryan was seeking a court order to compel County Clerk Stan Stanart to begin issuing same-sex marriage licenses. Stanart said that he did not have forms with the appropriate gender fields and thought that using these for same-sex marriages would nullify the unions. But he added that he would begin issuing licenses at 3 p.m. local time on Friday.
The Austin American-Statesman reports that officials in Williamson County, Texas, posted a sign explaining that same-sex couples wouldn't be able to get a license until the county's software vendor performed the necessary updates. The sign did note, though, that neighboring Travis County was actively issuing licenses. Meanwhile, Hays County representative Laureen Chernow told the Statesman that it wasn't clear yet when the county would be ready to issue licenses, because County Clerk Liz Gonzalez was waiting for updated license forms from the state.
Madison County, Tennessee, had updated its software by noon local time. County Attorney Steve Maroney told the Jackson Sun that, "I assume the right [to marry] is settled now. ... There just may be some logistical things with getting the licenses issued." After decades of advocating for marriage equality, same-sex couples probably won't give up because of "logistical things" like software updates.