Adapted from Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher. Out now from W.W. Norton & Co.
Are you a “Kelly,” the 37-year-old minivan mom from the Minneapolis suburbs? Or maybe you’re more of a “Matt,” the millennial urban dweller who loves CrossFit and cold-brew coffee? No? Well, this is how many companies think about you. From massive businesses like Walmart and Apple to fledgling startups launching new apps, organizations of all types use personas—fictional representations of people who fit their target audiences—when designing their products. The specificity is deliberate: Personas are given enough descriptive detail and backstory to feel relatable to the teams that use them—so that, ideally, team members think about them regularly and internalize the needs and preferences of their prospective customers.
But when homogenous teams try to create personas, they often end up designing products that alienate audiences, rather than making them feel at home.
That’s what happened to Maggie Delano, a Ph.D. candidate at MIT. Her menstrual cycle had recently become irregular, and she wanted to do a better job of tracking both her period and her moods in relation to it. So she downloaded some period-tracking apps, looking for one that would help her get the information she needed.
Most of the apps she saw were splashed with pink motifs that Delano immediately hated. But even more, she hated how often the products assumed that fertility was her primary concern—rather than, you know, asking her.
As a “queer woman not interested in having children,” Delano found one app, Glow, particularly problematic. She wrote:
The first thing I was asked when I opened the app was what my “journey” was: The choices were avoiding pregnancy, trying to conceive, or fertility treatments. And my “journey” involves none of these. Five seconds in, I’m already trying to ignore the app’s assumptions that pregnancy is why I want to track my period. The app also assumes that I’m sexually active with someone who can get me pregnant.
Glow’s all-male founding team apparently never considered the range of real people who want to track their periods. If you’re an adult woman in a relationship with anyone who’s not a man, you’re probably going to feel left out.
This kind of thing happens all the time: Companies imagine their desired user and then design only for that narrow profile. A team becomes hyperfocused on one customer group, tailoring its messages to an imagined ideal user without pausing to ask who might be excluded, or how the broader range of people the product could serve might be affected by those messages.
This kind of narrow thinking about who and what is normal often makes its way into the technology itself in the form of default settings. Defaults are the standard ways a system works—such as the ringtone your phone is already set to when you take it out of the box, or the fact that the “Yes, send me your newsletter!” checkbox comes preselected in so many online shopping carts.
Defaults can be timesavers for users. One could argue that the tipping defaults in New York taxis are just that, since they allow customers to skip the math when paying their fares (though it would be hard to convince anyone that’s all the designers had in mind).
Default settings can be helpful or deceptive, thoughtful or frustrating. But they’re never neutral. They’re designed. As ProPublica journalist Lena Groeger writes, “Someone, somewhere, decided what those defaults should be—and it probably wasn’t you.”
In 2015, middle-school student Madeline Messer discovered the harmful biases often present in default settings firsthand. Noticing that many smartphone games only included boy avatar options for free and charged an additional fee for female avatars, Messer embarked on an experiment: She downloaded the top 50 “endless-runner” games—games where players aim to keep their characters running as long as possible—from the iTunes Store and set about analyzing their default player settings. Messer found that the default characters were nearly always male: Almost 90 percent of the time, players could use a male character for free. Female characters, on the other hand, were included as default options only 15 percent of the time. When female characters were available for purchase, they cost an average of $7.53—nearly 29 times the average cost of the original app download.
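Messer’s figures imply something about the apps themselves. A back-of-the-envelope check (the $7.53 and “nearly 29 times” are from her published findings; the average app price below is derived from them, not stated in this excerpt):

```python
# Sanity check of Messer's pricing figures: if a purchasable female
# character averages $7.53, and that's "nearly 29 times" the average
# cost of the app itself, the apps cost roughly a quarter apiece.
avg_female_character_price = 7.53   # dollars, from Messer's survey
price_ratio = 29                    # "nearly 29 times," as reported

implied_avg_app_price = avg_female_character_price / price_ratio
print(f"Implied average app price: ${implied_avg_app_price:.2f}")  # ≈ $0.26
```

In other words, the female character often cost far more than the game it belonged to.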
Likewise, smartphone assistants like Apple’s Siri, Google Now, Samsung’s S Voice, and Microsoft’s Cortana all have one thing in common: Women’s voices serve as the default for each of them. As Adrienne LaFrance, writing in the Atlantic, put it, “The simplest explanation is that people are conditioned to expect women, not men, to be in administrative roles.”
Default settings can also exhibit the racial biases of the people who make them. Snapchat is known for releasing filters that purport to make you prettier, like the popular “beauty” and “flower crown” features. These filters smooth your skin, contour your face so your cheekbones pop, and ... make you whiter.
These might seem like small things, but default settings can add up to be a big deal—both for an individual user like Messer and for the culture at large. When default settings present one group as standard and another as “special,” the people who are already marginalized end up having the most difficult time finding technology that works for them. Worse, the biases already present in our culture are quietly reinforced.
That’s why smartphone assistants defaulting to female voices is so galling. The more we rely on digital tools like today’s default smartphone assistants, the more we bolster the message that women are society’s “helpers.” Did the designers intend this? Probably not. More likely, they just never thought about it.
Try to bring up all the people design teams are leaving out, and many in tech will reply, “That’s just an edge case! We can’t cater to everyone!”
Edge case is a classic engineering term for scenarios that are considered extreme, rather than typical. It might make sense to avoid edge cases when you’re adding features: Software that includes every “wouldn’t it be nice if ... ?” scenario quickly becomes bloated and harder to use. But when applied to people and their identities, rather than to a product’s features, the term “edge case” is problematic—it assumes there’s such a thing as an “average” user in the first place.
It turns out there isn’t: We’re all edge cases. And I don’t mean that metaphorically, but scientifically: According to Todd Rose, who directs the Mind, Brain, & Education program at the Harvard Graduate School of Education, the concept of “average” doesn’t hold up when applied to people.
In his book The End of Average, Rose tells the story of Lt. Gilbert S. Daniels, an Air Force researcher who, in the 1950s, was tasked with figuring out whether fighter-plane cockpits were sized right for the pilots using them. Daniels studied more than 4,000 pilots and calculated their averages for 10 physical dimensions, such as shoulders, chest, waist, and hips. Then he took that profile of the “average pilot” and compared each of his 4,000-plus subjects against it, to see how many fell within the middle 30 percent of those averages on all 10 dimensions.
The answer was zero. Not a single one fit the mold of “average.” Rose writes: “If you’ve designed a cockpit to fit the average pilot, you’ve actually designed it to fit no one.”
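Daniels’s zero isn’t a fluke of his sample; it falls out of multiplying probabilities. A quick simulation makes the point—note that the measurements here are made-up uniform draws, and the assumption that the 10 dimensions vary independently is a simplification (real body measurements are correlated, which softens but doesn’t rescue the effect):

```python
import random

random.seed(0)
NUM_PILOTS = 4000
NUM_DIMENSIONS = 10

# Even with a 30% chance of being "average" on any one dimension,
# the chance of being average on all ten independent dimensions is
# 0.3 ** 10 -- about 6 in a million.
p_all = 0.3 ** NUM_DIMENSIONS
print(f"P(average on all 10 dimensions): {p_all:.2e}")

# Simulate: each measurement is a uniform draw on [0, 1]; "average"
# means falling in the middle 30% of the range (0.35 to 0.65).
def is_average(measurement):
    return 0.35 <= measurement <= 0.65

average_pilots = sum(
    all(is_average(random.random()) for _ in range(NUM_DIMENSIONS))
    for _ in range(NUM_PILOTS)
)
print(f"Pilots average on every dimension: {average_pilots} of {NUM_PILOTS}")
```

Across 4,000 simulated pilots, the expected count is about 0.02—which is why Daniels found nobody.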
So, what did the Air Force do? Instead of designing for the middle, it demanded that airplane manufacturers design for the extremes, mandating cockpits that fit pilots at both the smallest and the largest ends of each dimension.
Our digital products can do this too. Designers should let go of their narrow ideas about “normal people” and instead focus on those people whose identities and situations are often ignored: people transitioning their gender, or dealing with unexpected unemployment, or managing a chronic illness, or trying to leave a violent ex. When designers call someone an edge case, they imply that that person is outside the bounds of concern. In contrast, referring to these scenarios as “stress cases” shows designers how strong their work is—and where it breaks down. It’s a subtle shift, but an important one with significant outcomes.
This is what one design team at National Public Radio is doing to improve its mobile news coverage. During the process of redesigning the NPR News mobile app, senior designer Libby Bawcombe wanted to know how to make design decisions that were more inclusive of a diverse audience, and more compassionate toward that audience’s needs. So she led a session to identify “stress cases” for news consumers, and used the information she gathered to guide the team’s design decisions. The result was dozens of stress cases around many different scenarios, such as an English-language learner anxiously trying to decipher a critical news alert, a reader whose traumatic memories are triggered by a news story, or a reader whose loved ones are at the location where a breaking news story is unfolding.
None of these scenarios is what we think of as “average.” Yet each is entirely normal: a perfectly understandable situation that any of us could find ourselves in.
And putting this new lens on the product helped the design team see all kinds of decisions differently. For example, the old NPR News app displayed all stories the same way: just a headline and a tiny thumbnail image. This design is great for skimming—something many users rely on—but it’s not always great for knowing what you’re skimming. Many stories are nuanced, requiring a bit more context to understand what they’re actually about. Even more important, Bawcombe says, is that the old design didn’t differentiate between major and minor news: Each story got the same visual treatment. “There is no feeling of hierarchy or urgency when news is breaking,” she told me. Finally, non-news stories like analyses, reviews, or educational articles were clustered under the generic label “more,” making these pieces easy to gloss over.
By thinking about stress cases, the team arrived at a solution that works when an anxious user needs to know about urgent news right now, and also helps all those less urgent stories find their audience by providing enough nuance and context to bring in readers.
In the new version, the app loads with the top story of the moment displayed at the top, with a headline, a teaser line below it to provide additional context, and a larger image—providing a clear indicator of what’s critical right now. But for the rest of the news—whether an update on a bill passing Congress or a warning that a hurricane could hit the Caribbean—the team decided that headlines are typically clear and explanatory enough with a smaller image and without the teaser.
After the latest headlines, the new design mixes in more feature stories that include the larger image and a teaser, effectively slowing down the scrolling experience for those who have the time to go past whatever’s breaking right now, but might need more context to know whether an individual item is interesting enough to tap.
The design team also knew the new version needed to bring breaking or developing news to the surface visually without causing alarm every time a developing story is posted—only when it’s truly warranted. So the team decided to balance the intense wording of labels like “Breaking” and “Developing” with a calmer color: blue. When a story is urgent, though, an editor can override that setting and make the label red instead. By defaulting to blue, the team is keeping a wider range of users in mind—users who need an alternative to sites where every headline shouts at them, all the time.
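The logic of that choice—calm is the default, urgency is an explicit editorial decision—is the same pattern that governs any well-designed default. A minimal sketch, assuming a hypothetical labeling function (this is not NPR’s actual code):

```python
# Hypothetical sketch of NPR-style label styling: the calm color is
# the default; red requires a deliberate editorial override.
DEFAULT_LABEL_COLOR = "blue"  # calm default for "Breaking"/"Developing" labels

def label_color(urgent=False):
    """Return the label color; editors opt *in* to red for truly urgent news."""
    return "red" if urgent else DEFAULT_LABEL_COLOR

print(label_color())             # -> blue (the designed default)
print(label_color(urgent=True))  # -> red  (explicit editorial override)
```

The design decision lives in the default value: doing nothing yields the calmer treatment, so alarm is something an editor must choose, not something the system assumes.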
These are small details, to be sure—but it’s just these sorts of details that are missed when design teams don’t know, or care, to think beyond their idea of the “average” user: the news consumer sitting in a comfy chair at home or work, sipping coffee and spending as long as they want with the day’s stories. And as this type of inclusive thinking influences more and more design choices, the little decisions add up—and result in products that are built to fit into real people’s lives. It all starts with the design team taking time to think about all the people it can’t see.
And while the entire point of developing personas is to bring empathy into the design process, teams often miss the fact that demographics and averages aren’t the point. Differing motivations and challenges are the real drivers behind what people want and how they will interact with a tech service.
To get at those real drivers, tech companies need to talk to real people, not just gather big data about them. Because the only thing that’s “normal” is diversity.
Adapted from Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech by Sara Wachter-Boettcher. © 2017 by Sara Wachter-Boettcher. Used with permission of the publisher, W.W. Norton & Company, Inc. All rights reserved.