Future Tense

Algorithms Can Be Lousy Fortunetellers

But potential employers could take them seriously anyway.

Apps like Crystal, not unlike horoscopes, can be powerful even if we intellectually understand their limitations.

Photo illustration by Lisa Larson-Walker

Two weeks ago, a new app called Crystal—which is billed as a tool to improve professional communication—would have told a potential employer that I am “a quick learner with strong analytical, creative, and social skills, but may seem scatter-brained, forgetful, and/or sarcastic.”

This week, though, I’m “pragmatic, independent, and need logical reasons for everything—but [am] able to take a calculated risk when necessary.”

What changed? I have absolutely no idea.

According to its website, Crystal “analyzes public data to tell you how you can expect any given person to behave, how he or she wants to be spoken to, and perhaps more importantly, what you can expect your relationship to be like.” Say you want to pitch a brilliant idea to a client’s latest hire. Simply plug his name into Crystal’s search box or just pull up his LinkedIn profile and click a Chrome extension called Crystal for LinkedIn. The app analyzes that page and other publicly available information on the Internet to come up with a personality profile including a one-sentence summary and tips on how to speak, email, work with, and sell to him. Another extension, Crystal for Gmail, will also suggest the words, phrases, style, and tone that match his communication style and can even predict how two people will work together. Instead of a formal introduction and opening pleasantries, Crystal might suggest you “[g]et right to the bottom line,” or use casual phrases and abbreviations. It also lists several things that do or do not “come naturally” to the person you’re profiling, like “prioritiz[ing] innovation and excitement above stability and security” or “say[ing] something bluntly that accidentally offends someone.” It offers tips like “[d]on’t trust that he will follow specific verbal instructions.” Reporters at Business Insider and the Next Web have been impressed with its accuracy, though it’s worth noting that bloggers provide Crystal with much more information to work with than average users do.

Admit it. You just thought of someone that you’d like to profile. (Maybe you just want to see what the app thinks of you.) The idea is irresistible, like Googling a crush. But Crystal’s algorithms take that mildly invasive activity much further by analyzing and synthesizing all that information to paint a clear—if not wholly accurate—picture of someone based on her Internet presence. We love the idea of unlocking secrets about others and ourselves—whether through a BuzzFeed quiz, a horoscope, or the Myers-Briggs Type Indicator. Even if you don’t believe the results will be wholly accurate, you’ll often feel a flash of recognition. Some salient aspect seems to correspond to reality. This may be due to accuracy, luck, or descriptions so general as to be universal. (“You don’t get enough recognition for your efforts.”) Our minds, so good at pattern matching, latch onto these instances as proof that the predictions might contain a kernel of useful insight. It’s fun, possibly informative … and creepy.

Crystal is a novelty at this point and presents itself as a tool to facilitate more empathetic communication, but it does so by inferring character traits and describing them with moral overtones. Saying someone “gets bored very easily” sounds very different from saying she has a lively curiosity. It’s easy to imagine college admissions boards or employers using something like it to screen applicants. Profiling tools presage a future in which we may rely upon algorithmic and automated decision-making for public policy, social interaction, and more.

Apps like Crystal, not unlike horoscopes, can be powerful even if we intellectually understand their limitations. I applaud the app’s creators for displaying an “accuracy confidence” percentage indicating how likely their predictions are to be correct for a particular individual. Confirmation bias is strong, however, and the program primes users to look for predicted attributes. Crystal predicted that it doesn’t come naturally to an old friend of mine to speak with her hands. I was suspicious of this inference because she has a minimal social media presence and few, if any, photos online. The program was inaccurate, but I found myself scanning my memories several times for instances supporting that inference. Even knowing Crystal’s accuracy rating was only 49 percent, I still doubted my own perception, which was formed by more than 20 years of firsthand data.

I’m curious how an app that tracks written and verbal communication came to a conclusion about someone’s gestures in face-to-face conversation. However, Crystal gives users only a cloudy idea about its methodology. The app’s website indicates that its inferences draw upon public data written by the person profiled and at least three personality tests: the Five-Factor Inventory, DiSC, and True Colors. But it doesn’t explain why these were chosen, how they are weighted, or what other factors it takes into consideration. Why did my profile change so drastically, and what made the accuracy drop from 49 percent to 36 percent?

This makes it hard to assess how seriously to take its conclusions. Even worse, people have no way to counteract, or even challenge, Crystal’s conclusions. In my first profile, Crystal said I couldn’t be “trust[ed] to follow specific verbal instructions.” Not exactly something a potential employer wants to hear. What a relief that it now says it comes naturally to me to “[a]pproach problems cautiously and methodically.”

What’s a job seeker to do if she disagrees or has found ways to compensate for her natural tendencies? Would she even know that her application had been rejected because of Crystal? Perhaps a person Crystal describes as quick to trust approaches work relationships differently than those in her social sphere. What if the profile is based on information someone posted in high school?

These tools promise to become more accurate and able to contextualize their results over time, but accuracy alone doesn’t solve the problems posed by algorithmic and automated profiling. As Evan Selinger points out in the Christian Science Monitor’s Passcode column, Crystal intrudes on our sense of privacy by obscurity. He notes, “The little and seemingly harmless digital breadcrumbs that we’ve left here and there can be aggregated to form a portrait that’s too revealing and too accessible.”

Like Selinger, I wonder what will happen as we become more reliant on similar technology. Even accurate and appropriately used profiling tools may have unintended consequences. Human judgments may be fallible, but they are flexible. Human hiring managers might have very different interpretations of a photo showing an applicant skydiving. One might consider it a sign of confidence and courage while someone else could see it as indicative of a risk-seeking personality. If employers adopted similar profiling tools, widespread application of the same standards would effectively block candidates who carry certain digital signifiers.

This consistency may erode other values important to our society, like diversity and social mobility. People who are less impressive on paper but might be hired on a gut feeling won’t even be considered in the application process. We, as a society, may decide that these countervailing interests in diversity and social mobility sometimes outweigh big data’s benefits. Either way, we must be mindful of ceding our future to algorithmic fortunetellers.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter.