The Incredible Shrinking Zeitgeist: How Did This Great Word Lose Its Meaning?
Not long ago, the New York Times crowned Tyler Brûlé, a sleekly sophisticated design mogul, “Mr. Zeitgeist.” But the throne was occupied: A different NYT piece had already declared Marie Antoinette queen of the ever-shifting zeitgeist. Before her, the paper had proclaimed Rosa Parks a “zeitgeist warrior” and ascribed zeitgeist-whispering powers to Peaches and Pixie Geldof, Bionic Woman, the phrase “bonuses are back,” and Al Gore. The feistiest recent use of the term comes courtesy of Lindsay Zoladz, who compared Amy Schumer to “a comet streaking gloriously across the Zeitgeist, leaving a tail of smudged mascara and Fireball aftertaste in her wake.”
For a wisp of language compounded of ghostliness and time, the zeitgeist is sure making its presence felt. But what exactly do people mean when they invoke it today? A prevailing opinion about kale chips? Backlash against a guacamole recipe? How did the word zeitgeist come to feel so small?
A zeitgeist used to be a formidable thing. Matthew Arnold introduced the term into English in 1848 to capture the spirit of social unrest that suffused Victorian England. In 1933, Aldous Huxley wrote in a letter that the zeitgeist “is a most dismal animal and I wish to heaven one cd escape from its clutches.” Implored W.H. Auden: “May we worship neither the flux of chance, nor the wheel of fortune, nor the spiral of the zeitgeist.” This threatening creature was capricious in its moods and careless of tradition. It was sinister—powerful enough to convince individuals that they were not responsible for their own choices, that they were merely carried along by the romantic gust of the now. In Bismarck’s Germany, the terrifying phantom of Volk nationalism absolved people of any need to resist the pull of consensus and think for themselves. But if we used to talk about the Roaring ’20s or the Flower Power ’60s, great sweeps of history distilled into luminescent symbols, now we get a five-minute-long zeitgeist consisting of, say, TV shows about white girls in Brooklyn. Somehow, the zeitgeist—once so historical and grand—has become an anemic, trivial little sprite.
The media coverage of Girls was arguably the first harbinger of our zeitgeist-saturated zeitgeist. “So zeitgeisty it hurts,” wrote Alexandra Petri of the show in 2012, because zeitgeists, like mirrors, can be cruel. Creator Lena Dunham was hailed/slagged as the “zeitgeist queen,” “the maven of the millennial zeitgeist,” a zeitgeist devourer, a zeitgeist seizer, a “zeitgeist figurehead,” and the darling of “media outlets desperate to ride the zeitgeist.” Since then, the term has continued to spiral. In 2013, Joseph Burgo wrote that Lady Gaga’s song “Born This Way” speaks to the “anti-shame zeitgeist”—but the zeitgeist was also accused of dabbling in fat-shaming and slut-shaming. Kanye vowed to “pop a wheelie on the Zeitgeist” in 2014. And now we are drowning in zeitgeist: Mindy Kaling said last month of Parks and Rec, “Because it’s zeitgeisty, it would be considered a hit [on cable].” (What if zeitgeist just equals “a raft of things that would be a hit on cable”?) We are supposed to believe that the contemporary zeitgeist is full of schadenfreude. And iPhones. And cheese.
On Twitter, a hoverboard might ride the crest of the zeitgeist, and you might think you are original until you realize that “you’re just another spirit in the zeitgeist’s realm.” Elle magazine has its very own “zeitgeist” rubric, which encompasses articles on everything from the “ultimate girl crush” to “11 times Taylor Swift looked exactly like an emoji” to “6 things women should totally stop apologizing for” (including, perhaps, looking like an emoji).
Clearly this is a time in which we are constantly in the throes of some swift, consuming moment—an age of obsession, wherein we get “hysterically excited about very good but not hugely original cultural products seemingly every other month,” as Willa Paskin put it in Slate last year. Serial. True Detective. High Maintenance. There is always some new fad to point to, as totalizing as it is transient. So zeitgeist feels like a useful word for capturing this overwhelming illusion of cultural consensus. But what we are calling zeitgeisty these days has none of the inclusive significance of a true movement. From within our media bunkers, we imagine that the entire world is as transfixed by The Latest Thing as we are, but all we’ve done is type #BroadCity into Twitter.
Either the mini-zeitgeist guts our perception of just how diverse the culture is, or it offers us little incentive to care. The modifier zeitgeisty used to mark something out as widespread, a unifying force (Sgt. Pepper was zeitgeisty)—now it serves as a password into specific echelons of cool. The word often refers to the niche-ily aspirational: Marie Kondo, or Pilates. It’s as if, since we can’t knit our fractured universe of tastes back together again, we’ve settled for paying lip service to the choicest fare.
Science-fiction writer William Gibson has claimed that the present moment is defined by “atemporality,” a “new and strange state of the world in which, courtesy of the Internet, all eras seem to exist at once.” The contemporary zeitgeist, then, has to do with not having a zeitgeist, or having an infinite number of zeitgeists, an undifferentiated Gesundheit of zeitgeists. This notion is hardly peculiar to 2015—in his 1910 tract The Spirit of Romance, Ezra Pound intoned that “all ages are contemporaneous.” If that’s true, surely the way to honor such simultaneity is not to parse it into fingernail slivers of fleeting obsession. Instead, let’s reserve this term for those startling, rare moments of clarity when an entire culture rises up as one, to support civil rights or condemn bigotry or mourn the dead. Either that, or zeitgeist should just give up the ghost.
Documenting the Diversity of American English
“Gas is really expensive anymore.”
“He’s in school in Boston—so don’t I.”
“I need me a salad.”
To a high school English teacher or self-styled grammarian, the above sentences are likely cringeworthy. To most native speakers of English, in fact, they would sound either inelegant or incorrect. Why, then, depending on where in North America you live, are they a part of normal, everyday speech?
R-E-S-P-E-C-T, Find Out What It Means to Scalia
Words may have lost all meaning to the Supreme Court, as Antonin Scalia suggested Thursday in his dissent from the King v. Burwell decision to uphold health care subsidies, but there’s one word that has a meaning quite particular to the Supreme Court: respectfully. It is a long-standing tradition that Supreme Court dissents often conclude with the gracious words “I respectfully dissent.” So it was taken as a grave sign of incomity in 2000 when Ruth Bader Ginsburg concluded her stinging minority opinion in Bush v. Gore with a bare “I dissent,” the Supreme Court’s equivalent of a glove slap to the face. On Thursday, Scalia likewise eliminated respect from his King v. Burwell dissent, concluding his torrent of outrage with “I dissent.” (Even though Scalia thinks that today’s marriage equality decision was a descent into “the mystical aphorisms of the fortune cookie,” that was not enough to provoke an “I dissent” from him, signaling that same-sex marriage might not upset him quite so much as health care subsidies.)
I wondered how often such disrespectful dissent has occurred in the hallowed halls of the Supreme Court. Law professor Stephanie Tai helpfully pointed me to a Harvard Law Review note, “From Consensus to Collegiality: The Origins of the ‘Respectful’ Dissent,” which charts the history of dissents both respectful and less so. The convention has been around for only 50 years, but in that time it has become remarkably durable, making departures from it all the more striking.
The note, by New York attorney Chris Kulawik, takes its inspiration from ordinary-language philosopher J.L. Austin’s theory of speech acts. In How to Do Things With Words, Austin described certain speech acts as “performative utterances,” statements that draw their meaning not just from the semantics of their words but from the social context in which the words are uttered. The classic example is “I do” in a marriage ceremony, which carries a network of implications and commitments far beyond what the two words would mean in any other context. Another example would be when Scalia referred to Ginsburg as “Goldberg” the other week, a seeming slip of the tongue to which some imputed more sinister significance. Likewise with “I respectfully dissent” and “I dissent,” which in the patois of the Supreme Court take on the implications of a courteous response and a furious retort, respectively.
In the early history of the court, dissents were polite, defensive, and even apologetic, reflecting the prevailing emphasis on consensus. “In any instance where I am so unfortunate as to differ with this Court,” Justice Bushrod Washington pleaded in U.S. v. Fisher (1805), “I cannot fail to doubt the correctness of my own opinion. But if I cannot feel convinced of the error, I owe it, in some measure, to myself, and to those who may be injured by the expense and delay to which they have been exposed, to show, at least, that the opinion was not hastily or inconsiderately given.” By the 20th century, dissent had become enough of a norm not to require such hand-wringing, though it was still comparatively rare. By 1950, dissent was both common and far more prominent: Dissents were no longer mere expressions of disagreement but judicial statements in their own right. With the Warren Court taking on ever more divisive social issues, the court tried to mitigate its own divisions by embracing a “norm of collegiality,” embodied by the respectful dissent.
The “respectful” dissent as we know it today emerged on the Warren Court in 1957, especially in the opinions of Charles Whittaker, with other justices following close behind. As on the courts that immediately preceded it, “the respectful dissent is the dominant speech act of the Roberts Court,” appearing in 70 percent of dissents. The remainder either contain no dissenting speech act whatsoever or, more rarely, what Kulawik terms “assertive dissents,” which “withhold respect where convention requires it.” Over the years 2005–09 that the note covers, most justices hewed to the respect norm, topped by the studiously polite David Souter and the newest appointee, Sonia Sotomayor.
Recall that Ruth Bader Ginsburg delivered that notoriously unrespectful dissent in Bush v. Gore. Yet a closer look reveals this to be typical behavior for the atypical Ginsburg: She never respectfully dissents. She considers the respectful dissent disingenuous when “you’ve shown no respect at all.” Ginsburg also disputes the intrinsic significance of her assertive dissent in Bush v. Gore. She would rather, it seems, be analyzed on substance than on performance; she consciously omitted “I dissent” from her otherwise excoriating Hobby Lobby dissent.
Ginsburg’s bluntness separates her from the other justices, whose performative dissents require more interpretive argle-bargle. Most justices reserve the assertive dissent for the most controversial and consequential of cases. John Paul Stevens, usually quite respectful, used “I emphatically dissent” in his Citizens United dissent, his sole assertive dissent. In the 2007 Parents Involved in Community Schools desegregation case, Breyer concluded, “I must dissent.” As Kulawik writes, “an assertive dissent is ultimately an act of protest, a signal from one Justice to the world at large that the majority opinion does not deserve legitimation—that the majority has acted impermissibly and produced significant costs for political society.” Under this interpretation, it is Scalia who protests the most. He ties Breyer for assertive dissents, used in his Defense of Marriage Act (U.S. v. Windsor) and Guantánamo (Boumediene v. Bush) dissents, but he is vastly stingier with his respectful dissents, marking him as the most substantively disrespectful of the justices if we disqualify Ginsburg, who thinks the whole business meaningless. (Perhaps this helps explain how Scalia and Ginsburg have somehow managed to remain friends: They agree more often about Italian opera than they ever do in English.)
Like the philosophies of W.V.O. Quine, Wilfrid Sellars, and the later Wittgenstein, Austin’s philosophy was a rejoinder to the logical positivist idea of meaning, which posited that the meanings of statements could be specified atomically and precisely. Perhaps surprisingly, Scalia’s textualist philosophy invokes this holistic and sometimes squishy perspective: “Words, like syllables, acquire meaning not in isolation but within their context,” he wrote in K Mart v. Cartier (1988), uncannily echoing Jacques Derrida’s deconstructionist maxim, “Il n’y a pas de hors-texte” (“there is no outside-text,” or, as Derrida himself later glossed it, “there is nothing outside context”). “Context always matters,” Scalia repeated in his King v. Burwell dissent. Such linguistic indeterminacy may be correct (I happen to think it is), but it makes Scalia’s supposedly precise textualist philosophy no less vague than the philosophy of an evolving “living Constitution” to which Ginsburg subscribes. Perhaps this is what Scalia meant when he said of Ginsburg, “She’s a really good textualist.” In that context, textualism, I respectfully submit, is just another act of interpretive jiggery-pokery.
Check Out the Trailer for Do I Sound Gay?
What does it mean to “sound gay”? Is there a gay voice? A gay lilt? A gay inflection? We posed these questions on an episode of the Lexicon Valley podcast last fall, and now a new documentary from David Thorpe, titled Do I Sound Gay?, takes on the subject more intimately, using the filmmaker’s own anxiety about the way he talks as motivation to get answers. Post-breakup and past 40, Thorpe finds himself loathing the sound of his voice and struggles to reconcile, through interviews with other gay men and speech experts, the tangled mix of language, identity, and sexuality. Check out the trailer:
Is Language Now Meaningless? A Ruling in the Matter of Scalia v. SCOTUScare.
IN THE MATTER OF
SCALIA V. SCOTUSCARE
Argued June 25, 2015 – Decided June 25, 2015
Supreme Court Justice Antonin Scalia alleges in his dissent to the ruling on King v. Burwell that “words no longer have meaning.” Scalia claims that language is futile if an Exchange that is not established by the State nonetheless counts as “established by the State,” meaning established “by the State and Federal Government. That of course is absurd.”
After prolonged deliberation, the Supreme Court of the American Lexicon sees fit to hand down its own ruling as to whether all words have lost their meanings. This court has heard from scientists who have assessed individual phonemes and morphemes for sudden declines in import as well as from nihilists, deconstructionists, and existential philosophers who have urged the court to manufacture meaning out of nothingness. Justices have attempted to communicate with each other across an impressive mahogany table. The extent to which our own various mutterings and bloviatings and grandstandings made any sense at all was fastidiously charted. Finally, the court consulted Justice Scalia’s own glossary of transparently made-up words in an effort to more wisely adumbrate the line between significance and pure fairyland castle argle-bargle.
The Court finds on this twenty-fifth day of June in the year 2015:
- THAT in light of the precedent established by Ducking v. Autocorrect, 2008, a departure from the realm of specificity determined by the principle of intended meaning does not preclude the more general term meaning from being applicable;
- THAT the feeble argument that definitions of which one disapproves are in fact nonsense is a curious bit of casuistry unworthy of a court of our stature (see Santorum v. Savage, 2003);
- THAT Justice Scalia’s many vivid and delightful coinages—e.g., “interpretive jiggery-pokery,” “Platonic golf”—presuppose a language capable of generating new and essentially comprehensible elements, relative to the descriptivist provision in Malaprop v. Jones, 1896;
- THAT the preposterous and outlandish interpretation of the Seuss Statute (1957) advanced by the dissenting Justices does not in fact impinge on words’ ability to signify on a boat, in a moat, on a train, or in the rain;
- THAT, indeed, the plaintiff’s claim of linguistic meaninglessness is, in his own argot, applesauce; and
- THAT language itself is alive, well, and aboil with significance notwithstanding the challenge to its powers impertinently posed by the plaintiff. As you were.
Can You Guess the Poem From a Single Line? Take Our Quiz.
O Slate quiz takers! We’ve challenged you to guess the pop song from its first second and the painting from one eye. Now we want to know whether you can deduce the poem from a single line. Sound easy? Not so fast. It turns out that a lot of versifiers like to write about the same things, including love, birds, and art itself. Can you untangle your Levine from your Levertov, your Yeats from your Keats? Take our quiz and find out!
It’s Time to Speak Up About the Overuse of “Speaking Out”
North Carolina florist Deborah Dills was driving to work when she spotted Dylann Roof, the suspected gunman, seemingly fleeing the scene of the Charleston, South Carolina, church murders. She called the cops, then tailed Roof’s car for 35 miles until police caught up with him. Over the weekend, the New York Times posted a video interview with her. Its headline: “Motorist Who Alerted Police Speaks Out.”
Speaks out. The phrase implies something weighty. She isn’t just telling her story, and she isn’t just speaking. She is speaking out. But what does that mean, exactly?
Reviewing other recent Times usages of “speaks out” doesn’t make it any easier to tell. The headline when Kanye West objected to yet another award winner: “The Morning After the Grammys, Kanye West Speaks Out Against Beck.” The headline when Martin O’Malley, a presidential candidate who is neither Bush nor Clinton, stood in shocking objection to a potential Bush-Clinton election: “O’Malley Speaks Out Against Possible Dynastic Rematch for White House.” Meanwhile, the Washington Post reported that O’Malley “speaks out” against a trade deal (and that the mayor of Washington, D.C., “speaks out on pot, dating and what makes her angry”). Entertainment Weekly reports that Cersei Lannister’s body double “speaks out” about how great her filming experience was. On local TV, there’s even a lot of speaking out against animals. Northern Michigan: “Man Attacked by Dogs Speaks Out About ‘Terrifying Ordeal.’” Colorado: “Colorado Teen Speaks Out After Shark Attack.”
Deborah Dills isn’t the only person “speaking out” about this past week’s tragedy. Billboard: “Nicki Minaj Speaks Out on Charleston Church Shooting.”
No more. It’s time to speak out—er, so to speak—against the overuse of “speaks out.” When somebody truly speaks out, they are raising their voice above an oppressive institution or taking a risk by speaking in opposition to something. Consider the very construction of the phrase: To speak out, one must have first been in. The speaker must have been contained. The speaking is an act against that containment; it takes courage. A whistleblower, for example, speaks out. So does a prisoner being abused in Guantánamo Bay. But when somebody makes a statement about something that directly or indirectly concerns them, or shares an opinion, or simply chimes in on the news of the day, they are just speaking. Perhaps they are speaking up. They are definitely not speaking out.
Put it this way: South Carolina Sen. Lindsey Graham couldn’t speak out about the steak he ate last night, but he could, were he decent and bold enough, have long ago spoken out against the flying of the Confederate flag.
I made this case, some of it word for word from what I wrote above, to then-Times language columnist William Safire on Dec. 30, 2004. That day, the paper’s lead story—about the global response to the tsunami that had ravaged South and Southeast Asia—was headlined “World Leaders Vow Aid as Toll Continues to Climb.” Beneath it was the subhed “Bush Speaks Out,” because George W. Bush, after a few days of unexplained silence, had finally issued a statement pledging help. He had spoken. But what was Bush speaking out about? My argument: The Times was wrong to say he had spoken “out” at all.
Safire passed my words along to Allan Siegal, then the paper’s standards editor, who sent me this reply: “My first impulse was to scour the dictionary definitions and defend our headline. My second—and current impulse—is to tell you that you’re absolutely right. I also plan to tell the staff, in our post-mortems. Thanks, and regards.” Vindication!
(I was a cub reporter at a Times-owned community newspaper at the time, and this was the greatest email of my career so far. I replied, telling Siegal that whenever I applied for a job at the Times, I’d include his note in my application to show that he once declared me “absolutely right.” His response: “With any luck, I’ll be retired, and the person you tell will point out (correctly) that ‘absolutely right’ is redundant.”)
I don’t know if Siegal ever actually relayed this to the staff, but he did retire in 2006, and the “speaks out” abuse continues there, as it does across the media. This is a problem, and not just for nitpicky readers like me. It seems clear that publications overuse “speaks out” because it inflates the importance of a story. It isn’t all that newsworthy when Kanye speaks, but it seems far more noteworthy when he speaks out! It’s as if we’re justifying our having written these stories, or perhaps luring the reader in with the promise of important speech, the way CNN declares everything to be “breaking news.” This isn’t fair to readers. If a story can’t survive without the phrase speaks out—if, say, “Nicki Minaj Says Things About Charleston Church Shooting” just doesn’t sound important—maybe it’s not actually news worth running.
Most important, when we use “speaks out” for every act of speech, we diminish the moments when someone really does speak with courage. There are many important moments, blended in with the rest of the muck, when someone risks their reputation or, in some cases, even their well-being and safety in order to speak out. We devalue critical speech when we treat all speech as equally courageous.
So, to this weekend’s usage: Is Deborah Dills, the woman who spotted the Charleston shooter on the road, speaking out? No. She is speaking. Deborah Dills did a great thing, and hers is a wonderful voice to hear. But I think she’d be the first to admit that she is not speaking under duress. Her moment of courage came before, when she followed Dylann Roof’s car. Now she is safe.
Earlier this month, though, the Times ran a piece of writing with the headline, “Letter From Azerbaijan Jail: Khadija Ismayilova Speaks Out.” It was written by a journalist there, who had been locked up for exposing the corruption in her country. This is a brave woman whose speech required courage. Her words were her protest. She was not just speaking, nor was she speaking up. She was speaking out.
There Should Be a Word for That! (So Make One Up.)
Language is wonderfully expansive and fluid—constantly mutating and forever evolving to better represent our lives and culture—and yet frequently inadequate. There are countless concepts, feelings, and situations, not to mention emerging technologies and gadgets, that don’t have a particular word to describe them. So, as the proprietors of our language, we have the right to invent what doesn’t yet exist. “Everybody who speaks English decides together what’s a word and what’s not a word,” said the lexicographer Erin McKean in her 2014 TED Talk. “Every language is just a group of people who agree to understand each other.” So, if we all agree that a “selfie” is a photo one takes of oneself, then that’s exactly what it is (that doesn’t mean we have to like it!).
Gay Marriage? Same-Sex Marriage? How Should We Talk About Marriage Equality?
As the Supreme Court prepares to hand down a decision in Obergefell v. Hodges, how should we refer to the matrimonial institution the justices are poised to accept or deny? The front-runner terms are “gay marriage” and “same-sex marriage,” often used interchangeably. The question has real stakes—in politics, every word counts. A number of activists are moving away from the former and toward the latter, arguing that it’s both more accurate and more respectful. Should publications reporting the story, like Slate, follow suit? What if it turns out that the term that’s out of favor with the affected group is the one that’s more likely to get people to read and share a piece?
Once upon a time, assimilation was the goal of the LGBTQ movement, and “gay marriage” was its term of art. The alternative, “same-sex marriage,” was viewed as potentially damaging to the cause due to its connotations of, well, sex. Now, though, champions of marriage equality believe that “gay marriage” performs small and insidious distortions. It seems to ignore the fact that some of the people who might throw a same-sex wedding do not identify as gay at all—they may be bisexual, asexual, or even heterosexual. (Point being that it’s none of our business.) And to be punctiliously correct, as the writer Tom Head notes, we would actually need to redefine certain opposite-sex marriages as “gay” marriages, because they represent the union of a closeted gay person and a straight person, or two closeted gay people, or some other overt or clandestine mash-up of yens hetero and homo.
My Students Never Use the First Person Voice. I Wish They Would.
At least once a week on the Dartmouth College campus, I see a student, eyes glued to a smartphone, literally walk into a tree or a pole or a peer. I don’t mean to laugh (and usually I suppress the urge), but c’mon, it’s a little bit funny. Sure, if this were a romantic comedy, students colliding with each other and dropping papers everywhere could make for a perfect meet-cute: Heads would bump as both people bent down to collect them, and the rest would be history.
As a professor, this isn’t exactly the meeting of minds I’m looking to foster.
Each year, Apple’s Worldwide Developer Conference coincides with university commencements across the country. From June 8 to June 12 this year, the corporate giant is announcing shiny additions to its line of phones, tablets, computers, and watches, already ubiquitous on college campuses. But as students proudly claim the latest smartphones and personal gadgets, are teachers encouraging them to voice their own smarts in a personalized manner?
In various stages of schooling, we’ve all had instructors tell us to eschew the first-person voice for purposes of critical distance and rhetorical authority. Third-person perspectives boast several strengths, enabling prose that can sound formal, professional, objective, generalizable, and persuasive.
I can’t help wondering, however, whether every time we discourage students from using the first person—every time they sheepishly hit backspace after reflexively writing I or me—they’re taking to heart the implicit message: “I” don’t matter. It’s not about “me.” My hunch is that cultivating discursive habits of self-effacement can subtly yet systematically erode students’ sense of self and self-worth.
Many scholars insist that academia and first-person narratives shouldn’t mix. Feminist theorists, merging personal and political, have often been accused of “naked self-interest.” Disability scholars who spend time divulging their own hardships are charged with indulging in narcissistic “moi-criticism,” appealing to emotions (and scoring so-called sympathy points) rather than to the intellect. And philosophers are, as Dartmouth professor Susan Brison remarks, “trained to write in an abstract, universal voice and to shun first-person narratives as biased and inappropriate for academic discourse.” Other professors at Dartmouth have also weighed in on this matter. Jeffrey Ruoff, who teaches Film and Media Studies, told me: “Students commonly come to me to ask if they can write in the first person, like it’s a transgression.” Or, as English professor Marty Favor says: “I want to see the students behind the prose.”
The goal, of course, isn’t to assure students that it’s all about them—that is, to condone attitudes of entitlement and egotism. The point is for students to recognize that they must listen inward, harnessing a voice from deep down, in order to reach outward and contribute to society at large. Yes, I realize such advice runs the risk of sounding clichéd and sentimental: believe in yourself, the truth lies within, speak from your heart. But I’d rather see students grapple with sentiment than have them smudge it out altogether.
Because you know what else is a cliché? The notion that good writing stands on its own merits, or that good ideas speak for themselves, or that a good paper can practically write itself. When we empower students to write with I, what we convey is: Stand up for yourself and take responsibility for what you say. Once you’ve found a voice, start thinking of all the people whose voices continue to go unheard. Behind glowing phones and laptop screens, students need to look up and speak out, to collide and connect with one another through exercises in self-expression and self-evaluation.
Students’ identities aren’t measurable by the gadgets they own. After all, even though the “i” in iPhone promises individuality, the widespread adoption of such devices can make us largely uniform. In this season of university commencements, what I really want is to see students graduate from small i to capital I. I want them to take stock of the actual “self” amidst their constant selfies.
William Zinsser, who passed away earlier this month, stressed the importance of writing in the first person. The author of the bestseller On Writing Well, he advised: “Writing is an intimate transaction between two people, conducted on paper, and it will go well to the extent that it retains its humanity. Therefore I urge people to write in the first person: to use ‘I’ and ‘me’ and ‘we’ and ‘us.’”
Unlike endless iterations of iProducts, the real I should never feel outdated, replaceable, or dispensable. As students strive to make the grade, we’d do well to remind them that there’s no voice without I.