How to Make the Internet Kinder

Social media isn’t making us less kind, but it makes kindness harder

In the lead-up to the 2016 election, like many other Americans, I found myself spending hours each day discussing politics on social media. Although it felt urgent at the time, I think it’s safe to say that nothing was actually gained from these exchanges between strangers and far-flung family members.

Part of the problem was fear: the prospect of the election going one way or the other made everything seem so urgent that there was no time to listen. I found myself constantly wondering whether a fundamental lack of understanding had led to the country’s massive division, and what role gamified conversation had played.

Looking back, it seems clear that we often got stuck because we were trying to force feelings of empathy onto one another — for the poor white working class, for Mexican immigrants, for people who felt forgotten by past presidents, for people who feared for their future under this one — through a medium that wasn’t built for it.

Empathy — at its most basic, the ability to imagine the feelings of another — is often described as a salve for divisions in American culture. In recent years, it has also come to be seen as a skill that can, and arguably must, be learned and practiced. It’s not just about social harmony; empathy makes us better people.

A 2018 study compared self-reported empathy scores from more than 9,000 people with the respondents’ performance across 31 different abilities. The people who scored higher in empathy also scored much higher in reading body language, conflict-resolution skills, resilience, and standing by their values. They were happier, hardier, and more optimistic. They were better at asserting their needs and expressing their feelings. There was only one scale on which non-empathetic people scored higher: Need for Approval.

So, what happens to our empathy reserves when so much of life happens online? We obviously don’t have self-reported empathy scores from everyone who regularly uses social or immersive technology, but it’s clear that there are few better places to see “need for approval” on display than the platforms that keep people coming back for more “likes” and “shares.”

Arguing online squanders emotional energy

Much has been written about trolling, public shaming, and doxing (publishing someone’s private contact information online in an act of aggression or revenge). These issues have been bubbling under the surface for years, but it was the 2016 U.S. presidential election that seemed to really bring them to the fore of the national conversation. During that campaign, two conversation researchers conducted a poll of 1,866 Americans and found that 90% of them felt the 2016 election season had been more polarizing than the one in 2012. One-third of respondents said they had been “attacked, insulted, or called names” because of their political views, and one in four said their relationships had suffered. More than a quarter of these troublesome conversations were reported to have happened on social media.

One finding of this research particularly resonates: that it wasn’t always specific, controversial topics that divided these people and their conversational partners. Rather, it was support for individual candidates and how that support was tied to identity. To illustrate, the authors asked people to describe supporters of the candidate they didn’t like. These were the top adjectives respondents used on both sides: angry, uneducated, ignorant, uninformed, racist, white, narrow, and blind.

The same researchers went on to ask respondents what had gone well in conversations with political adversaries. In response to that question, people used these words: agree, listen, common, open, respect, think, and ask.

The internet did not cause people to reinterpret the rules of conversation, and neither did the 2016 presidential election. But the tension seemed heightened in the lead-up to that race, and, by the time it was over, our inability to empathize with each other only seemed to be getting worse.

In 2017, the journalist and conversation expert Celeste Headlee published a whole book about fixing conversations, called We Need to Talk: How to Have Conversations That Matter. When I called her to discuss it, she said: “I want to be clear first that tech is not the problem. It’s a tool like any other tool.”

Headlee argues that we have simply become so overwhelmed by the number of people, feelings, and conversations this tool opens us up to that we’ve squandered our emotional energy. She encourages people to establish common ground and to combat the “horizontal” nature of many online relationships by having more one-on-one conversations. “We used to think you were either empathetic or you weren’t, but the truth is you can increase your empathy, and one of the best and most effective ways is by hearing other people’s perspectives and experiences,” she says.

A 2017 study seemed to confirm what those of us familiar with online debates have feared for years: People we disagree with seem less human to us when we read their views than when we hear them spoken aloud. Results from a separate 2017 study might help explain why. First, voices convey emotion, both through the content of what a person says and in how they say it. Second, intimacy can change everything in these contexts: Seeing someone’s face all the time creates a kind of expertise that allows a person to understand another’s mental state just by looking at them.

There’s evidence to suggest that this kind of transformation is also possible on social media, where we increasingly conduct our lives.

How to talk better

Enter Faciloscope, a natural-language-processing algorithm that analyzes sentences and classifies them according to three conversational “rhetorical moves”: staging (laying out the ground rules for a conversation), evoking (pointing out relationships between participants, e.g., “as Kaitlin said to Reid…”), and inviting (directly soliciting participation by asking a question or requesting a comment). The app and bots like it aren’t evaluating the content of what people are saying but the structure of the conversation. By identifying where these moves happen, in real time, a moderator might be able to see where an interaction could go wrong.
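
To make the structure-over-content idea concrete, here’s a minimal sketch of what a move-tagger might look like. To be clear, this is not Faciloscope’s actual model, which was trained on annotated forum threads; the regex patterns and sample turns below are hypothetical stand-ins, just enough to show the shape of the task: label each turn as staging, evoking, or inviting, then report the proportions.

```python
import re
from collections import Counter

# Hypothetical patterns standing in for a trained classifier.
# Faciloscope itself learned these moves from annotated forum
# threads; the regexes here are only illustrative.
MOVE_PATTERNS = {
    "inviting": re.compile(r"\?|what do you think|any thoughts", re.I),
    "evoking": re.compile(r"\bas \w+ said\b|@\w+|you mentioned", re.I),
}

def tag_move(turn: str) -> str:
    """Label one conversational turn with a rhetorical move."""
    for move, pattern in MOVE_PATTERNS.items():
        if pattern.search(turn):
            return move
    # Default: the speaker is laying out their own position.
    return "staging"

def move_profile(turns: list[str]) -> dict[str, float]:
    """Return the share of each move across the conversation."""
    counts = Counter(tag_move(t) for t in turns)
    total = sum(counts.values())
    return {move: count / total for move, count in counts.items()}

turns = [
    "Here's my position: the ending of the book was earned.",
    "As Kaitlin said, the pacing was the real problem.",
    "I just don't see it that way. The ending felt rushed to me.",
    "What did the rest of you think of the last chapter?",
]
print(move_profile(turns))
# {'staging': 0.5, 'evoking': 0.25, 'inviting': 0.25}
```

Even a profile this crude surfaces the signal that matters: a conversation that is almost all staging is one where nobody is inviting anyone else in.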

I decided to test out the app with a chunk of text from a tense conversation I’d recently had in a Facebook group about books (yes, I know). The discussion got heated pretty quickly, but it wasn’t easy to tell why. I let Faciloscope’s analysis fill in the blanks. After processing the text, the bot showed me a pie chart. Almost 70% of the pie was chartreuse, the color that signified “staging”; just under 25% was purple (“inviting”); and the rest was blue (“evoking”). I was also able to see a “move pattern,” showing the order that each “move” came in. There was a lot of chartreuse concentrated in four major areas, with three little purple bars and only one blue.

All that greenish-yellow painted a pretty unflattering picture of the three of us in the conversation: It suggested we’d spent most of our time setting up, and re-setting up, our own positions and expectations. That didn’t leave much room for showing that we understood — or even acknowledged the existence of — each other’s points of view. We all know that when people don’t feel they’re being listened to and understood, they’re less likely to be open to hearing out others. It’s no wonder our online political debates feel so polarized.

Google has a similar tool called Perspective API, which uses survey data on phrases that people deemed “rude, disrespectful, or unreasonable” to assess “toxic” language. A recent study used a mixture of Google’s Perspective API and human intelligence to test predictions of when conversations might go sour. Researchers analyzed 1,270 conversations in Wikipedia editors’ forums that had begun civilly enough but devolved into hostility. They then lined up these conversations with ones on the same topic that went well.
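
Perspective is publicly available, so you can run the same kind of check on your own comments before posting them. Here is a minimal sketch in Python, following the pattern in Google’s client-library documentation; the API key is a placeholder you’d request through Google Cloud, and the sample comment is mine, not the study’s.

```python
from googleapiclient import discovery  # pip install google-api-python-client

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder: issued via Google Cloud

# Build a client for the Comment Analyzer (Perspective) API.
client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

request = {
    "comment": {"text": "You clearly didn't read the article."},
    "requestedAttributes": {"TOXICITY": {}},
}
response = client.comments().analyze(body=request).execute()

# A score between 0 and 1: roughly, the likelihood that readers would
# call the comment rude, disrespectful, or unreasonable.
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")
```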

The researchers’ findings were not shocking: When the first comment in an exchange used direct questions or started sentences with “you,” the conversation was significantly more likely to go awry. Comments that opened with gratitude, greetings, or attempts at coordination were correlated with conversations staying on track. Disagreement seemed to be okay, as long as it was expressed using “hedges” and prompts for the other person’s opinion. The research also supported the long-held idea that using “I/we” sentences tends to keep people from getting too defensive.
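
Those opening-move markers are simple enough to check for yourself. As a toy illustration, here’s a hypothetical “first-comment forecast” in the same spirit; the word lists and the simple majority rule are my own invention, not the study’s, which trained a classifier over a much richer set of politeness features.

```python
import re

# Illustrative markers only; the study used learned features,
# not a hand-written checklist like this one.
RISK_MARKERS = {
    "second-person start": re.compile(r"^\s*you\b", re.I),
    "direct question": re.compile(r"\b(why|how come) (did|do|would) you\b", re.I),
}
CALMING_MARKERS = {
    "gratitude": re.compile(r"\bthank(s| you)\b", re.I),
    "greeting": re.compile(r"^\s*(hi|hello|hey)\b", re.I),
    "hedge": re.compile(r"\b(i think|it seems|perhaps|maybe)\b", re.I),
    "we-framing": re.compile(r"\b(we|let's|us)\b", re.I),
}

def forecast(first_comment: str) -> str:
    """Crude forecast for how an exchange that opens this way might go."""
    risks = [name for name, p in RISK_MARKERS.items() if p.search(first_comment)]
    calms = [name for name, p in CALMING_MARKERS.items() if p.search(first_comment)]
    if len(risks) > len(calms):
        return "storm warning: " + ", ".join(risks)
    return "probably fine: " + (", ".join(calms) or "no strong signals")

print(forecast("You reverted my edit. Why did you do that?"))
# storm warning: second-person start, direct question
print(forecast("Thanks for the cleanup! I think we could merge these sections."))
# probably fine: gratitude, hedge, we-framing
```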

The data gathered by these algorithms reminds us to take a breath before we enter or continue a conversation. We already have the tools to guess when more empathy might be needed. Sometimes we just need a little nudge.

Excerpted from The Future of Feeling: Building Empathy in a Tech-Obsessed World by Kaitlin Ugolik Phillips. Used with permission from Little A Books, an imprint of Amazon Publishing. Copyright © Kaitlin Ugolik Phillips 2020
