A few years ago, a friend of mine became internationally notorious thanks to Twitter.

I'm not going to use her name here to spare her further damage to her Google results, but you've probably heard of her: It's the woman who tweeted a joke about catching AIDS in Africa right before she got on a plane to Africa and was fired by the time she landed. 

I wrote a column saying, among other things, that the intention of her tweet had been widely misconstrued: It was meant to be a self-deprecating joke about white privilege. Among the angry reactions I received to that column was one from a somewhat closer friend -- let's call her Gertrude -- who was mad at me for defending someone she saw as a racist.

When I told Gertrude I was in a better position than she to understand the intent of the tweet, since I had spoken with the tweeter about it, she replied that it was the opposite: I was too close to her to see things clearly. Not knowing anything about the person behind it, except what everyone knew -- that she was blonde and worked in PR and had tweeted a few other off-color jokes -- made Gertrude a less biased and therefore better judge, she implied. 

I think about this eerily postmodern claim pretty much every time I witness a social media pile-on. I thought about it again after reading Meghan Daum's excellent essay about her temporary infatuation with the loose confederation of academics, writers, and podcasters who call themselves "the Intellectual Dark Web." Although fairly vanilla in her own (liberal, coastal) politics, Daum was drawn to the self-styled renegades of the IDW because of a phenomenon she'd noticed spreading throughout society, but particularly on the left: an intolerance for nuanced positions and arguments. 

On hot-button topics, notably those involving race or gender, to attempt to qualify one's support for one's ideological allies to any degree was increasingly viewed by those allies as a defection. Merely to seek to understand the other side's case on its own terms was decried as endorsing it and giving it quarter. Sub-maximal expressions of outrage were tantamount to treason.

As the Trump era dawned, turning up the heat on the culture wars, social media became a venue for voicing full-throated agreement with established orthodoxies and nothing else, unless you wanted to be put on public trial for ideological impurity. "Any admission of complexity was a threat to the cause," Daum writes. "Nuance was a luxury we could no longer afford." 

To hear the heads of the big social media companies tell it, their platforms are, above all, places where people can talk to one another. You can express your point of view on the day's news, discuss it with your fellow citizens, attempt to persuade someone who holds different opinions. Because there are no gatekeepers, the full spectrum of possible views is, in theory, on display.

Response to the plague of misinformation and propaganda that has flooded social platforms in the past few years has tended to focus on how it skews those discussions, polarizing them and destroying the possibility of consensus based on a shared set of facts. A new initiative by Twitter aims to promote "healthy conversations" by addressing the effects of "echo chambers and uncivil discourse." At Facebook, there's a similar push to encourage "meaningful social interactions" between real-life friends and family rather than arguments between strangers sparked by sensationalized news stories. 

It would be great to see these efforts succeed. But I suspect they won't, because they misunderstand the function of "conversation" on social media, which has much more to do with signaling and sorting than it does with understanding or persuading.

As organizing principles of the internet, social and search are usually portrayed as competing. But social can be understood as a subset of search: a way of organizing information around connections between people. If I want to find pizza places in my neighborhood, I search on Google; if I want to find out what people I know are saying about the Oscars, I search on Twitter. 

Social media is a tool for discovering other people who are like us. You can do that by finding information other people are posting, or by posting information of your own and letting like-minded people find you. Social platforms are designed to reward agreement; that bias is baked into their architecture. There are multiple clear ways of signaling agreement: You can like a post, share it, or follow its author. If a piece of content challenged your views or persuaded you to change your mind, on the other hand, there's no easy way to indicate that, which means there's no way for the platforms to measure it. And, as we all know, you get what you measure. Indeed, figuring out how to measure anything beyond agreement and engagement is very much the point of the conversational-quality initiatives now under way at Facebook and Twitter.

In essence, the social platforms think of themselves as 18th-century London coffee houses, where students of the Enlightenment gathered to match wits and debate one another's theses. But in reality, these platforms are more like big, complicated political rallies where people hold up signs stating their causes. The signs aren't there to change other people's minds. (Whose mind was ever changed by a sign at a rally?) They're there so you can find the other members of your group and they can find you, and you can all stand together. 

There's no room for nuance in a political-rally sign. If I were to show up at a rally against police violence holding a poster that said "Killing Unarmed Civilians Is Often Murder," that would be technically true: Some such killings are genuine accidents. But my fellow rally attendees would view that "Often" -- understandably -- as a provocation. I would be using a tool meant only for signaling agreement to signal something other than agreement. Like my friend the Africa tweeter, I'd be guilty, prima facie, of not taking the subject seriously enough to make myself absolutely clear.

If pithy arguments of the sort that fit on a poster or in a tweet don't change people's minds, what does? That's simple: We are persuaded by people we like. The more affinity someone establishes with you, the more receptive you'll be to their ideas. 

Given that the Intellectual Dark Web is organized, at least notionally, around ideas that are difficult to boil down to platitudes or arguments that run afoul of settled orthodoxies, it's no coincidence that the primary vectors for so many of its figures, as Daum points out, have been not Facebook and Twitter but YouTube videos and podcasts. Spelling out your message an hour into a two-hour lecture or interview isn't the most efficient way to broadcast it to millions of people who might agree with you, or who might noisily disagree with you and thereby amplify your message to others who agree with you. But it's a great way to maximize the chance that anyone who hears it will have by then spent enough time with you to give it a sympathetic hearing. 

Maybe Facebook and possibly even Twitter can square this circle. Maybe the sites can figure out how to get their users to spend enough amicable time together that the arguments, when they finally arise, will feel more like discussions.

Until then, though, they'll remain what they are: tools for expressing approval or antipathy, and nothing in between.