Text is imperfect; so are people

As someone who works within the movement and participates as a volunteer from time to time, I’m sometimes frustrated with the imprecise nature of text-focused collaboration. There is a lot of nuance and subtlety in how we communicate as people, and much of that is lost when we work together via text. It can lead to abuse, frustration and misinterpretation, especially when it comes to giving feedback.

A colleague of mine shared this recent blog post from Stack Overflow, the popular question-and-answer site. The author describes how a rather benign policy change led to feelings of being attacked, and the cycle of frustration when conversations go awry.

This is something I consider daily when interacting here in Space, and across my many interactions with the Wikimedia community. I’d encourage you to have a read and let’s discuss ways we can remember that we’re all on the same side of the problem.

“The monster in this case is not one person, it was created when lots of people, even with great intentions, publicly disagreed with you at the same time. Even kind feedback can come off as caustic and mean when there is a mob of people behind it.”


A very interesting observation, which seems quite relevant to Wikipedia discussions where offense is frequently perceived where it is not intended.

I’ve felt that before when I’ve perceived someone’s frankness as being short with me. I think it’s a projection of how we talk to one another in person. Like, “I’d never say that to someone standing in front of me!” — when really it’s just someone who’s busy trying to communicate information as quickly as possible.

When that happens, what do you think the response should be? I always feel quite bad when someone reads what I’ve written and takes offense. That’s empathy, which I suppose is a healthy thing, but in the moment is sometimes hard to remember.


I really wish I knew. I usually read what I have written before posting, and look at it as if it was someone else communicating with me. Then I fix it if it seems a good idea, and read it again. It costs time, and sometimes the time is in short supply. I also don’t know if it works, or how often it is worth the effort. How would I know? The sample is small and there is no control group…
And then I read it again after I post, and sometimes edit again to clarify.


Could it also be that the fact that comments are public reinforces their negative aspect? For example, if I write a criticism of your article/email/whatever that is constructive and well thought out, that is polite and maybe even kind, without overdoing it, maybe the fact that it is public actually has the effect that Sara is describing in the article. It feels like a mass. It feels like one inadequacy has been unveiled to all, and that this “all” is looking at you through this one thing you may have done wrong…


I just finished reading this story on internet moderation of comments: https://www.newyorker.com/news/letter-from-silicon-valley/the-lonely-work-of-moderating-hacker-news

The Hacker News moderators advocate for a gentle and patient style of moderation.

This could be true, but is there any reasonably practicable way to mitigate? Is the problem here with the writer, the reader or the medium?

Is the problem here with the writer, the reader or the medium?

Or maybe the problem is “being human”? I am reminded of lunch time at school. Students occasionally dropped their lunches on the cafeteria floor. Especially in the middle years, this was terribly embarrassing to the student. The room went silent. Everyone looked at the person who did it. An adult would start walking over to help clean up the mess. And then the room would go back to its normal chatter. (At my schools, we were relatively gentle about this. I don’t remember anyone trying to make the student even more ashamed about the accident.) A few minutes later, nobody was thinking about it – except for the poor student whose lunch was spilled, who might still be writhing in humiliation whenever they thought of it days later.

I think the effect is similar on the internet: If it seems like “everyone” is thinking of you in a somewhat negative context, even if their attention only lasts for a few seconds, then this has a much bigger effect on you than it does on the people whose attention you held for a few seconds.

See also https://en.wikipedia.org/wiki/Spotlight_effect


Not much we can do about being human, but not all humans react in the same way. Some shrug it off, some don’t appear to even notice it (which could look the same to an observer), and some take it personally and protest that they are being harassed. With the flavour of the month being “safe spaces” and “friendly spaces”, there is a danger of mistaking the perception of hostility for actual hostility, and making it dangerous to give honest, balanced, and necessary feedback — which would be gamed by the troll faction to make it even worse. Then we get the outsiders jumping in with both feet to “fix the problem with zero tolerance” because they are “professionals” and “know better” than the people involved, and another batch of long-term constructive contributors crack under the frustration and leave forever. Somewhere there is a reasonable balance between making reasonable people feel welcome and secure, and getting the work of building a balanced and reliable encyclopedia done by a group of volunteers from widely varying cultures and with widely varying personal agendas. I do not think that top-down imposition of one group’s hypothetical solution is an acceptable risk, particularly when that group is best known for its history of misreading the room.

Do not forget that, by its nature, Wikipedia attracts a greater share of people with autism and similar conditions than we see in real-world offline discussions. There is nothing wrong with that, and most of them are wonderful Wikipedians, but chances are reactions in discussions are not always what one expects.


there is a danger of taking the perception of hostility for actual hostility

Do “apparent hostility” and “actual hostility” feel different to you, at the moment that it’s happening?

I’m thinking that if a given community’s goal were to be “make new people not feel like we’re a solid mass of seething hostility”, then perceived hostility would be every bit as much a problem as actual hostility. An editor wouldn’t have to actually think a newbie is the enemy for that newbie to want nothing more to do with that community. It’d be sufficient for the editor to merely produce the appearance of hostility to end up with that result.

Somewhere there is a reasonable balance

I think there might be some small things that could help us reach that balance. For example, maybe our response to a “bad idea” shouldn’t be a dozen people telling the editor that it’s a bad idea. Maybe a social norm of, say, two or three people replying and everyone else discreetly pretending not to notice that the editor posted a bad idea would be helpful. What do you think?


Do “apparent hostility” and “actual hostility” feel different to you, at the moment that it’s happening?

I would say not; otherwise there would be less of a problem. Yet the problem occurs. It is possibly an artifact of the text communication medium as much as of the differences in culture and personality of the participants.

Not piling on would probably help. Possibly a lot. It might be difficult to get people to understand, never mind agree or actually comply, but it is probably worth a try. The more I think about it, the better it looks.
How does one stop someone adding what they think is important additional advice or explanation? How many comments is enough or too much? I ask because I know Wikipedians well enough to know that these points will come up and be haggled over for megabytes. I can foresee users getting templated over this. Still worth looking into.


Well, it all depends. Sometimes one comment is enough, if the matter is not important or if the user is receptive. Sometimes it takes two or three for them to understand that this is not a good idea. But if someone makes a statement which potentially affects thousands of users and they are in charge of executing it — well, one might need hundreds of people to say loudly that this is not a good idea, as in the recent en.wiki case.

That is an interesting angle, actually. I never really considered the potential impact of this aspect before.

Response to the initial comment is important, and would have a big effect on how continuation is likely to go. History also matters, when it is known.

Also, if the response is dead silence, you can expect more comments, and they are likely to get more impatient with continued lack of response.