Moderating or closing comment sections – the Independent Media Advisory Panel report

While I would have liked to submit comments to Independent Media’s panel on what to do about abusive speech in newspaper comment sections, I somehow missed the call for submissions.

Now that their report is out, and on the understanding that this is a continuing conversation, I’ve offered a few comments on the panel’s recommendations – and the underlying issues – below.

TheMediaOnline’s summary of the panel’s findings seems comprehensive and accurate enough to quote directly, rather than summarising the findings myself:

  • In the interests of freedom of expression, it is desirable to host online comments
  • However, the constitutional rights of readers and members of the public should not be infringed by such comments
  • It would be preferable to moderate comments prior to their publication online
  • Online platforms should be staffed with suitably qualified personnel
  • If effective pre-moderation cannot be undertaken for any particular reason, Independent should consider closing its comments section
  • Independent Media should develop guidelines to define unacceptable speech, which take into account legal and ethical considerations, but should not amount to censorship of differing viewpoints

I’ll not be addressing those points sequentially, and might not even get around to addressing all of them. But the free speech issue is one that does have to be addressed, if only to emphasise an important distinction.

Free speech

A commitment to free expression makes it desirable, rather than necessary, to host online comments. In other words, even if you shut comments down entirely, you are not violating anyone’s right to free expression.

The right to free expression means that you’re not barred from saying something. It does not mean that anyone is required to provide you with a platform on which to say it. So, for as long as you can make your point on your own blog, Facebook, Twitter or wherever, your rights are not being violated.

Your opportunities are being circumscribed, yes, but a private entity like a media house has no legal or moral obligation to provide you with an opportunity to comment. In fact, an overall commitment to free expression might mean barring some people from commenting, on the grounds that they have a chilling effect on the comments of others.

A practical example: if you wanted to have a comment-section discussion on what it’s like to be black and poor in Cape Town, you’d naturally get fewer people who are black and poor commenting if you also allowed white racists to comment.

More problematically: you might also get fewer black and poor people commenting if you simply allowed rich people to comment, in that the target audience might feel some measure of alienation or of being typecast or misunderstood.

I use these examples not to recommend these sorts of constraints on comment spaces, but in furtherance of the general point that specific restrictions on who can comment where might, in certain instances, enhance freedom overall – that is, freedom of expression in the aggregate can sometimes be served by restricting particular instances of it.

Legality vs. tone/character

The right to free expression and the extent to which it is (or isn’t) violated is a separate matter from the tone or character of a website. As soon as you allow comments at all, you’re encouraging the formation of some sort of community, and with that comes goals as to what the character of that community should be.

What this means is that even if something is not legally proscribed, you might nevertheless want to prevent it from being said. The Independent report makes much play of the right to dignity in the Constitution, but I’d rather not rest on that, because even if you think the Constitution has it wrong on things like hate speech and dignity, you could still justify restrictions on some speech on your news portal.

You could justify them simply via wanting to have a certain level of discourse on your platform, where you hold characteristic X to be non-conducive to that. X could be excessive sarcasm, or whatever – for example, I didn’t publish a comment the other day simply because it was overly pedantic and argumentative, and added no value to the conversation.

What you choose to restrict and why will be a policy matter for each media house to decide on for themselves, and I make the points above to encourage them to be guided by more than just the law when they deliberate on these matters.

The value of comments

The high-minded rhetoric around why we have comments online (free expression, debate etc.) – at least when it comes from the media houses themselves – is only part of the story, and to my mind a very small part of it.

The value that comments have for them is that eyeballs return to their pages, either to watch the slow-motion car crash of someone being schooled or trolled in comments, or to join in the fun themselves. Either way, you’re on my page rather than a competitor’s, and ad revenue might increase as a result.

And (we need data here) I’m intuitively sceptical of the idea that there would be a significant difference in traffic if you switched from live, unmoderated commenting to a system in which comments are posted after some delay. Of course a delay of days might have an impact, but I doubt that 12 hours or less would.

What to do?

The panel’s report recommends pre-publication moderation, where a) the commenter’s identity is known to the publisher, even if the comment appears anonymously; b) word-filters flag potentially offensive comments (those containing words likely to correlate with abusive comments); and c) editorial oversight, where “trained and qualified” editors check the comments before publication.
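To make step (b) concrete: a word-filter could be as simple as the sketch below. This is a minimal illustration in Python – the function name, word list and whole-word matching rule are my own assumptions, not anything the report specifies.

  import re

  # Hypothetical word list: terms thought to correlate with abusive
  # comments. A real list would be far longer and regularly reviewed.
  FLAGGED_TERMS = {"troll", "idiot", "exampleslur"}

  def needs_review(comment_text: str) -> bool:
      """Return True if the comment should be held for an editor.

      A crude whole-word match: it flags comments for human review
      rather than rejecting them, and false positives are expected.
      """
      words = set(re.findall(r"[a-z']+", comment_text.lower()))
      return bool(words & FLAGGED_TERMS)

Note the bias towards flagging rather than rejecting – this matters for the false-positive worry I raise below.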

Step (c) is onerous, and unduly so in not taking advantage of existing mechanisms for knowing who is likely to abuse comment sections and who is not. But before I get to the disagreement, let me say where I agree.

Identity: I’m a big fan of people “owning” their opinions, and taking responsibility for them. To put it simply, the fear of reputational harm is one of the ways we are kept in check, and keep each other in check.

So I’m supportive of using the “letter to the editor” sort of model where possible – use your real names, which need to be verified in some fashion, unless there’s some compelling reason why you can’t (where the editor must decide on the merits of that reason, remembering, as I said above, that you have no right to comment).

Word-filters: You’ll perhaps get lots of false positives here, so sifting through the flagged comments might involve more work than is strictly necessary. But besides that potential overhead, I have no principled issue with it.

Editorial oversight: Making this the norm will be far too expensive and time-consuming (related issues, of course, but manifesting as two separate problems). It would also be unnecessary, as we already have ways to crowdsource information regarding who can (in general) be trusted not to abuse comment sections.

For example, here on Synapses I use Disqus, as do IOL, the Daily Maverick and the Mail & Guardian. I’ve set each post up with fully moderated comments, but I have the option of specifying that any given commenter’s comments be published automatically, without going into moderation.

Any of us who dip our toes into comment spaces online know the names of some regulars. Those regulars who are not abusive can be approved pre-publication. Yes, it will take some time and work to determine who is granted this privilege, but in the long run it would save having to review their comments each time.

Of course, you’d want some sort of policy for granting this privilege – say, for example, 5 non-abusive comments gives you that status, and the understanding is that it gets stripped from you once you abuse it.
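In code terms, that policy is almost trivially simple. A sketch, with the threshold of five taken from my example above and everything else (names, fields) purely illustrative:

  from dataclasses import dataclass

  APPROVAL_THRESHOLD = 5  # five non-abusive comments earns the privilege

  @dataclass
  class Commenter:
      name: str
      clean_comments: int = 0
      auto_publish: bool = False  # True: comments skip pre-moderation

  def record_clean_comment(c: Commenter) -> None:
      # Credit a comment that passed moderation, and grant the
      # privilege once the commenter reaches the threshold.
      c.clean_comments += 1
      if c.clean_comments >= APPROVAL_THRESHOLD:
          c.auto_publish = True

  def record_abuse(c: Commenter) -> None:
      # Strip the privilege, and the accumulated credit, on any abuse.
      c.auto_publish = False
      c.clean_comments = 0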

The level of moderation required to afford people the privilege described above is rudimentary – interns, student journalists in university courses, bored college kids etc. could all do it for a nominal fee, and at the same time flag potentially abusive comments for the attention of a “real” editor.

All of the benefits listed in section 7.2 (page 33) of the report can be enjoyed once you have established this sort of “database” of approved commenters. If you wanted to be more liberal about it, and save even more time, someone with a high reputation score on Disqus could automatically be green-lit.

One could even consider a database of trustworthy commenters, shared across media houses that use the same commenting platform.
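Extending the sketch above, such a shared list need be nothing more elaborate than a set of verified identities that each participating publisher consults before routing a comment into moderation – again, an illustrative assumption rather than a description of any existing system:

  # Keyed by a verified identity rather than a per-site username, so
  # that trust earned at one publisher carries over to the others.
  SHARED_WHITELIST: set[str] = set()

  def skip_moderation(verified_identity: str) -> bool:
      return verified_identity in SHARED_WHITELIST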

And as I said above, if someone sins, you simply strip the privilege – here again, the community can be of assistance, in flagging things for an editor’s attention.

By Jacques Rousseau

Jacques Rousseau teaches critical thinking and ethics at the University of Cape Town, South Africa, and is the founder and director of the Free Society Institute, a non-profit organisation promoting secular humanism and scientific reasoning.