Tamara Wilhite is a technical writer, industrial engineer, mother of two, and published sci-fi and horror author.
Trolling is a problem on the internet, but it is difficult to define, and the label of “trolling” is easily applied to things that are not actually trolling. Context-based review is time-consuming, expensive, and capricious, since what one reviewer considers unacceptable may be fine by another. A definition based on observable actions, rather than content or context, is more consistent and more broadly applicable. From such a definition, we can formulate potential technological solutions that limit trolling without limiting freedom of speech or user interaction.
What Is Trolling?
I would define trolling as unacceptable behavior. It is unfair to classify entire sets of opinions or anything said by a member of a particular group as “trolling”. Instead, trolling should be defined by specific behaviors.
Trolling as Behavior:
- Ad hominem attacks
- Following someone around forums attacking them or undermining them
- Behavior offline to compound online arguments/conflict
- Actual threats of violence, though we cannot let it become de facto blasphemy laws
What Is Not Trolling?
- Expressing politically incorrect opinions, since those in power determine what is considered acceptable, polite, “love” …
- Defending one’s own views, or one’s group’s views, in detail or in repeated posts (an act sometimes called “sealioning” and wrongly classified as trolling)
- Commenters responding to other people’s comments
- Spam comments like “come read my blog” or “buy our fat reducer” - though that can be banned as spam
How Could You Limit Trolling?
A good general solution is to prevent people from posting comments that contain links away from the site hosting the conversation. This stops conversations from migrating to other platforms, blocks spammy links, and keeps comments from funneling readers away from the page. However, it is far from the only solution to trolling.
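As a minimal sketch of this idea, a comment filter could reject any submission containing an outbound link before it is posted. The regex and the blanket no-links policy below are assumptions for illustration, not any specific platform's rules.

```python
import re

# Hypothetical link filter: reject comments containing outbound links.
# The pattern is a deliberately simple assumption; real platforms use
# more thorough URL detection and allow-lists for internal links.
LINK_PATTERN = re.compile(r"https?://|www\.", re.IGNORECASE)

def allow_comment(text: str) -> bool:
    """Return True if the comment contains no outbound links."""
    return LINK_PATTERN.search(text) is None
```

A comment like “come read my blog at www.example.com” would be rejected, while ordinary discussion passes through untouched.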
- Online mobs attacking the target, each person posting one or two comments
This might be controlled by limiting how many people can pile on, racking up responses to a single user. Perhaps 10 people can respond to someone’s post, but not 1,000. Alternatively, if there is a sudden spike in comments and responses directed at someone, new comments are blocked.
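A sketch of the first approach: cap the number of distinct users who may reply to any one post. The class name and the default cap of 10 (the article's example figure) are assumptions, not a tested threshold.

```python
from collections import defaultdict

class PileOnLimiter:
    """Hypothetical pile-on limiter: only the first N distinct users
    may reply to a given post; existing responders may keep replying."""

    def __init__(self, max_responders: int = 10):
        self.max_responders = max_responders
        self.responders = defaultdict(set)  # post_id -> set of user ids

    def may_reply(self, post_id: str, user_id: str) -> bool:
        seen = self.responders[post_id]
        if user_id in seen or len(seen) < self.max_responders:
            seen.add(user_id)
            return True
        return False
```

The spike-detection variant would instead track reply timestamps and block new commenters when the rate jumps; the distinct-responder cap is simpler and harder to game with rapid-fire posting from the same accounts.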
A good protective measure would be a profile setting that lets a person flag “I’m getting mobbed,” freezing their account settings and content. Ideally, this would prevent mass down-voting of their posts (as on Reddit), false reporting of their content as hate speech or DMCA violations, and a wave of vicious messages from strangers, any one of which might be ignored but whose sheer volume overwhelms the person. It should also suspend the social media update emails and in-app pop-ups that make it impossible for the person to distance themselves from the storm.
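A minimal sketch of such a flag, assuming an illustrative data model: while the flag is set, down-votes, reports, and notifications are dropped, and only messages from people the user already knows get through. All names here are hypothetical, not any platform's actual API.

```python
class UserProfile:
    """Hypothetical 'I'm getting mobbed' switch on a user profile."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self.mobbed_flag = False   # set by the user when under attack
        self.friends = set()       # user ids of known contacts

    def set_mobbed(self, value: bool) -> None:
        self.mobbed_flag = value

    def accept_interaction(self, sender_id: str, kind: str) -> bool:
        """kind is one of: 'downvote', 'report', 'message', 'notification'.
        Returns True if the interaction should be delivered/applied."""
        if not self.mobbed_flag:
            return True
        # While mobbed: drop votes, reports, and notifications entirely;
        # let through only messages from known friends.
        return kind == "message" and sender_id in self.friends
```

The key design point is that the freeze is user-initiated and reversible, so it limits damage during a storm without requiring moderators to adjudicate each incoming report in real time.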
This gives the target of online mobbing the ability to limit the damage. It is also an indirect check on trolling: a single troll might be blocked by the user, but enlisting an online hate mob lets the troll get around that block.
- The “observable manifest behavior” standard Patreon rolled out
This made-up standard is utterly unfair: Patreon violated its own terms of service to ban Carl Benjamin, also known as Sargon of Akkad, for statements neither made on its site nor paid for by it, while tolerating far worse racial epithets from others on the platform.
If the N-word is so toxic that it can never be uttered, then it is also unacceptable for rappers to repeat it in songs or for black users to use it in any form in casual online comments. To say group X can use a term but group Y cannot sets different standards for different groups. We don’t want to go down the road of saying: if you’re X, you can hold opinions Y and Z and use profane terms A and B, but not C.
It is also a violation of people’s privacy and free speech rights to say “you have to fill out a detailed demographic questionnaire so we can determine what opinions you can express and what words you can use.” That is aside from the fact that trolls can lie about their demographics to gain permission to use vicious slurs against targets.
I would suggest not using behavior on other sites as a reason to block or ban someone on your own site. It is too readily exploited by cry-bullies and/or trolls: they dig up something, especially out of context, and report it to admins to punish their target.
Acts That Aren’t Quite Trolling, and How to Limit Them If Necessary
Mocking someone’s name
Mocking someone's name may be an ad hominem attack, an attempt at punning, or an attempt to get around algorithms tracking the target mentioned by the troll. However, it is such a gray area that we don't want to limit it.
For example, I don’t want AI censoring people who call Trump Drumph or Dumpty Trumpy any more than AI censoring someone mocking Ocasio-Cortez as Occasional Cortex.
It is meaningless, anyway, due to euphemism creep. Focus on what really has an impact like banning extreme profanity or actual threats.
We have some incredibly bad double standards here, so I’d be reluctant to say “ban people who use it.” YouTuber Sargon of Akkad was kicked off Patreon for calling white supremacists “white n-ers,” yet the site’s unfair double standard meant it had no problem with black rappers using the term repeatedly, often alongside other horrid terms, in both written and video content. If a term is so toxic it must be banned, it must be banned universally.
The other issue is the rise of authoritarian liberalism defining terms they don’t like as “profanity” or otherwise unacceptable language. The people who want to ban words like bossy and illegal alien come to mind.
Fake News/False Facts
The best solution to bad speech is more, good speech. Censoring people based on fact sources leads to bubbles and a lack of corrective feedback. The fact-checking algorithms today assign more trust to liberal sources like CNN and Huffington Post.
True stories that the left self-censors get wrongly labeled as unverified or false by such algorithms and by Google’s search engine. The liberal bias of reporting also means the details conservative sources report get micro-analyzed, causing true stories to be listed as “partially true” or “unverified” because they cannot be vetted by a liberal source at the time. The rating of individuals as trustworthy is problematic in its own right.
Twitter removes verification checkmarks from people the SJWs don’t like, such as Scott Adams and, at one point, his girlfriend. That made it easy for people to impersonate him and profit off his influencer status. Facebook gives people a lower trust score if they only share content from conservative sources, reducing the visibility of their posts. Those with power define which sources are acceptable and “trusted,” and they’ll do so per their prejudices and preferences.
If a newspaper publishes something critical of a Big Tech firm, you can be fairly sure its search rankings will fall and the spread of its content will drop off. Or its app becomes harder to find than the competition’s, assuming it hasn’t been banned outright by those in power as guilty of violating various modern morality statutes. At least one Bible app and Gab.AI were banned this way.
Additional Ways to Limit Trolling While Maintaining User Interaction
Have small discussion groups, so there is a variety of perspectives but a hundred strangers cannot mob someone. This fosters trust, and the presence of others helps check one angry person blowing up at another.
Keep the discussion content private, whether in a private forum or an instant messaging conversation. When the discussion is over, such as when everyone logs off or exits, close it and archive it. No one wants their arguments for or against a subject showing up as a strike against them in a background check for work, or resulting in a hate mob two years later. This layer of privacy also makes people more willing to discuss sensitive personal experiences.
Setting time limits on when people can join or respond prevents trolling and drive-by comments months later. It also keeps the discussion forum “fresh,” since prior conversations are closed and archived. You could then have 50 conversations, each seen as a unique event, on a particular subject.
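The time-limit idea above can be sketched as a thread that accepts replies only within a fixed window after it opens, then archives itself. The class name and the one-week default window are assumptions for illustration; the article does not prescribe a specific duration.

```python
class TimedThread:
    """Hypothetical time-boxed discussion thread: replies are accepted
    only within a fixed window after opening, then the thread archives."""

    def __init__(self, opened_at: float, window_seconds: float = 7 * 24 * 3600):
        self.opened_at = opened_at          # epoch timestamp when opened
        self.window_seconds = window_seconds
        self.archived = False

    def can_post(self, now: float) -> bool:
        """Check (and enforce) the posting window at time `now`."""
        if now - self.opened_at > self.window_seconds:
            self.archived = True            # past the window: archive
        return not self.archived
```

Because timestamps are passed in rather than read from the clock, the policy is easy to test and the same thread object naturally transitions to its archived state the first time someone tries to post after the window closes.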
This article is accurate and true to the best of the author’s knowledge. Content is for informational or entertainment purposes only and does not substitute for personal counsel or professional advice in business, financial, legal, or technical matters.
© 2019 Tamara Wilhite
deborah84 on May 23, 2019:
Nice article. Trolling is very dangerous and may cause others to steer clear and not engage!
Garry Reed from Dallas/Fort Worth, Texas on February 24, 2019:
I've been online seemingly forever and I've developed a very simple way of dealing with those I consider to be a troll.
I respond with "Troll Alert" and then totally ignore the person from then on. Two reasons.
1. Letting others know that this person is likely a troll may cause others to steer clear and not engage.
2. Trolls are obsessed with getting personal attention. Cutting off the attention gives them nothing to feed on.