Steve McGrath, February 2017
Free speech is one thing. Harassment, rudeness, trolling, and shock-mongering are others.
If you’ve spent any time on the Internet since, say, the presidential primary season, you’ve probably had the stomach ache that comes with watching nice people turn ugly or ugly people poison nice conversations. Perhaps you’ve tried to speak out on something deeply important to you only to be flamed by an idiot intent on derailing the dialogue.
Google to the rescue.
The company, along with technology incubator Jigsaw, launched an ingenious new tool yesterday (Feb. 23) called Perspective, which lets moderators filter out toxic comments using a simple slider. Perspective uses machine learning to flag language that’s potentially toxic and to rate just how toxic each post might be. The final call stays in the user’s hands, though: you move the slider to land on your own tolerance level.
Google says Perspective can be used in several ways:
- Moderators can use it to choose which comments stay and which go.
- Publishers can let readers sort comments by toxicity themselves.
- A community manager can use it to educate the community on the impact of what they’re writing.
- Or commenters can see the toxicity level of a potential post as they type.
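For publishers who want to wire this up themselves, here is a minimal sketch of what a Perspective call might look like, assuming you have an API key for the Comment Analyzer endpoint that backs Perspective. The helper names (`toxicity_score`, `filter_comments`) and the 0.8 threshold are illustrative, not part of Google’s tooling.

```python
# Illustrative sketch: score a comment with Perspective's Comment Analyzer
# endpoint, then filter a batch of comments against a slider-style threshold.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: a key issued via the Google API console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(comment_text):
    """Return Perspective's toxicity estimate (0 to 1) for one comment."""
    body = json.dumps({
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def filter_comments(comments, threshold=0.8):
    """Keep only comments scored below the moderator's chosen threshold."""
    return [c for c in comments if toxicity_score(c) < threshold]
```

The score comes back as a number between 0 and 1, which is exactly what a slider maps onto: drag it, and the threshold in `filter_comments` changes with it.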
The New York Times has been testing a version to help moderate its own comment sections.
You can test it, too. Play with it, and see how cool it really is. And pray a day will come when we don’t need it.