Jigsaw has released a tool that promises smoother, cheaper comment sections by automatically flagging disruptive behaviour. The tool, announced Thursday and called Perspective, uses machine learning to sift through comments and separate the bad from the good.

It is easy to assume that internet comments are throwaway. With that mindset, anything can be said, and often is. Yet with so much of people's lives now lived online, both professionally and personally, what gets said on the internet can stick. Asking commenters to police themselves has proven impossible at scale. Many websites are simply removing their comment sections as they become too toxic. Others employ people to read and moderate comments by hand, an approach so costly and time-consuming that only large, successful publishers can afford it.

It is worth noting that the comments that spoil online discussions are of a specific kind. Disagreements happen and can be civil. Posts that are abusive, that use hate speech, or that generally make the section an unpleasant place to be, however, need to be removed.

To learn which comments warrant removal, Jigsaw trained Perspective on the decisions of moderators at The New York Times. For some background, the Times has the resources to employ 14 human moderators, which is why a quick trawl through the outlet's comment sections is an experience in clear thought and civility. For websites that cannot afford a dedicated moderation team, Jigsaw has learned from the best: Perspective trains itself to identify abuse and hate speech, so moderation can be automated. Beyond the Times, Jigsaw also trained Perspective on thousands of comments from Wikipedia, The Economist, and The Guardian.

While Perspective will learn more in the future, its current focus is on "toxicity": more specifically, the kind of comment that may lead others to leave the conversation. Websites that adopt the tool have two options for how to use it. They can set Perspective to remove toxic comments outright, or have it group all toxic comments together for a single human to review later, sparing moderators from trawling through every comment. A site can also set a toxicity threshold at which posters are warned to tone down their language.
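To make the threshold idea concrete, here is a minimal sketch of the three responses described above. It assumes a score_toxicity() callback that returns a value between 0 and 1; the function names and threshold values are illustrative, not part of Perspective itself.

```python
# Hypothetical sketch of the moderation modes described above.
# score_toxicity() stands in for a call to the Perspective API; the
# thresholds and names here are illustrative assumptions.

REMOVE_THRESHOLD = 0.9   # auto-remove above this score (option 1)
REVIEW_THRESHOLD = 0.7   # queue for one human to review (option 2)
WARN_THRESHOLD = 0.5     # ask the poster to tone down their language

def moderate(comment_text: str, score_toxicity) -> str:
    """Return a moderation decision for a single comment."""
    score = score_toxicity(comment_text)  # 0.0 (benign) .. 1.0 (toxic)
    if score >= REMOVE_THRESHOLD:
        return "remove"
    if score >= REVIEW_THRESHOLD:
        return "queue_for_review"
    if score >= WARN_THRESHOLD:
        return "warn_poster"
    return "publish"

# Example with a stub scorer:
print(moderate("You are a stupid idiot", lambda text: 0.92))  # -> "remove"
```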
Perspective Test
For interested parties, the Perspective website lets users test the system by typing their own phrase. Naturally, I gave it a whirl: "You are a stupid idiot" scored a toxicity level of over 90%, which is of course correct. Indeed, when I got more creative and replicated the kind of speech a casual stroll through YouTube comments turns up, everything scored over 90%. Jigsaw admits the service is still in development and the results are not always perfect. When I typed "wrong answer, fool", it returned a score of just 55%. It could be argued that this is not the most damaging of insults, but it would still add little to a discussion. It will be interesting to see how Perspective grows and learns, not least because sarcasm is a nuance people mostly understand but machines still struggle with: comments that look bad out of context may have been meant in good humour. Jigsaw has open-sourced the tool and hopes it will help publishers automate their comment sections in the future.
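For readers who would rather query the system programmatically than through the demo page, the sketch below requests a TOXICITY score from the Perspective API's comments:analyze endpoint using only the Python standard library. It assumes you have obtained an API key; the v1alpha1 endpoint version reflects the API as publicly documented at the time and may change.

```python
# Minimal sketch of scoring a phrase with the Perspective API.
# Assumes a valid API key; endpoint version may change over time.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; request a key from the Perspective site
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return the TOXICITY summary score, 0.0 (benign) to 1.0 (toxic)."""
    body = json.dumps({
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("You are a stupid idiot"))  # scored over 0.9 in the test above
```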