Facebook will begin testing a new system to reduce toxicity in community comments, the company announced on its blog.
The system, called Conflict Alert, will use artificial intelligence to analyze comments that users leave in Facebook communities.
When the system detects a dispute or a potentially contentious exchange in the comments, it will flag it and notify the community administrator. In addition, messages containing obscene or otherwise inappropriate language will be marked with a moderation alert box, and information about such comments will be forwarded to the group's moderators.
Community administrators will also receive new tools that, according to the developers, should lower toxicity or defuse conflicts altogether. In particular, it will now be possible to limit how often a specific user can comment, as well as how often comments can be left on particular posts.
Alongside the new comment-analysis system, Facebook group administrators will receive tools to “form and enhance the culture of communities.” They will gain access to statistics on group members, showing each member's number of posts and comments, the number of deleted messages, and blocking statistics.
In 2020 and 2021, Facebook censored posts and comments related to politics and the coronavirus pandemic. In particular, the social network banned content containing speeches by former President Donald Trump.
Most posts on the social network concerning COVID-19 were labeled with boxes recommending that users read the official position on the pandemic. Some posts, especially those criticizing existing measures against the spread of the coronavirus, were blocked. At the end of May, however, Facebook announced that it would no longer delete posts suggesting an artificial origin of COVID-19. The company explained that it changed the rules in light of ongoing investigations into the origins of the coronavirus and after consulting with health experts.