This week, Facebook revealed that it has removed 8.7 million pieces of content that violated the social network's rules against child exploitation, and it's all thanks to new technology. Artificial intelligence and machine learning technology the company deployed last year was responsible for flagging 99% of those posts, Antigone Davis, Facebook's global head of safety, said in a blog post.
The new technology flags posts containing child nudity and other exploitative content it is trained to detect. When necessary, photos and accompanying reports are forwarded to the National Center for Missing and Exploited Children.
Mark Zuckerberg's social network was already using photo-matching technology to compare newly uploaded photos with known images of child exploitation and revenge pornography. The new tools go further, helping to prevent previously unidentified content from spreading through the platform.
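Facebook has not publicly detailed how its photo matching works. Systems of this kind (such as Microsoft's PhotoDNA) are commonly described in terms of perceptual hashing: an upload's fingerprint is compared against fingerprints of known-bad images, tolerating small edits like resizing or re-encoding. The sketch below is purely illustrative, using a toy "difference hash"; all function names and the matching threshold are assumptions, not Facebook's actual method.

```python
# Illustrative perceptual-hash matching, NOT Facebook's real system.
# A difference hash (dHash) records, per pixel, whether it is brighter
# than its right-hand neighbour, so minor edits change only a few bits.

def dhash(pixels):
    """Hash an 8-row x 9-column grayscale grid into a 64-bit integer."""
    bits = 0
    for row in pixels:                          # 8 rows
        for left, right in zip(row, row[1:]):   # 8 comparisons per row
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def matches_known(candidate_hash, known_hashes, threshold=5):
    """Flag an upload whose hash is near any hash in the known-bad set."""
    return any(hamming(candidate_hash, h) <= threshold for h in known_hashes)
```

In a real deployment the image would first be shrunk to the 9x8 grid and converted to grayscale, and the known-hash set would be queried with an index rather than a linear scan; those steps are omitted here for brevity.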
Still, the technology is not perfect: many parents have complained that photos of their children were removed even though they were not explicitly inappropriate.
In the post, Davis said that in order to "avoid even the potential for abuse, we have taken action on non-sexual content as well as seemingly benign photos of children in the bath," and that such a "comprehensive approach" is one reason so much content was removed.
Automating this work could also benefit Facebook's human moderators. As a reminder, a former Facebook content moderator recently sued the company, alleging that viewing thousands of violent images caused her to develop post-traumatic stress disorder.
Other employees doing the same work have also described the psychological toll of the job, and say Facebook offers no adequate training, support, or financial compensation.