In today's world, the internet is flooded with child sexual abuse content, and every tech company has made it a priority to weed out such material before it becomes visible to users.
In order to tackle this and automate the process of removing unwanted content, the search giant has launched an AI-powered API to help identify child sexual abuse material (CSAM). This new development will not only speed up the process but also reduce human reviewers' exposure to illegal and disturbing content.
Google's AI-Based API
Until now, the approach adopted by companies to track such content has been to match suspected images against previously flagged material. The new AI-based API instead uses deep neural networks to scan and classify images, prioritizing the likeliest CSAM content for human review and leading to a quicker review process.
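The prioritization idea described above can be sketched in a few lines. This is a minimal illustration, not Google's actual implementation: the Content Safety API's interface is not public, so the function names, scores, and data here are all hypothetical stand-ins for a real neural-network classifier.

```python
def classifier_score(image_id: str) -> float:
    """Placeholder for a neural network's confidence that an image is abusive.

    A real system would run model inference; here we use fixed
    illustrative scores keyed by image ID.
    """
    scores = {"img_a": 0.12, "img_b": 0.97, "img_c": 0.55}
    return scores.get(image_id, 0.0)


def prioritize_for_review(image_ids: list[str]) -> list[str]:
    """Order suspected images so human reviewers see the highest-risk ones first."""
    return sorted(image_ids, key=classifier_score, reverse=True)


queue = prioritize_for_review(["img_a", "img_b", "img_c"])
print(queue)  # highest-scoring images first: ['img_b', 'img_c', 'img_a']
```

The contrast with hash matching is that a classifier can surface previously unseen material, whereas matching only catches images already in a flagged database.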
According to Google, the new tool will allow companies to identify and report 700% more CSAM content than human reviewers could flag on their own.
Google has also made the new API available free of charge to corporate partners and non-governmental organizations through its Content Safety API toolkit.
However, this move comes as the internet giant faces growing heat over its role in helping offenders spread CSAM across the web. Last week, U.K. Foreign Secretary Jeremy Hunt criticized the search giant over its plans to re-enter China with a censored search engine while Google has reportedly refused to help remove child abuse content elsewhere in the world.
Seems extraordinary that Google is considering censoring its content to get into China but won’t cooperate with UK, US and other 5 eyes countries in removing child abuse content. They used to be so proud of being values-driven…
— Jeremy Hunt (@Jeremy_Hunt) August 30, 2018
Since tech companies are already leveraging AI, the tool will come in handy and help reduce offensive material ranging from nudity to abusive comments.