Google on Monday released a free artificial intelligence tool to help companies and organizations identify images of child sexual abuse on the internet.
Google's Content Safety API is a developer toolkit that uses deep neural networks to process images so that fewer human reviewers need to be exposed to them. The technique can help reviewers identify 700 percent more child abuse content, Google said.
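Google did not publish client code alongside the announcement, so the snippet below is a purely illustrative sketch of how a classification service of this kind might be called to triage a review queue. The endpoint URL, the API key, and the `review_priority` response field are all assumptions made for demonstration, not the actual Content Safety API contract.

```python
# Illustrative sketch only: the endpoint, credential, and response
# field below are hypothetical, not Google's actual Content Safety API.
import requests

API_ENDPOINT = "https://example.com/v1/images:classify"  # hypothetical
API_KEY = "YOUR_API_KEY"                                  # hypothetical

def prioritize_for_review(image_paths):
    """Score each image with the (hypothetical) classifier and return
    paths ordered so the highest-priority items are reviewed first."""
    scored = []
    for path in image_paths:
        with open(path, "rb") as f:
            resp = requests.post(
                API_ENDPOINT,
                params={"key": API_KEY},
                files={"image": f},
                timeout=30,
            )
        resp.raise_for_status()
        # Assumed response field: a 0-1 priority score for human review.
        scored.append((resp.json()["review_priority"], path))
    # Highest scores first, so reviewers see the most likely matches
    # sooner and fewer people are exposed to benign flagged content.
    return [path for _, path in sorted(scored, reverse=True)]
```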
"Quick identification of
new images means that children who are being sexually abused today are
much more likely to be identified and protected from further abuse,"
engineering lead Nikola Todorovic and product manager Abhi Chaudhuri
wrote in a company blog post
Monday. "We're making this available for free to NGOs and industry
partners via our Content Safety API, a toolkit to increase the capacity
to review content in a way that requires fewer people to be exposed to
it."Internet Watch Foundation, which aims to minimize the availability of child sex abuse images online, applauded the tool's development, saying it will make the internet safer.
"We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn't previously been marked as illegal material," Susie Hargreaves, CEO of the UK-based charity, said in a statement. "By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users."