Good news for children and parents: Microsoft announced yesterday (Jan. 9) that it will hunt down online sexual predators using artificial intelligence to scan chats in search of potential child grooming.
Child grooming is a method predators use to lure potential victims: the predator talks with a targeted child over a long period of time to make the child feel safe and comfortable. Successful grooming can lead to online sexual abuse, such as coercing the child into sending sexual videos, and even to in-person meetings.
How does Microsoft's approach work?
Project Artemis uses artificial intelligence to continuously monitor chats involving kids and detect conversations that could indicate grooming.
The technique, Microsoft says, “evaluates and rates conversation characteristics and assigns an overall probability rating.
“This rating can then be used as a determiner, set by individual companies implementing the technique, as to when a flagged conversation should be sent to human moderators for review.”
Human moderators could then evaluate the contents and identify “imminent threats for referral to law enforcement, as well as incidents of suspected child sexual exploitation to the National Center for Missing and Exploited Children (NCMEC)”.
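Microsoft hasn't published the model behind these ratings, but the flow it describes — score each conversation, then compare the score against a company-set threshold to decide whether humans review it — can be sketched as follows. The scoring function, risk lexicon and threshold below are purely illustrative stand-ins, not Microsoft's actual technique.

```python
# Hypothetical sketch of the flagging flow described above. The real
# Project Artemis rating model is proprietary; this toy scorer just
# counts messages that match a made-up risk lexicon.

RISK_TERMS = {"secret", "don't tell", "how old", "send a photo"}

def score_conversation(messages):
    """Toy stand-in for the AI rating: fraction of messages that
    contain a term from the (hypothetical) risk lexicon."""
    hits = sum(any(t in m.lower() for t in RISK_TERMS) for m in messages)
    return hits / max(len(messages), 1)

def flag_for_review(messages, threshold=0.5):
    """True if the conversation's rating meets the company-configured
    threshold for escalation to human moderators."""
    return score_conversation(messages) >= threshold

chat = ["hey, how old are you?", "this is our secret, don't tell anyone"]
print(flag_for_review(chat))  # both messages match the lexicon -> True
```

The key design point the announcement stresses is that the threshold is not fixed by Microsoft: each company adopting the technique tunes it, trading off moderator workload against the risk of missing a dangerous conversation.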
Of course, this human moderation factor raises privacy concerns. It wouldn't be the first time that tools allegedly deployed for our security have been misused. On the other hand, such sensitive matters can't all be left in the hands of an AI algorithm.
Testing for years
Microsoft says that the new tool, called Project Artemis, has been developed over the past 14 months in collaboration with The Meet Group, Roblox, Kik and Thorn, beginning with the November 2018 Microsoft “360 Cross-Industry Hackathon”, an event co-sponsored by the WePROTECT Global Alliance and the Child Dignity Alliance.
The software giant says it has successfully used Project Artemis’ underlying techniques in Xbox Live “for years”. Now it’s looking to incorporate the Project Artemis tool set into Skype, its multi-platform chat system.
Even better, Project Artemis is now available to any company that wants to incorporate it into its software. Developers interested in licensing the technology can contact Thorn starting today, Jan. 10.
'By no means a panacea'
Microsoft warned that Project Artemis would not end online child abuse.
“Project Artemis is a significant step forward, but it is by no means a panacea,” the company said in its announcement. “Child sexual exploitation and abuse online and the detection of online child grooming are weighty problems. But we are not deterred by the complexity and intricacy of such issues.”
Earlier this week, Apple announced at a CES 2020 privacy roundtable that it scans user accounts for known images of child pornography and child abuse. Apple chief privacy officer Jane Horvath said that if Apple finds any such images, the user accounts are automatically flagged, the (London) Telegraph reported.
Apple didn't specify exactly how it does this, but its own description of the process seems to match a technology jointly developed by Microsoft and Dartmouth College called PhotoDNA, which The Telegraph said is also used by Google, Facebook and Twitter. (PhotoDNA is also used to track terrorism-related content.)
PhotoDNA compares new images to a database of known child-abuse images that have already been detected and flagged by authorities. It also works with audio and video to an extent. But PhotoDNA can't prevent grooming and future abuse, as Project Artemis is designed to do.
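The matching flow PhotoDNA uses can be sketched in a few lines: hash each new image and look the hash up in a database of hashes of previously flagged images. One important caveat: real PhotoDNA computes a proprietary perceptual hash that survives resizing and recompression, whereas the exact SHA-256 hash below is only a simple stand-in to illustrate the lookup, and all the data here is made up.

```python
# Simplified sketch of hash-based matching against a database of known,
# flagged images. NOT the real PhotoDNA algorithm, which uses a robust
# perceptual hash rather than an exact cryptographic one.
import hashlib

def image_hash(data: bytes) -> str:
    # Stand-in for a perceptual hash; an exact hash breaks if the image
    # is altered even slightly, which PhotoDNA is designed to tolerate.
    return hashlib.sha256(data).hexdigest()

# Database of hashes of previously flagged images (illustrative data).
known_hashes = {image_hash(b"previously flagged image bytes")}

def is_known_abuse_image(data: bytes) -> bool:
    return image_hash(data) in known_hashes

print(is_known_abuse_image(b"previously flagged image bytes"))  # True
print(is_known_abuse_image(b"new, unrelated image bytes"))      # False
```

Because only hashes are compared, services can check uploads against the database without storing or redistributing the illegal images themselves.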