Microsoft's new tech scans online chats to catch child predators
New system finds potential online abuse using artificial intelligence
Good news for children and parents: Microsoft announced yesterday (Jan. 9) that it will hunt down online sexual predators using artificial intelligence to scan chats in search of potential child grooming.
Child grooming is a method predators use to lure potential victims: the predator talks with a targeted child over a long period to make the child feel safe and comfortable. If successful, grooming can lead to sexual abuse online, such as coercing children into sending sexual videos, and to in-person meetings.
How does Microsoft's approach work?
Project Artemis uses artificial intelligence to continuously monitor chats with kids and detect conversations that could be interpreted as grooming.
The technique, Microsoft says, “evaluates and rates conversation characteristics and assigns an overall probability rating. This rating can then be used as a determiner, set by individual companies implementing the technique, as to when a flagged conversation should be sent to human moderators for review.”
Human moderators could then evaluate the contents and identify “imminent threats for referral to law enforcement, as well as incidents of suspected child sexual exploitation to the National Center for Missing and Exploited Children (NCMEC)”.
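Microsoft hasn't published Project Artemis' interface, but the flow it describes — a model assigns each conversation a probability rating, and each implementing company picks its own escalation threshold — might be sketched like this (all names and values here are hypothetical, for illustration only):

```python
# Illustrative sketch of the rating-and-threshold flow Microsoft describes.
# The function names and threshold value are assumptions, not the real API.

def route_conversation(rating: float, threshold: float) -> str:
    """Route a conversation based on its grooming-probability rating.

    rating    -- overall probability rating assigned by the model (0.0-1.0)
    threshold -- escalation cutoff chosen by the implementing company
    """
    if rating >= threshold:
        return "send to human moderators"
    return "no action"

# Example: a company that escalates anything rated 0.8 or higher.
print(route_conversation(0.92, threshold=0.8))  # send to human moderators
print(route_conversation(0.35, threshold=0.8))  # no action
```

The key design point is that the model only rates; the escalation decision is a per-company policy, and the final judgment rests with human moderators.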
According to Microsoft, the “NCMEC, along with ECPAT International, INHOPE and the Internet Watch Foundation (IWF), provided valuable feedback throughout the collaborative process.”
Privacy concerns
Of course, this human moderation factor raises privacy concerns. It wouldn't be the first time that tools ostensibly used for our security have been misused. On the other hand, such sensitive matters can't all be left in the hands of an AI algorithm.
Testing for years
Microsoft says that the new tool, called Project Artemis, has been under development for the past 14 months in collaboration with The Meet Group, Roblox, Kik and Thorn, beginning with the November 2018 Microsoft “360 Cross-Industry Hackathon,” an event co-sponsored by the WePROTECT Global Alliance and the Child Dignity Alliance.
The software giant says it has successfully used Project Artemis’ underlying techniques in Xbox Live “for years”. Now it’s looking to incorporate the Project Artemis tool set into Skype, its multi-platform chat system.
Even better, Project Artemis is now available to any company that wants to incorporate its software. Developers interested in licensing the technology can contact Thorn starting today, January 10.
'By no means a panacea'
Microsoft warned that Project Artemis would not end online child abuse.
“Project Artemis is a significant step forward, but it is by no means a panacea,” the company said in its announcement. “Child sexual exploitation and abuse online and the detection of online child grooming are weighty problems. But we are not deterred by the complexity and intricacy of such issues.”
Earlier this week, Apple announced at a CES 2020 privacy roundtable that it scans user accounts for known images of child pornography and child abuse. Apple chief privacy officer Jane Horvath said that if Apple finds any such images, the user accounts are automatically flagged, the (London) Telegraph reported.
Apple didn't specify exactly how it does this, but its own description of the process seems to match a technology jointly developed by Microsoft and Dartmouth College called PhotoDNA, which The Telegraph said is also used by Google, Facebook and Twitter. (PhotoDNA is also used to track terrorism-related content.)
PhotoDNA compares new images to a database of known child-abuse images that have already been detected and flagged by authorities. It also works with audio and video to an extent. But PhotoDNA can't prevent grooming and future abuse, as Project Artemis is designed to do.
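PhotoDNA's actual algorithm is proprietary, but the general technique it exemplifies — matching a robust "fingerprint" hash of an image against a database of hashes of known flagged images, tolerating small differences — can be sketched generically. Everything below is an illustrative stand-in, not PhotoDNA itself:

```python
# Generic sketch of perceptual-hash matching (NOT the real PhotoDNA
# algorithm, which is proprietary). Hashes are 16-bit ints here for
# readability; real systems use much longer hashes.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-length hashes."""
    return bin(a ^ b).count("1")

def matches_known(image_hash: int, known_hashes: set, max_dist: int = 4) -> bool:
    """True if the hash is within max_dist bits of any known flagged hash.

    A small tolerance lets the match survive minor edits (resizing,
    recompression) that change a few bits of the hash.
    """
    return any(hamming_distance(image_hash, h) <= max_dist for h in known_hashes)

known = {0b1011_0110_1100_0011, 0b0001_1111_0000_1010}
print(matches_known(0b1011_0110_1100_0111, known))  # True  (1 bit differs)
print(matches_known(0b0110_1001_0011_1100, known))  # False
```

This tolerance for near-matches is why such systems catch re-encoded copies of known images, but also why they are useless against new material or against grooming conversations, which is the gap Project Artemis targets.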
Jesus Diaz founded the new Sploid for Gawker Media after seven years working at Gizmodo, where he helmed the lost-in-a-bar iPhone 4 story and wrote old angry man rants, among other things. He's a creative director, screenwriter, and producer at The Magic Sauce, and currently writes for Fast Company and Tom's Guide.

