Explained: How Apple’s New Child Safety Software Works and Why It’s a Privacy Breach

Apple’s new child safety tools are well intentioned, but they’re drawing flak from privacy advocates such as Edward Snowden, the EFF and others.


Apple’s new child safety tools, which at first seemed like a genuinely good idea, have caused quite a stir in the technology world over the past week. The company announced the feature on August 6 and it only applies to the United States (US) right now, but it’s a platform-level update, meaning it will be part of iOS, macOS, watchOS and iPadOS, and could come to every country that Apple operates in. Of course, whether the company can enable the feature in a given country will depend on whether that country’s laws allow it. But before we get to all that…

How does it work?

Child Sexual Abuse Material, or CSAM, is a problem that’s almost as old as the Internet itself. Governments around the world have tried to tackle it, with some success, but the problem persists. So Apple is adding new algorithms to its platforms that will scan images on iPhones, iPads, Macs and Apple Watches, in iMessage and on iCloud, to detect those that fall into the CSAM category. Offending users will eventually be reported to the authorities.

“CSAM Detection enables Apple to accurately identify and report iCloud users who store known Child Sexual Abuse Material (CSAM) in their iCloud Photos accounts. Apple servers flag accounts exceeding a threshold number of images that match a known database of CSAM image hashes so that Apple can provide relevant information to the National Center for Missing and Exploited Children (NCMEC),” the company said in a technical document.


Further, Apple claimed that the risk of the system flagging an account incorrectly was “extremely low”. “The threshold is selected to provide an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account,” the document said.
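
To get a feel for how a per-image match threshold can drive the account-level error rate that low, here is a rough back-of-the-envelope sketch. Apple has not published the per-image false match rate or the exact threshold, so the figures below (a one-in-a-million per-image rate, a 10,000-photo library and a threshold of 30 matches) are assumptions chosen purely for illustration.

```python
from math import exp, lgamma, log, log1p

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """Log of the Binomial(n, p) probability of exactly k successes."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log1p(-p))

def prob_account_falsely_flagged(per_image_fp: float, n_images: int, threshold: int) -> float:
    """Upper tail of the binomial: P(at least `threshold` false matches).

    Terms shrink extremely fast past the threshold, so summing a few
    hundred of them is plenty for a ballpark estimate.
    """
    upper = min(n_images, threshold + 200)
    return sum(exp(log_binom_pmf(k, n_images, per_image_fp))
               for k in range(threshold, upper + 1))

# Hypothetical numbers: a 1-in-a-million per-image false match rate,
# a 10,000-photo library, and 30 matches required before flagging.
print(prob_account_falsely_flagged(1e-6, 10_000, 30))  # on the order of 1e-93
```

With those made-up inputs the chance of an innocent account being flagged comes out far below the one-in-a-trillion figure Apple cites; the real number depends entirely on the per-image rate, which Apple has not published.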

Apple has also said that the system is designed with user privacy in mind, which means the company won’t be able to view any of your images directly. Instead, the system matches images on your devices and in iCloud against a pre-selected database of known CSAM images: each image on your device is reduced to a cryptographic fingerprint, and a match is registered only when that fingerprint corresponds to one in the database. If an account crosses the match threshold, the flagged images go to human review, and only then is the user reported.
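
To make that flow concrete, here is a deliberately over-simplified sketch. Per Apple’s technical summary, the real system uses a perceptual hash (NeuralHash) together with cryptographic protocols that hide both the database and individual match results from the device; the plain SHA-256 digest, the placeholder hash database and the threshold value below are illustrative stand-ins, not Apple’s actual implementation.

```python
import hashlib
from pathlib import Path

# Hypothetical database of known CSAM image hashes (hex digests). In the real
# system the device only ever sees a blinded form of this database.
KNOWN_HASHES: set[str] = {"hash-of-known-image-1", "hash-of-known-image-2"}
MATCH_THRESHOLD = 30  # assumed value, for illustration only

def image_fingerprint(path: Path) -> str:
    """Stand-in for a perceptual hash: a digest of the raw file bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def count_matches(photo_library: list[Path]) -> int:
    """Count how many photos in the library match the known-hash database."""
    return sum(1 for photo in photo_library
               if image_fingerprint(photo) in KNOWN_HASHES)

def should_escalate_for_human_review(photo_library: list[Path]) -> bool:
    """Only accounts that cross the match threshold ever reach human review."""
    return count_matches(photo_library) >= MATCH_THRESHOLD
```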

When it comes to user privacy, Apple is perhaps the most trusted of all the Big Tech firms. However, the fact remains that this system will be built into your devices and will scan each and every image on them, irrespective of what those images are. Users will not know which images are in the CSAM database, nor will they know when one of their images has been flagged as offending.

“There are two main features that the company is planning to install in every Apple device. One is a scanning feature that will scan all photos as they get uploaded into iCloud Photos to see if they match a photo in the database of known child sexual abuse material (CSAM) maintained by the National Center for Missing & Exploited Children (NCMEC). The other feature scans all iMessage images sent or received by child accounts—that is, accounts designated as owned by a minor—for sexually explicit material, and if the child is young enough, notifies the parent when these images are sent or received. This feature can be turned on or off by parents,” wrote the Electronic Frontier Foundation (EFF) in a blog post.
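
The second feature the EFF describes can be pictured as a simple decision flow. Apple has not published the on-device classifier it uses, so the looks_sexually_explicit function below is a hypothetical placeholder, and the age cut-off and notification behaviour are paraphrased from public descriptions of the feature rather than taken from Apple’s code.

```python
from dataclasses import dataclass

@dataclass
class ChildAccount:
    age: int
    parental_notifications_enabled: bool  # the switch parents can turn on or off

def looks_sexually_explicit(image_bytes: bytes) -> bool:
    """Hypothetical stand-in for Apple's on-device image classifier."""
    raise NotImplementedError("placeholder for illustration only")

def handle_image_for_child(account: ChildAccount, image_bytes: bytes) -> list[str]:
    """Return the actions the Messages feature would take, per the description above."""
    actions: list[str] = []
    if looks_sexually_explicit(image_bytes):
        actions.append("blur the image and warn the child before it is viewed")
        if account.age <= 12 and account.parental_notifications_enabled:
            actions.append("notify the parent account")
    return actions
```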

Why is this a privacy concern?

As the EFF blog noted, Apple’s new systems may have the right end goal, but they essentially amount to a backdoor into the company’s platforms, something law enforcement and governments have wanted for ages.

“We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses,” the privacy advocacy group said in the same blog post.

Additionally, WhatsApp’s chief executive officer (CEO), Will Cathcart, noted that the system could be misused by governments. “This is an Apple built and operated surveillance system that could very easily be used to scan private content for anything they or a government decides it wants to control. Countries where iPhones are sold will have different definitions on what is acceptable,” Cathcart said in a tweet.

“Instead of focusing on making it easy for people to report content that’s shared with them, Apple has built software that can scan all the private photos on your phone — even photos you haven’t shared with anyone. That’s not privacy,” he added.

You could argue that Cathcart, being part of the Facebook machinery, isn’t the right person to talk about privacy, but Apple’s new system has also been criticised by whistleblower Edward Snowden, as well as by politicians, activists, academics and others.