This past August, Apple announced a series of measures designed to fight the exponentially increasing crime of child exploitation. The new protections featured three main changes: an opt-in tool in Messages to detect and blur nude images sent to or from children's accounts, on-device detection of known child sexual abuse material (CSAM) uploaded to iCloud Photos, and expanded guidance in Siri and Search.
These new features were to be implemented while holding to Apple's core value of protecting privacy and maintaining end-to-end encryption of messages. Child protection advocates were encouraged by these proactive measures by the world's most valuable corporation to stem online child exploitation at a time when it is reaching crisis levels. And we must give Apple credit for announcing this move before the recent wave of Senate hearings that placed Facebook, YouTube, Snapchat, and TikTok executives in the Hill hot seat, amplifying to the general public what so many of us have been saying for years: Big Tech is not only failing miserably at protecting kids, but it is preying on them, perpetuating the ills, and even profiting from the harms.
Just as importantly, Apple was making a critical point that privacy and child protection are not mutually exclusive, and that this dichotomy is a "false choice" (to use Facebook whistleblower Frances Haugen's term) that privacy-rights groups propagate. And in fact, the well-resourced privacy-rights groups did just that after Apple's August announcement, unleashing a coordinated campaign against Apple. Tech experts like Hany Farid pushed back, calling Apple's steps "modest and long overdue" and noting that they are necessary, not even all that new, and limited to only a portion of child sex abuse material, since they do not apply to videos.
Unfortunately, the outrage unleashed by privacy-absolutist groups, critics' unfounded cries of potential abuse, and pushback from Apple's Big Tech peers (hypocritical, Farid notes, and perhaps worried they would be compelled to follow Apple's principled lead) forced Apple to pause implementation of these features pending further consultation with experts, delaying tools that could quite literally be saving children's lives.
The National Center on Sexual Exploitation joined ECPAT and dozens of leading child safety organizations around the globe in expressing support for the industry-leading steps Apple was taking, encouraging Apple to roll out these features as soon as possible, establish a clear timeline, and "go even further" in protecting children.
There were no further updates after early September until last week, when several news sources reported that the iMessage opt-in tool to blur nude images sent through the app to kids 17 and under (based on Apple ID) would be rolled out in beta, but with changes from the initial announcement: parents of children ages 13 and under would not receive an automatic alert if their kids viewed or sent a flagged (likely sexually explicit) image. In fact, that option won't even exist, so parents have no way of knowing if their young child has been exposed to or is engaging in high-risk, potentially illegal, and life-altering activity.
NCOSE strongly believes in the need for parents to have greater control over what their young children experience online. We are disappointed that this critically important feature was removed from the initial plans.
We're also dismayed that this tool will not be turned on automatically: parents will need to turn it on for their children, which inherently leaves children who do not have the privilege of highly involved caretakers vulnerable to grooming and abuse. Putting all of the burden on young children to determine what is and isn't appropriate to view presumes too much, given that critical brain development is still ongoing.
Though the iMessage feature is a step in the right direction, it also places an incredible burden on children to consent to accept, view, and/or send nude images. The growing trend of sexting is in itself very risky and can cause incredible harm to the child, and the resulting images would likely be considered child sex abuse material, which is a federal crime to create or possess. Disturbingly, among 9–12 year olds surveyed in 2020, 1 in 7 said they had shared their own nudes, up from 1 in 20 in 2019, according to a report by the child safety organization Thorn. Even more terrifying, 50% of 9–17 year olds who reported sending a nude image sent it to someone they had never met in real life, and 41% believed they were sending the images to an adult. (Thorn's blog on the report is a MUST-read for anyone with children in their lives.)
Furthermore, survivors of all forms of sexual abuse and exploitation, law enforcement, and child safety experts consistently warn that sharing sexually explicit imagery is a primary way predators groom children, often by posing as children themselves or by using nude images for sextortion: blackmail used to coerce children into doing what the predator wants. And even when children share images with each other, perhaps out of age-appropriate curiosity, it is common for those images to then be shared with other people and/or uploaded to the internet (onto porn sites or social media platforms).
It's important to note that even Apple's original plan to alert parents was a feature that would have been triggered after the fact, meaning once the child had already viewed or sent a nude image.
Many parents don't even know about these risks or understand the growing crises of CSAM (including self-generated CSAM/sexting), sextortion, and image-based sexual abuse. How can teens and tweens understand them? How can an 8-year-old with a smartphone possibly understand? We also know that many children aren't going to say anything because they're too ashamed, or because they have no idea who they can confide in. We urge Apple to reconsider their decision not to alert parents, or at the very least to let parents decide what is best for their young children by giving them the ability to block nude images altogether and/or to be notified.
We also ask Apple to turn on this tool by default for Apple IDs under the age of 18, to help prevent life-altering, even criminal, activity from occurring. Parents are often overwhelmed and frustrated with the many steps already required to set up and use Screen Time, and many simply never do. Defaulting to safety would also further protect children who don't have an adult in their life with the capacity or desire to provide the necessary oversight of their online life. Turning this tool on by default could, at the very least, give children pause before making a terrible decision with high-risk consequences they literally don't have the brain development to fully understand.
NCOSE and our ally Protect Young Eyes, together with other child safety organizations, have reached out to Apple about the new iMessage tool and CSAM scanning, and have encouraged them to consider several other areas where Apple could continue protecting kids on their products.
Please join NCOSE and Protect Young Eyes in thanking Apple for taking concrete measures to protect kids on their products, and in asking them to take additional, critically necessary, common-sense steps, especially when the stakes are so high.
From: https://endsexualexploitation.org/articles/apple-should-protect-children/