Hello all, and thanks for reading today.
Updated: August 12, 2021
Published: August 10, 2021
Last week, Apple announced a new system that aims to limit Child Sexual Abuse Material (CSAM). That is, child pornography. Which is, of course, abuse. Specifically, Apple is introducing new features to help it discover when CSAM exists so it can report it to authorities. The most controversial of these features is Apple's plan to use machine learning to determine whether the images iOS users store in the cloud contain CSAM.
While platforms are legally required to report CSAM when they find it, they aren't required to actively search for it. This change means that Apple will scan cloud-stored images and flag content that matches material the National Center for Missing and Exploited Children has identified as child sexual abuse material.
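For readers who want a more concrete sense of what "matching" means here, below is a deliberately simplified, hypothetical sketch in Python. It is not Apple's implementation (Apple has said its system relies on a perceptual hashing scheme and cryptographic checks); the fingerprint list, folder name, and helper functions are illustrative assumptions. It only shows the general pattern of comparing an image's fingerprint against a list of known fingerprints and flagging matches for review.

```python
# Hypothetical, simplified illustration of hash-based image matching.
# NOT Apple's actual system: Apple reportedly uses a perceptual hash
# ("NeuralHash") plus cryptographic protocols; here we use a plain SHA-256
# digest, which only matches exact byte-for-byte copies.

import hashlib
from pathlib import Path

# Fingerprints of known abuse images, assumed to be supplied by a
# clearinghouse such as NCMEC. The contents here are placeholders.
KNOWN_FINGERPRINTS = set()  # e.g. {"3a7bd3e2...", ...}

def fingerprint(image_path):
    """Compute a fingerprint (SHA-256 digest) for an image file."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()

def should_flag(image_path):
    """Return True if the image's fingerprint matches a known entry."""
    return fingerprint(image_path) in KNOWN_FINGERPRINTS

# Usage sketch: check each photo queued for cloud upload and escalate matches
# rather than acting on them automatically.
for photo in Path("upload_queue").glob("*.jpg"):
    if should_flag(photo):
        print(f"Match found for {photo}; escalate for review and reporting.")
```

In a real deployment the fingerprinting would use a perceptual hash so that resized or re-encoded copies still match, and a match would typically trigger human review rather than an automatic report.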
When I first saw the Apple news, I went into privacy advocate mode by default. I'm sorry. It's an instinct. It felt like: Here we go again. The old argument, popularized post-9/11, that we have to give up our privacy to have security and protect the innocent.
But when I explored the issue further, I came across a Twitter thread by Alex Stamos, a longtime privacy and security champion who famously left Facebook in 2018 over the company's unwillingness to be transparent.
This is the Tweet that stopped me in my tracks. Stamos said: "First off, a lot of security/privacy people are verbally rolling their eyes at the invocation of child safety as a reason for these changes. Don't do that. The scale of abuse that happens to kids online and the impact on those families is unfathomable."
Stamos, who was not only Facebook's Chief Security Officer from 2015 to 2018 but also Yahoo's Chief Information Security Officer before that, points to research indicating that 3-5% of males "harbor pedophiliac tendencies." To be clear, pedophiliac tendencies do not a predator make. But Facebook has caught 4.5 million users posting CSAM, and that's likely just one-tenth of the potential offenders still operating on the platform.
Then, I read a case Stamos shared in which a man named Buster Hernandez brutally sexually extorted and "threatened to kill, rape and kidnap more than 100 children and adults" across the U.S. and internationally from 2012 until his arrest in 2017.
Stamos described his experience watching the trial.
"Earlier this year I sat in a courtroom with a dozen teenage girls and women who were victimized on my watch. I heard them explain the years of psychological terror they experienced. I saw the self-cutting scars on their arms and legs.
"Don't minimize their pain. Just don't."
I then read some of the transcripts between Hernandez and his online victims, and I flashed back to my early days on the internet. I suffered nothing near what his victims did, to be clear. But here's what I lived, as a normal, middle-class girl in Portland, Maine.
I was 13. Facebook wasn't a thing yet, but AOL Instant Messenger was. It was the phase in our teenage lives when we were experiencing the innocence of first kisses. If you can remember being 13, it's a time of enormous insecurity, naivety and sexual exploration. When Instant Messenger became popular, my friends and I suddenly started to get messages from unknown males who seemingly just wanted to chat. Seemed fun! It was exciting at first when boys, or even men, messaged us and, slowly, over time, started to flirt.
But when it was a stranger who'd messaged me out of the blue, the conversation almost inevitably turned into something that felt uncomfortable. I didn't yet have the language or education to understand that I was being preyed upon, but I knew the kind of messages I was getting had to be kept secret. If my mom or dad entered the room, I closed the chat window quickly. But even in my discomfort, there was excitement. I liked having what felt like adult conversations, and I liked feeling wanted.
If and when the conversation started to feel dangerous, I bailed. There were times when the boys or men messaging me would begin to ask for things, both virtually and in person. Even in my youth, I knew that feeling scared instead of excited meant I should get out of the situation. I would block the user, and that would be the end of it.
Now, 15 years later, perpetrators are using much scarier tactics.
In Hernandez's case, he used aliases (at least 315 different email and social media accounts) to send private messages to his victims. He would say things like, "How many guys have you sent dirty pics to cause I have some of you." He would convince them that he already had nude images of them (parents, beware: teens are sending these images to each other on the regular) and threaten to make those images public if they didn't send him sexual pictures of themselves. Worse, he would threaten to kill members of their families or to "blow up" their schools.
This is just one example; one person who destroyed the lives of more than 100 young girls. And if Stamos's figures are correct, offenders like Hernandez who have been caught represent only a tenth of the overall problem. I now see that sextortion and pedophilia online are a far more significant problem than I initially realized.
That doesn't mean Apple's plan is necessarily a good one. Stamos says he'd rather see Apple come up with a way for users to report exploitative messages in iMessage directly or for the company to "staff a child safety team to investigate the worst reports."
It's true; this Apple plan needs some tweaking. But now that I've looked at the numbers and read the wrenching stories in court documents of children who turned to self-mutilation or suicide attempts because they weren't protected, I'm much more willing to listen to these kinds of mass-scale and admittedly privacy-invasive proposals.
Are you?
We're going to chat about this and more at our Twitter Spaces event this week, August 12 at 1 p.m. Pacific, 4 p.m. Eastern. Join us and listen in! It's audio-only. Like a podcast, but live. Important: You must join from your phone using the Twitter app (desktop doesn't work properly). I'd love to hear your thoughts. Enjoy reading, and I'll see you next week!
Apple clarifies plans to scan iPhone devices for child abuse images
This week, Apple announced that it would scan iPhone users’ photo libraries stored in the cloud for child abuse images, South China Morning Post reports. Though Google and Microsoft also check uploaded photos for child exploitation images, security experts say Apple’s plan is more invasive. While the program doesn’t include scanning videos, Apple says it plans to “expand its system in unspecified ways in the future,” the report states.
Read Story
Is CCPA working? It's hard to say
When California passed its privacy law, it aimed to give Californians the privacy rights Washington hasn’t been able to provide. But one year in, “it’s almost impossible to tell how many Californians are taking advantage of their new rights, or precisely how the biggest players are complying,” Politico reports. Microsoft, Google and Apple have widely disparate numbers on petitions to delete customer data since the California Consumer Privacy Act came into effect.
Read Story
Get ready for CCPA 2.0: This provision begins earlier than you think
While California's update to its current privacy law doesn't go into effect until Jan. 1, 2023, companies will need to be able to report on the personal data they've collected about any California resident during the 12 months prior to that date, which means data collected starting Jan. 1, 2022 is already in scope. While this specific provision of the California Privacy Rights Act, which will replace the California Consumer Privacy Act, is not well understood, it's essential that companies understand it or potentially face massive fines, ZDNet reports.
Read Story
Google to ban targeted ads to users under 18
Google has announced it’ll limit companies’ targeted advertising to users younger than 18, Bloomberg reports. It’s one of several changes the company will make to improve privacy protections for teens. Google also said it’s planning privacy changes for YouTube, Google Assistant and the Google Play Store. The announcement follows Facebook’s recent change to ban targeting users under 18 on Instagram.
Read Story
After Apple privacy changes, advertisers begin a pivot to Amazon
When Apple announced privacy changes on iPhones that would make Facebook advertising less effective, brands started to seek alternatives. Given Amazon's 153 million Prime subscribers, Bloomberg reports that it was an "obvious choice" for advertisers. The shift began when Apple started asking users whether they'd allow companies to track their internet activity, consent that users are granting to apps only 25% of the time.
Read Story
Fintech company Plaid reaches $58M settlement for alleged privacy violations
Financial technology company Plaid has reached a $58 million settlement over a privacy case, CyberScoop reports. Plaintiffs in the case alleged Plaid was deceptive and violated their privacy by taking data from their financial accounts without consent and selling their transaction histories. The service connects customers' banking accounts to financial apps like Venmo. The proposed $58 million settlement awaits court approval.
Read Story
Are you in the process of refreshing your current privacy policy or building a whole new one? Are you scratching your head over what to include? Use this interactive checklist to guide you.
Download Now
Osano Staff is a pseudonym used by team members when authorship may not be relevant. Osanians are a diverse team of free thinkers who enjoy working as part of a distributed team with the common goal of making the internet more transparent.
Osano is used by the world's most innovative and forward-thinking companies to easily manage and monitor their privacy compliance.
With Osano, building, managing, and scaling your privacy program becomes simple. Schedule a demo or try a free 30-day trial today.