Hello all, and happy Thursday!
Last week, Telegram CEO Pavel Durov was arrested in Paris, allegedly for failing to cooperate with authorities investigating Telegram and for complicity in allowing users to engage in criminal activity on the messaging platform.
When it comes to data privacy, the case has interesting implications. For one, it underscores the tug-of-war between privacy and law enforcement. To what extent should our technologies protect their users’ privacy? Do we pick and choose who gets to be protected, or do we also protect the privacy of those engaged in criminal activity? How do we differentiate between users involved in benign versus criminal activities?
Interestingly, Telegram is inherently less private than many other messaging apps; unlike, say, WhatsApp, it doesn’t enable end-to-end encryption by default. Despite being technically less private, Telegram still attracts bad actors because it makes functionally zero effort at content moderation.
Here, we come to another fundamental conflict at the core of data privacy. To what extent should these large social media platforms engage in content moderation efforts? After all, effective moderation, in most cases, requires some invasion of privacy. Most people would agree that zero moderation (as is the practice at Telegram) isn’t the answer.
For most of us, these questions are purely philosophical. But for the large social media platforms that enable global communication, they’re questions with very real answers and—as we’ve seen with Durov’s arrest—real consequences.
Best,
Arlo
Listen to part two of our conversation with Keith Enright, Chief Privacy Officer of Google.
Access 50 free resources that privacy pros can put into immediate use in their work.
Thursday, September 12th | Save Your Seat
Are you or someone you know a good fit for the roles below? Osano is proud to be a growing, award-winning place to work. Join us in our mission to transform how businesses manage data privacy.
The Federal Trade Commission (FTC) found that Verkada, a security camera vendor, violated the CAN-SPAM Act by bombarding prospective customers with promotional emails without giving them opt-out choices. Moreover, the camera vendor was found to have failed to implement basic security measures to protect its cameras from unauthorized use. As a result, hackers were able to access live video feeds from internet-connected cameras. The FTC is requiring Verkada to create a comprehensive information security program as part of a settlement.
The FCC has signed a Memorandum of Understanding ("MoU") with the Privacy Commissioner in Canada with the goal of bolstering enforcement against illegal robocalls. The agreement continues a trend of regulator partnerships, acknowledges the multi-jurisdictional nature of data privacy, and may signal increased enforcement of the Telephone Consumer Protection Act (TCPA).
Pavel Durov, the founder and CEO of the messaging app Telegram, was arrested in Paris recently over allegations that his platform is being used for illicit activity, including drug trafficking and the distribution of child sexual abuse images. Specifically, the platform allegedly refused to share information or documents with investigators when required by law.
A California proposal to impose safety standards for powerful AI models cleared a major hurdle on Wednesday by passing the state Assembly, overcoming fierce resistance from tech companies and leading California House Democrats. The bill would require developers of the largest AI models to certify that they have conducted safety testing before deployment, with the aim of protecting people from potential dangers like the creation of bioweapons. The bill has divided Silicon Valley: top figures such as Elon Musk support the measure as a way to help mitigate potential risks to the public, while opponents such as OpenAI argue its requirements would unduly burden developers, especially small startups.
India’s Information and Broadcasting Minister Ashwini Vaishnaw told the media on August 19 that the central government is expected to release a draft of the rules under the Digital Personal Data Protection Act (DPDP Act) within a month. The intention appears to be to simplify the rules and release them for public consultation. After the rules are published, the public consultation period may last between 45 and 60 days, subject to further extensions, to gather comprehensive feedback.
There's more to explore:
We go deeper into additional privacy topics with incredible guests monthly. Available on Spotify or Apple.
The book inspired by this newsletter: Osano CEO Arlo Gilbert covers the history of data privacy and how companies can start building a privacy program from the ground up. More details here.
If you’re interested in working at Osano, check out our Careers page!