Published: October 7, 2024
With our webinars, there are always plenty of good questions and not enough time to answer them all satisfactorily. That was especially true in our recent webinar, When AI Meets PI: Assessing and Governing AI from a Privacy Perspective.
Our audience asked some terrific questions, and while we ran out of time to address them during the webinar, we are committed to providing the answers. With AI, there’s no such thing as too much information, and there is a lot of confusion and uncertainty. The more you know, the better equipped you are to use AI in a responsible and privacy-forward way.
We asked our Head of Privacy, Rachael Ormiston, to answer our AI webinar questions, plus some of the more interesting questions we've received about AI and privacy lately. Here are her answers.
During the webinar, our audience brought up three frameworks they commonly see:
All are gaining traction as companies try to establish programs to support AI governance. In my view, the two gaining the most momentum are the NIST and ISO 42001 standards.
The NIST AI 600-1 RMF is a voluntary framework to manage AI-related risk, developed in part to fulfill an October 30, 2023, Executive Order. It focuses on ethical and responsible development of AI assets and practices, and there are companion playbooks to support it. One advantage to using this framework is, if you are already using the NIST Privacy or NIST Security frameworks in your program, there is some familiarity in focus and structure that allows crosswalks to other elements of your operations.
The ISO 42001 framework helps companies fulfill their AI obligations by integrating into existing processes and programs. It focuses on responsible deployment of AI. As with other ISO frameworks, ISO 42001 requires independent assessment by a third-party auditor—and as a result, it is a useful way to demonstrate external validation of your efforts.
Before your organization starts using AI, you want to make sure you can quantify the associated privacy risks. As with other applications and data sources, the assessment begins with understanding what the AI will accomplish:
But there are other AI-centric factors that we didn’t have to consider before:
Osano offers an AI Assessment template that can be a great starting point for privacy teams.
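To make the idea concrete, here is a minimal sketch of how a privacy team might represent an assessment record in code. The fields and flags below are hypothetical illustrations of the AI-centric questions discussed above; they are not taken from Osano's actual template.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: these fields are illustrative, not Osano's template.
@dataclass
class AIAssessment:
    system_name: str
    purpose: str                       # what will the AI accomplish?
    personal_data_categories: list = field(default_factory=list)
    trains_on_our_data: bool = False   # does the vendor train models on our inputs?
    output_reviewed_by_human: bool = False

    def risk_flags(self) -> list:
        """Return plain-language flags for a privacy reviewer to follow up on."""
        flags = []
        if self.personal_data_categories:
            flags.append("Processes personal data: " + ", ".join(self.personal_data_categories))
        if self.trains_on_our_data:
            flags.append("Vendor may train on submitted data")
        if not self.output_reviewed_by_human:
            flags.append("No human review of AI output")
        return flags

assessment = AIAssessment(
    system_name="Support chatbot",
    purpose="Draft replies to customer tickets",
    personal_data_categories=["name", "email"],
    trains_on_our_data=True,
)
for flag in assessment.risk_flags():
    print("FLAG:", flag)
```

Even a lightweight structure like this makes it harder for an AI tool to enter production without someone answering the basic privacy questions first.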
Once data is in an LLM, it can be hard to remove. Therefore, it’s very important to assess:
There will be some types of LLMs where you cannot remove the risk of violating a privacy right. In those cases, you simply cannot provide personal data, and you will need to set clear guidelines regarding AI usage.
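Clear guidelines can be backed by technical guardrails. As a toy illustration (the patterns below are illustrative and nowhere near a complete PII detector), a team might screen prompts for obvious identifiers before they ever reach an external LLM:

```python
import re

# Toy guardrail sketch -- not a production PII detector. Real deployments
# would use a dedicated classification/redaction service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, hits): hits lists which PII types were detected."""
    hits = [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt("Summarize this ticket from jane@example.com")
print(allowed, hits)  # → False ['email']
```

A screen like this is a backstop, not a substitute for policy: the guidelines themselves should define which tools and data categories are approved in the first place.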
The EU AI Act has a staggered schedule for when specific provisions will come into effect. General purpose AI model obligations come into effect in August 2025. If you want a thorough rundown of dates and details, we highly recommend checking out the excellent chart created by the Future of Privacy Forum.
In our own experience as a company, we have found that the terms and conditions around AI can vary based on the tech company and the pricing plan. At Osano, we are very cautious about any data that we contribute becoming part of an AI model while we're still learning about its usage. As a result, when we use AI, it is only with approved vendors who do not use our data. Because that typically requires an enterprise plan, it costs us more, but it is an investment we take seriously to ensure we manage data responsibly.
Here are some AI questions we've heard from privacy pros, gleaned at events, and received from inquiring minds in recent months.
I think we are starting to see the initial AI hype dissipate. However, privacy pros should still take AI seriously and pay close attention to assessing and governing how it shows up in your organization.
GenAI has become more tangible over the past two years, but AI as a technology is nothing new. We've recently seen some significant strides forward in how we can and should use AI. As companies continue to embrace AI and find uses for it, privacy pros should be working quickly and diligently to establish appropriate guardrails for responsible usage.
In my career, I've seen many innovations seem to plateau or slow down, only to rapidly gain momentum again later. I think AI will go through the same ebbs and flows.
AI can feel daunting, even to engineers. But you do not have to be an AI expert to red flag AI issues. As we heard from Emily and Chris in the webinar, privacy pros are well positioned to support AI governance because of the skills they already have. We know of privacy pros who have completed AI workshops, such as those offered by the IAPP, while others have spent quality time with their engineers.
With AI, there is also plenty of opportunity for experimentation, either at home with ChatGPT or by watching simulations online. That might not be the right approach for everyone, but I think the key is not to feel intimidated and to experiment as you feel comfortable. We're all learning!
In many ways, yes, but remember, there are many flavors of AI. That means sometimes different rules are necessary. That said, it would be great to have some degree of simplicity. Thinking back to Scott’s coffee analogy from the webinar, we might need a few different types of coffee on the menu. But we don’t want every single variation of decaf, nut milk, iced, whipped, etc. With AI I do worry that we may end up with an unnecessarily long menu unless we see some uniformity in regulation.
Yes, I think we are headed in that direction. We know that in some states, you must disclose how GenAI is being used. And in other states, there are requirements for specific impact assessments. I think it is valuable for organizations to be proactive and share more about their AI usage and what they do with it, particularly in an era when trust is easily lost. I'd love to see this become market standard.
I find that Taylor Swift memes help. But I understand that not all engineers are Swifties.
In all seriousness, the best way to build a strong relationship is to communicate well and often, and to simplify your explanations and requests. Make it as easy as possible for them to work with you. Checklists are great, especially when you can integrate them with a ticketing system like Jira or Azure DevOps. Also, begin involving the engineering team in your privacy impact assessments now to build muscle memory for when they will regularly need to weigh in on AI.
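As a hedged sketch of that checklist-plus-ticketing idea, here is one way to build a Jira "create issue" payload that embeds a privacy checklist. The project key, issue type, and checklist items are placeholders; adapt them to your own Jira (or Azure DevOps) setup.

```python
import json

# Placeholder checklist items -- substitute your own privacy review questions.
PRIVACY_CHECKLIST = [
    "What personal data does this feature touch?",
    "Is the data shared with any third party or AI vendor?",
    "Has a privacy impact assessment been completed?",
]

def build_privacy_ticket(summary: str, project_key: str = "PRIV") -> dict:
    """Build a Jira 'create issue' payload embedding the privacy checklist."""
    checklist = "\n".join(f"* {item}" for item in PRIVACY_CHECKLIST)
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Task"},
            "summary": summary,
            "description": "Privacy review checklist:\n" + checklist,
        }
    }

payload = build_privacy_ticket("Privacy review: AI chatbot rollout")
print(json.dumps(payload, indent=2))
# This payload would be POSTed to Jira's /rest/api/2/issue endpoint with
# your instance URL and credentials (e.g., via the requests library).
```

Baking the checklist into ticket creation means engineers see the privacy questions inside the tool they already use, rather than in a separate document.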
Our webinar, When AI Meets PI: Assessing and Governing AI from a Privacy Perspective, contains much more useful information about how to ensure that AI is being used responsibly and with privacy in mind, including:
This recording, along with others, is available in our Resources section.
Looking for templates to inform your privacy and AI assessment efforts? Looking for templates plus dozens of other free resources, like trackers, checklists, and expert guidance? Our free bundle centralizes the internet's best free resources for data privacy pros.
Download Now
Rachael Ormiston is the Head of Privacy at Osano. With over 15 years of professional experience, she has deep domain expertise in Global Privacy, Cybersecurity, and Crisis and Incident Response. Rachael is an IAPP FIP and has previously served on the IAPP CIPM Exam Development board. She has a personal interest in privacy risk issues associated with emerging technologies.
Osano is used by the world's most innovative and forward-thinking companies to easily manage and monitor their privacy compliance.
With Osano, building, managing, and scaling your privacy program becomes simple. Schedule a demo or try a free 30-day trial today.