Proofpoint completes acquisition of Normalyze. Read more.
What is DSPM?

FEATURED

Gartner® Innovation Insight: Data Security Posture Management
Get Report
PLATFORM
The Normalyze Platform
Supported Environments
Platform Benefits
Solution Differentiators
Data Handling for DSPM
USE CASES

Reduce Data Access Risks

Enforce Data Governance
Eliminate Abandoned Data

Secure PaaS Data

Enable Use of AI

DSPM for Snowflake

MARKETS

Healthcare
Retail
Technology
Media
M&A

FEATURED

DSPM Buyer's Guide: Report
DSPM Buyer's Guide

A toolkit to help gather internal DSPM requirements and evaluate vendors

Get Your Copy

FEATURED

CYBER 60: The fastest-growing startups in cybersecurity
Get Report

Unlocking the Value of AI: Safe AI Adoption for Security Practitioners

Ravi Ithal

January 13, 2025

As a security practitioner or CISO, you likely find yourself in a rapidly evolving landscape where the adoption of AI is both a game-changer and a challenge. In the recent webinar, Safe AI Adoption: Protecting Your Brand and Culture, I had the opportunity to delve into how organizations can align AI adoption with business objectives while safeguarding security and brand integrity. I’d like to thank Michelle Drolet, CEO of Towerwall, Inc. for hosting the discussion and Diana Kelley, CISO at Protect AI for participating with me.

Here are the key takeaways from the discussion that I believe every CISO and security practitioner should consider when integrating AI into their organization:

Visibility is Your Foundation

The first and most critical step is gaining visibility into how AI is being used across your organization. Whether it’s generative AI tools like ChatGPT or custom predictive models, understanding where and how these technologies are deployed is essential. As I mentioned during the webinar, “You cannot protect what you cannot see.” Start by identifying all large language models (LLMs) and AI tools in use, and map out the data flows associated with them.

Balance Innovation with Guardrails

AI adoption is inevitable, and the “hammer approach” of banning its use outright rarely works. Instead, create tailored policies that balance innovation with security. For instance:

  • Define departmental policies specifying what types of data can interact with AI tools.
  • Implement enforcement mechanisms to prevent sensitive data from being shared inadvertently.

These measures empower employees to leverage AI’s capabilities while maintaining robust security protocols.

Education is Key

One of the biggest challenges in AI adoption is ensuring that employees understand the risks and responsibilities involved. Traditional security awareness programs focused on phishing or malware need to evolve to include AI-specific training. Employees must be equipped to:

  • Recognize the risks of sharing sensitive data with AI.
  • Create clear policies for complex techniques like data anonymization to prevent inadvertent exposure of sensitive information.
  • Appreciate the importance of adhering to organizational policies.

Proactive Threat Modeling

AI introduces unique risks, such as accidental data leakage or “confused pilot” attacks, where AI systems inadvertently expose sensitive information. Conduct thorough threat modeling for each AI use case:

  • Map out architecture and data flows.
  • Identify potential vulnerabilities in training data, prompts, and responses.
  • Implement scanning and monitoring tools to observe interactions with AI systems.

Leverage Modern Tools Like DSPM

Data Security Posture Management (DSPM) is an invaluable framework for securing AI. By providing visibility into data types, access patterns, and risk exposure, DSPM enables organizations to:

  • Identify sensitive data being used for AI training or inference.
  • Monitor and control who has access to critical data.
  • Ensure compliance with data governance policies.

Test Before You Deploy

AI is nondeterministic by nature, which means its behavior can vary unpredictably. Before deploying AI tools, conduct rigorous testing:

  • Red team your AI systems to uncover potential vulnerabilities.
  • Use AI-specific testing tools to simulate real-world scenarios.
  • Establish observability layers to monitor AI interactions post-deployment.

Collaborate Across Departments

Effective AI security requires cross-departmental collaboration. Engage teams from marketing, finance, compliance, and beyond to:

  • Understand their AI use cases.
  • Identify risks specific to their workflows.
  • Implement tailored controls that support their objectives while safeguarding the organization.

Final Thoughts

By focusing on visibility, education, and proactive security measures, we can harness AI’s potential while minimizing risks to our organizations.

If there’s one piece of advice I’d leave you with, it’s this: Don’t wait for incidents to highlight the gaps in your AI strategy. Take the first step now by auditing your organization’s AI usage and building the foundation for secure adoption.

Let’s embrace AI responsibly and lead the way in securing the future. 

For more insights, feel free to connect with me, watch the webinar on demand, or explore Normalyze’s latest advancements in AI security

Ravi Ithal

Ravi has extensive background in enterprise and cloud security. Before Normalyze, Ravi was the cofounder and chief architect of Netskope, a leading provider of cloud-native solutions to businesses for data protection and defense against threats in the cloud. Prior to Netskope, Ravi was one of the founding engineers of Palo Alto Networks (NASDAQ: PANW). Prior to his time at Palo Alto Networks, Ravi held engineering roles at Juniper (NASDAQ: JNPR) and Cisco (NASDAQ: CSCO)