Using Generative AI to Simplify Incident Response

Gautam Kanaparthi
May 16, 2023

Generative AI seems to be the topic du jour of 2023. While much of the discussion around generative AI in the cybersecurity domain has focused on the risk of data leakage posed by employees’ use of ChatGPT and on how attackers can leverage AI, we’re applying AI to help security practitioners become more productive and effective in their day-to-day work.

Here are a couple of examples of how Normalyze is using generative AI to help security analysts.


Natural language search

Every security tool has its own way of prioritizing and presenting the risks it detects. Depending on their responsibilities – data security vs. vulnerability management vs. user security – security analysts may focus on different types of threats within the same platform. The same analyst may even start from a different set of risks depending on the task at hand, e.g., an individual data store, a specific type of data breach risk, or a particular compliance violation.

To support these scenarios without forcing security analysts to learn a new workflow, Normalyze now offers natural language search powered by generative AI. Security analysts can phrase a query in natural language – as they would ask another person – and get back the top risks most relevant to their task.

A data security analyst could ask for the top risks on S3, for example, and a compliance analyst could ask for the top 10 risks related to GDPR, without having to learn Normalyze’s UI or any specific query language.
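
To make the idea concrete, here is a minimal sketch of one common way natural language search can be built: an LLM translates the analyst’s question into a structured filter that an ordinary search API can consume. This is an illustrative pattern only; the call_llm helper, the filter fields, and the example query are assumptions, not Normalyze’s actual implementation.

import json

def call_llm(prompt: str) -> str:
    """Stand-in for any LLM completion API (hosted or self-managed).
    Hypothetical helper -- not part of the Normalyze product."""
    raise NotImplementedError

SYSTEM_PROMPT = """Translate the analyst's question into a JSON risk filter with these keys:
  resource_type  e.g. "s3", "rds" (optional)
  compliance     e.g. "GDPR", "PCI-DSS" (optional)
  sort_by        one of "severity", "data_sensitivity"
  limit          integer, default 10
Return only the JSON object."""

def natural_language_search(question: str) -> dict:
    # Ask the model to map free-form text to a structured filter,
    # then parse the JSON so it can be passed to a normal search API.
    raw = call_llm(SYSTEM_PROMPT + "\nQuestion: " + question)
    return json.loads(raw)

# e.g. natural_language_search("top 10 risks related to GDPR")
# might yield: {"compliance": "GDPR", "sort_by": "severity", "limit": 10}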


Contextual remediation guidance

We hear from many security teams that they often lack the permissions to remediate the risks they identify; the DevOps or engineering team is usually the one with the access and privileges to make the required changes. This leads to a lot of back-and-forth between incident response analysts and people outside their immediate team to communicate the priority of the issue, the evidence that justifies an action, and the appropriate steps to resolve it.

Normalyze’s generative AI engine adds contextual remediation guidance for every risk. It generates detailed instructions for several remediation paths, including commands to run in the CLI, actions to perform in the cloud console, and Terraform code to apply. These instructions minimize the coordination required between security and DevOps/engineering team members and reduce time to remediation.
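
To illustrate the kind of guidance this can produce, here is a minimal sketch of how risk context might be assembled into a prompt, along with the sort of remediation a model might suggest for a common finding such as a publicly readable S3 bucket. The risk fields, prompt wording, and example commands are illustrative assumptions, not Normalyze’s actual engine or output.

def build_remediation_prompt(risk: dict) -> str:
    """Assemble the context an LLM needs to produce actionable remediation steps.
    The risk fields shown here are illustrative, not Normalyze's schema."""
    return (
        f"Risk: {risk['title']}\n"
        f"Resource: {risk['resource_arn']}\n"
        f"Severity: {risk['severity']}\n"
        f"Evidence: {risk['evidence']}\n"
        "Provide remediation steps as: (1) CLI commands, "
        "(2) cloud console actions, (3) Terraform changes."
    )

# Example risk a security analyst might hand off to an engineer:
risk = {
    "title": "S3 bucket with sensitive data is publicly readable",
    "resource_arn": "arn:aws:s3:::example-bucket",
    "severity": "Critical",
    "evidence": "Bucket policy allows s3:GetObject to Principal '*'",
}

prompt = build_remediation_prompt(risk)

# Given this prompt, a model might suggest, among other steps, blocking
# public access at the bucket level, e.g.:
#   aws s3api put-public-access-block --bucket example-bucket \
#     --public-access-block-configuration \
#     BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true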


What’s next

By leveraging natural language search and contextual remediation guidance, security analysts can quickly identify and prioritize risks across multiple platforms and take appropriate action to address them. With the help of generative AI, organizations can streamline their incident response workflows, reduce time to remediation, and improve overall security effectiveness. 

As we look to the future, we can expect generative AI to play an increasingly important role in the fight against cyber threats. It is time for organizations to explore the possibilities that AI offers and incorporate this innovative technology into their security strategies to stay ahead of the ever-evolving threat landscape.

Gautam Kanaparthi

Gautam is the Head of Product at Normalyze. He is passionate about building and scaling market-changing cybersecurity products. At Netskope, Gautam built multiple products from the ground up to help the company address new customer problems, including Nextgen Secure Web Gateway, Advanced Analytics, and Malware Scanning. Before Netskope, he was the principal product manager for Symantec Endpoint Security.