Exploring ChatGPT as an Incident Response Tool
Microsoft recently announced how they leverage ChatGPT to augment security operations, and specifically incident response (IR) teams, with AI. The announcement is part of a broader pattern of organizations discussing their experience augmenting IR teams with AI. Omar Alanezi does an effective job breaking down the traditional steps of incident response and exploring which are most affected by AI, and Cado Security has similarly concluded that AI can improve mean time to resolution.
I generally agree with their conclusions. For small-scale experimentation, they are accurate and reflective of the complexity of many enterprises. The difficulty for many organizations will be adopting these tools at scale in a way that delivers a positive impact, does not inject bad data into already fast-moving processes, and remains reproducible and auditable for later investigation and analysis.
When thinking about the adoption of Generative AI in our cyber teams, we must first define our operating principles:
- All applications of Generative AI should be approved by the CISO and other risk-informed executives. They should have direct visibility into where this capability is being used and which processes are excluded from it due to a larger-than-tolerable risk profile.
- The application of Generative AI should focus on the augmentation of existing teams and workflows with measured goals of enhancing their effectiveness and accuracy.
- No tools should be used for the first time during real-world events. All Generative AI tools explored and applied should be used multiple times in simulated events and tabletop exercises.
- Transparency must be part of Generative AI exploration and application, especially when the technology provides incorrect information. Transparency enables the organization to fine-tune where, how, and from which vendors this capability is leveraged, and it gives all teams visibility into learnings that affect how they plan processes, execute, and communicate.
- All interactions with Generative AI should be logged, made available for review, and maintained for the life of the products deployed in your enterprise. As Alanezi says, “Asking the right question is half the answer.” That requires a log and transparency to build organizational trust in Generative AI, refine processes for effective application, and demonstrate your diligence to outside parties when required. These logs also become vital parts of internal training programs for maximizing the effectiveness of the tools employees use; a minimal sketch of such logging follows this list.
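To make the logging principle concrete, here is a minimal, vendor-agnostic Python sketch that records every prompt and response as an append-only JSON Lines audit record. The file path, field names, and the `ask_model` callable are assumptions for illustration, not any specific product's API.

```python
import json
import hashlib
import datetime
from pathlib import Path
from typing import Callable

# Hypothetical audit log location; in practice, forward these records to your
# SIEM or log pipeline rather than a local file.
AUDIT_LOG = Path("/var/log/genai/ir_assistant_audit.jsonl")

def audited_prompt(ask_model: Callable[[str], str], analyst: str,
                   incident_id: str, prompt: str, model_name: str) -> str:
    """Send a prompt to a Generative AI model and record the full exchange.

    `ask_model` is whatever client call your vendor provides; it is passed in
    so the audit wrapper stays vendor-agnostic.
    """
    response = ask_model(prompt)
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "analyst": analyst,
        "incident_id": incident_id,
        "model": model_name,
        "prompt": prompt,
        "response": response,
        # Hash lets reviewers verify the record was not altered after the fact.
        "integrity": hashlib.sha256((prompt + response).encode()).hexdigest(),
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return response
```

Shipping these records to a central store means post-incident reviews, tabletop exercises, and audits all draw from the same evidence, which is exactly the transparency the principles above call for.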
Finally, no Generative AI post would be complete without exploring hallucinations. Every incident is unique, and ChatGPT and associated tools risk surfacing past experiences in unhelpful or even harmful ways, as highlighted by Adam Cohen Hillel at Cado Security: “see that the username ‘hacker’ was not actually involved in the incident—that was the model’s own invention.” Hallucinations are a growing concern in the AI community, and they drive many to require that humans stay in the loop for decision-making so these outputs are not actioned on. Models are regularly improved by understanding how these hallucinations make it into user results.
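One way to keep humans in the loop is a simple approval gate: model-suggested remediation actions are cross-checked against entities actually observed in the incident's telemetry and require explicit analyst confirmation before anything is executed. The sketch below is illustrative only; the `SuggestedAction` type and the telemetry check are assumptions, not part of any cited tool.

```python
from dataclasses import dataclass

@dataclass
class SuggestedAction:
    """A remediation step proposed by the model, never executed automatically."""
    description: str     # e.g., "Disable account 'hacker'"
    entities: list[str]  # accounts, hosts, or IPs the action touches

def approve_actions(suggestions: list[SuggestedAction],
                    known_entities: set[str]) -> list[SuggestedAction]:
    """Require an analyst to confirm each suggestion before it is actioned.

    Suggestions that reference entities absent from the incident's own
    telemetry (a common hallucination pattern) are flagged so the reviewer
    can reject them.
    """
    approved = []
    for suggestion in suggestions:
        unknown = [e for e in suggestion.entities if e not in known_entities]
        flag = f" [WARNING: not seen in telemetry: {', '.join(unknown)}]" if unknown else ""
        answer = input(f"Approve '{suggestion.description}'?{flag} [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(suggestion)
    return approved
```

In a real deployment, `known_entities` would come from your EDR or SIEM rather than a hand-built set, but the principle is the same: the model proposes, the analyst disposes.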
While Generative AI is not ready to take over the role of our IR teams, it is another powerful tool to augment them and provide more complete coverage of events, timelines, system interactions, and remediation guidance. As part of an IR toolbox, Generative AI can improve accuracy when reconstructing a series of events and offer deeper insight into their relationships and influence when investigating cyber incidents.
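As an example of that kind of augmentation, the sketch below asks a model to reconstruct a timeline from a sanitized log excerpt using the OpenAI Python SDK. The model name, prompt wording, and `summarize_timeline` helper are illustrative assumptions, not a vendor recommendation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_timeline(log_excerpt: str) -> str:
    """Ask the model to reconstruct a chronological timeline from sanitized logs."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute your approved model
        messages=[
            {"role": "system",
             "content": "You are assisting an incident responder. "
                        "Produce a chronological timeline of the events below. "
                        "Only reference entities that appear in the logs."},
            {"role": "user", "content": log_excerpt},
        ],
    )
    return completion.choices[0].message.content
```

Even a helper this small should sit behind the audit wrapper and the approval gate shown earlier, so its output is logged and reviewed before it shapes any response action.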
These tools do not replace our IR teams, nor do they enable us to decrease the size of our teams. As Generative AI tools continue to evolve, the winners in this market will be those that bring both industry experience and deep insight into your enterprise’s specific architecture, traffic patterns, user behaviors, and threats. The big win will be LLMs trained on a combination of industry data and localized, company-specific data; the intersection of those two is powerful for incident response and other cyber security tasks.