
Technical Blog Series: ChatGPT Signals Transformative Possibilities for SOAR and TIP Solutions


Threat intelligence Feb 24, 2023

One of the key benefits of using machine learning (ML) models in security is the ability to automate repetitive and time-consuming tasks. This aids organizations in preserving finite security resources for more critical tasks.

For example, manually converting an ingested threat intelligence package into a SNORT detection rule can be a time-consuming task. However, the entire process can be automated by pairing an ML model such as text-davinci, text-curie, or ChatGPT with a vendor-agnostic security orchestration and automation platform.

ChatGPT is fundamentally a Generative Pre-trained Transformer (GPT) model. Models of this kind are trained on vast amounts of text to predict the next word from the words that precede it. ChatGPT itself is a fine-tuned variant, refined with human feedback on top of the base GPT-3.5 model. At the time of writing, the ChatGPT API is in private beta, whereas APIs for the GPT-3 models (such as davinci and curie) are generally available. In our experiment, we used OpenAI's GPT-3 model to generate predictions.

This particular use case was built using the text-davinci-003 model, but the same playbook can be leveraged with the ChatGPT model to generate detections once the ChatGPT API is publicly released.
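
For readers who want to try the model outside the platform, the following is a minimal sketch of calling text-davinci-003 through OpenAI's Python library (openai 0.x, current at the time of writing); the prompt and the IP address are illustrative placeholders, not part of the production playbook.

```python
# Minimal sketch: ask text-davinci-003 for a SNORT rule. The API key is
# read from the OPENAI_API_KEY environment variable; the prompt and the
# IP address below are illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt=(
        "Write a SNORT rule that alerts on any outbound TCP traffic "
        "to the IP address 203.0.113.10."
    ),
    max_tokens=256,   # room for a complete rule
    temperature=0.2,  # low temperature keeps the output rule-like and stable
)

print(response["choices"][0]["text"].strip())
```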

The orchestration of detection generation has three core parts (a rough end-to-end sketch follows the list):

  • Receiving threat intel from the threat intelligence platform (TIP)

  • Processing the intel received into a detection rule using large-scale ML models

  • Communicating the generated detection for validation and implementation
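
The sketch below strings these three parts together; every name in it is a hypothetical stand-in for the CTIX, OpenAI, and email steps detailed in the rest of this post, not an actual Cyware or OpenAI API.

```python
# Rough end-to-end outline of the three core parts. All names here are
# hypothetical stand-ins for the CTIX, OpenAI, and email steps described
# later in this post.

def receive_from_tip() -> dict:
    # Part 1: a high-fidelity indicator as it might arrive from CTIX.
    return {"type": "ipv4-addr", "value": "203.0.113.10", "confidence": 92}

def generate_detection(indicator: dict) -> str:
    # Part 2: placeholder for the ML-model call shown later in this post.
    return (
        f'alert ip any any -> {indicator["value"]} any '
        '(msg:"High-fidelity IOC from TIP"; sid:1000001; rev:1;)'
    )

def notify_security_team(rule: str) -> None:
    # Part 3: placeholder for the review email sent to the security team.
    print(f"Rule ready for analyst review:\n{rule}")

notify_security_team(generate_detection(receive_from_tip()))
```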

For this blog, we'll be using:

  • CTIX as the threat intelligence platform (TIP)

  • Cyware Orchestrate as the vendor-agnostic SOAR platform

  • OpenAI's text-davinci-003 as the ML model

  • SNORT as the detection rule format

  • STIX 2.1 as the threat intel package format

Gaining Insights from the TIP

While automation can greatly assist in eliminating repetitive tasks, it can also generate massive amounts of information that need to be processed by human analysts.

To prevent this, security teams can write a rule in CTIX to send only high-fidelity intel to Cyware Orchestrate, ensuring detections are automatically generated for only those indicators that need active monitoring.

Image: An automation rule configured in CTIX enabling only high-fidelity indicators to be sent to Orchestrate for processing.

The above rule, in turn, triggers the Orchestrate playbook only for indicators that have a confidence score above 80%, are not false positives, and are not deprecated. This ensures that detection rules are generated and automated only for IOCs that indicate a genuine threat and warrant expedient, automated actioning.
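
Expressed as code, the triage logic looks roughly like the sketch below; the flag names are illustrative CTIX-style fields rather than an exact schema.

```python
# Minimal sketch of the triage logic the CTIX rule enforces. Only
# indicators passing all three checks are forwarded to Orchestrate.
# The field names are illustrative assumptions, not an exact schema.

def should_forward(indicator: dict) -> bool:
    return (
        indicator.get("confidence", 0) > 80
        and not indicator.get("is_false_positive", False)
        and not indicator.get("is_deprecated", False)
    )

# This indicator qualifies for automated detection generation.
print(should_forward({"type": "indicator", "confidence": 92}))  # True

# A deprecated indicator is filtered out, however high its confidence.
print(should_forward({"confidence": 95, "is_deprecated": True}))  # False
```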

Converting the Intel Received to SNORT Detections

Image: The processor playbook being triggered in Cyware Orchestrate.

Once the intel package is received by Cyware Orchestrate, a playbook is automatically executed to convert it into a SNORT detection. This playbook builds a prompt from the intel package received from the TIP and sends it to the large-scale ML model.

To get precise output from an ML model such as text-davinci-003, we need to provide prompts that describe exactly what we expect from it. Once the right prompt, containing specific details processed from the intel package, has been generated, Cyware Orchestrate uses its native OpenAI API integration to send it to either the default or a fine-tuned model available on OpenAI's cloud.
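
A prompt-construction step might look like the following sketch; the template wording and indicator fields are assumptions for the example, not Cyware Orchestrate's actual playbook syntax.

```python
# Illustrative sketch of the prompt-building step. The template wording
# and indicator fields are assumptions for the example, not Cyware
# Orchestrate's actual playbook syntax.

PROMPT_TEMPLATE = (
    "Generate a SNORT rule that detects network traffic involving the "
    "following indicator of compromise.\n"
    "Indicator type: {ioc_type}\n"
    "Indicator value: {ioc_value}\n"
    "Return only the SNORT rule, with a descriptive msg field."
)

def build_prompt(indicator: dict) -> str:
    # Fill the template with details processed from the intel package.
    return PROMPT_TEMPLATE.format(
        ioc_type=indicator["type"],
        ioc_value=indicator["value"],
    )

print(build_prompt({"type": "ipv4-addr", "value": "203.0.113.10"}))
```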

This is enabled via the Cyware Orchestrate application for OpenAI, which can be installed on the Cyware Orchestrate Platform.

Image: The overview section of the OpenAI app viewed in Cyware Orchestrate.

With actions enabling prompt completion, prompt editing, image generation, and image editing, Cyware’s connector for OpenAI directly leverages the native API documented here.

Once installed, the app can then be used to write custom orchestrations leveraging the ML model’s capabilities to analyze threat information and generate meaningful rules or policies to detect and mitigate threats, among other use cases. Security teams can also inject ML-powered actions into Cyware playbooks that are shipped alongside the product itself. Running the aforementioned workflow with a fully configured application results in the outcome shown below.

Image: Overall playbook run log.

Image: Node-level run log showing the ML detection being generated.

Image: Email sent to the security team after detection generation.

At the end of the day, while large-scale ML models are fairly intuitive and improving day by day, the generated detection rules still need to be reviewed for correctness, accuracy, and technical soundness before deployment.

To ensure that, we take the response from the large-scale ML model and email it to the security team, who can review the rule and then implement, modify, or drop it.
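
As an illustration of this hand-off, here is a minimal sketch using Python's standard library; the addresses and SMTP host are placeholders, and in the actual workflow this step runs as a node inside the Orchestrate playbook.

```python
# Minimal sketch of the review hand-off using Python's standard library.
# The addresses and SMTP host are placeholders; in the actual workflow
# this step runs as a node inside the Orchestrate playbook.
import smtplib
from email.message import EmailMessage

def email_rule_for_review(rule: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = "Generated SNORT rule awaiting review"
    msg["From"] = "soar-automation@example.com"
    msg["To"] = "security-team@example.com"
    msg.set_content(
        "The following detection rule was generated automatically from "
        "high-fidelity threat intel. Please review before deployment, and "
        "implement, modify, or drop it as appropriate:\n\n" + rule
    )
    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)
```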

Key Takeaway

The analysis above illustrates the utility of machine learning in the TIP and SOAR domains. Security teams can derive enormous benefits from the rapid advancement of ML by connecting security orchestration and automation to intelligent processes that action security operations. The key caveat is that relying on machine intelligence requires safeguards to ensure confidence in outcomes; here, that safeguard is the confidence score, which ensures that only high-fidelity indicators drive machine-led action. ML-powered automation can thus create greater efficiencies for security teams and a force-multiplier effect throughout security operations.

Credits

CTIX

OpenAI

Cyware Orchestrate

SNORT

STIX 2.1
