

ChatGPT Integration with OpenDXL

In this blog post we'll guide you through the OpenDXL Python Client installation and two different OpenDXL integrations using OpenAI's ChatGPT.

In the first example, we will be creating an “event subscriber” that will ask ChatGPT to review threat events and decide if the event requires the system to be tagged in ePolicy Orchestrator.

In the second example, we will be creating a “service wrapper” for ChatGPT to demonstrate how it can respond to invoking clients using OpenDXL.

OpenDXL lets developers join an adaptive system of interconnected services that communicate and share information to make real-time, accurate security decisions. OpenDXL leverages the Data Exchange Layer (DXL), which many vendors and enterprises already utilize, and delivers a simple, open path for integrating security technologies regardless of vendor.

Designed to improve the context of analysis, shorten workflows of the threat defense lifecycle, reduce complexities across security products and vendors, and increase the value of previously deployed applications, OpenDXL enables unprecedented collaboration in an open, real-time system. By attaching to a common application framework, each participant enters a unified ecosystem, one that gains value and capability as the network effect activates.

Prerequisites

Before we proceed with the integration process, please ensure that you have the following prerequisites in place:

  1. ePolicy Orchestrator: version 5.10.x.
  2. DXL Brokers: You will need to have DXL Brokers version 5.x or greater, which are deployed within a Trellix ePolicy Orchestrator managed environment.
  3. DXL Extensions: version 5.x or greater.
  4. Python:
    • Python version 2.7.9 or later
    • Python version 3.4.0 up to 3.9.x
  5. OpenSSL: An OpenSSL version used by Python that supports TLSv1.2. Specifically, you’ll need version 1.0.1 or greater.
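A quick way to confirm the Python and OpenSSL prerequisites is to ask the interpreter itself. This is a small sketch for verification only, not part of the integration:

```python
import ssl
import sys

# Report the interpreter version and the OpenSSL build it links against.
print("Python:", sys.version.split()[0])
print("OpenSSL:", ssl.OPENSSL_VERSION)

# TLSv1.2 requires OpenSSL 1.0.1 or greater; builds that support it
# expose a TLSv1.2 (or the newer generic TLS) protocol constant.
print("TLSv1.2 capable:",
      hasattr(ssl, "PROTOCOL_TLSv1_2") or hasattr(ssl, "PROTOCOL_TLS_CLIENT"))
```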

Install the OpenDXL Python Client

Instructions:

  1. Download the OpenDXL Python Client: Start by downloading the latest release of the OpenDXL
    Python Client. You can find it as a .zip file at this link:
  2. Extract the Archive: Once downloaded, extract the content of the .zip archive to your preferred location.
  3. Navigate to the Directory: Within the extracted folder, locate and navigate to the following directory:
    • dxlclient-python-sdk-5.6.0.4\lib
  4. Run the Installation Command: Open a command prompt or terminal in that directory and execute this command:
    pip install dxlclient-5.6.0.4-py2.py3-none-any.whl

Provision the OpenDXL Client

The provisioning process ensures that the client is authorized and configured to connect to the DXL fabric, making it possible for your application to use DXL services securely and efficiently.

The OpenDXL Client can be provisioned using several methods. We'll be using the command-line method in this example. To explore other provisioning methods or dive deeper into the process, refer to the OpenDXL Python Client's official documentation here: https://opendxl.github.io/opendxl-client-python/pydoc/index.html

Instructions:

  1. Open a Terminal or Command Prompt: Begin by opening a terminal or command prompt on your system.
  2. Create a Working Directory: Choose an accessible location on your system and create a directory. This will be your working directory for the client provisioning process. An example is shown below:
    mkdir MyFolder
    cd MyFolder
  3. Execute the Command: Now run the following command using the parameters from your environment:
    dxlclient provisionconfig config EPOSERVER opendxlclient

    NOTE: If a non-standard port (not 8443) is being used for ePO or the management interface of the OpenDXL Broker, an additional "port" argument must be specified. For example, -t 443 could be specified as part of the provision operation to connect to the server on port 443.

    The parameters for the command are explained below:

    • config: the directory to contain the results of the provisioning operation.
    • EPOSERVER: the host name or IP address of the Trellix ePolicy Orchestrator (ePO) server
    • opendxlclient: This represents the Common Name (CN) attribute stored in the subject field of the client's certificate. This name will also be displayed in ePO.
  4. Provide Credentials: You will be prompted to provide your credentials for the OpenDXL Broker Management Console or ePO (the ePO user must be an administrator):
    Enter server username:
    Enter server password:
  5. Completion of the Process: Once the process is successfully completed, you should expect to see an output like this:
    INFO: Saving csr file to config/client.csr
    INFO: Saving private key file to config/client.key
    INFO: Saving DXL config file to config/dxlclient.config
    INFO: Saving ca bundle file to config/ca-bundle.crt
    INFO: Saving client certificate file to config/client.crt
  6. Working Directory: The current working directory should now look like this:
    Figure 1: Working Directory (Provisioning)
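The generated dxlclient.config is a standard INI file that ties the certificate files to the list of brokers. The layout below is illustrative (the section and key names follow the OpenDXL client's typical output; the broker ID, host name, and IP are placeholders), and it can be inspected with Python's configparser:

```python
import configparser
import io

# Illustrative dxlclient.config contents -- the broker identifier, host name,
# and IP address below are placeholders, not values from a real provisioning run.
SAMPLE_CONFIG = """
[Certs]
BrokerCertChain=ca-bundle.crt
CertFile=client.crt
PrivateKey=client.key

[Brokers]
unique-broker-id=unique-broker-id;8883;broker.example.com;192.0.2.10
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(SAMPLE_CONFIG))

print("Client certificate:", parser["Certs"]["CertFile"])
print("Brokers configured:", len(parser["Brokers"]))
```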

Create the Event Subscriber for ChatGPT

In this example, we will be setting up an Automatic Response within ePolicy Orchestrator to publish threat events to a specific topic in DXL. We will then use Python to subscribe to this topic and ask ChatGPT to decide, based on the threat event, whether we need to investigate the endpoint further by tagging the system in ePO.

Event-Based Communication:
The DXL fabric utilizes an event-driven communication model, commonly known as "publish/subscribe." In this system, clients express their interest in a topic by subscribing to it. Publishers, on the other hand, send events to these topics at regular intervals. The DXL fabric is responsible for delivering these events to all clients currently subscribed to the respective topic. This enables one-to-many communication, where a single event can be disseminated to multiple clients. It's key to understand that in this model, clients receive events passively, dispatched by the publisher.
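To illustrate the publish/subscribe model itself (a toy sketch, not the DXL API): subscribers register a callback against a topic, and a publish call fans a single event out to every subscriber on that topic:

```python
from collections import defaultdict

# Minimal pub/sub model: topic -> list of subscriber callbacks.
subscriptions = defaultdict(list)

def subscribe(topic, callback):
    subscriptions[topic].append(callback)

def publish(topic, event):
    # One event is delivered to every client subscribed to the topic.
    for callback in subscriptions[topic]:
        callback(event)

subscribe("/mcafee/event/epo/threat/response", lambda e: print("SOC tool:", e))
subscribe("/mcafee/event/epo/threat/response", lambda e: print("SIEM tool:", e))

# Both subscribers passively receive the same event (one-to-many delivery).
publish("/mcafee/event/epo/threat/response", "threat event payload")
```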

Instructions:

  1. Obtain an OpenAI API Key: Before running this example, you must obtain an API Key from OpenAI.
  2. Setup the Automatic Response in ePO: Login to ePO and navigate to the Automatic Responses and perform the following steps
    • Create New Response:
      • Start by clicking on the "New Response" button.
    • Response Details:
      • Name the response as "Send Threat Events to ChatGPT over DXL."
      • Set the event group as "ePO Notification Events."
      • Choose the event type as "Threat."
      • Set the status to "Enabled."
      • After you have input these details, click on the "Next" button.
    • Set Up Filter:
      • Configure the filter to the system tree group that is appropriate for your needs.
    • Configure Aggregation:
      • Set the aggregation to “Trigger this response for every event.”
    • Define the Action:
      • Ensure the action is set to “Send DXL Event.”
      • Select the event attributes that you want to send. You can do this by clicking on the arrows associated with each attribute.
      • Ensure the topic is set to “/mcafee/event/epo/threat/response”
    • Save the Response:
      • Finally, save your settings by clicking on the "Save" button.
  3. Create the Script: Create a Python script with the name “chatgpt_event_subscriber.py”. The script can be broken down into four main sections:
    • Importing Necessary Libraries and Configurations:
      • The script starts by importing the necessary modules and libraries such as 'os', 'sys', 'openai', 'time', 'json', 'requests' among others. After this, it suppresses warnings associated with insecure requests to maintain a cleaner output.
      • It also sets up a logger to keep track of events. A logging formatter is established to standardize the log output and the level of logging is set to 'INFO'.
        from __future__ import absolute_import
        from __future__ import print_function
        import logging
        import os
        import sys
        import openai
        import time
        import json
        import requests

        from requests.auth import HTTPBasicAuth
        from requests.packages.urllib3.exceptions import InsecureRequestWarning
        from dxlclient.callbacks import EventCallback
        from dxlclient.client import DxlClient
        from dxlclient.client_config import DxlClientConfig

        # Suppress only the insecure request warning
        requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
        # Enable logging, this will also direct built-in DXL log messages.
        # See - https://docs.python.org/2/howto/logging-cookbook.html
        log_formatter = logging.Formatter('%(asctime)s %(name)s - %(levelname)s - %(message)s')
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        logger = logging.getLogger()
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
    • Defining Global Variables and Settings
      • CONFIG_FILE: The path to the DXL configuration file.
      • EVENT_TOPIC: The topic that will trigger events in the DXL system.
      • openai.api_key: The API key for OpenAI.
      • epo_server_url: The URL of the ePO server.
      • username & password: The credentials for the ePO server.
      • api_command: The command for tagging a system.
      • api_url: The complete URL of the API endpoint.
        # Config file name.
        CONFIG_FILE_NAME = "dxlclient.config"
        CONFIG_FILE = os.path.dirname(os.path.abspath(__file__)) + "/config/" + CONFIG_FILE_NAME

        # Sample topic to fire Events on
        EVENT_TOPIC = "/mcafee/event/epo/threat/response"

        # The API key for OpenAI
        openai.api_key = ""

        # The URL of the ePO server
        epo_server_url = ""

        # Your ePO credentials
        username = ""
        password = ""

        # The API command for tagging a system
        api_command = "/remote/system.applyTag"

        # The full URL of the API endpoint
        api_url = epo_server_url + api_command
    • Creating a Function for Tagging Systems:
      • A function tag_system(hostname) is defined which tags a system in ePO when called. It takes in a hostname, sends a POST request to the ePO server, and then checks if the request was successful. If successful, it logs that the tagging was successful; otherwise, it logs an error message.
        # Function to tag a system in ePO
        def tag_system(hostname):

            try:

                # The tag you want to apply and the system you want to apply it to
                parameters = {
                    'tagName': 'Escalated',
                    'names': hostname
                }
               
                # Send the request
                response = requests.post(api_url, params=parameters, auth=HTTPBasicAuth(username, password), verify=False)
         
                # Check if the request was successful
                if response.status_code == requests.codes.ok:
                    logger.info("Tagging the system " + hostname + " was successful.")
                else:
                    logger.error("Request failed with status code: " + str(response.status_code))
                    logger.error(response.text)
         
            except requests.exceptions.RequestException as err:
                logger.error("Exception occurred: %s", err)
    • Establishing DXL Event Subscriber:
      • The script sets up a DXL event subscriber that listens for events on a specific topic. When an event occurs, it processes the event, sends a request to OpenAI's GPT model, and then checks the response. If the response recommends further investigation, it tags the system in ePO; otherwise, it logs that the system doesn't need to be tagged.
      • The DXL client is created using the DXL configuration file.
      • An event callback class MyEventCallback is defined with an on_event method, which handles received DXL events.
      • This method decodes the event payload, converts it into a dictionary, and sends a prompt to the GPT model.
      • Depending on the response from the GPT model, it either tags the system for further investigation or logs a message that the system doesn't need to be tagged.
      • The event callback is then added to the DXL client.
      • Lastly, the script enters an infinite loop where it pauses for 60 seconds at each iteration, effectively keeping the script running indefinitely so it can keep listening for events.
        # Event Subscriber
        try:
            # Create DxlClientConfig from expected configuration file
            logger.info("Event Subscriber - Load DXL config from: %s", CONFIG_FILE)
            config = DxlClientConfig.create_dxl_config_from_file(CONFIG_FILE)
         
            # Initialize DXL client using our configuration
            logger.info("Event Subscriber - Creating DXL Client")
            with DxlClient(config) as client:
         
                # Connect to DXL Broker
                logger.info("Event Subscriber - Connecting to Broker")
                client.connect()
         
                # Event callback class to handle incoming DXL Events
                class MyEventCallback(EventCallback):
                    def on_event(self, event):
                        # Extract information from Event payload, in this sample we
                        # expect it is UTF-8 encoded
                        logger.info("Event Subscriber - Event received:\n   Topic: %s\n",
                                    event.destination_topic)
         
                        #decode payload
                        decodedPayload = event.payload.decode()
         
                        # convert payload to dict
                        threatEventDict = json.loads(decodedPayload)
                        
                        #create query
                        query = """
                        Please answer this question with only a single word, 'yes' or 'no'.
                        The following threat event was generated on an endpoint, from the context of
                        a SOC analyst, should I investigate the system further? \n\n
                        """ + decodedPayload
                       
                        #send request to chat-gpt
                        chatgptResponse = openai.Completion.create(
                            engine="text-davinci-003",  # Completion model identifier; adjust to the model you have access to
                            prompt=query,
                            max_tokens=30
                        )
         
                        #check to see what the response was
                        if "yes" in chatgptResponse.choices[0].text.strip().lower():
                            tag_system(threatEventDict["event"]["source"]["hostName"])
                        else:                       
                            logger.info("ChatGPT did not recommend to tag the system")
         
                # Add Event callback to DXL client
                logger.info("Adding Event callback function to Topic: %s", EVENT_TOPIC)
                client.add_event_callback(EVENT_TOPIC, MyEventCallback())
         
                logger.info("ChatGPT event subscriber is running...")
         
                # Wait forever
                while True:
                    time.sleep(60)
         
        except Exception as e:
            logger.exception("Event Subscriber - Exception")
            exit(1)
  4. Working Directory: The current working directory should now look like this:
    Figure 2: Working Directory (Event Subscriber)
  5. Start the Event Subscriber: To start the event subscriber, execute the following command from the working directory:
    python chatgpt_event_subscriber.py
  6. Generate a Threat Event: Generate a threat event and you should see a similar output as below:
    2023-05-31 15:53:59,355 __main__ - INFO - Event Subscriber - Load DXL config from:
    C:\Users\Administrator\Desktop\ChatGPT\tutorial/config/dxlclient.config
    2023-05-31 15:53:59,361 __main__ - INFO - Event Subscriber - Creating DXL Client
    2023-05-31 15:53:59,371 __main__ - INFO - Event Subscriber - Connecting to Broker
    2023-05-31 15:53:59,371 dxlclient.client - INFO - Waiting for broker list...
    2023-05-31 15:54:02,760 dxlclient.client - INFO - Trying to connect...
    2023-05-31 15:54:02,760 dxlclient.client - INFO - Trying to connect to broker {Unique id: {f457014e-b1f5-11ed-093b-00155d008c23}, Host name: tieserver, IP address: x.x.x.x, Port: 8883}...
    2023-05-31 15:54:02,778 dxlclient.client - INFO - Connected to broker {f457014e-b1f5-11ed-093b-00155d008c23}
    2023-05-31 15:54:02,778 __main__ - INFO - Adding Event callback function to Topic: /mcafee/event/epo/threat/response
    2023-05-31 15:54:02,778 __main__ - INFO - ChatGPT event subscriber is running...
    2023-05-31 15:55:06,639 __main__ - INFO - Event Subscriber - Event received:
       Topic: /mcafee/event/epo/threat/response
     
    2023-05-31 15:55:08,733 __main__ - INFO - Tagging the system WIN-XXXXXXXXXX was successful.
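The decision logic inside on_event can be exercised without a broker or an OpenAI key. The sketch below uses a hypothetical payload and a canned model reply standing in for the openai.Completion.create call, to show the decode → prompt → yes/no branch in isolation:

```python
import json

# Hypothetical threat-event payload, shaped like the fields the subscriber reads.
raw_payload = json.dumps({
    "event": {"source": {"hostName": "WIN-EXAMPLE01"}}
}).encode()

# Mirror the subscriber: decode the bytes, then parse them into a dict.
decoded_payload = raw_payload.decode()
threat_event = json.loads(decoded_payload)

query = (
    "Please answer this question with only a single word, 'yes' or 'no'. "
    "The following threat event was generated on an endpoint, from the context "
    "of a SOC analyst, should I investigate the system further?\n\n"
) + decoded_payload

# Canned reply standing in for the model's completion text.
model_reply = "Yes."

if "yes" in model_reply.strip().lower():
    hostname = threat_event["event"]["source"]["hostName"]
    print("Would tag system:", hostname)  # the real script calls tag_system(hostname)
else:
    print("ChatGPT did not recommend tagging the system")
```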

Create the Service Wrapper for ChatGPT

In this section we will be implementing a service wrapper for ChatGPT. This service wrapper will act as a bridge, facilitating seamless integration of ChatGPT, through its API, into your existing systems and workflows.

The documentation for the OpenAI API can be found here:
https://platform.openai.com/docs/introduction

Service-based Communication
The DXL fabric allows for “services” to be registered and exposed that respond to requests sent by invoking clients. This communication is point-to-point (one-to-one), meaning the communication is solely between an invoking client and the service that is being invoked. It is important to note that in this model the client actively invokes the service by sending it requests.
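Again purely to illustrate the model (a toy registry, not the DXL API): a service registers a handler for a topic, and an invoking client actively sends a request to that topic and receives exactly one response back:

```python
# Toy point-to-point service model: topic -> single request handler.
services = {}

def register_service(topic, handler):
    services[topic] = handler

def sync_request(topic, payload):
    # The request goes to exactly one service; its return value is the response.
    return services[topic](payload)

register_service("/trellix/service/chatgpt/completion",
                 lambda payload: "echo: " + payload)

response = sync_request("/trellix/service/chatgpt/completion", "hello")
print(response)  # -> echo: hello
```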

Instructions:

  1. Obtain an OpenAI API Key: Before running this example, you must obtain an API Key from OpenAI.
  2. Create the Script: Create a Python script with the name “chatgpt_service_wrapper.py”. The script is explained below:
    • Importing Necessary Libraries and Configurations:
      • The script starts by importing the necessary modules and libraries such as 'os', 'sys', 'openai', and 'time', along with the required DXL client classes.
      • It also sets up a logger to keep track of events. A logging formatter is established to standardize the log output and the level of logging is set to 'INFO'.
        # This sample demonstrates wrapping an existing service and exposing it on the
        # DXL fabric.
        #
        # In this particular case, OpenAI ChatGPT is
        # exposed as a DXL service. This service wrapper delegates to the
        # OpenAI REST API.
        #
         
        from __future__ import absolute_import
        from __future__ import print_function
        import logging
        import os
        import sys
        import time
        import openai
         
        from dxlclient.callbacks import RequestCallback
        from dxlclient.client import DxlClient
        from dxlclient.client_config import DxlClientConfig
        from dxlclient.message import ErrorResponse, Response
        from dxlclient.service import ServiceRegistrationInfo
         
        # Enable logging, this will also direct built-in DXL log messages.
        # See - https://docs.python.org/2/howto/logging-cookbook.html
        log_formatter = logging.Formatter('%(asctime)s %(name)s - %(levelname)s - %(message)s')
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        logger = logging.getLogger()
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
    • Defining Global Variables and Settings
      • openai.api_key: The API key for OpenAI.
      • SERVICE_NAME: The OpenDXL service name.
      • SERVICE_TOPIC: The OpenDXL topic name.
      • CONFIG_FILE_NAME: The name of the DXL configuration file.
      • CONFIG_FILE: The path to the DXL configuration file.
      • # The API key for OpenAI
        openai.api_key = ""
         
        # The name of the ChatGPT service
        SERVICE_NAME = "/trellix/service/chatgpt"
         
        # The "completion" topic
        SERVICE_TOPIC = SERVICE_NAME + "/completion"
         
        # Config file name.
        CONFIG_FILE_NAME = "dxlclient.config"
        CONFIG_FILE = os.path.dirname(os.path.abspath(__file__)) + "/config/" + CONFIG_FILE_NAME
         
        # Create DXL configuration from file
        config = DxlClientConfig.create_dxl_config_from_file(CONFIG_FILE)
    • Establishing and Running the Service
      • Create and Connect the DXL Client
        • Creates a DXL Client instance and connects it to the fabric.
      • Register the Service
        • The code proceeds to define a callback class named
          'ChatGPTCompletionCallback'. This callback class has an 'on_request'
          method which will be called when a service request is received. The callback:
          • Extracts the request payload, which is decoded into UTF-8 format and logged.
          • Sends the request payload to the ChatGPT model and waits for a response. This is done using the 'openai.Completion.create' method.
          • A response object is created and its payload is set to the text returned by the ChatGPT model.
          • The response is sent back to the DXL fabric.
          • If there's an exception, an error response is created with the exception message and sent back to the DXL fabric.
    • Create Service Registration Object
      • A 'ServiceRegistrationInfo' object is created, providing the DXL client instance and the service name. The topic and the callback are added to this object.
    • Register the Service with the Fabric
      • The service is registered on the DXL fabric by invoking the 'register_service_sync' method on the client object. The method will wait up to 10 seconds for registration to complete.
    • Running the Service
      • A log message is printed to indicate that the ChatGPT service is running. The program then enters an infinite loop, where it pauses for 60 seconds at each iteration. This is to ensure that the script keeps running indefinitely, allowing the service to stay up.
        # Create the client
        with DxlClient(config) as client:
         
            # Connect to the fabric
            client.connect()
         
            #
            # Register the service
            #
         
            # Create ChatGPT Completion incoming request callback
            class ChatGPTCompletionCallback(RequestCallback):
                def on_request(self, request):
                    try:
                        # Extract information from request
                        query = request.payload.decode(encoding="UTF-8")
                        logger.info("Service received request payload: " + query)
         
                        #send request to chat-gpt
                        chatgptResponse = openai.Completion.create(
                            engine="text-davinci-003",  # Completion model identifier; adjust to the model you have access to
                            prompt=query,
                            max_tokens=3000
                        )
         
                        # Create the response message
                        response = Response(request)
                        # Populate the response payload
                        response.payload = chatgptResponse.choices[0].text.strip()
                        # Send the response
                        client.send_response(response)
         
                    except Exception as ex:
                        logger.error(str(ex))
                        # Send error response
                        client.send_response(ErrorResponse(
                            request, error_message=str(ex).encode(encoding="UTF-8")))
         
            # Create service registration object
            info = ServiceRegistrationInfo(client, SERVICE_NAME)
         
            # Add a topic for the service to respond to
            info.add_topic(SERVICE_TOPIC, ChatGPTCompletionCallback())
         
            # Register the service with the fabric (wait up to 10 seconds for registration to complete)
            client.register_service_sync(info, 10)
         
            logger.info("ChatGPT service is running...")
         
            # Wait forever
            while True:
                time.sleep(60)
  3. Working Directory: Your current working directory should now look like this:
    Figure 3: Working Directory (Service Wrapper)
  4. Start the Service Wrapper: To start the service wrapper, execute the following command from the working directory:
    python chatgpt_service_wrapper.py
  5. Service Wrapper Startup: After executing the previous command you should expect to see an output like this:
    2023-05-31 11:05:23,056 dxlclient.client - INFO - Waiting for broker list...
    2023-05-31 11:05:25,424 dxlclient.client - INFO - Trying to connect...
    2023-05-31 11:05:25,424 dxlclient.client - INFO - Trying to connect to broker {Unique id:
    {f457014e-b1f5-11ed-093b-00155d008c23}, Host name: tieserver, IP address: x.x.x.x, Port: 8883}...
    2023-05-31 11:05:25,440 dxlclient.client - INFO - Connected to broker {f457014e-b1f5-11ed-093b-00155d008c23}
    2023-05-31 11:05:25,455 __main__ - INFO - ChatGPT service is running...
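The wrapper's request handling can also be sketched without a fabric or an API key. Below, a stand-in function replaces the OpenAI call, and a tuple stands in for the Response / ErrorResponse branches; all names here are illustrative:

```python
# Stand-in for openai.Completion.create: returns text, or raises on bad input.
def fake_completion(prompt):
    if not prompt:
        raise ValueError("empty prompt")
    return "Model answer for: " + prompt

def handle_request(payload):
    try:
        # Extract information from the request, as the wrapper does.
        query = payload.decode(encoding="UTF-8")
        # Success path: mirrors populating and sending a normal Response.
        return ("response", fake_completion(query))
    except Exception as ex:
        # Failure path: mirrors sending an ErrorResponse with the message.
        return ("error", str(ex))

print(handle_request(b"What is DXL?"))  # -> ('response', 'Model answer for: What is DXL?')
print(handle_request(b""))              # -> ('error', 'empty prompt')
```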

Invoking the ChatGPT Service

We're now at the last stage of this guide: invoking the ChatGPT service we've just created. In this step, we'll create a separate script which will be executed from a new command prompt window, while keeping the ChatGPT service running from our previous step.

Instructions:

  1. Create the Script: Create a Python script with the name “chatgpt_service_invoker.py”. The code is explained below:
    • Importing Necessary Libraries and Configurations:
      • The script starts by importing the necessary modules such as 'json', 'logging', 'os', and 'sys', along with the required DXL client classes.
      • It also sets up a logger to keep track of events. A logging formatter is established to standardize the log output and the level of logging is set to 'INFO'.
        from __future__ import absolute_import
        from __future__ import print_function
        import json
        import logging
        import os
        import sys

        from dxlclient.client import DxlClient
        from dxlclient.client_config import DxlClientConfig
        from dxlclient.message import Message, Request

        # Enable logging, this will also direct built-in DXL log messages.
        # See - https://docs.python.org/2/howto/logging-cookbook.html
        log_formatter = logging.Formatter('%(asctime)s %(name)s - %(levelname)s - %(message)s')
        console_handler = logging.StreamHandler()
        console_handler.setFormatter(log_formatter)
        logger = logging.getLogger()
        logger.addHandler(console_handler)
        logger.setLevel(logging.INFO)
    • Defining Global Variables and Settings
      • SERVICE_NAME: The OpenDXL service name.
      • SERVICE_TOPIC: The OpenDXL topic name.
      • CONFIG_FILE_NAME: The name of the DXL configuration file.
      • CONFIG_FILE: The path to the DXL configuration file.
        # The name of the OpenAI ChatGPT service
        SERVICE_NAME = "/trellix/service/chatgpt"

        # The "completion" topic
        SERVICE_TOPIC = SERVICE_NAME + "/completion"

        # Config file name.
        CONFIG_FILE_NAME = "dxlclient.config"
        CONFIG_FILE = os.path.dirname(os.path.abspath(__file__)) + "/config/" + CONFIG_FILE_NAME

        # Create DXL configuration from file
        config = DxlClientConfig.create_dxl_config_from_file(CONFIG_FILE)
    • Creating the DXL Client and Invoking the Service
      • DXL Client creation
        • The DXL Client is created using the configuration file.
      • Connecting to the Fabric
        • A connection to the DXL fabric is established using the client's connect method.
      • Service Invocation
        • A request to the OpenAI ChatGPT service is made. The payload for the request (the content of the message) is a question asking what is the risk associated with excluding a particular file from Trellix Endpoint Security On-Access Scan (OAS).
      • Response Handling
        • The response from the service is received and processed. If the message type isn't an error, the payload (response content) is decoded from UTF-8 and printed to the console. If it is an error, a logger error is displayed with the error message and code.
          # Create the client
          with DxlClient(config) as client:
           
              # Connect to the fabric
              client.connect()
           
              #
              # Invoke the service (send a request)
              #
           
              # Create the request
              req = Request(SERVICE_TOPIC)
             
              # Ask ChatGPT a question
              req.payload = "What level of risk is associated with adding explorer.exe as an exclusion for Trellix Endpoint Security On-Access Scan?".encode()
           
              # Send the request and wait for a response (synchronous)
              res = client.sync_request(req)
           
              # Extract information from the response (if an error did not occur)
              if res.message_type != Message.MESSAGE_TYPE_ERROR:
                  response_text = res.payload.decode(encoding="UTF-8")
                  print("Client received response payload: \n" + response_text)
              else:
                  logger.error("Error: %s (%s)", res.error_message, res.error_code)
  2. Working Directory: Your current working directory should now look like this:
    Figure 4: Working Directory (Service Invoker)
  3. Invoke the ChatGPT Service: To start the service invoker, execute the following command from the working directory:
    python chatgpt_service_invoker.py
  4. Request and Response from ChatGPT: After executing the previous command, you should expect to see an output like this:

    The service from the previous step should now show the request:
    2023-06-21 08:14:15,780 dxlclient.client - INFO - Waiting for broker list...
    2023-06-21 08:14:18,183 dxlclient.client - INFO - Trying to connect...
    2023-06-21 08:14:18,183 dxlclient.client - INFO - Trying to connect to broker {Unique id:
    {f457014e-b1f5-11ed-093b-00155d008c23}, Host name: tieserver, IP address: x.x.x.x, Port: 8883}...
    2023-06-21 08:14:18,199 dxlclient.client - INFO - Connected to broker {f457014e-b1f5-11ed-093b-00155d008c23}
    2023-06-21 08:14:18,230 root - INFO - ChatGPT service is running...
    2023-06-21 08:14:34,449 root - INFO - Service received request payload: What level of risk is associated with adding explorer.
    exe as an exclusion for Trellix Endpoint Security On-Access Scan?

    The invoker will then receive the response:

    2023-06-21 08:15:08,622 dxlclient.client - INFO - Waiting for broker list...
    2023-06-21 08:15:10,997 dxlclient.client - INFO - Trying to connect...
    2023-06-21 08:15:10,997 dxlclient.client - INFO - Trying to connect to broker {Unique id:
    {f457014e-b1f5-11ed-093b-00155d008c23}, Host name: tieserver, IP address: x.x.x.x, Port: 8883}...
    2023-06-21 08:15:11,013 dxlclient.client - INFO - Connected to broker {f457014e-b1f5-11ed-093b-00155d008c23}
    Client sent the request
    Client received response payload:
    Adding Explorer.exe as an exclusion for Trellix Endpoint Security On-Access Scan could carry a high level of risk. Explorer.exe is a critical system process that needs to have unrestricted access to the system for the OS to run properly. Therefore, adding it as an exclusion could create a significant security hole on your system and leave it vulnerable to malicious attacks.
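The payloads in these scripts travel as bytes: the invoker encodes its question before sending, and each side decodes what it receives as UTF-8. A round-trip of the invoker's question (shown here standalone, outside any DXL client) demonstrates the convention:

```python
question = ("What level of risk is associated with adding explorer.exe as an "
            "exclusion for Trellix Endpoint Security On-Access Scan?")

wire_payload = question.encode()                  # what req.payload carries on send
restored = wire_payload.decode(encoding="UTF-8")  # what the service decodes on receipt

assert restored == question
print("Payload round-trips intact:", len(wire_payload), "bytes")
```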

You are now ready to utilize these integrations in your deployment of Trellix OpenDXL. This guide has shown how you can integrate with ChatGPT, but the code can be adapted to work with any AI platform. Concerned about your information going to a public platform? Ask your sales representative about how to get access to the Trellix AI platform.

Would you like to discuss Trellix OpenDXL integration use cases or get help implementing Trellix OpenDXL in your organization to build a living security architecture? If you answered yes to either of these questions, please reach out to your Trellix sales representative to schedule a time to discuss options on how Trellix Professional Services can help!
