Using the Latest OpenAI Function API



In this blog post, we walk through a practical demonstration of how OpenAI's new Function API can be applied in a sysadmin role. We will show how an admin, logged into a PowerShell terminal, can have commands generated and executed automatically in non-interactive mode using this API. The entire process is streamlined and automated, reducing the need for manual intervention.

You can find the entire code base on our GitHub repository: adminGPT

For visualization, here's an example:

Example1.png

Implementation: A Step-by-Step Guide

Setting up the Message Context

First, to make a function call, one of the following models is required: gpt-3.5-turbo-0613, gpt-3.5-turbo-16k-0613, or, if you have access, gpt-4-0613. The Function API is a standard OpenAI chat completion call with some additional parameters. Below is the method we use to call the Function API:


def function_chat(self, message, model="gpt-3.5-turbo-0613", functions=[], function_call="auto", temperature=0.0):
    # Standard chat completion call, extended with the 'functions'
    # and 'function_call' parameters of the Function API.
    response = self._execute_threaded_api_call(
        openai.ChatCompletion.create,
        model=model,
        temperature=temperature,
        messages=message,
        functions=functions,
        function_call=function_call
    )
    # The wrapper returns a plain string on error; otherwise
    # return the first choice's message.
    if isinstance(response, str):
        return response
    else:
        return response["choices"][0]["message"]
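The helper `self._execute_threaded_api_call` is defined elsewhere in the adminGPT repository and is not shown in this post. A minimal sketch of such a wrapper, assuming it simply invokes the API function and returns the error text as a plain string on failure, might look like this:

```python
def execute_threaded_api_call(api_func, **kwargs):
    """Hypothetical wrapper: call an OpenAI API function and return
    its response, or the error text as a plain string on failure."""
    try:
        return api_func(**kwargs)
    except Exception as exc:  # in practice, openai.error.OpenAIError
        return str(exc)

# Usage with a stand-in function that fails:
def failing_call(**kwargs):
    raise RuntimeError("rate limit exceeded")

result = execute_threaded_api_call(failing_call, model="gpt-3.5-turbo-0613")
print(result)  # "rate limit exceeded"
```

This is why `function_chat` checks `isinstance(response, str)`: a string response signals an error rather than a normal completion.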

In the Function API call, you'll notice the additional parameters 'functions' and 'function_call'. We'll delve deeper into these parameters later in this blog post.

Moving on to the 'make_decision1()' function (line 25 of os_agent.py), which makes the OpenAI function call. The 'message' variable holds the conversation, which is essentially a list of dictionaries. To provide some context to the gpt-3.5 model, we pass the last five messages of our conversation.


message=self.conversation.get_conversation()[-5:] 
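For illustration, the conversation is a list of role/content dictionaries in the standard chat format, and slicing with `[-5:]` keeps only the five most recent turns. The commands below are hypothetical examples, not taken from the repository:

```python
# A hypothetical conversation history in the chat-message format:
conversation = [
    {"role": "user", "content": "List the running services."},
    {"role": "assistant", "content": "Get-Service | Where-Object Status -eq 'Running'"},
    {"role": "user", "content": "Show disk usage."},
    {"role": "assistant", "content": "Get-PSDrive -PSProvider FileSystem"},
    {"role": "user", "content": "Check free memory."},
    {"role": "assistant", "content": "Get-Counter '\\Memory\\Available MBytes'"},
    {"role": "user", "content": "Restart the print spooler."},
]

message = conversation[-5:]  # only the last five turns are sent as context
print(len(message))  # 5
```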

Adding System-Level Context

Though not mandatory, we have observed improved performance when a system message is included just before the response. In the following line of code, we instruct the gpt-3.5 model to act as a sysadmin and give it a description of the system it is operating on. If we need it to execute commands for a different OS, the prompt message can be altered accordingly.


message.append({"role":"system","content":""" 
You are a sysadmin for Windows. You are logged in as Admin and can run any command in powershell terminal.
Please reply to any prompt only with powershell commands. No explanations are needed. You can only run commands in non-interactive mode.
"""})

Setting up the Model and function_call


model="gpt-3.5-turbo-0613"
function_call="auto"

In the above code snippet, the 'function_call' parameter is set to "auto", which means gpt-3.5 can decide whether to call the function or not based on the user prompt.
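Besides `"auto"`, the 'function_call' parameter also accepts `"none"` (never call a function) or a dictionary naming a specific function to force a call:

```python
# Let the model decide (the setting used in this post):
function_call = "auto"

# Never call a function; always reply with plain text:
function_call = "none"

# Force a call to a specific function:
function_call = {"name": "execute_commands"}
```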

Defining the Function

The 'functions' parameter is a list of dictionaries following the JSON Schema format that gpt-3.5-turbo-0613 was trained on; the OpenAI API returns an error if the schema is malformed. The 'name' attribute holds the name of a (possibly imaginary) function, while the 'description' field outlines what it does. Think of 'name' and 'description' as a kind of "system" message that guides the gpt model toward what you expect it to do. The function can also be a real one, with 'eval' used to call it directly, but that approach is risky and error-prone. The 'commands' parameter describes the output we want gpt-3.5 to generate, in our case a list of Windows shell commands. A function can declare multiple parameters of types such as object, array, or string.


functions=[
    {
        "name": "execute_commands",
        "description": "function to execute a list of Windows commands. Interactive commands cannot be executed.",
        "parameters": {
            "type": "object",
            "properties": {
                "commands": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "List of Windows commands with switches that need to be executed",
                },
            },
            "required": ["commands"],
        },
    },
]
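When the model decides to call the function, the returned message contains a `function_call` object whose `arguments` field is a JSON-encoded string matching the schema above. A simulated response (the command strings below are illustrative) can be parsed like this:

```python
import json

# Simulated message, shaped like the API's function-call response:
decision = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "execute_commands",
        "arguments": '{"commands": ["Get-Process", "Get-Service"]}',
    },
}

# 'arguments' is a string, so it must be decoded before use:
args = json.loads(decision["function_call"]["arguments"])
print(args["commands"])  # ['Get-Process', 'Get-Service']
```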

Invoking the OpenAI Function API


decision = function_chat(message=message, model=model, functions=functions, function_call=function_call)

Interpreting the Function API Decision

Upon receiving a response from the API call, we parse the JSON object. If gpt-3.5 returns a function call (recall that 'function_call' is set to "auto", so gpt-3.5 calls the function only when necessary), we extract the 'commands' parameter, which contains the actual Windows commands, and execute them with our 'execute_command' function. Note that this local function's name need not match the name provided to gpt-3.5. For a complete code reference, please visit our GitHub repository: adminGPT


function_name = ""
function_response = ""

if decision.get("function_call"):
    function_name = decision["function_call"]["name"]

    if function_name == "execute_commands":
        # 'arguments' is a JSON-encoded string (requires `import json`).
        parsed_json = json.loads(decision["function_call"]["arguments"])
        if "commands" in parsed_json:
            windows_commands = parsed_json["commands"]

            for command in windows_commands:
                output = self.execute_command(command)
                function_response = f"PS C:\\Users\\>{command} \n Output :  {output}"
                self.conversation.append_to_system(function_response)
return function_response
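The `execute_command` helper itself is defined in the adminGPT repository. A minimal sketch, assuming it runs each command through the shell non-interactively and captures the output (demonstrated here with a portable `echo` rather than an actual PowerShell command), might look like:

```python
import subprocess

def execute_command(command, timeout=30):
    """Hypothetical helper: run a shell command non-interactively
    and return its combined stdout/stderr as text."""
    result = subprocess.run(
        command,
        shell=True,            # in adminGPT this would target powershell.exe
        capture_output=True,   # capture output instead of printing it
        text=True,             # decode bytes to str
        timeout=timeout,       # guard against commands that hang
    )
    return result.stdout + result.stderr

output = execute_command("echo hello")
print(output.strip())  # hello
```

The timeout is one simple safeguard against the model emitting a command that blocks waiting for input, which would defeat the non-interactive requirement.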

Conclusion

In this blog post, we have demystified the usage of OpenAI's Function API with an illustrative example, in the context of a sysadmin role, generating and executing commands on a Windows system through the PowerShell terminal. We detailed the setup of the message context, the addition of system-level context, defining a function, invoking the Function API, and finally, processing the decision from the Function API.

This approach harnesses the power of advanced language models like GPT-3.5-turbo or GPT-4, paving the way for building intelligent systems that can automate various tasks, improve efficiency, and minimize the possibility of human error. The Function API provides us with the flexibility to build dynamic and robust systems that can respond to changing needs.

Despite the complexities involved in the process, we hope this step-by-step guide makes it easy to understand how to use the Function API. Our example, though straightforward, demonstrates the potential for applying these concepts to more complex tasks.

The source code used in this post can be found on our GitHub repository: adminGPT. Feel free to explore it, modify it, and use it as a foundation for your projects.

Here is the original OpenAI blog post.

Remember, the power of AI is at your fingertips, and with tools like OpenAI's Function API, the possibilities are truly endless.
