Function Usage

Unlocking Custom Interactions with Function Calling

Function calling empowers users to extend language model interactions, seamlessly integrating custom code for dynamic, application-specific logic. Recent models like gpt-3.5-turbo-1106 and gpt-4-1106-preview are adept at identifying when a function should be invoked.

GPTBoost logs offer convenient statistics tracking for prompts finishing in a function call. Users can effortlessly filter requests invoking specific functions, enhancing visibility and control over their tailored conversational experiences.

Evaluate function usage with GPTBoost

Discover how your functions are utilized in real time through GPTBoost's intuitive dashboard. Each interaction with the language model is logged, allowing you to effortlessly track and analyze the frequency and performance of your functions. Dive into essential metrics like calls, tokens, and costs associated with each function.

Function Usage in GPTBoost Dashboard

Effortlessly examine the distribution of function calls with just a click on the 'Browse Requests' button. GPTBoost puts the power of detailed analysis and informed decision-making at your fingertips, providing valuable insights into the real-world usage of each function.

In the API request, you can define functions, and the model can intelligently decide to generate a JSON object containing arguments for calling one or multiple functions. It's important to note that the Chat Completions API itself doesn't execute the function; instead, it produces JSON output that you can utilize to invoke the function within your code. OpenAI recommends the following workflow.

  1. Invoke the Model with Functions: Call the model, providing the user query and a set of functions defined in the functions parameter.

  2. Check if a Function Call Is Needed: The model may choose to call one or more functions; if so, the response contains a JSON object following your custom schema (bear in mind that the model may hallucinate parameters).

  3. Parse JSON and Execute Functions: Parse the JSON string from the model's response and execute your custom function, unpacking the provided arguments.

  4. Trigger the Model Again: Invoke the model once more, appending the function response as a new message, so the model can summarize the outcome for the user.
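Steps 2 and 3 can be sketched in isolation: the model returns the function arguments as a JSON string (not a Python dict), which you parse and unpack into your own function. The arguments string and the stub function below are hypothetical placeholders, not real model output.

```python
import json

# Hypothetical value of message.function_call.arguments from step 2:
# the model returns the arguments as a JSON string.
raw_arguments = '{"city": "Maldives"}'

# Stub standing in for a real implementation, e.g. a weather API lookup
def get_current_weather(city: str) -> str:
    return f"Current weather report for {city}"

# Step 3: parse the JSON string, then unpack it as keyword arguments
parsed = json.loads(raw_arguments)
result = get_current_weather(**parsed)
print(result)
```

Because the arguments arrive as model-generated JSON, wrap the `json.loads` call and the dictionary unpacking in your own validation for production use.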

Code Examples

Have a look at the following code examples showcasing how to leverage function calling. Make sure to incorporate necessary checks and exception handling tailored to your specific use case to ensure robust and reliable functionality.

# This example is for v1+ of the openai: https://pypi.org/project/openai/
import os
import json
import requests
from openai import OpenAI

client = OpenAI(
    # GPTBoost API base URL
    base_url="https://turbo.gptboost.io/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
)
# Weather API credentials 
weather_api_url = "https://weatherapi-com.p.rapidapi.com/current.json"
headers = {
    "X-RapidAPI-Key": os.getenv('RAPIDAPI_KEY'),
    "X-RapidAPI-Host": "weatherapi-com.p.rapidapi.com"
}

# Define function to check the weather for a specific location
def get_current_weather(city: str):
    querystring = {"q":city}
    response = requests.get(weather_api_url, headers=headers, params=querystring)
    # Return the response as str to pass in the next request to the LLM
    return response.text

# The prompt from the user
messages = [{"role": "user", "content": "What's the weather in Maldives"}]

# The function info that will be passed to the LLM
functions = [
    {
      "name": "get_current_weather",
      "description": "Get the current weather in a given city",
      "parameters": {
        "type": "object",
        "properties": {
          "city": {
            "type": "string",
            "description": "The name of the city"
          },
        },
        "required": ["city"],
      }
    }
  ]
    
# Make the initial request to the LLM
response = client.chat.completions.create(
  model="gpt-3.5-turbo",
  messages=messages,
  functions=functions,
  function_call="auto"
)
# Check if a function call is needed
if response.choices[0].message.function_call:
    function_name = response.choices[0].message.function_call.name
    arguments = response.choices[0].message.function_call.arguments
    available_functions = {
        "get_current_weather": get_current_weather,
    }
    function_to_call = available_functions[function_name]

    # Invoke the function with the arguments parsed from the JSON string
    function_response = function_to_call(**json.loads(arguments))

    # Add the assistant's function call and the function result to messages
    messages.append(response.choices[0].message)
    messages.append({
        "role": "function",
        "name": function_name,
        "content": function_response,
    })
    response_after_function = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        functions=functions,
        function_call="auto"
    )
    # Print the summarized response to the user
    print(response_after_function.choices[0].message.content)
else:
    print(response.choices[0].message.content)
