Function calling lets users extend language model interactions with custom code, adding dynamic, application-specific logic to a conversation. Recent models such as gpt-3.5-turbo-1106 and gpt-4-1106-preview are adept at detecting when a function should be invoked.
GPTBoost logs offer convenient statistics tracking for requests that finish with a function call. Users can easily filter for requests that invoke a specific function, gaining visibility and control over their tailored conversational experiences.
Evaluate function usage with GPTBoost
Discover how your functions are utilized in real time through GPTBoost's intuitive dashboard. Each interaction with the language model is logged, allowing you to effortlessly track and analyze the frequency and performance of your functions. Dive into essential metrics like calls, tokens, and costs associated with each function.
Effortlessly examine the distribution of function calls with just a click on the 'Browse Requests' button. GPTBoost puts the power of detailed analysis and informed decision-making at your fingertips, providing valuable insights into the real-world usage of each function.
Recommended Steps in Function Calling
In the API request, you can define functions, and the model can intelligently decide to generate a JSON object containing arguments for calling one or multiple functions. It's important to note that the Chat Completions API itself doesn't execute the function; instead, it produces JSON output that you can utilize to invoke the function within your code. OpenAI recommends the following workflow.
Invoke the Model with Functions: Call the model by providing the user query and a set of functions defined in the functions parameter.
Check if a Function Call Is Needed: The model may decide to call one or more functions; if so, its response contains a JSON object with arguments following your custom schema (bear in mind that the model may hallucinate parameters, so validate them before use).
Parse JSON and Execute Functions: Parse the JSON string obtained from the model's response and execute your custom functions, unpacking the provided arguments.
Trigger the Model Again: Invoke the model once more by adding the function response as a new message, enabling the model to provide a summary of the outcomes to the user.
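For orientation, here is roughly the shape of the assistant message the model returns when it decides a function should be called (step 2 above). The values are illustrative, not output from a real request; note that arguments arrives as a JSON-encoded string, which is why the examples below pass it through json.loads / JSON.parse before use:

{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "get_current_weather",
    "arguments": "{ \"city\": \"Maldives\" }"
  }
}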
Code Examples
Have a look at the following code examples showcasing how to leverage function calling. Make sure to incorporate checks and exception handling tailored to your specific use case to ensure robust and reliable functionality; a minimal validation sketch follows the examples below.
# This example is for v1+ of the openai package: https://pypi.org/project/openai/
import os
import json
import requests
from openai import OpenAI

client = OpenAI(
    # GPTBoost API base URL
    base_url="https://turbo.gptboost.io/v1",
    api_key=os.getenv('OPENAI_API_KEY'),
)

# Weather API credentials
weather_api_url = "https://weatherapi-com.p.rapidapi.com/current.json"
headers = {
    "X-RapidAPI-Key": os.getenv('RAPIDAPI_KEY'),
    "X-RapidAPI-Host": "weatherapi-com.p.rapidapi.com"
}

# Define a function to check the weather for a specific location
def get_current_weather(city: str):
    querystring = {"q": city}
    response = requests.get(weather_api_url, headers=headers, params=querystring)
    # Return the response as str to pass in the next request to the LLM
    return response.text

# The prompt from the user
messages = [{"role": "user", "content": "What's the weather in Maldives"}]

# The function info that will be passed to the LLM
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "The name of the city"
                },
            },
            "required": ["city"],
        },
    }
]

# Make the initial request to the LLM
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    functions=functions,
    function_call="auto"
)

# Check if a function call is needed
if response.choices[0].message.function_call:
    function_name = response.choices[0].message.function_call.name
    arguments = response.choices[0].message.function_call.arguments
    available_functions = {
        "get_current_weather": get_current_weather,
    }
    function_to_call = available_functions[function_name]

    # Invoke the function with the model-generated arguments
    function_response = function_to_call(**json.loads(arguments))

    # Add the assistant's function_call message, then the function response
    messages.append(response.choices[0].message)
    messages.append({
        "role": "function",
        "name": function_name,
        "content": function_response,
    })

    response_after_function = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        functions=functions,
        function_call="auto"
    )

    # Print the summarized response to the user
    print(response_after_function.choices[0].message.content)
else:
    print(response.choices[0].message.content)
// This code is for v4+ of the openai package: npmjs.com/package/openai
import OpenAI from 'openai';
import axios from 'axios';

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://turbo.gptboost.io/v1",
});

// Define a function to check the weather for a specific location
async function get_current_weather(location) {
  const options = {
    method: 'GET',
    url: 'https://weatherapi-com.p.rapidapi.com/forecast.json',
    params: { q: location, days: '3' },
    headers: {
      'X-RapidAPI-Key': process.env.RAPIDAPI_KEY,
      'X-RapidAPI-Host': 'weatherapi-com.p.rapidapi.com'
    }
  };
  try {
    const response = await axios.request(options);
    let weather = response.data;
    const weatherForecast = `Location: ${weather.location.name} \
Current Temperature: ${weather.current.temp_f} \
Condition: ${weather.current.condition.text}. \
Low Today: ${weather.forecast.forecastday[0].day.mintemp_f} \
High Today: ${weather.forecast.forecastday[0].day.maxtemp_f}`;
    return weatherForecast;
  } catch (error) {
    console.error(error);
    return "No forecast found";
  }
}

// The user input
const messages = [{ role: "user", content: "What's the weather in Thessaloniki" }];

// The function definition that will be added to the request to the LLM
const functions = [{
  name: "get_current_weather",
  description: "Get the weather forecast for a given location",
  parameters: {
    type: "object", // specify that the parameter is an object
    properties: {
      location: {
        type: "string", // specify the parameter type as a string
        description: "Only the city name, e.g. for 'Beijing, China' pass 'Beijing'"
      }
    },
    required: ["location"] // specify that the location parameter is required
  }
}];

// Define a function to interact with the LLM
async function ask_gpt() {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-16k",
    messages: messages,
    functions: functions,
    function_call: "auto"
  });

  // Check if a function call is needed
  if (completion.choices[0].message.function_call) {
    // Get all the arguments ready
    const function_name = completion.choices[0].message.function_call.name;
    const function_arguments = JSON.parse(completion.choices[0].message.function_call.arguments);
    const available_functions = {
      "get_current_weather": get_current_weather,
    };
    const function_to_call = available_functions[function_name];

    // Call the function
    const current_weather = await function_to_call(function_arguments.location);

    // Add the function response to the messages for the next prompt to the LLM
    messages.push({
      role: "function",
      name: function_name,
      content: current_weather,
    });

    try {
      const completion_after_function = await openai.chat.completions.create({
        model: "gpt-3.5-turbo-0613",
        messages: messages
      });
      console.log(completion_after_function.choices[0].message.content);
    } catch (error) {
      if (error.status) {
        console.log(error.status);
      }
      console.log(error.message);
    }
  } else {
    const completion_text = completion.choices[0].message.content;
    console.log(completion_text);
  }
}

ask_gpt();
# This example is for v1+ of the openai package: https://pypi.org/project/openai/
import os
import json
import requests
from openai import AzureOpenAI

client = AzureOpenAI(
    # Set the GPTBoost API base URL as the azure_endpoint
    azure_endpoint="https://turbo.gptboost.io/v1",
    api_key=os.getenv('AZURE_API_KEY'),
    api_version="2023-07-01-preview",
)

deployment_name = "gpt-35-turbo-16k"

weather_api_url = "https://weatherapi-com.p.rapidapi.com/current.json"
headers = {
    "X-RapidAPI-Key": os.getenv('RAPIDAPI_KEY'),
    "X-RapidAPI-Host": "weatherapi-com.p.rapidapi.com"
}

# The user input
messages = [{"role": "user", "content": "Tell me the wind in Lemnos"}]

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Pass the city to obtain the current weather"
                },
            },
            "required": ["city"]
        }
    }
]

def get_current_weather(city: str):
    querystring = {"q": city}
    response = requests.get(weather_api_url, headers=headers, params=querystring)
    # Return the response as str to pass in the next request to the LLM
    return response.text

response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    functions=functions,
    function_call="auto",
)
response_message = response.choices[0].message

# Check if the model wants to call a function
if response_message.function_call:
    # Call the function. The JSON response may not always be valid, so make sure to handle errors
    function_name = response_message.function_call.name
    available_functions = {
        "get_current_weather": get_current_weather,
    }
    function_to_call = available_functions[function_name]
    function_args = response_message.function_call.arguments
    function_response = function_to_call(**json.loads(function_args))

    # Add the assistant's function_call message, then the function response
    messages.append(response_message)
    messages.append(
        {
            "role": "function",
            "name": function_name,
            "content": function_response,
        }
    )

    # Call the API again to get the final response from the model
    second_response = client.chat.completions.create(
        messages=messages,
        model=deployment_name,
        # optionally, you could provide functions in the second call as well
    )
    print(second_response.choices[0].message.content)
else:
    print(response.choices[0].message.content)
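As noted above, the model may hallucinate function names or produce malformed arguments, so validate its output before executing anything. Below is a minimal, hypothetical Python sketch of such a check; safe_function_call is not part of the openai package, and the "city" parameter and available_functions mapping simply mirror the examples above:

import json

def safe_function_call(message, available_functions):
    # Reject function names the model invented
    call = message.function_call
    if call.name not in available_functions:
        raise ValueError(f"Model requested an unknown function: {call.name}")
    # The arguments field is a JSON-encoded string and may be malformed
    try:
        args = json.loads(call.arguments)
    except json.JSONDecodeError as err:
        raise ValueError(f"Model returned invalid JSON arguments: {err}")
    # Check for required parameters before calling
    if "city" not in args:
        raise ValueError("Required parameter 'city' is missing")
    return available_functions[call.name](**args)

In the first Python example, for instance, you could replace the direct function_to_call(**json.loads(arguments)) invocation with safe_function_call(response.choices[0].message, available_functions).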