OpenAI Integration

A single line added to your code makes all the difference

All you need to do to add GPTBoost functionality to your LLM app is point the OpenAI client at the GPTBoost API base URL: base_url="https://turbo.gptboost.io/v1" (or openai.api_base = "https://turbo.gptboost.io/v1" on pre-v1 versions of the openai library). From then on, GPTBoost collects all the essential data for every request your app makes to the specified LLM.

# This example is for v1+ of the openai library: https://pypi.org/project/openai/
import os

from openai import OpenAI

client = OpenAI(
    # GPTBoost API base URL
    base_url="https://turbo.gptboost.io/v1",
    # Your regular OpenAI API key, read from the environment
    api_key=os.environ["OPENAI_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Tell me an interesting fact about pandas"},
    ], 
)

print(response.choices[0].message.content)

* Make sure to set the GPTBoost base URL before creating any completions. On pre-v1 versions of the openai library this is done by assigning openai.api_base, as sketched below.
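
For reference, here is a minimal sketch of the same integration on the legacy (pre-v1) openai SDK, where the base URL is set module-wide via openai.api_base. The model and prompt simply mirror the example above.

# Minimal sketch for openai < 1.0 (legacy SDK)
import os

import openai

# GPTBoost API base URL -- the single line that enables logging
openai.api_base = "https://turbo.gptboost.io/v1"
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Tell me an interesting fact about pandas"},
    ],
)

print(response["choices"][0]["message"]["content"])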

What happens behind the scenes is that GPTBoost acts as a proxy and gathers the needed information from your prompts and completions. You can then search, filter, sort, and analyze everything with ease from the GPTBoost interface.
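
If you prefer to keep the proxy configuration out of your source code, a sketch of an environment-variable setup is shown below. It assumes your SDK version honors the standard OPENAI_BASE_URL and OPENAI_API_KEY environment variables (the v1+ openai library reads both when no constructor arguments are given).

# Sketch: configure the proxy via environment variables instead of code.
import os

from openai import OpenAI

# Typically set in your shell or deployment config rather than in code:
os.environ.setdefault("OPENAI_BASE_URL", "https://turbo.gptboost.io/v1")

# With OPENAI_BASE_URL and OPENAI_API_KEY set, no constructor arguments are needed.
client = OpenAI()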

IMPORTANT: Add the GPTBoost base URL only to apps whose OpenAI API keys have been added to GPTBoost. To log requests, GPTBoost acts as a proxy between your app and OpenAI, so if you add the GPTBoost URL without adding the corresponding OpenAI key to your GPTBoost account, requests to OpenAI will not be authorized and will fail.
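
One way to surface this misconfiguration early is to catch the SDK's authentication error around a small test request. The sketch below uses AuthenticationError, which the v1 openai library raises for unauthorized requests; the error hint in the comment is illustrative, not GPTBoost's actual wording.

# Sketch: fail fast if the proxied request is not authorized.
import os

from openai import OpenAI, AuthenticationError

client = OpenAI(
    base_url="https://turbo.gptboost.io/v1",
    api_key=os.environ["OPENAI_API_KEY"],
)

try:
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "ping"}],
    )
except AuthenticationError as err:
    # Most likely cause: the OpenAI key was not added to your GPTBoost account.
    raise SystemExit(f"Request was not authorized: {err}")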
