Summary & Metrics

All visualized in your GPTBoost Dashboard

Request and token counts, latency, and cost reports bring clarity and predictability to your AI application. Our graphical dashboard provides an insightful representation of several key metrics:

  1. Request Status Breakdown: Easily distinguish between successful and failed requests in a single chart, offering a clear overview of your LLM's performance.

  2. Latency Analysis: Monitor latency trends over time to ensure optimal response times.

  3. Token Usage: Visualize token consumption, helping you identify potential critical levels and optimize your LLM usage.

  4. Cost Tracking: Keep a close eye on cost trends associated with your LLM, enabling effective budget management and cost control (the sketch after this list illustrates the underlying arithmetic).
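
The dashboard derives these figures from the request logs GPTBoost already captures. As a rough illustration of the underlying arithmetic only, the sketch below aggregates a handful of hypothetical log records; the field names, models, and per-1K-token prices are assumptions made for the example, not the GPTBoost schema or current provider pricing.

```python
from statistics import mean

# Hypothetical log records, one per proxied LLM request (illustrative fields only).
logs = [
    {"status": 200, "latency_ms": 840,  "model": "gpt-3.5-turbo",
     "prompt_tokens": 412, "completion_tokens": 128},
    {"status": 200, "latency_ms": 1310, "model": "gpt-4",
     "prompt_tokens": 955, "completion_tokens": 240},
    {"status": 429, "latency_ms": 95,   "model": "gpt-3.5-turbo",
     "prompt_tokens": 0,   "completion_tokens": 0},
]

# Illustrative USD prices per 1K tokens; check your provider's current pricing.
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
    "gpt-4":         {"prompt": 0.03,   "completion": 0.06},
}

def request_cost(entry):
    """Estimate the cost of one request: tokens / 1000 * price per 1K tokens."""
    price = PRICES[entry["model"]]
    return (entry["prompt_tokens"] / 1000 * price["prompt"]
            + entry["completion_tokens"] / 1000 * price["completion"])

succeeded = [e for e in logs if e["status"] == 200]
failed = [e for e in logs if e["status"] != 200]

# The four dashboard views, reduced to their simplest aggregates:
print(f"Requests: {len(succeeded)} succeeded / {len(failed)} failed")      # status breakdown
print(f"Average latency: {mean(e['latency_ms'] for e in logs):.0f} ms")    # latency analysis
print(f"Total tokens: {sum(e['prompt_tokens'] + e['completion_tokens'] for e in logs)}")  # token usage
print(f"Estimated cost: ${sum(request_cost(e) for e in succeeded):.4f}")   # cost tracking
```

GPTBoost performs this aggregation for you and renders the results as charts; the snippet is only meant to show what each chart summarizes.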

These comprehensive visualizations empower you to see the bigger picture, make informed decisions, and optimize your LLM app's cost and performance. Ultimately, they help you ensure that business value is delivered to your users and that your LLM-powered product generates ROI for your business.