LLM Monitoring

Sentry's LLM Monitoring helps you understand the cost, latency, and behavior of your LLM calls.

This feature is currently in Alpha. Alpha features are still in-progress and may have bugs. We recognize the irony.

Sentry's LLM Monitoring tools help you understand what's going on with your AI pipelines. They automatically collect information about prompts, tokens, and models from providers like OpenAI and Anthropic. Example use cases:

  • Users are reporting issues with an LLM workflow, and you want to investigate responses from the relevant large language models.
  • You'd like to receive an alert if a specific pipeline costs more than $100 in a single day.
  • Users report that LLM workflows are taking longer than usual, and you want to understand what steps in a workflow are slowest.

To use LLM Monitoring, you need an existing Sentry account and project. If you don't have one yet, you can sign up at sentry.io.
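Once you have a project, setup can look like the minimal sketch below for the Python SDK. This assumes the `sentry-sdk` package with its OpenAI integration; the DSN is a placeholder for your project's own DSN, and the `include_prompts` option may vary by SDK version.

```python
import sentry_sdk
from sentry_sdk.integrations.openai import OpenAIIntegration

sentry_sdk.init(
    # Placeholder DSN -- replace with the DSN from your Sentry project settings.
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
    # Capture performance data so spans from your LLM pipeline are recorded.
    traces_sample_rate=1.0,
    integrations=[
        # Collect prompts, token counts, and model names from OpenAI calls.
        OpenAIIntegration(include_prompts=True),
    ],
)
```

With this in place, calls made through the OpenAI client are instrumented automatically; no per-call changes are needed.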

Learn more:

  • LLM Monitoring User Interface
  • Set up Sentry's LLM Monitoring

Help improve this content
Our documentation is open source and available on GitHub. Your contributions are welcome, whether fixing a typo (drat!) or suggesting an update ("yeah, this would be better").