🚧 Note: LLM observability is currently in beta. To access it, enable the feature preview in your PostHog account. We are keen to gather as much feedback as possible, so if you try this out, please let us know. You can email tim@posthog.com, send feedback via the in-app support panel, or use one of our other support options.
LLM observability enables you to capture and analyze LLM usage and performance data. Specifically, it captures:
- Input and output content and tokens
- Latency
- Model
- Traces
- Cost of each generation
All of this data is captured into PostHog to be used in insights, dashboards, alerts, and more.
Observability installation
Setting up observability starts with installing the PostHog Python SDK.
```shell
pip install posthog
```
The rest of the setup depends on the LLM platform you're using. These SDKs do not proxy your calls, they only fire off an async call to PostHog in the background to send the data.
Start by installing the OpenAI Python SDK:
```shell
pip install openai
```
In the spot where you initialize the OpenAI SDK, import PostHog and our OpenAI wrapper, initialize PostHog with your project API key and host (from your project settings), and pass it to our OpenAI wrapper.
```python
from posthog.ai.openai import OpenAI
import posthog

posthog.project_api_key = "<ph_project_api_key>"
posthog.host = "https://us.i.posthog.com"

client = OpenAI(
    api_key="your_openai_api_key",
    posthog_client=posthog,
)
```
Note: This also works with the `AsyncOpenAI` client.
Now, when you use the OpenAI SDK, it automatically captures many properties into PostHog, including `$ai_input`, `$ai_input_tokens`, `$ai_latency`, `$ai_model`, `$ai_model_parameters`, `$ai_output`, and `$ai_output_tokens`.
You can also capture additional properties like `posthog_distinct_id`, `posthog_trace_id`, and `posthog_properties`.
```python
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a fun fact about hedgehogs"}],
    posthog_distinct_id="user_123",
    posthog_trace_id="trace_123",
    posthog_properties={"conversation_id": "abc123", "paid": True},
)
print(response.choices[0].message.content)
```
Note: This also works with responses where `stream=True`.