Overview
LiveKit Tracing provides deep observability into your LiveKit agent’s performance by integrating the Cekura Python SDK directly into your LiveKit agent code. This integration captures detailed metrics, conversation transcripts, tool calls, and session data that enhance the information available in the Cekura platform. When you integrate the SDK, Cekura receives comprehensive data about each call, including:
- Complete conversation transcripts with full message history
- Tool/function calls with inputs and outputs
- Detailed performance metrics (STT, TTS, LLM, End-of-Utterance)
- Mock tool support for testing with predictable tool responses
- Dual-channel audio recording for monitoring production calls
- LiveKit job and room metadata
Prerequisites
- A Cekura account with an API key
- A LiveKit agent project
Setup
- Testing
- Observability
Use this setup in your test agents while running simulation calls from the Cekura platform.
Integrate the SDK in your LiveKit agent
Add the Cekura tracer to your LiveKit agent’s entrypoint. What this does:
- Captures transcripts, tool calls, and metrics
- Automatically injects mock tools configured in Cekura
- Exports data to Cekura for test analysis
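As a minimal sketch of what this integration can look like: the `LiveKitTracer` and `track_session()` names come from this guide, but the import path, constructor arguments, and call signature shown here are assumptions that may differ in your SDK version.

```python
# Minimal sketch, assuming the cekura and livekit-agents packages are
# installed. SDK imports are deferred into the entrypoint so this module
# can be loaded without them present.
import os


def tracing_enabled() -> bool:
    # CEKURA_TRACING_ENABLED defaults to "true" when unset.
    return os.getenv("CEKURA_TRACING_ENABLED", "true").lower() == "true"


async def entrypoint(ctx):
    # ctx: livekit.agents.JobContext
    from cekura import LiveKitTracer  # hypothetical import path

    tracer = LiveKitTracer(api_key=os.environ["CEKURA_API_KEY"])  # args illustrative
    session = ...  # build your AgentSession as usual
    if tracing_enabled():
        # Captures transcripts, tool calls, and metrics; injects mock tools.
        await tracer.track_session(session, ctx)  # signature illustrative
```

The environment-variable guard mirrors the `CEKURA_TRACING_ENABLED` behavior described below; the SDK may apply this check internally as well.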
Add environment variables to LiveKit
Add the following environment variables to your LiveKit agent:
- CEKURA_API_KEY: Your Cekura API key from Settings → API Keys
- CEKURA_TRACING_ENABLED (optional): Set to "true" (default) or "false" to disable tracing
- CEKURA_MOCK_TOOLS_ENABLED (optional): Set to "true" (default) or "false" to disable mock tools
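For example, in a .env file or your deployment environment (the key value is a placeholder):

```shell
# Cekura credentials and toggles for the LiveKit agent
export CEKURA_API_KEY="your-api-key"      # from Settings → API Keys
export CEKURA_TRACING_ENABLED="true"      # optional; defaults to "true"
export CEKURA_MOCK_TOOLS_ENABLED="true"   # optional; defaults to "true"
```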
Configure LiveKit provider and enable tracing
Navigate to your agent settings in the Cekura dashboard, select LiveKit as the provider, and enable tracing:
Required configuration:

- Provider: Select “LiveKit” from the dropdown
- Enable Tracing: Toggle this ON to receive enhanced data from the SDK
- Contact Number: The phone number associated with your LiveKit agent; use this if your agent is connected to a phone system
- LiveKit API Key: Your LiveKit API key
- LiveKit API Secret: Your LiveKit API secret
- LiveKit URL: Your LiveKit server URL (e.g., wss://your-server.livekit.cloud)
- Agent Name: The specific agent name to dispatch in LiveKit
- LiveKit Config (JSON) (Optional): Additional room configuration parameters
You must provide at least one of the two options above: a Contact Number for phone-based testing, or LiveKit credentials for automated testing. You can configure both to enable both testing methods.
Enhanced Data in Cekura UI
With tracing enabled, you’ll see enriched information in the Cekura platform. The run now displays:
- Room Session ID: Visible in the call provider ID field, allowing you to correlate Cekura test runs with specific LiveKit sessions
- Complete Transcript: Full conversation history from the LiveKit agent, including tool/function call requests and responses
- Provider Call Data: Detailed metadata accessible in the run details, including job information, room configuration, and raw performance metrics

- Job Information: Job ID, room name, participant details, and agent dispatch metadata
- Room Information: Room configuration, participant count, session duration, and connection details
- Raw Metrics:
- STT (Speech-to-Text): Latency, duration, and transcription timing
- TTS (Text-to-Speech): Generation time and audio synthesis metrics
- LLM: Token usage, response time, and inference latency
- EOU (End-of-Utterance): Detection timing and accuracy
- Custom Metadata: Additional metadata passed to the SDK via **metadata parameters
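For instance, custom context can be attached as extra keyword arguments when starting a trace. The keys below are illustrative, and exactly how **metadata is forwarded depends on the SDK version.

```python
# Illustrative only: extra keyword arguments (**metadata) are attached to
# the run and surfaced under Custom Metadata in the Cekura UI.
metadata = {
    "environment": "uat",
    "agent_version": "2.3.1",
}
# tracer.track_session(session, ctx, **metadata)  # tracer/session as set up earlier
```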
Using Mock Tools with LiveKit Tracing
The SDK supports mock tools, allowing you to test your agent with predictable tool responses. This is useful for creating reproducible test scenarios without relying on live external services. To use mock tools:
- Create mock tools in Cekura: Set up your mock tool configurations in the Cekura dashboard. See the Mock Tools guide for detailed instructions.
- SDK handles the rest: Once mock tools are configured, the SDK automatically routes tool calls to Cekura’s mock endpoints during testing - no additional code changes needed.
- Test with predictable data: Your agent will receive the mock responses you configured, making it easy to test specific scenarios and edge cases.
Best Practices
- Use the right method for your environment: Use track_session() in your test/UAT environments for simulation testing with mock tools. Use observe_session() in your production environment for monitoring live calls with audio recording.
- Use environment variables for credentials: Don’t hardcode API keys in your code.
- Keep the SDK updated: Run pip install --upgrade cekura periodically for the latest features.
- Review tool calls regularly: Add the predefined metric Tool Call Success to your evaluators.
SDK Reference
LiveKitTracer Initialization
track_session()
Tracks simulation/test calls with automatic mock tool injection. Collects transcripts, tool calls, and metrics, and injects mock tools. Disable with CEKURA_TRACING_ENABLED="false".
observe_session()
Monitors production calls with dual-channel audio recording. Collects transcripts, tool calls, and metrics, and records audio. Requires LiveKit credentials configured in Cekura. Disable with CEKURA_OBSERVABILITY_ENABLED="false".
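One common pattern is to pick between the two methods from an environment flag, so the same agent code runs in both test and production. The APP_ENV variable and the selection helper here are assumptions, not part of the SDK.

```python
# Sketch: choose track_session() for test/UAT, observe_session() for
# production, per the best-practices guidance above.
import os


def choose_tracer_method(app_env: str) -> str:
    # APP_ENV is a hypothetical deployment variable, not an SDK setting.
    return "track_session" if app_env in ("test", "uat") else "observe_session"


# Usage (tracer as initialized earlier):
# method = getattr(tracer, choose_tracer_method(os.getenv("APP_ENV", "production")))
# await method(session, ctx)
```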
get_simulation_data()
Extracts simulation data populated by Cekura when running simulation calls from the platform. This data is only available when using Option 2 (Automated LiveKit Testing), i.e., running tests via “Run with LiveKit”. Phone-based calls (Option 1) return an empty dictionary.
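Because phone-based calls return an empty dictionary, guard before reading fields from the result. The "scenario" key in this sketch is hypothetical; the actual keys depend on what Cekura populates for your run.

```python
def simulation_scenario(simulation_data: dict):
    # get_simulation_data() returns {} for phone-based (Option 1) calls,
    # so treat an empty dict as "no simulation context available".
    if not simulation_data:
        return None
    return simulation_data.get("scenario")  # "scenario" is a hypothetical key
```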
Next Steps
- Set up mock tools for testing with predictable tool responses
- Create custom metrics to evaluate based on provider call data
- Perform load testing with your LiveKit agent
- Explore predefined metrics
