
Overview

LiveKit Tracing provides deep observability into your LiveKit agent’s performance by integrating the Cekura Python SDK directly into your LiveKit agent code. This integration captures detailed metrics, conversation transcripts, tool calls, and session data that enhance the information available in the Cekura platform. When you integrate the SDK, Cekura receives comprehensive data about each call, including:
  • Complete conversation transcripts with full message history
  • Tool/function calls with inputs and outputs
  • Detailed performance metrics (STT, TTS, LLM, End-of-Utterance)
  • Mock tools support for testing with predictable tool responses
  • Dual-channel audio recording for monitoring production calls
  • LiveKit job and room metadata
This enhanced data appears in the Cekura UI, providing deeper insights into your agent’s behavior.

Prerequisites

  • A Cekura account with an API key
  • A LiveKit agent project

Setup

Use this setup in your test agents while running simulation calls from the Cekura platform.
1. Install the Cekura Python SDK

pip install cekura==1.0.0rc4
2. Integrate the SDK in your LiveKit agent

Add the Cekura tracer to your LiveKit agent’s entrypoint:
import os
from livekit import agents
from cekura.livekit import LiveKitTracer

# Initialize Cekura tracer
cekura = LiveKitTracer(
    api_key=os.getenv("CEKURA_API_KEY"),
    agent_id=123  # Your agent ID from Cekura dashboard
)

# 'server' is your LiveKit agent server/app instance, defined elsewhere in your project
@server.rtc_session(agent_name="my_agent")
async def entrypoint(ctx: agents.JobContext):
    assistant = YourAssistant()
    session = agents.AgentSession(...)

    # Track session with automatic tool injection and export
    await cekura.track_session(ctx, session, assistant)

    await session.start(room=ctx.room, agent=assistant)
What this does:
  • Captures transcripts, tool calls, and metrics
  • Automatically injects mock tools configured in Cekura
  • Exports data to Cekura for test analysis
3. Add environment variables to LiveKit

Add the following environment variables to your LiveKit agent:
  • CEKURA_API_KEY: Your Cekura API key from Settings → API Keys
  • CEKURA_TRACING_ENABLED (Optional): Set to "false" to disable tracing (defaults to "true")
  • CEKURA_MOCK_TOOLS_ENABLED (Optional): Set to "false" to disable mock tool injection (defaults to "true")
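These optional toggles are plain string flags. As an illustrative sketch (not the SDK's actual parsing code), a boolean flag like this is typically read as follows:

```python
import os

def flag_enabled(name: str, default: str = "true") -> bool:
    """Read a boolean toggle such as CEKURA_TRACING_ENABLED from the environment."""
    return os.getenv(name, default).strip().lower() == "true"

# Unset variables fall back to the default of "true"
tracing_on = flag_enabled("CEKURA_TRACING_ENABLED")
mock_tools_on = flag_enabled("CEKURA_MOCK_TOOLS_ENABLED")
```

Any value other than "true" (case-insensitive) disables the feature.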
4. Configure LiveKit provider and enable tracing

Navigate to your agent settings in the Cekura dashboard, select LiveKit as the provider, and enable tracing.
Required configuration:
  • Provider: Select “LiveKit” from the dropdown
  • Enable Tracing: Toggle this ON to receive enhanced data from the SDK
Testing approach - choose one:
Option 1: Phone Number Testing
  • Contact Number: Provide the phone number associated with your LiveKit agent
  • Use this if your LiveKit agent is connected to a phone system
Option 2: Automated LiveKit Testing
  • LiveKit API Key: Your LiveKit API key
  • LiveKit API Secret: Your LiveKit API secret
  • LiveKit URL: Your LiveKit server URL (e.g., wss://your-server.livekit.cloud)
  • Agent Name: The specific agent name to dispatch in LiveKit
  • LiveKit Config (JSON) (Optional): Additional room configuration parameters
Use this if you want Cekura to automatically create LiveKit rooms. See the LiveKit Automated Testing guide for more details.
You must provide at least one of the options above (you can configure both to enable both testing methods).
5. Run tests

Run tests based on your configuration:
  • If you configured Option 1 (Phone Number): Use “Run with Voice”
  • If you configured Option 2 (LiveKit Credentials): Use “Run with LiveKit”
The SDK will send enhanced data back to Cekura for each test run.

Enhanced Data in Cekura UI

With tracing enabled, you’ll see enriched information in the Cekura platform. The run now displays:
  • Room Session ID: Visible in the call provider ID field, allowing you to correlate Cekura test runs with specific LiveKit sessions
  • Complete Transcript: Full conversation history from the LiveKit agent, including tool/function call requests and responses
  • Provider Call Data: Detailed metadata accessible in the run details, including job information, room configuration, and raw performance metrics
Provider Call Data contains the following information:
  • Job Information: Job ID, room name, participant details, and agent dispatch metadata
  • Room Information: Room configuration, participant count, session duration, and connection details
  • Raw Metrics:
    • STT (Speech-to-Text): Latency, duration, and transcription timing
    • TTS (Text-to-Speech): Generation time and audio synthesis metrics
    • LLM: Token usage, response time, and inference latency
    • EOU (End-of-Utterance): Detection timing and accuracy
  • Custom Metadata: Additional metadata passed to the SDK via **metadata parameters
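The **metadata keyword arguments accepted by track_session() and observe_session() travel with the exported session data. As a rough, SDK-independent sketch of how such kwargs collect into a payload (build_payload is a hypothetical helper, not part of the SDK):

```python
def build_payload(session_id: str, **metadata) -> dict:
    """Collect arbitrary keyword arguments into a custom-metadata payload (illustrative)."""
    return {"session_id": session_id, "custom_metadata": metadata}

# Any extra keyword arguments become custom metadata on the exported session
payload = build_payload("room-abc", environment="staging", build="1.4.2")
```

In practice, you would pass these same kwargs directly to cekura.track_session(ctx, session, assistant, environment="staging", ...).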

Using Mock Tools with LiveKit Tracing

The SDK supports mock tools, allowing you to test your agent with predictable tool responses. This is useful for creating reproducible test scenarios without relying on live external services. To use mock tools:
  1. Create mock tools in Cekura: Set up your mock tool configurations in the Cekura dashboard. See the Mock Tools guide for detailed instructions.
  2. SDK handles the rest: Once mock tools are configured, the SDK automatically routes tool calls to Cekura’s mock endpoints during testing - no additional code changes needed.
  3. Test with predictable data: Your agent will receive the mock responses you configured, making it easy to test specific scenarios and edge cases.
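Conceptually, mock tool injection is a routing decision: if a mock is configured for a tool name, the preconfigured response is returned instead of invoking the real tool. A simplified, SDK-independent sketch of that idea (all names here are hypothetical, not the SDK's internals):

```python
def make_tool_router(real_tools: dict, mock_tools: dict, mocks_enabled: bool = True):
    """Return a callable that dispatches tool calls to mocks when one is configured."""
    def call_tool(name: str, **kwargs):
        if mocks_enabled and name in mock_tools:
            return mock_tools[name]           # predictable, preconfigured response
        return real_tools[name](**kwargs)     # fall through to the live tool
    return call_tool

call_tool = make_tool_router(
    real_tools={"lookup_account": lambda account_number: {"status": "live"}},
    mock_tools={"lookup_account": {"status": "mocked", "balance": 100}},
)
```

With mocks enabled, calls to "lookup_account" return the mocked response; disabling mocks restores the live behavior, which mirrors the CEKURA_MOCK_TOOLS_ENABLED toggle.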

Best Practices

  1. Use the right method for your environment: Use track_session() in your test/UAT environments for simulation testing with mock tools. Use observe_session() in your production environment for monitoring live calls with audio recording.
  2. Use environment variables for credentials: Don’t hardcode API keys in your code
  3. Keep the SDK updated: Run pip install --upgrade cekura periodically for the latest features
  4. Review tool calls regularly: Add the predefined metric Tool Call Success to your evaluators

SDK Reference

LiveKitTracer Initialization

from cekura.livekit import LiveKitTracer

cekura = LiveKitTracer(
    api_key="your_api_key",        # Required: Your Cekura API key
    agent_id=123,                   # Required: Agent ID from dashboard
    host="https://api.cekura.ai",   # Optional: Custom API host
    enabled=True                    # Optional: Enable/disable tracer
)

track_session()

Tracks simulation/test calls with automatic mock tool injection. Collects transcripts, tool calls, metrics, and injects mock tools.
await cekura.track_session(
    ctx,          # Required: LiveKit JobContext
    session,      # Required: LiveKit AgentSession
    agent,        # Optional: Agent instance for mock tool injection
    **metadata    # Optional: Custom metadata
)
Disable with environment variable: CEKURA_TRACING_ENABLED="false"

observe_session()

Monitors production calls with dual-channel audio recording. Collects transcripts, tool calls, metrics, and records audio. Requires LiveKit credentials configured in Cekura.
await cekura.observe_session(
    ctx,          # Required: LiveKit JobContext
    session,      # Required: LiveKit AgentSession
    **metadata    # Optional: Custom metadata
)
Disable with environment variable: CEKURA_OBSERVABILITY_ENABLED="false"

get_simulation_data()

Extracts simulation data populated by Cekura when running simulation calls from the platform. Returns empty dict for phone-based calls.
await ctx.connect()  # Must be called first

simulation_data = cekura.get_simulation_data(
    ctx    # Required: LiveKit JobContext
)
Returns: Dictionary with simulation metadata:
{
    "scenario_id": 123,              # Scenario being tested
    "run_id": 456,                   # Current run ID
    "test_profile_data": {           # Test profile data
        "customer_name": "John Doe",
        "account_number": "ACC-12345"
    },
    "additional_config": {           # LiveKit config from agent settings
        "sample_key": "sample_value"
    }
}
This data is ONLY available when using Option 2 (Automated LiveKit Testing) - running tests via “Run with LiveKit”. Phone-based calls (Option 1) will return an empty dictionary.
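Because phone-based calls return an empty dictionary, read simulation fields defensively rather than indexing directly. A small sketch using the sample payload above (the helper name is ours, not part of the SDK):

```python
def get_profile_field(simulation_data: dict, field: str, default=None):
    """Safely read a test-profile value; tolerates the empty dict from phone-based calls."""
    return simulation_data.get("test_profile_data", {}).get(field, default)

# Sample payload shaped like the get_simulation_data() return value documented above
sample = {
    "scenario_id": 123,
    "run_id": 456,
    "test_profile_data": {"customer_name": "John Doe", "account_number": "ACC-12345"},
}
```

get_profile_field(sample, "customer_name") yields the configured name, while the same call on an empty dict simply returns the default.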

Next Steps