Test your voice agents using text instead of phone calls. Text testing is 10x faster and 90% cheaper, making it ideal for workflow validation, regression testing, and CI/CD pipelines. Use voice testing for ASR/TTS validation and final production checks.
Update your OpenAI API key in main.py before starting the server.
Authentication: When Cekura connects to your WebSocket server, it includes the following headers in the handshake request:

| Header | Description |
| --- | --- |
| X-VOCERA-SECRET | Your Cekura API key. Use this to verify the connection is from Cekura. |
| X-VOCERA-SCENARIO-ID | The ID of the scenario being tested. |
| X-VOCERA-RESULT-ID | The result ID for this test run. |
| X-VOCERA-RUN-ID | The run ID for this test execution. |
Any custom headers configured in your agent's WebSocket header settings are also included. To authenticate incoming connections, validate the X-VOCERA-SECRET header against your API key:
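A minimal sketch of that check, assuming the `websockets` handler signature used elsewhere on this page and a `CEKURA_API_KEY` environment variable you define yourself (the `"replace-me"` default is a placeholder):

```python
import hmac
import os

# Assumed: your Cekura API key lives in an environment variable you set
CEKURA_API_KEY = os.environ.get("CEKURA_API_KEY", "replace-me")

def is_authorized(headers):
    """Compare the X-VOCERA-SECRET header against your API key."""
    secret = headers.get("X-VOCERA-SECRET", "")
    # Constant-time comparison avoids leaking the key via timing differences
    return hmac.compare_digest(secret, CEKURA_API_KEY)

async def handle_connection(websocket, path):
    if not is_authorized(websocket.request_headers):
        # 4000-4999 are application-defined WebSocket close codes
        await websocket.close(code=4401, reason="Unauthorized")
        return
    # ... proceed with the conversation
```

Rejecting unauthenticated connections with a close code in the 4000-4999 range keeps the refusal visible in your logs without tying it to a reserved protocol code.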
When your server receives a message with "type": "end_call", the conversation is ending. Your server can handle cleanup and connection closure gracefully.
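Detecting that message can be sketched as a small helper (the function name is illustrative, not part of Cekura's API):

```python
import json

def is_end_call(raw_message):
    """Return True when Cekura signals that the conversation is ending."""
    try:
        data = json.loads(raw_message)
    except (json.JSONDecodeError, TypeError):
        return False
    return isinstance(data, dict) and data.get("type") == "end_call"
```

Call it before your normal message handling; when it returns True, run any cleanup (flushing logs, releasing resources) and close the connection.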
2. Expose Server
For local development, use ngrok:
ngrok http 127.0.0.1:8765
Convert the https:// URL to wss:// for WebSocket connection.
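The scheme swap is a simple prefix replacement (the ngrok hostname below is a placeholder; yours will differ):

```python
# Hypothetical ngrok forwarding URL printed by the command above
https_url = "https://abc123.ngrok-free.app"

# Swap the scheme for the WebSocket connection URL
wss_url = "wss://" + https_url.removeprefix("https://")
print(wss_url)  # wss://abc123.ngrok-free.app
```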
When a test profile is attached to your Evaluator and the evaluator is run as a WebSocket Chat, any fields in the test profile starting with X- are automatically sent as HTTP headers to your WebSocket server during connection establishment. This allows your server to receive user context before the conversation begins. Reading the profile headers in your server:
```python
async def handle_connection(websocket, path):
    headers = websocket.request_headers
    customer_id = headers.get('X-Customer-ID')
    account_tier = headers.get('X-Account-Tier')

    # Pre-load customer data before conversation starts
    customer_data = await fetch_customer(customer_id)

    # Customize behavior based on tier
    if account_tier == 'premium':
        enable_priority_support()

    # Now handle the conversation...
```
Only fields starting with X- are sent as headers. Other fields remain available for use in evaluator instructions via template variables like {{test_profile.name}}.
Use Cases:
- Pre-load customer/user data before the conversation starts
- Route to specific handlers based on account tier or customer type
Your WebSocket server can attach metadata to a run by including a metadata field in any message sent to Cekura. Metadata is useful for passing contextual information, such as internal IDs, environment details, or customer attributes, that you want associated with the test run. Sending metadata with a message:
```json
{
  "content": "Hello! How can I help you today?",
  "metadata": {
    "customer_id": "cust_123",
    "session_type": "support"
  }
}
```
Sending metadata without a message: You can send a metadata-only message (no role or content) at any point during the conversation. This attaches the metadata to the run without triggering a response from the testing agent:
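For instance, a metadata-only message might look like this (the field values are illustrative):

```json
{
  "metadata": {
    "internal_session_id": "sess_abc",
    "environment": "staging"
  }
}
```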
Metadata must be a JSON object. Non-object values are ignored.
Metadata is merged across messages using last-write-wins — if you send the same key in multiple messages, the latest value is kept.
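The two rules above can be sketched in Python; this mirrors the described behavior and is not Cekura's actual implementation:

```python
def merge_metadata(accumulated, incoming):
    """Merge metadata from one message into the run's accumulated metadata."""
    if not isinstance(incoming, dict):
        return accumulated  # non-object metadata is ignored
    merged = dict(accumulated)
    merged.update(incoming)  # last-write-wins for repeated keys
    return merged
```

For example, sending `{"environment": "staging"}` and later `{"environment": "prod"}` leaves the run with `"environment": "prod"`.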
Python example:
```python
import json

async def handle_connection(websocket, path):
    # Send metadata at the start of the conversation
    await websocket.send(json.dumps({
        "metadata": {
            "internal_session_id": "sess_abc",
            "environment": "staging"
        }
    }))

    async for message in websocket:
        data = json.loads(message)
        response = generate_response(data["content"])

        # Send response with additional metadata
        await websocket.send(json.dumps({
            "content": response,
            "metadata": {
                "model_used": "gpt-4o",
                "response_latency_ms": 342
            }
        }))
```
Run chat tests programmatically via the API. For complete API documentation, including authentication, request parameters, response format, and code examples in multiple languages, see the Run Evaluator Text API Reference.