Cekura is designed to work seamlessly with AI coding agents. Whether you’re using Claude Code, Cursor, or VS Code, you can manage your voice agent testing and observability directly from your development environment.

Integration Options

MCP Server

Connect your AI assistant to Cekura’s API via the Model Context Protocol for real-time tool access.

AGENTS.md

Add an AGENTS.md file to your repo to teach AI assistants how to use Cekura for your project.

Quick Setup

1. Connect via MCP

The fastest way to integrate is through the MCP server. This gives your AI assistant direct access to Cekura’s API tools.
claude mcp add --transport http cekura --scope user https://api.cekura.ai/mcp --header "X-CEKURA-API-KEY:YOUR_API_KEY"
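For Cursor or VS Code, the equivalent connection is typically declared in an MCP config file. A hedged sketch using the common `mcpServers` shape (check your client's documentation for the exact file location and header support):

```json
{
  "mcpServers": {
    "cekura": {
      "url": "https://api.cekura.ai/mcp",
      "headers": {
        "X-CEKURA-API-KEY": "YOUR_API_KEY"
      }
    }
  }
}
```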

2. Add AGENTS.md to Your Repo

Drop an AGENTS.md template into your project root. This teaches any AI assistant the Cekura-specific workflows and terminology for your project.
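A minimal skeleton for such a file might look like the following; the section names and wording are illustrative, not a required format:

```markdown
# AGENTS.md

## Cekura
This project uses Cekura for voice agent testing and observability.

- Docs index: https://docs.cekura.ai/llms.txt (full content: https://docs.cekura.ai/llms-full.txt)
- MCP server: https://api.cekura.ai/mcp (authenticate with the X-CEKURA-API-KEY header)

## Conventions
- Start with read-only tools (listing agents, reading call logs) before write operations.
- Always pass project_id when the workspace has multiple projects.
- Use pagination (page, limit) when fetching transcripts or large result sets.
```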

3. Discover Documentation

Cekura’s documentation is available in agent-friendly formats:
| Resource | URL | Purpose |
| --- | --- | --- |
| llms.txt | https://docs.cekura.ai/llms.txt | Page index with descriptions for AI discovery |
| llms-full.txt | https://docs.cekura.ai/llms-full.txt | Full documentation content in markdown |
| OpenAPI Spec | https://api.cekura.ai/api/schema/ | Complete API specification |
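An llms.txt file is conventionally a markdown page index whose entries follow a `- [title](url): description` link-list form. A sketch of parsing such an index; the sample content below is illustrative, not Cekura's actual index:

```python
import re

# Matches markdown index entries of the form "- [Title](url): description",
# the link-list convention used by llms.txt files.
ENTRY = re.compile(r"^- \[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?$")

def parse_llms_txt(text: str) -> list[dict]:
    """Extract title/url/description entries from an llms.txt-style index."""
    entries = []
    for line in text.splitlines():
        m = ENTRY.match(line.strip())
        if m:
            entries.append(m.groupdict())
    return entries

# Illustrative sample only; fetch the real index from https://docs.cekura.ai/llms.txt
sample = """\
# Cekura Docs

- [Quickstart](https://docs.cekura.ai/quickstart): Set up your first agent
- [API Reference](https://docs.cekura.ai/api): Complete API documentation
"""

entries = parse_llms_txt(sample)
print(entries[0]["title"], entries[0]["url"])
```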

Common Agent Workflows

Test Your Agent After a Deploy

User: "Run our regression tests against the staging agent"

AI: [Uses list-agents, then scenarios-run-scenarios-create]
"I've kicked off 12 scenarios against your staging agent.
Results will be ready in ~3 minutes. I'll check the results."
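Under the hood, an MCP client invokes tools like `list-agents` as JSON-RPC `tools/call` requests against the server. A minimal sketch of how such a request is shaped per the MCP specification; the `project_id` argument is illustrative, and in practice your MCP client builds and sends this for you:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Shape a JSON-RPC 2.0 tools/call request as defined by the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Tool name from the workflow above; the argument is a hypothetical example.
payload = build_tool_call("list-agents", {"project_id": "YOUR_PROJECT_ID"})
print(json.dumps(payload, indent=2))
```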

Create Metrics from Agent Description

User: "Generate quality metrics for our customer support agent"

AI: [Uses metrics-generate-metrics-create]
"I've generated 5 metrics based on the agent's system prompt:
1. Greeting Quality (boolean)
2. Issue Resolution (boolean)
3. Empathy Score (numeric 1-5)
4. Policy Compliance (boolean)
5. Call Wrap-up Quality (boolean)"

Review Call Quality

User: "Show me the worst performing calls from last week"

AI: [Uses call-logs-list with filters]
"Here are the 5 lowest-scoring calls from the past 7 days..."

Generate Evaluators from Real Calls

User: "Create test scenarios from our recent support calls"

AI: [Uses call-logs-create-scenarios-create]
"I've created 8 test scenarios from your recent call logs.
Each scenario captures the conversation flow and expected outcomes."

Best Practices

Start with read-only operations (listing agents, viewing results, reading call logs) before moving to write operations like creating metrics or running scenarios.
If you have multiple projects, always specify the project_id parameter to scope your queries. This prevents cross-project data leakage.
Large responses (detailed transcripts, full result sets) can consume significant context. Use pagination parameters (page, limit) to control response size.
Use Cekura’s GitHub Actions integration alongside MCP to run evals on every PR and review results from your agent.
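The pagination advice above can be sketched generically. Here `fetch_page` is a hypothetical stand-in for whatever paginated call your agent makes (for example, the call-logs-list tool), and `page`/`limit` are the parameters named above:

```python
def fetch_all(fetch_page, limit: int = 50, max_pages: int = 100) -> list:
    """Accumulate paginated results; fetch_page(page, limit) returns one page as a list."""
    results = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page=page, limit=limit)
        results.extend(batch)
        if len(batch) < limit:  # short page means no more results
            break
    return results

# Stubbed example: 120 fake call-log IDs served 50 at a time.
fake_logs = [f"call-{i}" for i in range(120)]

def fake_fetch(page: int, limit: int) -> list:
    start = (page - 1) * limit
    return fake_logs[start:start + limit]

all_logs = fetch_all(fake_fetch)
print(len(all_logs))  # 120
```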

Next Steps

MCP Setup Guide

Detailed MCP server configuration for all supported clients

AGENTS.md Template

Copy-paste template for your project

API Reference

Complete API documentation

Testing Guide

Learn how to create and run evaluators