PATCH /test_framework/v1/metrics/{id}
cURL
curl --request PATCH \
  --url https://api.cekura.ai/test_framework/v1/metrics/{id}/ \
  --header 'Content-Type: application/json' \
  --header 'X-CEKURA-API-KEY: <api-key>' \
  --data '
{
  "name": "<string>",
  "description": "<string>",
  "audio_enabled": true,
  "prompt": "<string>",
  "agent": 123,
  "assistant_id": "<string>",
  "type": "<string>",
  "eval_type": "<string>",
  "enum_values": {},
  "display_order": 123,
  "configuration": {}
}
'

Example response:
{
  "id": 123,
  "project": 123,
  "agent": 123,
  "name": "<string>",
  "description": "<string>",
  "type": "basic",
  "eval_type": "binary_workflow_adherence",
  "enum_values": "<unknown>",
  "audio_enabled": true,
  "prompt": "<string>",
  "evaluation_trigger": "always",
  "evaluation_trigger_prompt": "<string>",
  "priority_assignment_prompt": "<string>",
  "configuration": "<unknown>",
  "overall_score": "<string>",
  "total_score": "<string>",
  "knowledge_base_files": "<string>",
  "observability_enabled": true,
  "simulation_enabled": true,
  "alert_enabled": true,
  "alert_type": "disabled",
  "significant_change_alert_status": "enabled",
  "significant_change_alert_direction": "",
  "function_name": "<string>",
  "custom_code": "<string>",
  "vocera_defined_metric_code": "<string>",
  "reviews": [
    {
      "id": 123,
      "test_set": {
        "id": 123,
        "agent": 123,
        "name": "<string>",
        "transcript": "<string>",
        "voice_recording_url": "<string>",
        "call_end_reason": "<string>",
        "duration": "<string>",
        "source_model": "CallLog",
        "source_id": "<string>",
        "created_at": "2023-11-07T05:31:56Z",
        "updated_at": "2023-11-07T05:31:56Z"
      },
      "metric": 123,
      "expected_value": "<unknown>",
      "actual_value": "<unknown>",
      "explanation": "<unknown>",
      "feedback": "<string>",
      "created_at": "2023-11-07T05:31:56Z",
      "updated_at": "2023-11-07T05:31:56Z"
    }
  ]
}
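
For reference, a minimal Python sketch of the same call using the requests library; the URL and headers follow the cURL example above, while the metric ID, API key, and updated fields are placeholders:

import requests

API_KEY = "<api-key>"  # placeholder: your Cekura API key
METRIC_ID = 123        # placeholder: the metric to update

# PATCH is a partial update: send only the fields you want to change.
response = requests.patch(
    f"https://api.cekura.ai/test_framework/v1/metrics/{METRIC_ID}/",
    headers={
        "Content-Type": "application/json",
        "X-CEKURA-API-KEY": API_KEY,
    },
    json={"name": "Customer Satisfaction", "audio_enabled": True},
)
response.raise_for_status()
metric = response.json()
print(metric["id"], metric["name"])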

Authorizations

X-CEKURA-API-KEY
string
header
required

API key authentication. Include the key in the header of each request.

Path Parameters

id
integer
required

A unique integer value identifying this metric.

Body

name
string

Name of the metric

description
string

Description of what this metric evaluates

audio_enabled
boolean

Whether this metric evaluates audio content

prompt
string

The evaluation prompt used for this metric

agent
integer

The AI agent this metric is associated with

assistant_id
string

External identifier for the assistant

type
string

Type of metric (e.g., basic, custom_prompt, custom_code)

eval_type
string

Type of evaluation (e.g., binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum)

enum_values
object

Possible values for enum-type metrics

display_order
integer

Order in which to display this metric in the UI

configuration
object

Custom configuration parameters for specific metrics. For the pronunciation metric, you can set words as a list of (word, phonemes) 2-tuples, for example:

{
  "words": [["hello", "hɛl.loʊ"], ["world", "wɝld"]]
}
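
For instance, a minimal sketch of sending only this field through the endpoint above; the metric ID and API key are placeholders and the phoneme strings are illustrative:

import requests

requests.patch(
    "https://api.cekura.ai/test_framework/v1/metrics/123/",
    headers={"X-CEKURA-API-KEY": "<api-key>"},
    json={"configuration": {"words": [["hello", "hɛl.loʊ"], ["world", "wɝld"]]}},
)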

Response

id
integer
project
integer | null
agent
integer | null
name
string

Name of the metric. Example: "Customer Satisfaction" or "Appointment Booking"

Maximum string length: 255
description
string

Description of what the metric measures. Example: "Measures how satisfied customers are with the service provided"

type
enum<string>
  • basic - Basic
  • custom_prompt - Custom Prompt
  • custom_code - Custom Code
eval_type
enum<string>
  • binary_workflow_adherence - Binary Workflow Adherence
  • binary_qualitative - Binary Qualitative
  • continuous_qualitative - Continuous Qualitative
  • numeric - Numeric
  • enum - Enum
enum_values
any

List of possible enum values for enum type metrics. Example: ["satisfied", "unsatisfied"]
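
For example, a minimal sketch of switching a metric to enum evaluation using the eval_type and enum_values body fields documented above; the metric ID, API key, and values are placeholders:

import requests

requests.patch(
    "https://api.cekura.ai/test_framework/v1/metrics/123/",
    headers={"X-CEKURA-API-KEY": "<api-key>"},
    json={"eval_type": "enum", "enum_values": ["satisfied", "unsatisfied"]},
)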

audio_enabled
boolean

Whether this metric requires audio analysis. Example: true or false

prompt
string

Evaluation prompt for the metric. Example: "Evaluate customer satisfaction based on conversation"

evaluation_trigger
enum<string>
  • always - Always
  • automatic - Automatic
  • custom - Custom
evaluation_trigger_prompt
string

Evaluation trigger prompt for the metric. Example: "Evaluate metric only if call ended reason is main-agent-ended-call"

priority_assignment_prompt
string

Priority assignment prompt for the metric.

configuration
any

Custom configuration parameters for specific metrics, if the metric supports them. Example, for infrastructure issues:

{
  "infra_issues_timeout": 10
}
overall_score
string

The overall score for this metric across all test sets

total_score
string

The total score for this metric

knowledge_base_files
string

Knowledge base files associated with this metric

observability_enabled
boolean

Enable this metric for observability. Example: true or false

simulation_enabled
boolean

Enable this metric for simulations. Example: true or false

alert_enabled
boolean

Enable alerts for this metric when it fails (value < 5). Only applicable to binary metrics (binary_workflow_adherence and binary_qualitative). For other metric types, use significant_change_alert_status instead. Example: true or false

alert_type
enum<string>
default:disabled
  • disabled - Alerts Disabled
  • normal - Normal Alerts
  • significant_change - Significant Change Alerts
significant_change_alert_status
enum<string>

Alert status: enabled or disabled.

  • enabled - Enabled
  • disabled - Disabled
significant_change_alert_direction
enum<string>

Alert direction: increase only, decrease only, or both (an empty string means both). Example: "increase", "decrease", or "" for both

  • `` - Both (Increase and Decrease)
  • increase - Increase Only
  • decrease - Decrease Only
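
Putting the alert fields together when reading a metric back, a minimal sketch assuming a dict shaped like the example response:

def alerts_active(metric: dict) -> bool:
    # Per the alert_enabled description above: binary metrics use
    # alert_enabled; other eval types use the significant-change fields.
    if metric["eval_type"] in ("binary_workflow_adherence", "binary_qualitative"):
        return metric["alert_enabled"]
    return metric["significant_change_alert_status"] == "enabled"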
function_name
string | null

Predefined function name. Example: "get_latency" or "check_critical_deviations"

Maximum string length: 255
custom_code
string

Python custom code for the metric. Example:

# `data` holds the call payload available to the snippet; set _result
# and _explanation to report the outcome.
_result = False
_explanation = None
if "call_end_reason" in data and data["call_end_reason"] == "customer_satisfaction":
    _result = True
    _explanation = "Customer expressed satisfaction with service"
vocera_defined_metric_code
string

Vocera defined metric code for the metric. Example: "7fd534f5"

Maximum string length: 255
reviews
object[]

Reviews associated with the metric