GET /test_framework/v1/predefined-metrics
cURL
curl --request GET \
  --url https://api.cekura.ai/test_framework/v1/predefined-metrics/ \
  --header 'X-CEKURA-API-KEY: <api-key>'
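The same request can be made from Python with only the standard library. The URL and header name are as documented above; the key value is a placeholder you must replace with a real API key:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; substitute your real Cekura API key
url = "https://api.cekura.ai/test_framework/v1/predefined-metrics/"

# Build the GET request with the required authentication header.
req = urllib.request.Request(
    url,
    headers={"X-CEKURA-API-KEY": API_KEY},
    method="GET",
)

# With a valid key, this returns the JSON array of predefined metrics:
# metrics = json.load(urllib.request.urlopen(req))
```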
[
  {
    "id": 123,
    "name": "<string>",
    "description": "<string>",
    "code": "<string>",
    "type": "basic",
    "eval_type": "binary_workflow_adherence",
    "audio_enabled": true,
    "prompt": "<string>",
    "function_name": "<string>",
    "enum_values": "<unknown>",
    "custom_code": "<string>",
    "simulation_enabled": true,
    "observability_enabled": true,
    "configuration": "<unknown>",
    "cost": "<string>",
    "evaluation_trigger": "always",
    "evaluation_trigger_prompt": "<string>",
    "category": "Speech Quality",
    "sub_category": "<string>",
    "organization": 123
  }
]

Authorizations

X-CEKURA-API-KEY
string
header
required

API Key Authentication. It should be included in the header of each request.

Response

200 - application/json
id
integer
name
string

Name of the metric. Example:

  • "Customer Satisfaction"
  • "Response Time"
Maximum string length: 255
description
string

Description of what the metric measures. Example: "Measures how satisfied customers are with the service provided"

code
string

Unique code identifier for the metric

type
enum<string>
  • basic - Basic
  • custom_prompt - Custom Prompt
  • custom_code - Custom Code
Available options:
basic,
custom_prompt,
custom_code
eval_type
enum<string>
  • binary_workflow_adherence - Binary Workflow Adherence
  • binary_qualitative - Binary Qualitative
  • continuous_qualitative - Continuous Qualitative
  • numeric - Numeric
  • enum - Enum
Available options:
binary_workflow_adherence,
binary_qualitative,
continuous_qualitative,
numeric,
enum
audio_enabled
boolean

Whether this metric requires audio analysis

prompt
string

Evaluation prompt for the metric

function_name
string | null

Python function name for custom metric evaluation

enum_values
any

List of possible enum values for enum type metrics

custom_code
string

Custom evaluation code for the metric

simulation_enabled
boolean

Enable this metric for simulations. Example: true or false

observability_enabled
boolean

Enable this metric for observability. Example: true or false

configuration
any

Custom configuration parameters for the metric. Example: `{}`

cost
string<decimal>

Cost in credits for evaluating this metric. Example: 0.005000

evaluation_trigger
enum<string>

When to trigger this metric evaluation

  • always - Always
  • automatic - Automatic
  • custom - Custom
Available options:
always,
automatic,
custom
evaluation_trigger_prompt
string

Prompt to determine when to trigger metric evaluation

category
enum<string> | null

Category of the metric

  • Speech Quality - Speech Quality
  • Conversation Quality - Conversation Quality
  • Accuracy - Accuracy
  • Customer Experience - Customer Experience
Available options:
Speech Quality,
Conversation Quality,
Accuracy,
Customer Experience
sub_category
string | null

Sub category of the metric

organization
integer | null
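Since the endpoint returns a flat JSON array, the decoded response can be filtered client-side. The sketch below keeps only metrics in a given category that are enabled for simulations; the field names follow the schema above, while the sample records are illustrative, not real API output:

```python
import json

# Illustrative sample mirroring the response schema documented above.
sample = json.loads("""[
  {"id": 1, "name": "Latency", "category": "Speech Quality",
   "eval_type": "numeric", "simulation_enabled": true},
  {"id": 2, "name": "Politeness", "category": "Customer Experience",
   "eval_type": "binary_qualitative", "simulation_enabled": false}
]""")

def simulation_metrics(metrics, category):
    """Return metrics in the given category with simulation_enabled set."""
    return [m for m in metrics
            if m["category"] == category and m["simulation_enabled"]]

print([m["name"] for m in simulation_metrics(sample, "Speech Quality")])  # -> ['Latency']
```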