POST /test_framework/v1/results/{id}/rerun/
cURL
curl --request POST \
  --url https://api.cekura.ai/test_framework/v1/results/{id}/rerun/ \
  --header 'X-CEKURA-API-KEY: <api-key>'
Example response (200 OK):

{
  "id": 123,
  "name": "<string>",
  "agent": 123,
  "status": "pending",
  "success_rate": 0,
  "run_as_text": false,
  "is_cronjob": "<string>",
  "runs": {},
  "met_expected_outcome_count": "<string>",
  "total_expected_outcome_count": "<string>",
  "total_runs_count": "<string>",
  "completed_runs_count": "<string>",
  "success_runs_count": "<string>",
  "failed_runs_count": "<string>",
  "scenarios": {},
  "created_at": "2023-11-07T05:31:56Z",
  "updated_at": "2023-11-07T05:31:56Z"
}

Authorizations

X-CEKURA-API-KEY
string
header
required

API key authentication. Include the key in the X-CEKURA-API-KEY header of every request.
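
The cURL call above can be sketched in Python with only the standard library. The result ID and API key below are placeholders; the actual request is left commented out since it needs a valid key:

```python
import json
import urllib.request

BASE_URL = "https://api.cekura.ai"

def build_rerun_request(result_id: int, api_key: str) -> urllib.request.Request:
    """Build the POST request that reruns an existing test result."""
    url = f"{BASE_URL}/test_framework/v1/results/{result_id}/rerun/"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"X-CEKURA-API-KEY": api_key},
    )

req = build_rerun_request(123, "my-api-key")  # placeholder ID and key
print(req.full_url)  # https://api.cekura.ai/test_framework/v1/results/123/rerun/

# To actually send it (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     result = json.load(resp)
```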

Path Parameters

id
integer
required

A unique integer value identifying this result.

Response

200 - application/json
id
integer

Unique identifier for the result. Example: 123

name
string

Name of the result. Example: "Test Result 1"

Maximum string length: 255
agent
integer

ID of the AI agent that was tested. Example: 123

status
enum<string>
default:pending

Current status of the result

  • running - Running
  • completed - Completed
  • failed - Failed
  • pending - Pending
  • in_progress - In Progress
  • evaluating - Evaluating
  • in_queue - In Queue
  • timeout - Timeout
  • cancelled - Cancelled
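
A rerun starts in pending and moves through the states above, so a client typically polls the result until it reaches a terminal status. A minimal sketch of that classification; which statuses are terminal is an assumption based on their names, not a documented guarantee:

```python
# Statuses from the enum above. The terminal/active split is an
# assumption based on the names, not a documented guarantee.
TERMINAL_STATUSES = {"completed", "failed", "timeout", "cancelled"}
ACTIVE_STATUSES = {"running", "pending", "in_progress", "evaluating", "in_queue"}

def is_terminal(status: str) -> bool:
    """Return True when polling can stop for this result status."""
    return status in TERMINAL_STATUSES

print(is_terminal("completed"))  # True
print(is_terminal("in_queue"))   # False
```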
success_rate
number<double>
default:0

Success rate of the test runs as a decimal (0.0 to 1.0)

run_as_text
boolean
default:false

Whether this test was run in text mode instead of voice mode. Example: true

is_cronjob
string

Whether this result was created by a scheduled cronjob. Example: true

runs
object

List of test runs associated with this result, including run details, status, scenario information, and phone numbers used. Example:

{
  "run_id": {
    "id": "integer",
    "scenario": "integer",
    "outbound_number": "string",
    "expected_outcome": {
      "score": 100,
      "explanation": [
        "✅ Positive outcome explanation with checkmark emoji",
        "❌ Negative outcome explanation with X emoji"
      ],
      "outcome_alignments": [
        {
          "outcome": "string",
          "prompt_part": "string",
          "aligned": "boolean"
        }
      ]
    },
    "success": "boolean",
    "evaluation": {
      "metrics": [
        {
          "id": "integer",
          "name": "string",
          "type": "binary_workflow_adherence | binary_qualitative | continuous_qualitative | numeric | enum",
          "score": "number",
          "explanation": "string | array",
          "function_name": "string (optional)",
          "extra": {
            "categories": [
              {
                "category": "string",
                "deviation": "string (optional)",
                "priority": "string (optional)"
              }
            ],
            "percentiles": {
              "p50": "number"
            }
          },
          "enum": "string (for enum type metrics only)"
        }
      ]
    },
    "timestamp": "datetime",
    "executed_at": "datetime",
    "error_message": "string",
    "status": "string",
    "duration": "string (MM:SS format)",
    "scenario_name": "string",
    "personality_name": "string",
    "metadata": "object",
    "inbound_number": "string"
  }
}
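
The runs object is keyed by run ID. A sketch of walking that shape to collect per-run metric scores; the field names follow the example above, while the sample values are invented:

```python
# Minimal sample shaped like the "runs" object above (values invented).
runs = {
    "101": {
        "success": True,
        "scenario_name": "Greeting flow",
        "evaluation": {
            "metrics": [
                {"id": 1, "name": "latency", "type": "numeric", "score": 0.9},
                {"id": 2, "name": "adherence", "type": "binary_workflow_adherence", "score": 1.0},
            ]
        },
    }
}

def metric_scores(runs: dict) -> dict:
    """Map run ID -> {metric name: score} from a result's runs object."""
    return {
        run_id: {m["name"]: m["score"] for m in run.get("evaluation", {}).get("metrics", [])}
        for run_id, run in runs.items()
    }

print(metric_scores(runs))
# {'101': {'latency': 0.9, 'adherence': 1.0}}
```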
met_expected_outcome_count
string

Number of test runs that met the expected outcome criteria. Example: 10

total_expected_outcome_count
string

Total number of test runs that were evaluated for expected outcomes. Example: 10

total_runs_count
string

Total number of test runs associated with this result. Example: 10

completed_runs_count
string

Number of test runs that have completed successfully. Example: 10

success_runs_count
string

Number of test runs that were marked as successful. Example: 10

failed_runs_count
string

Number of test runs that failed or encountered errors. Example: 10
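
Note that the count fields above are returned as strings, so they need to be coerced to integers before arithmetic. A sketch computing a success percentage from them; the sample values are invented:

```python
# Sample counts as the API returns them: strings (values invented).
result = {
    "total_runs_count": "10",
    "completed_runs_count": "10",
    "success_runs_count": "8",
    "failed_runs_count": "2",
}

def success_percentage(result: dict) -> float:
    """Percentage of completed runs that were marked successful."""
    completed = int(result["completed_runs_count"])
    if completed == 0:
        return 0.0  # avoid division by zero before any run completes
    return 100.0 * int(result["success_runs_count"]) / completed

print(success_percentage(result))  # 80.0
```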

scenarios
object

List of scenario names used in the test runs for this result. Example:

[
  {
    "id": 123,
    "name": "Scenario 1"
  },
  {
    "id": 456,
    "name": "Scenario 2"
  }
]
created_at
string<date-time>

Timestamp when this test result was created. Example: 2021-01-01 00:00:00

updated_at
string<date-time>

Timestamp when this test result was last updated