API view set for managing metrics. This view set provides endpoints to create, retrieve, update, and delete metrics associated with AI agents within a specific organization.
API Key Authentication. The API key should be included in the header of each request.
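A minimal request sketch, assuming Python with the requests library; the base URL, endpoint path, and API-key header name used here (X-VOCERA-API-KEY) are placeholders rather than documented values and should be replaced with the ones issued for your organization.

import requests

BASE_URL = "https://api.example.com/v1"  # placeholder base URL
API_KEY = "your-api-key"                 # issued per organization

# The header name is an assumption; use the one specified for your account.
headers = {"X-VOCERA-API-KEY": API_KEY}

# List metrics visible to the organization tied to the API key.
response = requests.get(f"{BASE_URL}/metrics/", headers=headers)
response.raise_for_status()
for metric in response.json():
    print(metric)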
Name of the metric
Maximum length: 255 characters.
Detailed description of what the metric evaluates
The AI agent this metric is associated with
Type of evaluation (e.g., boolean, enum, score)
Choices: binary_workflow_adherence (Binary Workflow Adherence), binary_qualitative (Binary Qualitative), continuous_qualitative (Continuous Qualitative), numeric (Numeric), enum (Enum)
Possible values for enum-type metrics
Whether audio evaluation is enabled for this metric
Type of metric (e.g., basic, custom_prompt, custom_code)
Choices: basic (Basic), custom_prompt (Custom Prompt), custom_code (Custom Code)
The evaluation prompt used for this metric
The evaluation trigger used for this metric
Choices: always (Always), automatic (Automatic), custom (Custom)
The evaluation trigger prompt used for this metric
Name of the metric.
Example: "Customer Satisfaction" or "Appointment Booking"
Maximum length: 255 characters.
Description of what the metric measures.
Example: "Measures how satisfied customers are with the service provided"
Choices for metric type: basic (Basic), custom_prompt (Custom Prompt), custom_code (Custom Code)
Choices for evaluation type: binary_workflow_adherence (Binary Workflow Adherence), binary_qualitative (Binary Qualitative), continuous_qualitative (Continuous Qualitative), numeric (Numeric), enum (Enum)
List of possible enum values for enum-type metrics.
Example: ["satisfied", "unsatisfied"]
Whether this metric requires audio analysis.
Example: true or false
Evaluation prompt for the metric.
Example: "Evaluate customer satisfaction based on conversation"
Choices: always (Always), automatic (Automatic), custom (Custom)
Evaluation trigger prompt for the metric.
Example: "Evaluate metric only if call ended reason is main-agent-ended-call"
Priority assignment prompt for the metric.
Custom configuration parameters for specific metrics, if the metric supports them. Example:
{
  "infra_issues_timeout": 10
}
The overall score for this metric across all test sets
The total score for this metric
Knowledge base files associated with this metric
Enable this metric for observability.
Example: true or false
Enable this metric for simulations.
Example: true or false
Enable alerts for this metric when it fails (value < 5).
Only applicable to binary metrics (binary_workflow_adherence and binary_qualitative).
For other metric types, use significant_change_alert_status instead.
Example: true or false
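As a sketch of how the alert fields might be toggled on an existing metric, the partial-update request below assumes the endpoint accepts PATCH; significant_change_alert_status is named in the description above, but the URL shape, header name, and payload layout are assumptions.

import requests

BASE_URL = "https://api.example.com/v1"         # placeholder base URL
headers = {"X-VOCERA-API-KEY": "your-api-key"}  # assumed header name
metric_id = 123                                 # id of an existing metric (assumed URL shape)

# Switch a non-binary metric to significant-change alerts; the value comes
# from the documented choices, the key layout is an assumption.
payload = {"significant_change_alert_status": "significant_change"}

response = requests.patch(f"{BASE_URL}/metrics/{metric_id}/", json=payload, headers=headers)
response.raise_for_status()
print(response.json())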
Choices: disabled (Alerts Disabled), normal (Normal Alerts), significant_change (Significant Change Alerts)
Alert status: enabled or disabled.
Choices: enabled (Enabled), disabled (Disabled)
Alert direction: increase only, decrease only, or both (empty = both).
Example: "increase", "decrease", or "both"
Choices: increase (Increase Only), decrease (Decrease Only)
Predefined function name
Example: "get_latency" or "check_critical_deviations"
Maximum length: 255 characters.
Python custom code for the metric. Example:
_result = False
_explanation = None
if "call_end_reason" in data and data["call_end_reason"] == "customer_satisfaction":
    _result = True
    _explanation = "Customer expressed satisfaction with service"
Example: "7fd534f5"
255Reviews associated with the metric