This API lists all metrics associated with an agent, a project, or both.
API Key Authentication: the API key must be included in the header of each request.
Filtering behavior:
Filter by agent ID
Example: 123
Filter by project ID
Example: 456
Filter by assistant ID
Example: asst_1234567890
Filter metrics by agent IDs. Supports a comma-separated list.
Example: 1,2,3 to filter metrics associated with any of these agents
Filter by metric slug
Example: latency_metric or customer_satisfaction
JSON filter parameter for advanced filtering.
Example:
{"operator":"and","conditions":[{"field":"agents__id","op":"in","value":[1,2,3]}]}Supported fields: agents__id, eval_type, type, name Supported operators: in, eq, contains
Include Overall and Total scores in Metric List
Example: true or false
URL-friendly unique identifier in snake_case format. Maximum length: 255. Example: "customer_satisfaction_1562"
Name of the AI agent that was tested.
Example: "Test Agent 1"
Name of the metric. Maximum length: 255.
Example: "Customer Satisfaction" or "Appointment Booking"
Description of what the metric measures.
Example: "Measures how satisfied customers are with the service provided"
Predefined function name. Maximum length: 255.
Example: "get_latency" or "check_critical_deviations"
Type of metric:
basic - Basic (deprecated in favor of LLM Judge)
custom_prompt - Custom Prompt (deprecated in favor of LLM Judge)
custom_code - Custom Code
llm_judge - LLM Judge
Allowed values: basic, custom_prompt, custom_code, llm_judge
Type of evaluation:
binary_workflow_adherence - Binary Workflow Adherence
binary_qualitative - Binary Qualitative
continuous_qualitative - Continuous Qualitative
numeric - Numeric
enum - Enum
Allowed values: binary_workflow_adherence, binary_qualitative, continuous_qualitative, numeric, enum
List of possible enum values for enum-type metrics.
Example: ["satisfied", "unsatisfied"]
Whether this metric requires audio analysis.
Example: true or false
Enable this metric for observability.
Example: true or false
Enable this metric for simulations.
Example: true or false
Enable sampling for this metric using the project-level sample rate.
Evaluation prompt for the metric.
Example: "Evaluate customer satisfaction based on conversation"
Display order for the metric. Range: -2147483648 <= x <= 2147483647.
Example: 1
always - Always
automatic - Automatic
custom - Custom
Allowed values: always, automatic, custom
llm_judge - LLM Judge
custom_code - Custom Code
Allowed values: llm_judge, custom_code
Evaluation trigger prompt for the metric.
Example: "Evaluate metric only if call ended reason is main-agent-ended-call"
Python custom code to determine metric relevance. Code should set _result (bool) and _explanation (str). Example:
_result = True
_explanation = "Metric is relevant"
if "call_end_reason" in data and data["call_end_reason"] == "customer-hung-up":
_result = False
_explanation = "Customer hung up, metric not applicable"Priority assignment prompt for the metric.
Vocera-defined metric code for the metric. Maximum length: 255.
Example: "7fd534f5"
Custom configuration parameters for specific metrics, if the metric supports them. Example:
{
"infra_issues_timeout": 10
}
List of knowledge base file IDs for the metric.
Example: [123, 456]
Metric cost.
Example: 0.10
disabled - Alerts Disabled
normal - Normal Alerts
significant_change - Significant Change Alerts
Allowed values: disabled, normal, significant_change
Alert status: enabled or disabled.
enabled - Enabled
disabled - Disabled
Allowed values: enabled, disabled
Alert direction: increase only, decrease only, or both (empty = both).
Example: "increase", "decrease", or empty for both
increase - Increase Only
decrease - Decrease Only
Allowed values: increase, decrease, or empty (both)
Window size for rolling statistics calculation.
Example: 50
Standard deviation multiplier for threshold calculation.
Example: 2.0
Alpha value for exponentially weighted moving average (EWMA) calculation.
Example: 0.1
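Taken together, the window size, standard-deviation multiplier, and EWMA alpha describe a standard rolling-band anomaly check: smooth incoming scores with an EWMA and alert when the smoothed value leaves mean ± multiplier × std over the window, respecting the alert direction. A hedged sketch of that textbook calculation, not Vocera's exact implementation:

from statistics import mean, stdev

def check_alert(history, new_value, window=50, std_multiplier=2.0,
                alpha=0.1, direction=""):
    # Rolling statistics over the most recent `window` scores.
    recent = history[-window:]
    mu, sigma = mean(recent), stdev(recent)
    upper = mu + std_multiplier * sigma
    lower = mu - std_multiplier * sigma
    # EWMA smoothing: each new point pulls the average by a factor of alpha.
    ewma = recent[0]
    for x in recent[1:] + [new_value]:
        ewma = alpha * x + (1 - alpha) * ewma
    if ewma > upper and direction in ("", "increase"):
        return "increase alert"
    if ewma < lower and direction in ("", "decrease"):
        return "decrease alert"
    return None

scores = [0.80, 0.82, 0.79, 0.81, 0.80, 0.78, 0.83, 0.81]
print(check_alert(scores, 0.20, window=5))  # sharp drop -> "decrease alert"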
When enabled, this metric is automatically assigned to new agents created in the project.
Filters to apply before computing alerts (CallLogQueryFilter format)
Slack workspace to send alerts to
Override channel ID for this metric's alerts. Maximum length: 255.