These are standard metrics defined by Cekura that are applicable across domains. Below is the list of supported pre-defined metrics.
  • AI Interrupting User: Tells if the AI (main agent) interrupted the user (testing agent) during the interaction. NOTE: This only works on stereo recordings.
  • Average Latency (in ms): Tells the average response latency for the AI agent in milliseconds (see the latency sketch after this list).
  • Average Pitch (in Hz): Returns the average pitch of the AI Agent (Main agent) during the call in Hertz (see the pitch sketch after this list).
  • Instruction Following Metric: Checks for critical deviations from the expected workflow and reports where the agent deviated.
  • CSAT: Gives a customer satisfaction score and, where the customer is dissatisfied, reports the reason.
  • Not Early Termination: Checks if the call was ended early by the user, indicating poor user experience or unresolved issues. This metric is True if the call was not ended early by the user.
  • Sentiment: Determines the human's (testing agent's) overall sentiment towards the AI agent during the conversation, for example whether the user came across as “happy”.
  • Signal to Noise Ratio (SNR): The signal-to-noise ratio while the AI agent (Main agent) is speaking. Compares the noise level of the Main Agent against that of the Testing Agent (see the SNR sketch after this list).
  • Talk Ratio: The ratio of the time the AI Agent (Main agent) spends speaking to the time the User (Test Agent) spends speaking (see the talk-ratio sketch after this list).
  • User Interrupting AI: Tells if the user (testing agent) interrupted the AI (main agent) during the interaction. NOTE: This only works on stereo recordings.
  • Voice Quality Index: Evaluates the overall voice quality of the AI agent based on three key factors: clarity (how clear and understandable the voice is), tone (appropriateness of tone for the context), and appropriateness (whether the speech fits the context and intent). Returns a score from 0-5 where higher scores indicate better voice quality. NOTE: This metric requires audio recordings for analysis.
  • Words Per Minute (WPM): Monitors the speech speed of the AI Agent (Main Agent) to ensure natural and understandable delivery (see the talk-ratio and WPM sketch after this list).
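To make the timing metrics above more concrete, here is a minimal sketch of how an average-latency figure could be derived from turn timestamps. The Turn structure and its field names are hypothetical and not Cekura's data model; latency is taken as the gap between the end of a user turn and the start of the next AI turn.

```python
# Hypothetical turn structure; latency = gap between a user turn ending
# and the next AI turn starting, averaged and converted to milliseconds.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str   # "user" or "ai"
    start: float   # seconds from call start
    end: float     # seconds from call start

def average_latency_ms(turns: list[Turn]) -> float:
    """Mean gap between the end of a user turn and the start of the next AI turn."""
    gaps = [
        max(0.0, nxt.start - prev.end)
        for prev, nxt in zip(turns, turns[1:])
        if prev.speaker == "user" and nxt.speaker == "ai"
    ]
    return sum(gaps) / len(gaps) * 1000.0 if gaps else 0.0  # seconds -> ms

turns = [
    Turn("user", 0.0, 2.1), Turn("ai", 2.9, 5.0),
    Turn("user", 5.4, 7.0), Turn("ai", 7.6, 10.2),
]
print(round(average_latency_ms(turns)))  # -> 700
```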
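For Average Pitch, one common approach is a fundamental-frequency tracker such as librosa's pyin. The sketch below is illustrative only and makes no claim about how Cekura computes the metric; the audio file name is hypothetical.

```python
# Estimate average pitch with librosa's pyin F0 tracker (illustrative only).
import librosa
import numpy as np

def average_pitch_hz(audio_path: str) -> float:
    """Mean fundamental frequency (Hz) over the voiced frames of a recording."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[voiced_flag & ~np.isnan(f0)]  # keep only voiced, non-NaN frames
    return float(np.mean(voiced)) if voiced.size else 0.0

# print(average_pitch_hz("main_agent_channel.wav"))  # hypothetical file name
```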
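The SNR metric can be pictured as a dB-scale power comparison on a stereo recording. This rough sketch assumes, hypothetically, that one channel carries the Main Agent and the other the Testing Agent; it is not Cekura's implementation.

```python
# dB-scale power comparison between the two channels of a stereo recording.
import numpy as np

def snr_db(main_channel: np.ndarray, other_channel: np.ndarray) -> float:
    """10 * log10 of the Main Agent channel's power over the other channel's power."""
    signal_power = float(np.mean(main_channel.astype(np.float64) ** 2))
    noise_power = float(np.mean(other_channel.astype(np.float64) ** 2))
    if noise_power == 0.0:
        return float("inf")  # silent comparison channel
    return 10.0 * float(np.log10(signal_power / noise_power))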
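Talk Ratio and Words Per Minute both reduce to simple arithmetic over speaker-labelled transcript segments. The Segment fields below are hypothetical, not Cekura's schema.

```python
# Talk Ratio and WPM over hypothetical speaker-labelled transcript segments.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # "ai" or "user"
    start: float   # seconds
    end: float     # seconds
    text: str

def talk_ratio(segments: list[Segment]) -> float:
    """AI speaking time divided by user speaking time."""
    ai = sum(s.end - s.start for s in segments if s.speaker == "ai")
    user = sum(s.end - s.start for s in segments if s.speaker == "user")
    return ai / user if user else float("inf")

def words_per_minute(segments: list[Segment]) -> float:
    """AI word count divided by AI speaking time in minutes."""
    ai_segments = [s for s in segments if s.speaker == "ai"]
    words = sum(len(s.text.split()) for s in ai_segments)
    minutes = sum(s.end - s.start for s in ai_segments) / 60.0
    return words / minutes if minutes else 0.0
```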