What is Red Teaming?
Red teaming is a security testing methodology that simulates adversarial attacks against your AI voice agents. By proactively testing your agents against malicious inputs, you can identify vulnerabilities and ensure your agents behave safely in production. Cekura provides 32,617+ pre-built red-teaming scenarios across 150 categories to comprehensively test your agents.Attack Types
Our red-teaming scenarios cover different categories. Some main categories are:Jailbreak
Attempts to bypass safety guardrails and make the agent ignore its instructions or ethical constraints
Toxicity
Prompts designed to elicit harmful, offensive, or inappropriate content from the agent
Bias
Tests for discriminatory responses based on race, gender, religion, or other protected characteristics
Prompt Injection
Attempts to inject malicious commands or override agent instructions
Hallucination
Tests for fabricated information or false claims about facts, code, or capabilities
Malware
Attempts to make the agent generate malicious code or security exploits
Attack Examples
Jailbreak
Toxicity
Bias
Prompt Injection
Hallucination
Malware
Modalities
Red-teaming scenarios are available in two modalities:Text
32,617 scenarios - Includes all voice scenarios plus text-only attacks that are difficult to simulate in voice calls (e.g., EICAR test strings, encoded payloads)
Voice
16,037 scenarios - Scenarios specifically validated for voice interactions
Text modality contains all voice attack prompts plus additional prompts that are difficult to simulate in voice calls (e.g.,
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H* is difficult to speak in a voice call).