

Overview

When your agent is grounded in a knowledge base — a set of source documents containing the facts it should surface to users — you need test cases that ask knowledge questions and verify the agent gives accurate, complete answers. The recommended approach is to generate evaluators directly from your KB files so each scenario comes with an expected outcome that reflects the correct answer.
1. Upload your Knowledge Base to the agent

In the Cekura dashboard, open Agent Settings for the agent you want to test. Find the Knowledge Base section (bottom-right of the settings page) and upload your source document. Supported formats: .txt, .pdf, .csv, .json.
A structured FAQ document is the most reliable format for grounding knowledge-based agents — clearly paired questions and answers produce better scenario generation.
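As a minimal illustration of what a structured FAQ upload can look like, the sketch below writes a .json knowledge base file. The "question"/"answer" field names are an arbitrary convention chosen for this example, not a schema Cekura requires; any clearly paired Q&A structure serves the same purpose.

```python
import json

# Illustrative FAQ knowledge base. The field names are an arbitrary
# convention for this sketch, not a Cekura-mandated schema.
faq = [
    {
        "question": "What are your support hours?",
        "answer": "Support is available Monday to Friday, 9am-6pm ET.",
    },
    {
        "question": "How do I reset my password?",
        "answer": "Use the 'Forgot password' link on the login page.",
    },
]

with open("faq_kb.json", "w") as f:
    json.dump(faq, f, indent=2)
```

The same content could equally be uploaded as a plain .txt file with alternating question and answer lines; the key property is that each question has one unambiguous answer.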
2. Generate scenarios using the Knowledge Base option

Navigate to the evaluator or scenario generation flow for your agent. When prompted for a generation type or extra instructions, select or specify the Knowledge Base option. Cekura will read your uploaded KB files and automatically create a set of evaluators where:
  • The Testing Agent instructions ask the kinds of questions a real user would ask based on the document content.
  • The Expected Outcome for each evaluator describes what a correct, complete answer looks like, derived from the source document.
This means you get ready-to-run test cases with built-in correctness criteria — no need to define expected outputs by hand.
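To make the two parts concrete, here is a hypothetical shape for one generated evaluator. The field names and structure are illustrative only; the objects Cekura actually produces may differ.

```python
# Hypothetical generated evaluator for the FAQ entry about support
# hours. Field names are illustrative, not Cekura's actual format.
evaluator = {
    "name": "Support hours question",
    "testing_agent_instructions": (
        "You are a customer calling to ask when support is available. "
        "Ask about support hours, and if the answer is vague, "
        "ask for exact days and times."
    ),
    "expected_outcome": (
        "The agent states that support is available Monday to Friday, "
        "9am-6pm ET, consistent with the knowledge base."
    ),
}

print(evaluator["name"])
```

Note how both halves trace back to the same source document: the instructions pose the user's question, and the expected outcome restates the documented answer.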
3. Run the generated evaluators

Execute the generated scenarios against your agent. Because each scenario has an expected outcome grounded in the KB, the Expected Outcome predefined metric will automatically evaluate whether the agent’s answer matches the documented facts.
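Cekura's Expected Outcome metric handles this check for you. As a rough mental model only (the real metric is more sophisticated than substring matching), the evaluation resembles verifying that the documented facts appear in the agent's answer:

```python
def facts_covered(agent_answer: str, expected_facts: list[str]) -> bool:
    """Naive containment check: every expected fact must appear
    (case-insensitively) in the agent's answer. This is only a
    mental model, not how Cekura's metric actually works."""
    answer = agent_answer.lower()
    return all(fact.lower() in answer for fact in expected_facts)

result = facts_covered(
    "Our support team is available Monday to Friday, 9am-6pm ET.",
    ["monday to friday", "9am-6pm et"],
)
print(result)  # True
```

A failing check under this model means the transcript never surfaced one of the documented facts, which is exactly the kind of run you review in the next step.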
4. Review failures and iterate

Failed runs indicate either a gap in your agent’s knowledge or a mismatch between what the document says and what the agent answers. Review the transcripts and adjust the agent’s prompt, KB files, or configuration as needed.
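A simple way to triage is to pull the failed scenarios out of your run results and review those transcripts first. The record fields below are hypothetical, not Cekura's actual export format:

```python
# Hypothetical run results; "scenario" and "passed" are illustrative
# field names, not Cekura's actual export schema.
runs = [
    {"scenario": "Support hours", "passed": True},
    {"scenario": "Refund policy", "passed": False},
    {"scenario": "Password reset", "passed": False},
]

failed = [r["scenario"] for r in runs if not r["passed"]]
print(failed)  # ['Refund policy', 'Password reset']
```

Grouping failures this way often reveals a pattern, such as every failure tracing back to the same missing or outdated section of the KB file.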

Alternative: Generate Scenarios via the Cekura Agent

If you prefer a conversational approach, you can also use the Cekura agent directly to create FAQ-style scenarios from your file:
  1. Open the Cekura agent interface.
  2. Upload your KB document and send a prompt like: “Create FAQ style scenarios from the attached FAQ file.”
  3. The agent will produce a set of evaluator scenarios you can import and run.
This approach gives you more control over phrasing and lets you review scenarios before adding them to your test suite.

Language Support

Both workflows support non-English agents — including Spanish, French, German, Portuguese, and other languages. Upload your KB files in the target language, configure your evaluator with a personality that matches that language, and the generated scenarios will test your agent in the same language as the source documents.