Test Cases and Test Runs
Overview, Creation and Actions
The quality assurance of AI functionality is based on test cases and test runs. Especially after system changes or adjustments to AI agents, a thorough verification of functionality is essential. Test runs provide structured validation, detect potential errors early, and ensure a consistent performance evaluation of AI agents.
Test Cases
A test case describes a single scenario used to verify the performance of an AI agent. By adding specific tickets, different use cases can be tested to ensure that the AI agent responds as expected.
Creation:
- In the Quality Assurance - Test Cases area, open the sidebar for the desired AI agent.
- Add the tickets intended for the test via their Ticket-ID.
- If no specific Ticket-ID is available yet:
  - In the filter of the ticket overview, select the item “Concern”.
  - Display suitable tickets and add them to the test run via their ID.
- If necessary, test cases can be removed again via the sidebar.
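The steps above describe the creation purely through the user interface. As an illustrative sketch only, the following Python snippet models a test case as a small data structure with the ticket additions and removals described above; the names (TestCase, add_ticket, remove_ticket) are assumptions made for this example and are not part of the product.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model of a test case: one scenario for an AI agent
# plus the tickets (by Ticket-ID) used to verify it. Field and method names
# are illustrative assumptions, not the product's actual data model.
@dataclass
class TestCase:
    ai_agent: str                                          # AI agent under test
    ticket_ids: list[str] = field(default_factory=list)   # tickets added via Ticket-ID

    def add_ticket(self, ticket_id: str) -> None:
        """Add a ticket to the test case, mirroring the sidebar action."""
        if ticket_id not in self.ticket_ids:
            self.ticket_ids.append(ticket_id)

    def remove_ticket(self, ticket_id: str) -> None:
        """Remove a ticket again, mirroring removal via the sidebar."""
        if ticket_id in self.ticket_ids:
            self.ticket_ids.remove(ticket_id)

# Example: a test case for a hypothetical agent with two tickets.
case = TestCase(ai_agent="Ticket Categorizer")
case.add_ticket("10023")
case.add_ticket("10087")
```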
Test Runs
A test run executes a defined number of test cases to evaluate the AI functionality.
Test runs do not cause any write actions.
Test Runs Overview
The overview shows all completed test runs with the following details:
- Test Run ID: Unique identifier for the test run
- AI Agent: Name of the tested AI agent
- Start Date: Start time of the test run
- Status: Shows the progress (e.g. “Processing”, “Completed”, “Failed”)
- Tickets: Number of tickets used in the test run
- Result: Percentage evaluation of the test results
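For illustration only, the following sketch mirrors one row of this overview as a typed record and shows one plausible reading of the Result column as the share of passed test cases. All names, types, and the pass-rate formula are assumptions; the documentation itself only defines the displayed fields.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

# Hypothetical mirror of one row in the test run overview.
class Status(Enum):
    PROCESSING = "Processing"
    COMPLETED = "Completed"
    FAILED = "Failed"

@dataclass
class TestRunRow:
    test_run_id: str        # unique identifier for the test run
    ai_agent: str           # name of the tested AI agent
    start_date: datetime    # start time of the test run
    status: Status          # current progress
    tickets: int            # number of tickets used in the test run
    result_percent: float   # percentage evaluation of the test results

# One plausible reading of "Result": the share of test cases in which the
# AI agent responded as expected (an assumption, not a documented formula).
def result_percent(passed_cases: int, total_cases: int) -> float:
    if total_cases == 0:
        return 0.0
    return round(100.0 * passed_cases / total_cases, 1)

print(result_percent(18, 20))  # -> 90.0
```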
Test runs are started in the “Test Cases” tab via the Play button to the right of the AI agent. Any number of test runs can be initiated in parallel.
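Because any number of test runs can be started in parallel, the following sketch illustrates the idea with a hypothetical start_test_run helper launched concurrently; the function and its return value are invented for this example, as no API for starting runs is documented.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for pressing the Play button for one AI agent.
# The function name and return value are assumptions made purely to show
# that several runs can be in flight at the same time.
def start_test_run(ai_agent: str) -> str:
    # ... trigger the run for the given agent and return its Test Run ID ...
    return f"run-for-{ai_agent}"

agents = ["Ticket Categorizer", "Reply Drafter", "Sentiment Tagger"]

# Start one test run per agent concurrently, mirroring "any number of
# test runs can be initiated in parallel".
with ThreadPoolExecutor() as pool:
    run_ids = list(pool.map(start_test_run, agents))

print(run_ids)
```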