Test Cases
A test case describes a single scenario used to check the behavior of an AI agent. By adding specific tickets, different use cases can be tested to ensure that the AI agent reacts as expected.
Creation:
- In the Quality Assurance - Test Cases section, open the sidebar for the desired AI agent.
- Add the tickets intended for the test via their ticket IDs.
- If no specific ticket ID is known yet:
  - In the filter of the ticket overview, select the “Concern” option.
  - Display suitable tickets and add them to the test run via their IDs.
- Test cases can be removed via the sidebar if necessary.
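The workflow above — grouping tickets by ID into a test case and removing them again — can be sketched as a small data model. This is an illustrative sketch only; the class and method names (`TestCase`, `add_ticket`, `remove_ticket`) are hypothetical and not part of the product's API:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A test case groups tickets that exercise one scenario for an AI agent.

    Hypothetical model for illustration; not the product's actual API.
    """
    agent_name: str
    ticket_ids: set[int] = field(default_factory=set)

    def add_ticket(self, ticket_id: int) -> None:
        # Tickets are referenced purely by their ID, as in the sidebar.
        self.ticket_ids.add(ticket_id)

    def remove_ticket(self, ticket_id: int) -> None:
        # Mirrors removing an entry via the sidebar; no error if absent.
        self.ticket_ids.discard(ticket_id)

case = TestCase(agent_name="Billing Agent")
case.add_ticket(4711)
case.add_ticket(4712)
print(sorted(case.ticket_ids))  # → [4711, 4712]
```

Using a set means adding the same ticket ID twice has no effect, matching the idea that each ticket appears at most once in a test case.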
Test Runs
A test run executes a defined number of test cases to evaluate the AI functionality. Test runs do not trigger any write actions. Each test run displays the following information:
- Test Run ID: Unique identifier for the test run
- AI Agent: Name of the tested AI agent
- Start Date: Time of the test run
- Status: Displays the progress (e.g., “Processing”, “Completed”, “Failed”)
- Tickets: Number of tickets used in the test run
- Result: Percentage-based evaluation of the test results
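The attributes listed above can be pictured as a record per test run, with the result derived as a percentage from the tickets evaluated. This is a hypothetical sketch for illustration; the field and class names (`TestRun`, `passed`, `result`) are assumptions, and the status values are taken from the examples in the list:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    # Status values as shown in the test run overview.
    PROCESSING = "Processing"
    COMPLETED = "Completed"
    FAILED = "Failed"

@dataclass
class TestRun:
    """Hypothetical record mirroring the listed test run attributes."""
    run_id: str      # Test Run ID: unique identifier
    agent: str       # AI Agent: name of the tested AI agent
    start_date: str  # Start Date: time of the test run
    status: Status   # Status: current progress
    tickets: int     # Tickets: number of tickets used in the test run
    passed: int      # hypothetical: tickets where the agent reacted as expected

    @property
    def result(self) -> float:
        # Result: percentage-based evaluation of the test results.
        return 100.0 * self.passed / self.tickets if self.tickets else 0.0

run = TestRun("TR-001", "Billing Agent", "2024-05-01 10:00",
              Status.COMPLETED, tickets=20, passed=17)
print(f"{run.result:.0f}%")  # → 85%
```

Computing the result lazily from the ticket counts, rather than storing it, keeps the percentage consistent with the underlying numbers by construction.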