Quality assurance of AI functionality is based on test cases and test runs. A thorough verification of functionality is essential, especially after systemic changes or adjustments to AI agents. Test runs provide structured validation, detect potential errors early, and ensure consistent performance evaluation of AI agents.
A test case describes a single scenario for verifying the performance of an AI agent. By adding specific tickets, different use cases can be tested to ensure that the AI agent responds as expected.
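Conceptually, a test case pairs one scenario with the set of tickets it is verified against. The following minimal sketch only illustrates this structure; the TestCase class and its field names are hypothetical and do not reflect the product's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Hypothetical model of a test case: one scenario for an AI agent,
    verified against a set of specific tickets."""
    agent: str                                          # AI agent under test
    scenario: str                                       # the single scenario being verified
    ticket_ids: set[str] = field(default_factory=set)   # tickets exercised in test runs

    def add_ticket(self, ticket_id: str) -> None:
        # Attach a ticket so this use case is covered in test runs.
        self.ticket_ids.add(ticket_id)

    def remove_ticket(self, ticket_id: str) -> None:
        # Detach a ticket if it no longer fits the scenario.
        self.ticket_ids.discard(ticket_id)

# Example usage with hypothetical names:
case = TestCase(agent="Support Agent", scenario="Refund request")
case.add_ticket("TICKET-1234")
```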
Creation:
In the Quality Assurance - Test Cases area, open the sidebar for the desired AI agent.
Add the tickets intended for the test via their Ticket-ID.
If no specific Ticket-ID is available yet:
In the filter of the ticket overview, select the item “Concern”.
Display suitable tickets and add them to the test run via their Ticket-ID.
If necessary, test cases can be removed again via the sidebar; a sketch of this workflow follows below.
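For teams that manage test cases programmatically rather than through the sidebar, the steps above could look roughly like this sketch. It assumes a hypothetical REST API: the base URL, endpoints, query parameters, and field names are illustrative assumptions, not a documented interface.

```python
import requests

BASE_URL = "https://support.example.com/api"  # hypothetical base URL, illustration only
AGENT_ID = "agent-123"                        # hypothetical AI agent identifier

session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # placeholder credential

# No specific Ticket-ID at hand: filter the ticket overview by "Concern"
# (hypothetical query parameter) to display suitable tickets.
tickets = session.get(f"{BASE_URL}/tickets", params={"filter": "concern"}).json()

# Add each suitable ticket to the test run via its ID.
for ticket in tickets:
    session.post(
        f"{BASE_URL}/agents/{AGENT_ID}/test-cases",
        json={"ticket_id": ticket["id"]},
    )

# If necessary, a test case can be removed again (hypothetical endpoint).
if tickets:
    session.delete(f"{BASE_URL}/agents/{AGENT_ID}/test-cases/{tickets[0]['id']}")
```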