The quality assurance of AI functions is based on test cases and test runs. Especially after systemic changes or adjustments to AI agents, a thorough verification of functionality is essential. Test runs provide structured validation, detect possible errors early, and ensure consistent performance evaluation of the AI agents.

Test Cases

A test case describes a single scenario for checking the performance of an AI agent. By adding specific tickets, different use cases can be tested to ensure that the AI agent reacts as expected. To create a test case (a schematic data sketch follows the steps):
  • In the Quality Assurance - Test Cases section, open the sidebar for the desired AI agent.
  • Add the tickets intended for the test via the ticket ID.
  • If there is not yet a specific ticket ID:
    • In the ticket overview filter, select the “Concern” option.
    • Display suitable tickets and add them to the test case via their IDs.
  • Test cases can be removed via the sidebar if necessary.
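
For orientation only, the relationship between an AI agent, its test cases, and the attached tickets can be pictured as a small data structure. The type and field names below are hypothetical illustrations, not part of the product's interface:

```typescript
// Hypothetical sketch only: models how a test case groups tickets for one AI agent.
interface TestCase {
  aiAgent: string;     // the AI agent whose performance is checked
  ticketIds: string[]; // tickets added to the test via their ticket IDs
}

// Example: one test case covering two tickets for a single agent.
const exampleCase: TestCase = {
  aiAgent: "Support Agent",
  ticketIds: ["12345", "12346"],
};

// Removing a ticket from the case mirrors removing it via the sidebar.
exampleCase.ticketIds = exampleCase.ticketIds.filter((id) => id !== "12346");
```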

Test Runs

A test run executes a defined number of test cases to evaluate the AI functionality.
Test runs do not cause any write actions.

Test Runs Overview

The overview shows all conducted test runs with the following details (sketched as a data model after the list):
  • Test Run ID: Unique identifier for the test run
  • AI Agent: Name of the tested AI agent
  • Start Date: Time of the test run
  • Status: Displays the progress (e.g., “Processing”, “Completed”, “Failed”)
  • Tickets: Number of tickets used in the test run
  • Result: Percentage-based evaluation of the test results
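
As a schematic illustration, the columns of the overview can be modelled as a record type. The names and types below are assumptions made for clarity, not a documented API:

```typescript
// Schematic model of one row in the test runs overview (field names are assumed).
type TestRunStatus = "Processing" | "Completed" | "Failed";

interface TestRunOverviewEntry {
  testRunId: string;     // unique identifier for the test run
  aiAgent: string;       // name of the tested AI agent
  startDate: Date;       // time the test run was started
  status: TestRunStatus; // current progress of the run
  tickets: number;       // number of tickets used in the test run
  result: number;        // evaluation of the test results as a percentage (0-100)
}
```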
Test runs are started in the “Test Cases” tab via the Play button to the right of the AI agent. Any number of test runs can be initiated in parallel.
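
Because test runs perform no write actions, starting several of them at once is safe. A minimal sketch of this parallelism, assuming a hypothetical startTestRun helper (in the product itself, runs are started via the Play button in the UI):

```typescript
// Hypothetical helper: triggers one test run for an agent and resolves with its final status.
async function startTestRun(aiAgent: string): Promise<"Processing" | "Completed" | "Failed"> {
  // Placeholder: in reality the run is started via the Play button in the "Test Cases" tab.
  return "Completed";
}

// Start any number of test runs in parallel and wait for all of them to finish.
async function runAll(agents: string[]) {
  const results = await Promise.all(agents.map((agent) => startTestRun(agent)));
  console.log(results); // logs the final status of each run
}

runAll(["Support Agent", "Billing Agent"]);
```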