A scorecard is an evaluation template that defines the criteria to be checked in the quality evaluation of a work process. It forms the basis for every quality evaluation (assessment) in the system. Scorecards are created and maintained by quality management. As soon as a scorecard is active, new worklogs are automatically linked to it, provided the configured assignment rules apply.
Area and Permissions
Scorecards are managed under Settings → Quality Scorecards. The following permissions apply:

- View: All logged-in users can view scorecards; no separate permission is required.
- Create and Edit: Requires the `qualityManageScorecards` permission.
- Delete: Additionally requires the `qualityDeleteScorecard` permission.
Structure
A scorecard consists of Categories, which in turn contain Criteria. This hierarchy allows a thematic grouping of assessment points. Categories group thematically related criteria (e.g., “Communication”, “Process conformity”); each category has a freely selectable name and order. Criteria define the actual evaluation points within a category:

- Name and Description – what is being evaluated
- Max Points – weighting of the criterion
- Evaluation Type – `metNotMet` (met / not met, binary) or `numericScale` (numeric scale from 0 to the maximum value)
- AI Evaluation Enabled – whether the AI pre-evaluates this criterion automatically
- Evaluation Prompt – instruction telling the AI by what standards to evaluate the criterion
- Knockout Criterion for Category – if this criterion is rated 0, the entire category is set to 0
- Knockout Criterion for Evaluation – if this criterion is rated 0, the overall evaluation is set to 0
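The hierarchy above can be sketched as a simple data model. This is a minimal illustration only; the class and field names are assumptions, not the product's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Criterion:
    # Hypothetical fields mirroring the criterion settings described above
    name: str
    description: str
    max_points: int
    evaluation_type: str                 # "metNotMet" or "numericScale"
    ai_evaluation_enabled: bool = False
    evaluation_prompt: str = ""
    knockout_for_category: bool = False
    knockout_for_evaluation: bool = False

@dataclass
class Category:
    name: str
    order: int
    criteria: List[Criterion] = field(default_factory=list)

@dataclass
class Scorecard:
    name: str
    categories: List[Category] = field(default_factory=list)
```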
Knockout Criteria
Knockout criteria allow individual requirements to be marked as absolute. The system distinguishes two levels:

- Category level: If a knockout criterion within a category is not met (score = 0), the entire category is rated 0 points, regardless of the other criteria.
- Evaluation level: If such a criterion is not met, the overall evaluation falls to 0, regardless of all other results.
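The two knockout levels can be expressed as a short scoring sketch. This is illustrative only; the function signatures and data shapes are assumptions, not the system's actual implementation:

```python
def score_category(criterion_scores: dict[str, int], category_knockouts: set[str]) -> int:
    """Category total is 0 if any category-level knockout criterion scored 0."""
    if any(criterion_scores.get(c, 0) == 0 for c in category_knockouts):
        return 0
    return sum(criterion_scores.values())

def score_evaluation(categories: dict[str, dict[str, int]],
                     category_knockouts: dict[str, set[str]],
                     evaluation_knockouts: set[str]) -> int:
    """Overall total is 0 if any evaluation-level knockout criterion scored 0."""
    all_scores = {c: s for crits in categories.values() for c, s in crits.items()}
    if any(all_scores.get(c, 0) == 0 for c in evaluation_knockouts):
        return 0
    return sum(
        score_category(crits, category_knockouts.get(cat, set()))
        for cat, crits in categories.items()
    )
```

Note how a category knockout only zeroes its own category, while an evaluation knockout short-circuits the entire result.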
AI-supported Preliminary Evaluation
Criteria with AI evaluation enabled are automatically pre-evaluated by the AI as soon as an assessment is processed. The AI bases its evaluation on the configured evaluation prompt. The result has the status `aiGenerated` and can be reviewed and adjusted by the supervisor. After a manual adjustment, the status changes to `humanVerified`.
Criteria without AI evaluation remain initially unscored and must be evaluated entirely manually.
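The status handling of a single criterion result might look roughly like this. A simplified sketch under stated assumptions; the function and field names are hypothetical:

```python
AI_GENERATED = "aiGenerated"
HUMAN_VERIFIED = "humanVerified"

def adjust_result(result: dict, new_score: int) -> dict:
    """A manual adjustment by the supervisor overwrites the score and
    switches the result status from aiGenerated to humanVerified."""
    result["score"] = new_score
    result["status"] = HUMAN_VERIFIED
    return result
```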
Assignment
Scorecards can be limited to certain work processes. The assignment is done via a combination of:

- Ticket Tags – the scorecard is only applied to tickets with certain tags.
- Teams – only work processes of members of certain teams are evaluated.
- Channels – restriction to certain input channels (e.g., email, telephone).
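A matching predicate along these lines could combine the three dimensions. This sketch assumes that an empty rule dimension imposes no restriction, which the source does not specify; all names are hypothetical:

```python
def scorecard_applies(worklog: dict, rules: dict) -> bool:
    """True if the worklog satisfies every configured rule dimension.
    An empty dimension is treated as 'no restriction' (assumption)."""
    tag_rules = set(rules.get("ticket_tags", []))
    team_rules = set(rules.get("teams", []))
    channel_rules = set(rules.get("channels", []))
    if tag_rules and not (tag_rules & set(worklog.get("tags", []))):
        return False
    if team_rules and worklog.get("team") not in team_rules:
        return False
    if channel_rules and worklog.get("channel") not in channel_rules:
        return False
    return True
```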
Automatic Creation of Evaluations
The system checks every minute for new work processes that have not yet been evaluated. For each such process, it determines which active scorecards are applicable according to the assignment rules. For each matching scorecard, an assessment is automatically created and submitted for AI processing. A work process can be evaluated by multiple scorecards at the same time if multiple assignment rules apply.
Versioning
Scorecards are versioned. Each scorecard has a `baseId` (constant across all versions) and a `revision` (incremented with each new version). When an active scorecard is edited, the previous version is set to `retired` and a new revision with the status `active` is created.
Existing assessments remain linked to the scorecard version that was active at the time of their creation. A change to the scorecard only affects newly created assessments.
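The retire-and-replace rule can be sketched as follows. A minimal illustration assuming a list of version records; the field names mirror the description above but are not guaranteed to match the actual data model:

```python
def create_revision(versions: list[dict], changes: dict) -> dict:
    """Retire the currently active version and append a new active
    revision that shares the same baseId."""
    active = next(v for v in versions if v["status"] == "active")
    active["status"] = "retired"
    new = {**active, **changes, "status": "active", "revision": active["revision"] + 1}
    versions.append(new)
    return new
```

Because existing assessments keep pointing at the retired record, historical results are unaffected by the edit.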
States of a Scorecard
| State | Meaning |
|---|---|
| `draft` | In progress, not yet active |
| `active` | Active – used for new evaluations |
| `retired` | Replaced by a new revision |
| `deleted` | Soft-deleted, no longer visible |
Evaluation Workflow
Assessments go through the following states:

- `unprocessed` – assessment was created, AI processing is pending.
- `aiInProgress` – the AI is currently evaluating the criteria.
- `aiReady` – AI preliminary evaluation completed, ready for review.
- `reviewOngoing` – the supervisor has started the review.
- `reviewedBySupervisor` – review completed.
- `discussedWithAssessee` – evaluation discussed with the evaluated employee.

If an assessment is modified after the review has been completed (status `reviewedBySupervisor` or `discussedWithAssessee`), the status automatically changes back to `reviewOngoing`.
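The workflow can be modeled as a small state machine. The transition map below is inferred from the state descriptions and is an assumption, not the product's documented implementation:

```python
# Hypothetical transition map derived from the workflow description above
TRANSITIONS = {
    "unprocessed": {"aiInProgress"},
    "aiInProgress": {"aiReady"},
    "aiReady": {"reviewOngoing"},
    "reviewOngoing": {"reviewedBySupervisor"},
    "reviewedBySupervisor": {"discussedWithAssessee", "reviewOngoing"},
    "discussedWithAssessee": {"reviewOngoing"},
}

def transition(state: str, target: str) -> str:
    """Move to the target state if the transition is allowed."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"invalid transition {state} -> {target}")
    return target

def modify_after_review(state: str) -> str:
    """Editing an assessment after review moves it back to reviewOngoing."""
    if state in {"reviewedBySupervisor", "discussedWithAssessee"}:
        return "reviewOngoing"
    return state
```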
Best Practices
- Granularity of Criteria: Criteria should be formulated so that they can be evaluated unambiguously by the AI. Vague formulations lead to unreliable AI evaluations.
- Use knockout criteria sparingly: They have strong effects on the overall result. Only truly non-negotiable requirements should be marked as such.
- Configure assignment rules specifically: Broad assignments significantly increase the volume of assessments. If different teams or channels require different standards, it is recommended to have a separate scorecard for each.
- Consider versioning: Changes to an active scorecard always create a new revision. For ongoing comparative analyses, it is relevant which revision an assessment is based on.