Quality assessments, also known as Assessments, are evaluations of individual work processes against a scorecard. They document the quality of handling, combine AI-generated pre-assessments with human review, and form the basis for quality reporting and employee discussions.

An assessment is generated automatically as soon as an agent's work process (worklog) can be assigned to an active scorecard. It links a specific ticket-handling process, the evaluated employee, and the scorecard to be applied.
Where to view assessments
Assessments are directly available in the Ticket view: The assessment button in the ticket detail bar opens the worklog of the respective ticket with all associated assessments. The button is only visible if the logged-in user possesses at least one of the assessment view permissions (see below).
Scorecards are managed under Settings → Quality Scorecards.
Permissions
| Permission | Description |
|---|---|
| QualityViewAssessmentOwn | View your own assessments |
| QualityViewAssessmentTeamMate | View team members’ assessments |
| QualityViewAssessmentAll | View all users’ assessments |
| QualityDoAssessmentsTeam | Create and edit assessments for team members |
| QualityDoAssessmentsAll | Create and edit assessments for any user |
| QualityDeleteAssessment | Delete assessments |
| QualityManageScorecards | Create, edit, and manage scorecards |
| QualityDeleteScorecard | Delete scorecards |
Creation and Automatic Generation
Assessments are not created manually; a background job that runs every minute creates them. The job identifies work processes of type writeAction that have no assessments yet, checks which active scorecards are applicable, and generates one assessment per matching scorecard.
There can be multiple assessments per work process - one for each scorecard that meets the assignment rules.
After creation, an event is triggered that starts AI processing.
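The generation loop above can be sketched as follows. This is a minimal illustration in Python; the worklog and scorecard shapes, the field names, and the `matches` rule are assumptions for the example, not the actual enneo data model.

```python
# Illustrative sketch of the per-minute generation job: find unassessed
# writeAction worklogs, match them against active scorecards, and create
# one assessment per matching scorecard. Field names are hypothetical.

def generate_assessments(worklogs, scorecards):
    created = []
    for worklog in worklogs:
        # Only writeAction worklogs without existing assessments qualify.
        if worklog["type"] != "writeAction" or worklog["assessed"]:
            continue
        for scorecard in scorecards:
            # One assessment per active scorecard whose assignment rules match.
            if scorecard["active"] and scorecard["matches"](worklog):
                created.append({
                    "worklog_id": worklog["id"],
                    "scorecard_id": scorecard["id"],
                    "status": "unprocessed",  # initial workflow state
                })
    return created
```

Each created assessment starts in the unprocessed state, from which the AI-processing event picks it up.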
AI Processing
The Cortex service receives as a basis for evaluation:
- The flow of messages of the ticket - from the first customer contact to the last agent reply (including internal notes)
- The Scorecard with all categories, criteria, and assessment prompts
- Worklog details - time tracking, AI agents used, degree of automation
- The customer experience - SLA adherence, CSAT
Cortex evaluates all criteria where autoGenerateByAi is activated and provides a score and a rationale for each criterion. Subsequently, the AI generates a summary assessment (aiSummary).
AI-assessed criteria receive the status aiGenerated. Criteria without AI assessment remain unscored and must be manually assessed.
Assessment Data
An assessment includes:
- Categories and criteria - with score, rationale, and evaluation status per criterion
- Total points - scoredPoints, usedMaxPoints, totalMaxPoints, percentage
- AI summary (aiSummary) - free text, generated by the AI
- Supervisor assessment (supervisorAssessment) - free text, entered manually
- Assessment date and conversation date - for documentation purposes
- Time recording - processing time (segments, post-processing time)
- AI use - which AI agents were involved, degree of automation
- Customer experience - SLA status, whether the ticket was reopened, CSAT result
- Change history - complete log of all changes (score, rationale, status, who changed what and when)
Points Calculation
How is the percentage calculated?
The calculation is based solely on rated criteria:
- usedMaxPoints = sum of the maximum points of all criteria with a score (including a score of 0)
- scoredPoints = sum of the points actually achieved
- totalMaxPoints = sum of all maximum points of the scorecard (including unrated criteria)
- percentage = scoredPoints / usedMaxPoints × 100
Unrated criteria (unscored) are excluded from the calculation. This allows a valid partial rating when individual criteria cannot be evaluated due to missing data.
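The calculation can be sketched in Python as follows. This is an illustration of the formulas above, assuming a simple criterion shape (`score`, `max_points`) that is not the actual enneo data model; a score of `None` stands for unscored.

```python
# Sketch of the points calculation: unscored criteria (score is None)
# are excluded from usedMaxPoints, while a score of 0 still counts as rated.

def calculate_points(criteria):
    """criteria: list of dicts with 'score' (int or None) and 'max_points'."""
    rated = [c for c in criteria if c["score"] is not None]
    scored_points = sum(c["score"] for c in rated)
    used_max_points = sum(c["max_points"] for c in rated)
    total_max_points = sum(c["max_points"] for c in criteria)
    percentage = (scored_points / used_max_points * 100) if used_max_points else 0.0
    return scored_points, used_max_points, total_max_points, percentage

criteria = [
    {"score": 4, "max_points": 5},
    {"score": 0, "max_points": 3},    # rated with 0, counts toward usedMaxPoints
    {"score": None, "max_points": 2}, # unscored, excluded from usedMaxPoints
]
print(calculate_points(criteria))  # (4, 8, 10, 50.0)
```

Note that the unscored criterion still appears in totalMaxPoints (10), while the percentage is based only on the 8 rated maximum points.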
K.O. logic and its effect on the score
Criteria can be marked as K.O. criteria for the category or for the entire assessment.
- Category K.O.: If such a criterion scores 0 points, all points of the category are set to 0, regardless of the other criteria in the same category.
- Assessment K.O.: If such a criterion scores 0 points, the scoredPoints of the entire assessment are set to 0.
The K.O. logic is applied every time the total points are recalculated, i.e., after each AI assessment and after each manual change.
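The two K.O. rules can be sketched like this. The field names (`ko_scope`, `score`) are illustrative assumptions, not the actual data model; the logic mirrors the rules described above.

```python
# Sketch of the K.O. logic: a category K.O. with score 0 zeroes its
# category's points; an assessment K.O. with score 0 zeroes scoredPoints
# for the whole assessment. ko_scope is None, "category", or "assessment".

def apply_ko_logic(categories):
    """categories: dict mapping category name -> list of criterion dicts."""
    assessment_ko = any(
        c["ko_scope"] == "assessment" and c["score"] == 0
        for crits in categories.values() for c in crits
    )
    scored_points = 0
    for crits in categories.values():
        category_ko = any(
            c["ko_scope"] == "category" and c["score"] == 0 for c in crits
        )
        if not category_ko:
            scored_points += sum(
                c["score"] for c in crits if c["score"] is not None
            )
    return 0 if assessment_ko else scored_points

cats = {
    "Greeting": [
        {"score": 0, "ko_scope": "category"},  # zeroes the whole category
        {"score": 5, "ko_scope": None},
    ],
    "Solution": [{"score": 3, "ko_scope": None}],
}
print(apply_ko_logic(cats))  # 3 - the Greeting category is zeroed
```

Because this runs on every recalculation, a supervisor raising a K.O. criterion above 0 immediately restores the affected points.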
Workflow States
Assessments go through a defined workflow:
| Status | Meaning |
|---|---|
| unprocessed | Created, AI processing pending |
| aiInProgress | AI evaluation in progress |
| aiReady | AI pre-assessment completed, ready for review |
| reviewOngoing | Supervisor has started the review |
| reviewedBySupervisor | Supervisor review completed |
| discussedWithAssessee | Discussed with the evaluated employee |
| error | Error during AI processing |
Automatic reset: If a supervisor makes changes after the review (reviewedBySupervisor) or after the employee conversation (discussedWithAssessee), the status automatically reverts to reviewOngoing. Entered assessment and conversation data are retained.
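The reset rule can be expressed as a small transition function. The status strings match the table above; the function itself is an illustrative sketch, not the actual implementation.

```python
# Sketch of the automatic status reset: editing an assessment that has
# already been reviewed or discussed drops it back to reviewOngoing.

FINAL_REVIEW_STATES = {"reviewedBySupervisor", "discussedWithAssessee"}

def status_after_edit(current_status):
    """Return the workflow status after a supervisor changes the assessment."""
    if current_status in FINAL_REVIEW_STATES:
        return "reviewOngoing"
    return current_status
```

Only the workflow status is reset; assessment and conversation data already entered are kept, as noted above.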
Criterion States
Each criterion within an assessment has its own status:
| Status | Meaning |
|---|---|
| unscored | Not yet rated |
| aiGenerated | Pre-rated by AI, not yet confirmed by a supervisor |
| humanVerified | Manually set or confirmed by a supervisor |
Context Data in the Assessment
Time recording
The assessment includes the time recording data of the work process: total processing time, post-processing time, and individual time segments with the actions performed. This data comes from the employee’s worklog and is frozen at the time of the assessment creation.
AI use
For each work process, the assessment documents which AI agents were involved and the degree of automation in handling. This information serves as context for the quality assessment: fully automated processes are evaluated differently from those handled manually.
Customer experience
The assessment includes information about SLA adherence (whether the deadline was met, and how many seconds after the deadline the ticket was closed), whether the ticket was reopened, and the CSAT result (customer satisfaction survey), if available. This data provides objective quality signals independent of the scorecard assessment.
Export
Quality assessments can be exported under Settings → Data export as XLSX, CSV or JSON. The export includes all assessment data including scores, justifications, time tracking, and customer experience, and is suitable for external reporting or evaluation in BI tools.
Best Practices
- Use the change history: The complete log of all score and status changes makes assessments auditable. In case of queries, you can trace what the AI originally evaluated and what was adjusted manually.
- Maintain assessment and conversation date: These fields are mandatory for meaningful reporting - without them, assessment periods cannot be correctly evaluated.
- Use the AI summary as a starting point: The aiSummary is not a final evaluation, but a structured starting point for the supervisor review. It should not be used as a basis for an employee discussion without verification.
- Monitor the error status: Assessments in the error status are not retried automatically. Check them regularly to avoid assessment gaps in reporting.