Understanding the automated skill ratings in each candidate's report
In a candidate report, you will see a section on the top right for Automated skill ratings.
These results are automatically generated from the languages and technologies the candidate used to solve the challenges in the assessment. Each rating measures skill by combining the candidate's code efficiency and test case results for every challenge they solved.
More specifically, each rating is calculated by taking the challenges solved for that skill, dividing the candidate's total score by the best possible score, and displaying the result visually.
For example, if a candidate solved 3 SQL challenges and received scores of 5, 10, and 2, for a total of 17 points out of a possible 30, they earned a little under 57%. They would then have 2 of 3 lightning bolts highlighted.
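The calculation above can be sketched as follows. This is a hypothetical illustration, not the product's actual code: the `skill_rating` function and the assumption that the percentage is rounded to the nearest lightning bolt are ours.

```python
def skill_rating(scores, max_possible, total_bolts=3):
    """Return (percentage, highlighted_bolts) for one skill.

    Sketch only: we assume the highlighted-bolt count is the
    percentage rounded to the nearest bolt, since the exact
    rounding rule is not stated in the report.
    """
    earned = sum(scores)            # total points across solved challenges
    pct = earned / max_possible     # fraction of the best possible score
    bolts = round(pct * total_bolts)
    return pct, bolts

# The SQL example from the text: scores 5, 10, 2 out of a possible 30.
pct, bolts = skill_rating([5, 10, 2], 30)
print(f"{pct:.0%} -> {bolts} of 3 bolts")  # 57% -> 2 of 3 bolts
```

With these assumptions, 17/30 ≈ 56.7% maps to 2 highlighted bolts, matching the example.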
Once enough candidates take an assessment, you may see a chart similar to the one below. It shows the distribution of all candidate scores for the assessment.
- The black vertical line is the average score.
- The orange vertical line is the candidate's final score. This lets you easily see whether a candidate scored above or below average.
On a candidate report, aside from our system grading your candidates' solutions and providing a final score, you can rate candidates yourself across a set of skills in their scorecard and leave private notes. Read more about the scorecard here.