Interpreting Paper Scores

Now that the reviews will soon be made available to authors and reviewers, I would like to explain how to interpret the results.

Each paper is reviewed according to the following criteria:

Presentation
This criterion evaluates how well organized and formulated the paper is. It addresses both the general paper structure and the language use, as well as the "logical flow" between sections.
Originality
Here, reviewers grade how original (new) the presented idea or work is, and whether it appropriately builds on and acknowledges previous work in the area. This thus measures both the "novelty" and the quality of the literature survey of the "state of the art".
Soundness
Is the paper technically sound; that is, does it include evidence for all the statements and conclusions it makes? This overlaps somewhat with the previous criterion, which also measures the use of the "state of the art".
Significance
Does the idea or work presented in the paper show potential to contribute to CS education?
Overall evaluation
Finally, the reviewer gives the "final mark". Here, reviewers differ in their approach: many will tend to "average out" their previous grades for the paper and give the (rough) average grade for "overall". Others may bring in additional, or alternative, arguments here, or weigh the different criteria differently.
Finally, the reviewer can grade his or her familiarity with the topic as "Low", "Medium", or "High". This can help the program committee decide how relevant or reliable a given review is, especially when reviewers disagree about the quality of a paper.

Now that the criteria are defined, here are the possible marks:

6 (Exceptional): Top 5%; likely to be among the top few papers at the conference
5 (Very Good): Next 15%; very strong symposium paper
4 (Good): Next 15%; suitable for symposium paper inclusion
3 (Average): Middle 25%; potentially could be used as a symposium paper
2 (Below Average): Lower 30%; correct but not too interesting; not suitable
1 (Deficient): Bottom 10%; contains serious errors or deficiencies
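As a small illustration (a hypothetical example, not part of the official process), the "average overall rating" discussed below is simply the mean of the reviewers' "overall evaluation" marks on this 1-6 scale:

```python
# Hypothetical example: three reviewers' "overall evaluation" marks
# on the 1-6 scale described above (the scores are made up).
overall_marks = [5, 4, 3]

# The average overall rating is the plain arithmetic mean.
average_rating = sum(overall_marks) / len(overall_marks)
print(f"Average overall rating: {average_rating:.1f}")  # 4.0
```

So a paper rated 5, 4, and 3 by its three reviewers ends up with the "good chances, but no guarantee" average of 4.0 described next.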

Thus, in general, a paper with an average overall rating of 4.0 has a good chance of being accepted - but acceptance is not guaranteed. Quite simply, competition among authors at the ITiCSE conference is strong: the conference regularly receives more papers that are "good enough" to be accepted than it can accept, usually because there is only room for a limited number of papers in the program. Conversely, a paper with an average slightly below 4 has a (small) chance of being accepted if the program committee feels that some of the most negative remarks were too harsh compared to the other reviews, or that the paper would fill a gap in the program.

If your paper has not been accepted although the reviews look good and the overall average score is better than 3.0, please do not be too disappointed! (This has happened to me at ITiCSE and SIGCSE more often than I would have liked, too.) In most cases, it says less about your submission than about the strong competition among submissions: having written a "good" paper may not be good enough if there are too many "slightly better" papers in the same pool. What I personally always found very helpful at both conferences is to read the reviewer comments thoroughly, then re-read my paper and try to find what the reviewers were referring to. In many cases, I could understand the criticism voiced by the reviewers. In fact, several rejected submissions later turned into much improved (and in many cases, rather different) accepted submissions at a later iteration of the conference.

The same, of course, also applies to accepted papers - even they can usually be improved based on the reviewer comments.