Note: the following is a record of some whimsical mathematical thoughts and computations I had after doing some grading. It is likely that the sort of problems discussed here are in fact well studied in the appropriate literature; I would appreciate knowing of any links to such.
Suppose one assigns $N$ true-false questions on an examination, with the answers randomised so that each question is equally likely to have “true” as the correct answer as “false”, with no correlation between different questions. Suppose that the students taking the examination must answer each question with exactly one of “true” or “false” (they are not allowed to skip any question). Then it is easy to see how to grade the exam: one can simply count how many questions each student answered correctly (i.e. each correct answer scores one point, and each incorrect answer scores zero points), and give that number as the final grade of the examination. More generally, one could assign some score of $A$ points to each correct answer and some score of $B$ points (possibly negative) to each incorrect answer, giving a total grade of $AC + B(N-C)$ points when $C$ of the $N$ questions are answered correctly. As long as $A > B$, this grade is simply an affine rescaling of the simple grading scheme and would serve just as well for the purpose of evaluating the students, as well as encouraging each student to answer the questions as correctly as possible.
In practice, though, a student will probably not know the answer to each individual question with absolute certainty. One can adopt a probabilistic model, where for a given student $S$ and a given question $n$, the student $S$ may think that the answer to question $n$ is true with probability $p_n$ and false with probability $1-p_n$, where $0 \leq p_n \leq 1$ is some quantity that can be viewed as a measure of the confidence $S$ has in the answer (with $S$ being confident that the answer is true if $p_n$ is close to $1$, and confident that the answer is false if $p_n$ is close to $0$); for simplicity let us assume that in $S$‘s probabilistic model, the answers to each question are independent random variables. Given this model, and assuming that the student $S$ wishes to maximise his or her expected grade on the exam, it is an easy matter to see that the optimal strategy for $S$ to take is to answer question $n$ true if $p_n > 1/2$ and false if $p_n < 1/2$. (If $p_n = 1/2$, the student can answer arbitrarily.)
[Important note: here we are not using the term “confidence” in the technical sense used in statistics, but rather as an informal term for “subjective probability”.]
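As a quick illustration of the expected-score comparison behind this threshold rule, here is a small Python sketch (the function name and sample probabilities are just for illustration):

```python
# A minimal sketch: under the simple one-point-per-correct-answer scheme,
# a student whose subjective probability that the answer is "true" is p
# maximises the expected score by answering "true" exactly when p > 1/2.

def optimal_answer(p: float) -> str:
    """Return the answer maximising the expected score for subjective probability p."""
    expected_if_true = p        # scores 1 with probability p, 0 otherwise
    expected_if_false = 1 - p   # scores 1 with probability 1 - p, 0 otherwise
    return "true" if expected_if_true > expected_if_false else "false"

for p in (0.2, 0.5, 0.9):
    print(p, optimal_answer(p))  # at p = 0.5 either answer is equally good
```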
This is fine as far as it goes, but for the purposes of evaluating how well the student actually knows the material, it provides only a limited amount of information; in particular, we do not get to directly see the student’s subjective probabilities $p_n$ for each question. If for instance a student $S$ answered $7$ out of $10$ questions correctly, was it because he or she actually knew the right answer for seven of the questions, or was it because he or she was making educated guesses for the ten questions that turned out to be slightly better than random chance? There seems to be no way to discern this if the only input the student is allowed to provide for each question is the single binary choice of true/false.
But what if the student were able to give probabilistic answers to any given question? That is to say, instead of being forced to answer just “true” or “false” for a given question $n$, the student was allowed to give answers such as “$60\%$ confident that the answer is true” (and hence $40\%$ confident that the answer is false). Such answers would give more insight as to how well the student actually knew the material; in particular, we would theoretically be able to actually see the student’s subjective probabilities $p_n$.
But now it becomes less clear what the right grading scheme to pick is. Suppose for instance we wish to extend the simple grading scheme in which a correct answer given with $100\%$ confidence is awarded one point. How many points should one award a correct answer given with $60\%$ confidence? How about an incorrect answer given with $60\%$ confidence (or equivalently, a correct answer given with $40\%$ confidence)?
Mathematically, one could design a grading scheme by selecting some grading function $f \colon [0,1] \to \mathbf{R}$ and then awarding a student $f(p)$ points whenever they indicate the correct answer with a confidence of $p$. For instance, if the student was $60\%$ confident that the answer was “true” (and hence $40\%$ confident that the answer was “false”), then this grading scheme would award the student $f(0.6)$ points if the correct answer actually was “true”, and $f(0.4)$ points if the correct answer actually was “false”. One can then ask which functions $f$ would be “best” for this scheme.
Intuitively, one would expect that $f$ should be monotone increasing – one should be rewarded more for being correct with high confidence than for being correct with low confidence. On the other hand, some sort of “partial credit” should still be assigned in the latter case. One obvious proposal is to just use a linear grading function $f(p) = p$ – thus for instance a correct answer given with $60\%$ confidence might be worth $0.6$ points. But is this the “best” option?
To make the problem more mathematically precise, one needs an objective criterion with which to evaluate a given grading scheme. One criterion that one could use here is the avoidance of perverse incentives. If a grading scheme is designed badly, a student may end up overstating or understating his or her confidence in an answer in order to optimise the (expected) grade: the optimal level of confidence $q$ for a student to report on a question may differ from that student’s subjective confidence $p$. So one could ask to design a scheme so that $q$ is always equal to $p$, so that the incentive is for the student to honestly report his or her confidence level in the answer.
This turns out to give a precise constraint on the grading function $f$. If a student thinks that the answer to a question is true with probability $p$ and false with probability $1-p$, and enters in an answer of “true” with confidence $q$ (and thus “false” with confidence $1-q$), then the student would expect a grade of

$$p f(q) + (1-p) f(1-q)$$
on average for this question. To maximise this expected grade (assuming differentiability of $f$, which is a reasonable hypothesis for a partial credit grading scheme), one performs the usual manoeuvre of differentiating in the independent variable $q$ and setting the result to zero, thus obtaining

$$p f'(q) - (1-p) f'(1-q) = 0.$$
In order to avoid perverse incentives, the maximum should occur at $q = p$, thus we should have

$$p f'(p) - (1-p) f'(1-p) = 0$$

for all $0 < p < 1$. This suggests that the function $p f'(p)$ should be constant. (Strictly speaking, it only gives the weaker constraint that $p f'(p)$ is symmetric around $p = 1/2$; but if one generalised the problem to allow for multiple-choice questions with more than two possible answers, with a grading scheme that depended only on the confidence assigned to the correct answer, the same analysis would in fact force $p f'(p)$ to be constant in $p$; we leave this computation to the interested reader.) In other words, $f$ should be of the form $f(p) = A \log p + B$ for some constants $A, B$; by monotonicity we expect $A$ to be positive. If we make the normalisation $f(1/2) = 0$ (so that no points are awarded for a $50$-$50$ split in confidence between true and false) and $f(1) = 1$, one arrives at the grading scheme

$$f(p) = \log_2 2p.$$
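As a quick numerical sanity check on this derivation, one can verify in a few lines of Python that, under $f(p) = \log_2 2p$, the expected grade $p f(q) + (1-p) f(1-q)$ is indeed maximised by honest reporting $q = p$ (the grid search below is purely illustrative):

```python
import math

def f(p: float) -> float:
    """Grading function f(p) = log_2(2p), with f(0) = -infinity."""
    return math.log2(2 * p) if p > 0 else float("-inf")

def expected_grade(p: float, q: float) -> float:
    """Expected grade when the subjective probability of "true" is p
    and the reported confidence in "true" is q."""
    return p * f(q) + (1 - p) * f(1 - q)

# For several subjective probabilities p, the reported confidence q that
# maximises the expected grade should be (approximately) p itself.
for p in (0.1, 0.3, 0.5, 0.8):
    grid = [k / 1000 for k in range(1, 1000)]
    best_q = max(grid, key=lambda q: expected_grade(p, q))
    print(f"p = {p:.1f}, best reported q = {best_q:.3f}")
```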
Thus, if a student believes that an answer is “true” with confidence $p$ and “false” with confidence $1-p$, he or she will be awarded $\log_2 2p$ points when the correct answer is “true”, and $\log_2 2(1-p)$ points if the correct answer is “false”. The following table gives some illustrative values for this scheme:
| Confidence that answer is “true” | Points awarded if answer is “true” | Points awarded if answer is “false” |
|---|---|---|
| $0\%$ | $-\infty$ | $1.000$ |
| $1\%$ | $-5.644$ | $0.985$ |
| $2\%$ | $-4.644$ | $0.971$ |
| $5\%$ | $-3.322$ | $0.926$ |
| $10\%$ | $-2.322$ | $0.848$ |
| $20\%$ | $-1.322$ | $0.678$ |
| $30\%$ | $-0.737$ | $0.485$ |
| $40\%$ | $-0.322$ | $0.263$ |
| $50\%$ | $0.000$ | $0.000$ |
| $60\%$ | $0.263$ | $-0.322$ |
| $70\%$ | $0.485$ | $-0.737$ |
| $80\%$ | $0.678$ | $-1.322$ |
| $90\%$ | $0.848$ | $-2.322$ |
| $95\%$ | $0.926$ | $-3.322$ |
| $98\%$ | $0.971$ | $-4.644$ |
| $99\%$ | $0.985$ | $-5.644$ |
| $100\%$ | $1.000$ | $-\infty$ |
Note the large penalties for being extremely confident of an answer that ultimately turns out to be incorrect; in particular, answers of $100\%$ confidence should be avoided unless one really is absolutely certain as to the correctness of one’s answer.
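The numbers in the table can be regenerated directly from the formula $f(p) = \log_2 2p$; here is a short Python sketch that does so (the confidence levels listed are simply the ones tabulated above):

```python
import math

def points(p: float) -> float:
    """Points awarded for assigning confidence p to the answer that turned out to be correct."""
    return math.log2(2 * p) if p > 0 else float("-inf")

print("confidence | if 'true' is correct | if 'false' is correct")
for pct in (0, 1, 2, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 98, 99, 100):
    p = pct / 100
    # points(p) if the answer is "true", points(1 - p) if the answer is "false"
    print(f"{pct:>9}% | {points(p):>20.3f} | {points(1 - p):>21.3f}")
```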
The total grade given under such a scheme to a student who answers each question $n$ to be “true” with confidence $p_n$, and “false” with confidence $1-p_n$, is

$$\sum_{n:\ \text{answer is true}} \log_2 2p_n \;+\; \sum_{n:\ \text{answer is false}} \log_2 2(1-p_n).$$
This grade can also be written as

$$N + \log_2 \mathcal{L}$$
where

$$\mathcal{L} := \prod_{n:\ \text{answer is true}} p_n \times \prod_{n:\ \text{answer is false}} (1-p_n)$$
is the likelihood of the student $S$‘s subjective probability model, given the outcome of the correct answers. Thus the grading scheme here has another natural interpretation, as an affine rescaling of the log-likelihood. The incentive is thus for the student to maximise the likelihood of his or her own subjective model, which aligns well with standard practices in statistics. From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.
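The identity between the total grade and the rescaled log-likelihood is easy to confirm numerically; the following Python sketch uses a made-up answer key and made-up reported confidences purely for illustration:

```python
import math

# Hypothetical reported confidences p_n that each answer is "true",
# together with a hypothetical answer key.
confidences = [0.9, 0.6, 0.3, 0.8, 0.5]
answer_is_true = [True, True, False, False, True]
N = len(confidences)

# Total grade: log_2(2 p_n) for questions whose correct answer is "true",
# log_2(2 (1 - p_n)) for questions whose correct answer is "false".
grade = sum(math.log2(2 * p) if a else math.log2(2 * (1 - p))
            for p, a in zip(confidences, answer_is_true))

# Likelihood of the student's subjective model given the answer key.
likelihood = math.prod(p if a else 1 - p for p, a in zip(confidences, answer_is_true))

# The grade should agree with N + log_2(likelihood).
print(grade, N + math.log2(likelihood))
```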
One could propose using the above grading scheme to evaluate predictions of binary events, such as an upcoming election with only two viable candidates, to see in hindsight just how effective each predictor was in calling these events. One difficulty in doing so is that many predictions do not come with explicit probabilities attached to them, and attaching a default confidence level of $100\%$ to any prediction made without any such qualification would result in an automatic grade of $-\infty$ if even one of these predictions turned out to be incorrect. But perhaps if a predictor refuses to attach a confidence level to his or her predictions, one can assign some default level $p$ of confidence to these predictions, and then (using some suitable set of predictions from this predictor as “training data”) find the value of $p$ that maximises this predictor’s grade. This level can then be used going forward as the default level of confidence to apply to any future predictions from this predictor.
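This calibration step is a one-variable maximisation; in fact, if $k$ out of $m$ of the unqualified training predictions turned out to be correct, the grade $k \log_2 2p + (m-k) \log_2 2(1-p)$ is maximised at $p = k/m$, the empirical success rate. A hypothetical Python sketch (the training record below is invented for illustration):

```python
import math

def training_grade(p: float, num_correct: int, num_wrong: int) -> float:
    """Total grade on the training predictions if every unqualified
    prediction is assigned the same default confidence p."""
    return num_correct * math.log2(2 * p) + num_wrong * math.log2(2 * (1 - p))

# Hypothetical training record: 14 of 20 unqualified predictions were correct.
correct, wrong = 14, 6

# Grid search over candidate default confidence levels; the maximiser is the
# empirical success rate 14 / 20 = 0.7.
candidates = [k / 100 for k in range(1, 100)]
best_p = max(candidates, key=lambda p: training_grade(p, correct, wrong))
print(best_p)  # 0.7
```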
The above grading scheme extends easily enough to multiple-choice questions. But one question I had trouble with was how to deal with uncertainty, in which the student does not know enough about a question to venture even a probability of it being true or false. Here, it is natural to allow a student to leave a question blank (i.e. to answer “I don’t know”); a more advanced option would be to allow the student to enter his or her confidence level as an interval range (e.g. “I am between $50\%$ and $70\%$ confident that the answer is ‘true’”). But now I do not have a good proposal for a grading scheme; once there is uncertainty in the student’s subjective model, the problem of that student maximising his or her expected grade becomes ill-posed due to the “unknown unknowns”, and so the previous criterion of avoiding perverse incentives becomes far less useful.