I don’t recall whether I’ve brought up this topic often or only occasionally, but it keeps bugging me how the IF Comp rankings are skewed by people who give out what are obviously bad-faith votes. The Absence of Miriam Lane, average score 8.17, received no ratings below a 4… except for one person who gave it a 1. A Long Way to the Nearest Star received no ratings below a 6… except for one person who gave it a 1.
Does it matter for the final rankings? It does. Without that single voter giving the game a 1, A Long Way to the Nearest Star would certainly have won the IF Comp, and The Absence of Miriam Lane might have won it. Now I don’t claim that these games would have been more worthy winners than The Grown-Up Detective Agency – the two games I played were both very good, and I haven’t played the other one! But it just bugs me that one bad-faith judge can have so much influence over the final rankings, trumping the carefully calibrated votes of a dozen other judges.
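To put a rough number on that influence, here’s a back-of-the-envelope sketch. The ballot distribution below is entirely made up by me (I don’t have the real vote counts), but it’s in the right ballpark for a game with about thirty judges averaging somewhere above 8:

```python
# Hypothetical ballots: ~30 judges clustered between 6 and 10,
# plus one bad-faith 1. These numbers are invented for illustration,
# not the actual IF Comp data.
ballots = [6, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10] * 2 + [1]

with_outlier = sum(ballots) / len(ballots)
without_outlier = sum(b for b in ballots if b != 1) / (len(ballots) - 1)

print(f"Average with the 1:    {with_outlier:.2f}")   # ~8.23
print(f"Average without the 1: {without_outlier:.2f}")  # ~8.47
# The single 1 costs roughly a quarter of a point, which is easily
# enough to swap two games at the top of a close comp.
```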
And of course these ratings are bad-faith. It is impossible to believe in good faith that these pieces, which are at the very least highly competent, deserve the worst possible rating.
(I’m also kind of incensed about the four people giving 1’s, 2’s and 3’s to According to Cain, which you may or may not enjoy, but which is obviously an extremely well-crafted parser game that cannot conceivably merit such low grades. But here there are apparently four people who disagree with me, so perhaps I’m the crazy one?)
Isn’t there some kind of formula for determining the final scores that simply leaves out these outlier votes? That would seem a lot fairer, and more fun for the authors.
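For what it’s worth, the standard statistical fix for exactly this is something like a trimmed mean: sort the votes and drop a small fraction from each end before averaging. I’m not saying the organisers have to adopt this, and the trim fraction below is my own arbitrary pick, but as a minimal sketch of the idea:

```python
def trimmed_mean(scores, trim_fraction=0.05):
    """Average the scores after dropping the lowest and highest
    trim_fraction of votes (at least one from each end once there
    are enough ballots). The 5% default is an arbitrary choice."""
    ordered = sorted(scores)
    k = max(1, int(len(ordered) * trim_fraction)) if len(ordered) > 4 else 0
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# With the same made-up ballots as above, the lone 1 barely moves the result:
ballots = [6, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 9, 10, 10, 10] * 2 + [1]
print(f"Plain mean:   {sum(ballots) / len(ballots):.2f}")  # ~8.23
print(f"Trimmed mean: {trimmed_mean(ballots):.2f}")        # ~8.41
```

A plain median would be even more robust, but a trimmed mean keeps more of the granularity that separates an 8.2 game from an 8.4 game, which matters when the top of the comp is this close.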