The internet has different cultures for criticism and rating. The IF community has traditionally had a pretty tough critical culture: we expect that everybody who makes a game is dedicated to the rocky road of artistic growth, and feel that cotton-wool is a poor growth medium. (And also we have a certain number of people whose only joy is to grumble about shit.) This comes from a number of places – a dissatisfaction with the publisher-driven mainstream games media (which has often been little more than advertising), one or two cultural imports from academia, a certain amount of defensive pride about the high standards of our amateurism. If you come from a place where a score lower than 8/10 means ‘don’t play this’, or where most commentary is either unqualified praise or outright hatred, we can seem awfully mean.
And people respond differently to different approaches. Some people really need to have their first effort systematically torn into tiny shreds in order to do better next time; that is how they will best flourish. Some people do better with other approaches. (And not everyone even wants to do better next time. That is profoundly, deeply weird to me, but I dunno that it’s therefore invalid.) There’s no good way to tell who will actually respond best to which critical environment – I sure as hell don’t think that the authors themselves would reliably know – but I think having more than one option available at least gives us some hope.
At the same time, I believe very, very strongly in the responsibility of the reviewer to be honest and clear about their experience of a game. So the goal of the rule was to carve out a bit of space in which reviewers were encouraged to write reviews of games they did consider worthy, to emphasize that they didn’t have to review every game, and maybe to encourage them to delay negative reviews until after the voting period, all without actually muzzling anyone. If you want to review a game you’re not keen on, you have a number of options: wait until after the voting period to post, write reviews for every game and thus render your review votes moot, or just cancel out your own No vote. (Yes, I’m aware that submitting one Yes and one No vote would not have the same effect as submitting no votes at all.) Combine that with the fact that the precise vote a game gets doesn’t matter all that much, and it adds up to some pretty mild motivations. Which was the idea.
(To be clear, I really wouldn’t want this premise applied to the more serious affairs of IF Comp and Spring Thing, say; but I thought it’d be a good fit for lighter, lower-pressure minicomps.)
So how did this work out? Obviously it’s impossible to tell what the reviews would have looked like without this rule. Since a good chunk of the reviews were written by game authors, it seems plausible that they’d have tended towards a more convivial, Miss Congeniality-ish tone anyway.
That said, it was very clear that this rule – mild as it was – bothered more people than any of the other experiments. Some people pushed back against it by reviewing every game. Others told me that it felt weird to be adjusting their reviewing approach. That’s valuable, I think; it’s important to re-appraise stuff every now and then, and if it doesn’t feel weird then you’re not really re-appraising. There were still a number of sharp-toned reviews, and reviews that ended with a No vote. Great! If this rule had resulted in an unbroken stream of sappy positivity, that would have been a clear signal that it was too strong.
In general, it seemed to me that the rule – or, at least, the fact that there was a voting process – did result in more reviews than we might otherwise have seen. I’d strongly encourage future minicomp organisers to think about how to motivate reviews, and to regard voting as a key component of that.