Include numerical ratings with IFComp reviews during comp or nah?

In this thread it was mentioned that it’s polite to not include numerical ratings during IFComp.

I read that and thought, ‘Yeah, that makes sense’. But I can’t remember what I’ve done in the past; looking back, it seems I’ve posted IFDB star ratings during the comp before. But I swear I remember at least one comp where I saved posting the ratings until the last minute.

So I don’t even know what my own opinion is on this, and I’d like to ask current IFComp authors and judges: do you prefer to have reviews give a rating during the comp, or after?

Specifically, would you prefer something like:
A: ‘I love this game, lots of fun, blah blah blah. 8/10’
B: ‘I love this game, lots of fun, blah blah blah.’ (with the 8/10 coming after IFComp is over)

(When I was an author, I liked having ratings because I wanted to know how well I was doing and to see who the ‘leaders’ were. Now that I’m not entering myself, I kind of enjoy keeping it in suspense until the end to give more people a fair shot.)


I feel like option B is certainly the one that errs on the side of caution.


Matters of right and wrong aside: as a writer, I would worry that the number would upstage my writing. And that, vain creature that I am, is something I would never do.


I’d prefer B. I like to make my own decisions about rating, and worry slightly that seeing other authors post numbers (especially something out of 10) at this point might influence me too much. It’s also something I prefer not to do myself, though I keep a private record, and obviously vote using them. I can also convert them later to 1-5 scores for IFDB.
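(There’s no official conversion from 10-point comp scores to IFDB stars; the simple halve-and-round-up mapping I have in mind looks something like this, purely as an illustration:)

```python
def to_ifdb_stars(comp_score: int) -> int:
    """One possible mapping from a 1-10 IFComp score to a 1-5 IFDB
    star rating: halve and round up. Not an official formula; every
    reviewer is free to use whatever conversion they like."""
    if not 1 <= comp_score <= 10:
        raise ValueError("IFComp scores run from 1 to 10")
    return (comp_score + 1) // 2  # 1-2 -> 1 star, ..., 9-10 -> 5 stars
```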


In that post I was mostly trying to be descriptive – like, I think mostly folks omit ratings these days, which is maybe different from how it used to be – but on a normative level I agree that that’s good, so I come down on B. Part of it’s just that I don’t like ratings in general; I can make my peace with using them in IFDB since they help make the algorithms work and support findability, but as a standalone critical tool they feel really flattening (in other words, Drew has me rumbled).

Part of it is also wanting to avoid there being a bandwagon effect; I think this isn’t much of an issue except at the extremes of 9 or 10-scoring games, but it can be real IMO. Don’t get me wrong, I like to lobby during the Comp for folks to check out certain games, but hopefully that’s more to let folks be exposed to something they otherwise would have missed. I also tend to be enthusiastic about a pretty broad swathe of games, but am relatively parsimonious in my ratings – so even if I am trying to be a bit evangelistic, “this game does some interesting stuff I haven’t seen before” probably works better on that front if I don’t have a 7/10 at the end or whatever.


I have taken to rating games that I like because that’s a way to help people find them. It’s also something nice to do for the author (I don’t rate games that I don’t like, unless they’re Infocom games). This is separate from reviewing; I just want a way to tell people that I value their work without activating my social anxiety.

So I do it with a specific purpose in mind. I agree that it can have a flattening effect on discourse. I don’t know if any of you hang out on more general gaming forums, but review threads for big release games are wild. People mostly talk about the metacritic average for thousands of posts. Introducing a value to a qualitative discussion can have a distorting effect.

Now, like Mike, I’m not saying don’t rate things. It helps community members and is a way to support writers in growing their audience. It’s just one of those push/pull things.



I’ve grown to be less a fan of ratings as I’ve gone on, giving or getting. At first it was fun to get noticed. But then I realized how much can depend on my mood. And with IFComp I tried to set out a curve and still had situations where, if I thought about it too much, I felt entry A > entry B > entry C > entry A. But of course we need some objective evaluation, and since I can send it in anonymously, I can do that. I trust that people saying “I just don’t like it/really like it, I don’t know why” all balances out, so if my score misfires, I did what I could to justify it.

I worry less about ratings each year. I’m happy if I can get one ten-rating. I really don’t care from whom, or how.


This question vexed me a bit last year. On the one hand, it makes a lot of sense to me that a cold, uncaring numerical score can easily eclipse any nuance of analysis in a review. Or even clumsy but sincere fumbling. On the other hand, IFComp needs scores, and I take the responsibility of assigning them seriously. The idea of hiding a low score behind a veil of anonymity makes me feel … bad about myself? If I presume to rank anything low or high, I figure the least I can do is be clear about why, and own it. I don’t mean that as a universal manifesto or indictment of anyone else. I got my own stuff, y’know? The compromise I came up with last year was to publish my scores but hide them behind spoiler blur. I think I’m gonna keep doing that, despite my suspicion that the blur is not deterring anyone.

That way it’s on you all whether you peek or not, and my fragile self-esteem lives another day. I can always revisit if I find I somehow poison the discourse.


I don’t personally attach ratings to reviews, but I also don’t want there to be some overarching community norm against them. I feel like in the age of entrants talking openly and acting as judges in their own right, we’ve moved away from rigid purities of competition towards more openness and discussion. For some people, scoring is a major component of their experience of playing through IFComp, since judges have to submit numerical scores. I don’t see why that has to be a negative thing.


I don’t put my scores in my IFComp reviews, but I don’t think including scores is inherently wrong. For me, it’s just that I expect to tweak scores before sending the ballot in. I don’t want to promise someone a 7/10 and then submit a 6/10. (I know it’s all anonymous, but I’ll still know I did it.)

There’s also the rare case where I think a game does one or two things that are really interesting, and I want to spend much of the review talking about those, but the game overall is not good. I think it would be a nasty surprise for an author to read a review that engages with things positively and then slaps a 3/10 on the end seemingly out of nowhere.


I’ve written a fair number of reviews (not as many as others on this forum, but a fair number) and I’ve seldom posted numeric scores. I don’t think it’s because I’m especially polite. I try to write balanced reviews, but I worry my tone has been a little too cynical or sarcastic or biting in a few cases.

I just don’t think numbers mean very much as a way of communicating the strengths and weaknesses of a piece of work. Whether I’m scoring IF entries, or student essays for a class I teach, the numeric score is the first thing that the author wants to see, and the last thing I want them to look at in terms of processing my feedback.

As an author (this year) I’d probably rather not see them either until the averages are released at the end of the competition. (and even those averages are loaded with a measure of statistical uncertainty).


I guess the IFComp “best practices” guide gives the best answer, suggesting a third option, C, that I personally like:

Here’s an example of a very simple rubric:

  • 10: This game epitomizes what interactive fiction can do, perhaps breaking new ground in the process. It dazzles and delights. People interested in the form will be talking about and studying this game for years to come.
  • 7, 8, 9: A good/great/excellent game you’re pleased to have played, and which you’d recommend to others (with three gradations of enthusiasm).
  • 5, 6: A respectably crafted work that didn’t necessarily move you one way or another, but which you might recommend with reservations. (A 6 offered more to hold your interest than a 5 did.)
  • 3, 4: A flawed project that doesn’t manage to live up to promise, and which you wouldn’t generally recommend playing. (A 4 has more going for it than a 3 does.)
  • 2: A work that technically qualifies as IF, but seriously misses the mark for one reason or another (or several).
  • 1: This work is inappropriate for the competition. Grossly buggy to the point of unplayability, perhaps, or maybe it’s not interactive fiction even by a generous definition of the term.

In other words, grouping the numerical rating into no more than three or four worded categories, using clear, non-weasel wording, shouldn’t violate the rules. Or am I bending them too far out of shape?

Best regards from Italy,
dott. Piergiorgio.


Note that it is only an example. They also wrote:

“Anyone in the world is welcome to judge the IFComp’s games, and judges can score games according to whatever criteria they feel comfortable with”


I don’t mind seeing ratings, either for other people’s games or my own. I don’t post them myself during the competition, but that’s because I frequently change my ratings during the comp. For instance, I may notice that I’ve rated games A and B equally high, even though I think B is better, and then I’ll change one of the ratings.

Do whatever you feel like.


Speaking purely personally, I much prefer option A. Partly for the same reasons Mathbrush said he liked them as an author, but also because I feel a rating helps me, as a writer, better contextualise a review, putting both negative and positive comments into perspective, which (I like to think) has helped me know where to improve. It also helps me get a grasp on reviewers: being able to quickly compare which games they favoured and which they didn’t can in turn help me figure out what kind of audience my writing is appealing to. There’s also an element of “well, I know you’re going to be giving it a score out of 10 when it comes time to judge, and I want all the data I can get, so why are you withholding it from me?”

But that’s all just purely personal and I can definitely see why both reviewers and authors would be uncomfortable with it. But I guess, if you do want to do it, then there is definitely at least one writer who appreciates it!


Being too lazy at the moment to look it up elsewhere, how does judge scoring work? You don’t have to submit scores till the Nov deadline? Or you can “submit” a score, but change your mind about it before the Nov deadline? How does that interact with the 2-hour limit, if you’ve ended up playing more than 2 hours of the game before the Nov deadline? Does the comp just take your word for it that your “final” score is based only on your first 2 hours’ impressions?


Yeah, that’s pretty much it - you’re supposed to write down your score at or before the two hour mark, and not revisit it if you play more. This can be a little tricky since for many games I do wind up fiddling with the initial score I gave after thinking about them a bit more, or playing other games that recalibrate my scale a little. But my understanding of the rules is that I can’t do that if I played the game for more than two hours, so those ratings are basically carved in stone.

(Oh, and yes, the site allows you to rate games, and change those ratings, over the entire judging period up until the deadline).


Has anyone toyed with this in the past? Written a fairly solid, well-tested game for the first 2-3 hours of playtime, and then left the rest of the game a burning rubbish pile after that point?

By the logic above, the game should place well, despite this. Just curious.


I don’t think anyone has done that. There was a game that was perfectly normal up to a point and then ended with a weird rubbish fake-out message:


This smells more like someone ran out of time and panicked rather than intending a troll game from the outset. I could be wrong.