IFComp 2022 Review Spreadsheet

Here is a review spreadsheet! It’s copied from last year’s. Feel free to add your reviews here; if you’re posting reviews privately in the author’s forum, add “(author)” to the end of your name.


Great work!

Does anyone have any objections if this is alphabetized early and often? My instinct is, we should go by last name (if someone has it) and then put the authors off to the right, so people not in the author forum aren’t distracted.

(Also, a reminder to anyone curious: you don’t have to write anything terribly formal to get on the list here. I think some tweeted reviews worked really well last year–in 2021, Katherine Li, who wrote Goat Game, was one example of this. And looking back to 2000 or so, well before Twitter, I noticed some people did indeed write very short reviews and are still credited on IFWiki or wherever.)


As I recall, this is how the 2021 spreadsheet was organized, correct? I feel like that organization worked well.


I alphabetized it based on first names/first words of names, since it just seems easier (but feel free to change it).


I agree–that seems simplest, and we’d like to keep things simple.

By the way, if anyone wondered what the recent edits were about, my local computer had the custom cell highlighting wrong. So I was trying and undoing stuff, forgetting I could just copy it over to my own local version. Sorry about that. I refreshed and it looks good again.

Also, hooray for the median total reviews hitting 1! I guess the titles won’t start turning red until we have a few more public reviews for the public review median to hit 1, but it’s always heartening to hit that first milestone.

(Note: if anyone wants to twiddle things to develop a new feature, I recommend making a copy and then cutting/pasting back to add things in. Most of the time, I remember, but this time, I didn’t.)

(Also a note: keeping a local copy can be a nice way to track which reviews have appeared since you last checked. While Google Drive tracks versions, there may be several versions per day, which are tricky to track. If you cut and paste over from the main document to your copy, you may see what’s changed all in one place–though you may have to add in columns for new authors before cutting/pasting so the columns match up. I haven’t rigorously tested this, but it seems to work pretty well so far for me. YMMV.)
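One minimal way to do this kind of change-tracking outside the sheet itself is to export two CSV snapshots (File → Download → CSV) and compare them. A sketch, assuming two hypothetical export filenames; this is just one way to do it, not anything built into the spreadsheet:

```python
import csv

def load_rows(path):
    """Load a CSV export of the spreadsheet as a list of row tuples."""
    with open(path, newline="", encoding="utf-8") as f:
        return [tuple(row) for row in csv.reader(f)]

def new_rows(old_path, new_path):
    """Return rows present in the newer snapshot but not in the older one."""
    old = set(load_rows(old_path))
    return [row for row in load_rows(new_path) if row not in old]

# Hypothetical usage: compare yesterday's export with today's.
# for row in new_rows("reviews_old.csv", "reviews_new.csv"):
#     print(row)
```

This sidesteps the column-alignment problem, since you’re comparing whole rows rather than pasting cells into place.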


Should we also make some sort of colour difference between reviews left on the forum and those published outside?

This spreadsheet was just really helpful to me!

I was looking at the reviews coming in and - like you do - wondering when someone would get around to reviewing my game. Then I looked through the spreadsheet and realised that, in fact, there are lots of games (around 40%) that have zero reviews.

I can also do something about that, because I’ve played some of the games with no reviews. So, thanks! Good spreadsheet, very helpful.


A milestone: there is now a median of 1 public review per game. Some of the entries are starting to be highlighted in red - these are games that have a below-median number of reviews. If you’re writing public reviews, it might be a good idea to focus on those.


This is such a kind way to keep track of which stories are getting what kind of attention! Thanks so much for putting it together, it’ll really help me figure out what to play next – and then I’ll be able to put reaction-thoughts in context with other reviewers!


The total number of reviews is now at a median of 2 :smiley:


That’s cool!

And in detailed statistical news, the median is now greater than the mean! That’s a change too.

In the early stages of the competition, the mean was greater than the median, and the distribution was right-skewed. In other words, there was a clump of games with zero or one reviews, then it tailed off, with a few games with two and three reviews.

Now, the mean is less than the median, and the distribution is slightly left-skewed. That is, there’s a clump of games with two reviews, and…well, honestly, it’s hard to observe a tail either way, but it tails off more towards zero.

The mode has changed too! In the early stage of the competition, the most common number of reviews for a game (i.e. the mode) was zero. Now, the most common number of reviews for a game is two, which is a real shift in a few days.

This might sound dull - and it is! - but I think the median and mode really matter. The median matters because it lets you say things like “Over half of the games have a review now” (actually, it’s now two reviews, but when Autumn mentioned that early milestone it felt really important). The mode matters because it lets you say “The most common number of reviews for a game is X”.

And I think the distribution is important too. It’s all too easy to reach a position where the mean number of reviews is 1 or 2, but, in fact, over half of games don’t have a review. In other words, a few games have got lots of reviews, but lots of games are unreviewed. So, again, I love this spreadsheet because it gives insight into things like that.
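For anyone who wants to poke at these numbers themselves, all three statistics fall out of the per-game review counts in one place. A sketch with made-up counts (these are illustrative, not the real spreadsheet data):

```python
from statistics import mean, median, mode

# Hypothetical per-game review counts, not taken from the actual spreadsheet.
counts = [0, 0, 1, 1, 2, 2, 2, 2, 3, 5]

print(mean(counts))    # → 1.8 (a few well-reviewed games pull this up)
print(median(counts))  # → 2.0 (half the games have at least this many)
print(mode(counts))    # → 2   (the most common review count)
```

Note that with these counts the mean sits below the median, which is exactly the “a few games have lots of reviews” vs. “most games have about the same number” distinction described above.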

awkward pause

I’ll see myself out.

Also, I bet I’ve got something wrong above, because it’s easy to make mistakes when talking about statistics, so feel free to correct me. I was just getting excited about statistics really.


Stats are cool :slight_smile:


You’re not the only person who pays attention to statistics. I think those of us who do know we can’t poke at it all the time, but I definitely do so when I wake up, and it’s fun to see the sheet populated with reviews. I have to admit I have a local program to process things in my own way, e.g. to see if the median for total reviews is about to be bumped up and, if so, which entries I should attack to push it over the edge. (I also like to look at the data for private reviews in the authors’ forum. We’re at a median of 1 per game there.)

The shorter games tend to get a few more reviews, which feels natural, and it’s good to know I’m not the only one who may attack a game to get back in the swing of things.

I remember Autumn did some really cool predictive stuff last year as well. It seems nontrivial to repeat this year, but it’s just neat to be able to do that with the data we have.

Also, in the spirit of watching stats, can/should @ChristopherMerriner’s notes on Alien’s Mistaken Impression count as a review? It’d be neat to get the “public unreviewed” down from 12 to 11.


Oh gosh, it’s such a feeble attempt at a review but I suppose better than nothing. But I’m so time-poor at the moment that I barely have time to play, let alone review, anything. One of the other excellent reviewers for this year (there seem to be more than ever) will no doubt come along and do it much better. I’m really enjoying reading all the reviews!


Wow the review speed is picking up! We’re already at a median of 3 for all reviews!


Yeah, I’m really impressed!

There’s still a handful of games with 0 public reviews (A Chinese Room, Inside, Lost Coastlines, Low-Key Learny Jokey Journey, Prism, The Only Possible Prom Dress, Star Tripper, Thanatophobia, and U.S. Route 160) but a lot of them are a significant time commitment so those don’t surprise me. I’m optimistic that these will all pick up their first (public) review over the next week.


Just wrote down my impressions of two hours in Lost Coastlines. I’ll be sailing those waters many more times.


(Also, to be That Pedantic Guy, Prism should be thrown on the list.)

I was a beta tester for Thanatophobia, so I don’t want to put my thumb on the scale too much.

However, it feels like one of those entries where the unusual interface may initially scare people away. I quickly figured it out, though, and testing it was an enjoyable experience. I got comfortable with the interface very quickly.


Thanatophobia also has the issue of not being reachable over IPv4 connections. So it kind of limits who can play it…

Which is a shame, because it’s really good.


Good catch! I’m in the middle of Prism so I’m not sure why I left it out.
