The IFComp 2020 Best Reviews Competition (Feedback requested in post 38)

I’ve kept mine up to date. Could I be helpful in getting someone else’s review list up to date as well? Or should each reviewer fill out their own part of the spreadsheet?

1 Like

It could help to look over the reviewers who don’t frequent the forums. If there’s a name you don’t recognize, you could check the links to see if anything new has popped up on their site.

I just finished a scrub of the review spreadsheet. I checked all reviewers who published their reviews on IFDB or blogs, plus I double-checked The Short Game’s entries (in the running for my vote for Best Coverage). I figured those posting reviews here have probably seen the reminders, and I’m not sure how to double-check those who review on Twitch (though they have my thanks for extending IF to that platform).

For those I did look at, I checked whether the most recent review they had posted on IFDB or their blog was logged properly; if it wasn’t, I logged it and kept going backwards until I started finding reviews that had already been logged. I ended up adding seven to the spreadsheet (basically all the public reviews posted since my review of Tavern Crawler at the moment).

We are well over 1,000 total reviews (774 already public). That’s amazing! And every single game has a minimum of 3 public reviews!

7 Likes

Wonderful! Thank you so much. I added at least one Twitch reviewer’s reviews, so I’m glad I could help. That’s a lot of work, and I appreciate it!

Just a few thoughts before the Best Reviews Competition starts:

The point system of this competition appears to be the same as IFComp’s. Does it also apply to how the final result is calculated?

I don’t intend to downvote any reviewer; I just want to mention some possible implications.

I can imagine most people will only give points to the reviewers they like. However, if the final result is based on average scores (like IFComp), then some might feel it is fairer to rate all the reviewers they have read, both the good and the less good. But most probably won’t feel like downrating any reviewers, even if they feel they don’t deserve a high rating. This could skew the result a lot.

So perhaps it is better to base the result on the sum of scores instead of the average? (Perhaps you have already decided that; I just wanted to mention it.)
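
To make the difference concrete, here is a toy comparison; the scores below are invented purely for illustration:

```python
# Toy comparison of the two aggregation rules (scores invented).
reviewer_a = [10]             # read by one voter, rated enthusiastically
reviewer_b = [8, 8, 8, 8, 8]  # read by five voters, rated a bit lower

for name, scores in (("A", reviewer_a), ("B", reviewer_b)):
    print(name, "average:", sum(scores) / len(scores), "sum:", sum(scores))

# Averaging ranks A first (10.0 vs 8.0); summing ranks B first (40 vs 10).
```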

I think the average score is excellent for game comps, but for reviewers I think the sum of ratings is better, as all ratings then reflect something positive. What do you think?

1 Like

I’ll chime in to say that @FriendOfFred wrote an excellent review and analysis of The Eidolon’s Escape.

3 Likes

That’s an interesting question. What if some of the reviewers are better-known than others?

Perhaps it would be better to make it like XYZZY voting, where you vote in multiple stages, picking two favorites in the first and one favorite in the second.

That said, I think I should keep it the way I advertised it (though I didn’t realize I was supposed to make it tomorrow, and was planning on making it today). If it runs again, I might change it.

1 Like

Looking through the spreadsheet now, I discovered a blog of great reviews I hadn’t seen before: The Stack by Carl Muckenhoupt. His reviews are not on IFDB and his blog is not on Planet IF, otherwise I would surely have read them already. Is it customary for anyone to submit blogs to be included on Planet IF, or only for the blog’s author to do so?

3 Likes

Also, I can’t see @patrick_mooney’s reviews, which I enjoyed a lot, in the spreadsheet. Is this deliberate, or have I perhaps seen wrong?

2 Likes

I don’t believe it’s deliberate. The spreadsheet is community-run, so the community just seems not to have picked up on his reviews, probably because (I think?) they started later.

So I’ll add him to the voting list.

I think what I’ll do for the votes (to address Denk’s concern) is use Bayesian averaging, like the IFDB top 100, where you ‘add’ a number of average-valued votes to each reviewer’s score at the end. That makes 4 scores of 9 better than 1 score of 10, for instance.
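
In other words, each reviewer’s average gets mixed with a few ‘phantom’ votes at a prior value. A minimal sketch, where the phantom count m and the prior value are placeholder numbers, not the exact ones I’ll use:

```python
# Minimal sketch of a damped ("Bayesian") average. The prior parameters
# are placeholders for illustration: m phantom votes, each at prior_mean.
def bayesian_average(scores, m=1, prior_mean=5.0):
    """Mean of `scores` after mixing in m phantom votes at prior_mean."""
    return (sum(scores) + m * prior_mean) / (len(scores) + m)

# Four 9s now beat a single 10, as in the example above:
print(bayesian_average([9, 9, 9, 9]))  # 8.2
print(bayesian_average([10]))          # 7.5
```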

I plan on making the form now, and who knows how long it’ll take, but if anyone wants to add any other reviewers last-minute, let me know!

2 Likes

Ooh what if you take the first letter of every review that’s not on the spreadsheet and then that’s an anagram for the metapuzzle?

6 Likes

Thanks, Stian. I missed the initial announcement that there was a spreadsheet this year, but I’ve added them to the sheet now.

3 Likes

Results are now available:

5 Likes

Now that the review comp is over, it’d be useful to get and give feedback.

On my end, the biggest problems were collecting data and reporting it accurately. Collecting data with Google Forms was fairly easy, but the accuracy part didn’t go well for me. I omitted some people from the form by accident (luckily Patrick Mooney was later added and even won a prize). Worse, I calculated the winner incorrectly. Those are things I’d have to work on, perhaps by bringing on an extra person to help check for accuracy.

Getting voters was tough; only 13 people voted in the end, almost all of them other reviewers.

On your end, how did things go? Was the competition beneficial to you? Did it cause any problems? Is it a nice way to recognize helpful people, or a divisive way to split people into haves and have-nots?

And,

Should it run again next year?

1 Like

To me, it looked like there was lots of positive feedback for all the reviewers who posted on the forum, at least, so for the purpose of recognising reviewers’ efforts I’m not sure it added much, for most of us anyway. Still, I’m happy for the winners and hope it felt like an extra bonus of recognition for them.

What I would appreciate more is constructive criticism, reviews of reviews. It was indeed one of the prizes, but it is something every reviewer could benefit from.

3 Likes

One reason I didn’t participate in voting was that I’ve still only played a fraction of the games, and I’ve skipped reading tons of reviews to avoid spoilers. So I didn’t feel I had an informed enough opinion to vote fairly.

5 Likes

I also didn’t vote: partly because I hadn’t read reviews for games I didn’t play, due to spoilers, but also because my brain disease problems meant I quickly forgot the details of those reviews I did read! I very much enjoyed reading them, though.

I’m not sure about this competition going forward, given the limited number of votes. But I think it helped encourage a strong reviewing community: not setting out to win the competition per se, but more as a community event. That’s great, and certainly a feeling I had; it encouraged me to keep reviewing.

5 Likes

First and foremost, I really enjoyed the side-contest and would definitely like to see it run again next year (and that would be true even if I hadn’t won a prize). That said, I have some miscellaneous thoughts.

One thing that struck me was how much voting in the review competition was conditioned in small ways by the practices of the Comp itself, if that makes sense. For one thing, the “How do I decide what to vote for? What are my criteria? What’s a five? What’s a nine? Who am I to make these determinations?” questions that went through my head while I was voting were a lot like the kind of questions that go through my head when I vote on Comp entries. I think that may have gently discouraged some people from voting. One way to ameliorate that might be for people to do what some Comp reviewers have done: publicly post their evaluation criteria for other people to mull over.

But I think it’s also the case that just doing it again will result in more people voting, just as the Comp itself has more voters than it did in 1995. If the review competition is something that everyone knows in advance will occur, people might (a) read reviews with an eye to voting, making it take less effort at the end; (b) have gone through the psychic adjustment of resolving their doubts by the time the Comp is over; and (c) be able to schedule for it better. I think 13 people voting the first time around is an encouraging start!

It might be helpful to have a breather between the end of the Comp and the beginning of review voting: after two months of the Comp, it may be hard for people to rev up their enthusiasm for voting on reviews. Maybe a week for people to polish up reviews they’ve written on Comp games and/or finish writing reviews for games they’ve played, while we wait for the official Comp results to be tallied and announced, would let people who just voted on Comp games relax a bit? On the other hand, maybe a break would cause people’s energy to dissipate and result in fewer votes because they wandered off and did the other things humans do in their human lives when their attention flags. It’s hard to predict.

I got some good, encouraging feedback from the results and would love to get more next year. One way to help accomplish this might be to have a “feedback for author” section under each voting entry, as in the Comp voting form. I don’t know how easy this would be to set up with the existing Google Forms-based setup, though.

7 Likes

I enjoyed this contest, as it gives reviewers a chance to give each other recognition.

I left the first question (about which reviewer provided the best coverage) unanswered, as I felt it was asking me to count how many reviews each reviewer posted and then choose the one with the highest number. And I didn’t want to do that homework!

I did answer the second question, but I only gave ratings to the reviewers I had read, which was only a few. (And the one I rated the highest, Anssi, didn’t win.) I’m not pointing out a flaw in the contest here, just sharing my experience.

If it were up to me, I would combine both questions into one, and add a second question to recognize one favorite review. So the questions would be like:

  1. Rate the reviewers, taking into account both quantity of coverage and quality of criticism.
  2. What’s the one review that really stood out to you? (There would be two drop-down menus here, one to choose the reviewer and one to choose the game.)
4 Likes

Let’s see, a couple of thoughts that may or may not be useful:

  • I definitely enjoyed the competition – I was trying to keep up with reviews as folks posted them, but also tried not to read reviews for games I hadn’t played yet, so it was nice to have a prompt to go back over the ones I’d skipped; I found more than a few gems I’d missed. And having it run after the competition was over was nice too, since it helped pass the time between the Comp closing and the results being announced. So I’d definitely support it coming back next year!

  • I also thought the split between coverage and individual review was a little challenging, though I think it was ultimately helpful, since it allowed folks like Patrick to get well-deserved recognition for their contributions (FriendOfFred, I think, wrote only one review, but it was one of my favorites!). Maybe this is somewhere rubrics would help? I’m not saying my approach was great, but I tried to start with a 1-10 rating based on how many reviews a person wrote (using a nonlinear curve, since most folks wrote between 5 and 40 reviews and I wanted to differentiate them). Then I added or subtracted a point based on the diversity of the games they reviewed (parser vs. choice, but also less-reviewed games or those by less-well-known authors), and added or subtracted another point based on the overall robustness of the coverage (there’s a rough sketch of this after the list). It felt a little more interesting than mechanically assigning a score based on the number in the spreadsheet, at least.

  • Speaking of the spreadsheet, I’m not sure how many folks had easy access to it, since it seems like some, including reviewers, missed it. In retrospect, pinning it to the top of the forum, and/or maybe linking to it in reviewers’ threads, might be helpful for highlighting it, since having it available makes reviewing the reviewers much easier!

  • Getting more feedback for reviewers is, I think, a worthy goal, and hopefully this competition helped foster it. I’m doing an in-depth review of one of the reviewers as part of a prize I offered, which hopefully will be fun (I know I’m looking forward to it!). That was actually inspired by Mike Sousa, who many years ago did his own review post of all the IF Comp reviewers; I think he might be considering doing something like that again this year, which would be awesome. The competition was also good for pushing me to provide at least a couple of sentences of feedback for the reviewers I was able to get to, which hopefully was interesting or rewarding for folks.

  • In terms of incentivizing more participants, I wonder whether taking a page from the Comp and explicitly asking folks to rate at least 5 (or however many) reviewers would help? That might help folks with less time to be comprehensive feel that their participation is invited and valuable too.

  • I’m sure others will have great ideas about better technological solutions for counting the votes and compiling the results, but that’s above my pay grade. I’m definitely willing to help out with any of the administration tasks or double-checking things if that’s helpful, though. If I don’t volunteer next year, feel free to recruit me while reminding me of this post 🙂
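
And since I promised a sketch above: here’s roughly what my coverage rubric looked like as code. The log curve and the exact numbers are illustrative guesses; in practice I scored by eyeball rather than by formula.

```python
import math

# Rough sketch of the coverage rubric described above. The log curve and
# the clamping are illustrative guesses, not the exact numbers I used.
def coverage_score(review_count, diversity_adj=0, robustness_adj=0):
    """Base 1-10 score from review count, plus -1/0/+1 adjustments."""
    # log2 spreads the common 5-40 range: 5 reviews -> ~5, 40 reviews -> 10
    base = round(2 * math.log2(max(review_count, 1)))
    base = max(1, min(10, base))
    return max(1, min(10, base + diversity_adj + robustness_adj))

print(coverage_score(5))                    # 5
print(coverage_score(40, diversity_adj=1))  # capped at 10
```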

Thanks again for running the competition, and to all the reviewers and voters!

4 Likes