The IFComp 2020 Best Reviews Competition (Feedback requested in post 38)

That’s an interesting question. What if some of the reviewers are better-known than others?

Perhaps it would be better to make it like XYZZY voting, where you vote in multiple stages, picking two favorites in the first and one favorite in the second.

That said, I think I should keep it the way I advertised it (though I didn’t realize I was supposed to run it tomorrow, and was planning on running it today). If it runs again, I might change it.

1 Like

Looking through the spreadsheet now, I discovered a blog of great reviews I hadn’t seen before: The Stack by Carl Muckenhoupt. His reviews are not on IFDB and his blog is not on Planet IF, otherwise I would surely have read them already. Is it customary for anyone to submit blogs to be included on Planet IF or only for the blog’s author to do so?

3 Likes

Also, I can’t see @patrick_mooney’s reviews, which I enjoyed a lot, in the spreadsheet. Is this deliberate, or have I perhaps overlooked them?

2 Likes

I don’t believe it’s deliberate. The spreadsheet is community-run, so the community just seems not to have picked up on his reviews, probably because they (I think?) started later.

So I’ll add him to the voting list.

I think what I’ll do for the votes (to address Denk’s concern) is that I’ll use Bayesian averaging, like the IFDB top 100, where you ‘add’ an average number of average votes to each score at the end. That makes 4 scores of 9 better than 1 score of 10, for instance.
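As a rough sketch of how that kind of Bayesian average works (the prior mean and number of phantom votes here are illustrative assumptions, not the actual values used for the comp or the IFDB Top 100):

```python
def bayesian_average(scores, prior_mean=5.0, prior_weight=3):
    """Blend a reviewer's scores with `prior_weight` phantom votes
    at `prior_mean`, pulling small samples toward the overall mean.
    Both parameters are illustrative assumptions."""
    return (prior_mean * prior_weight + sum(scores)) / (prior_weight + len(scores))

# Four 9s beat a single 10 once the prior is mixed in:
four_nines = bayesian_average([9, 9, 9, 9])  # (15 + 36) / 7, about 7.29
one_ten = bayesian_average([10])             # (15 + 10) / 4 = 6.25
assert four_nines > one_ten
```

With more votes, the phantom votes matter less, so a reviewer rated highly by many voters reliably outranks one rated slightly higher by a single voter.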

I plan on making the form now, and who knows how long it’ll take, but if anyone wants to add any other reviewers last-minute, let me know!

2 Likes

Ooh what if you take the first letter of every review that’s not on the spreadsheet and then that’s an anagram for the metapuzzle?

6 Likes

Thanks, Stian. I missed the initial announcement that there was a spreadsheet this year, but I’ve added them to the sheet now.

3 Likes

Results are now available:

5 Likes

Now that the review comp is over, it’d be useful to get and give feedback.

On my end, the biggest problems were collecting the data and reporting it accurately. Collecting data with Google Forms was fairly easy, but accuracy didn’t go so well for me. I omitted some people from the form by accident (luckily Patrick Mooney was later added and even won a prize). Worse, I calculated the winner incorrectly. Those are things I’d have to work on, perhaps by bringing on an extra person to help check for accuracy.

Getting voters was tough; only 13 people voted in the end, almost all of them other reviewers.

On your end, how did things go? Was the competition beneficial to you? Did it cause any problems? Is it a nice way to recognize helpful people or a divisive way to split people into haves and have-nots?

And,

Should it run again next year?

1 Like

To me, it looked like there was lots of positive feedback for all the reviewers who posted on the forum, at least, so for the purpose of recognising reviewers’ efforts, I’m not sure it added much for most of us. Still, I’m happy for the winners and hope it felt like an extra bonus recognition for them.

What I would appreciate more is constructive criticism, reviews of reviews. It was indeed one of the prizes, but it is something every reviewer could benefit from.

3 Likes

One reason I didn’t participate in voting was that I’ve still only played a fraction of the games and I’ve avoided reading tons of reviews to avoid spoilers. So I didn’t feel like I had an informed enough opinion to vote fairly.

5 Likes

I also didn’t vote because I hadn’t read reviews for games I didn’t play due to spoilers, but also because my brain disease problems meant I quickly forgot the details of those reviews I did read! I very much enjoyed reading those reviews though.

I’m not sure about this competition going forward, given the limited number of votes. But I think it helped encourage a strong reviewing community: not setting out to win the competition per se, but treating it more as a community event. That’s great, and certainly a feeling I had; it encouraged me to keep reviewing.

5 Likes

First and foremost, I really enjoyed the side-contest and would definitely like to see it run again next year (and that would be true even if I hadn’t won a prize). That said, I have some miscellaneous thoughts.

One thing that struck me was how much voting in the review competition was conditioned in small ways by the practices of the Comp itself, if that makes sense. For one thing, the “How do I decide what to vote? What are my criteria? What’s a five? What’s a nine? Who am I to make these determinations?” questions that went through my head while I was voting were a lot like the kind of questions that go through my head when I vote on Comp entries. I think that may have gently discouraged more people from voting. One way to ameliorate that might be for people to do what some Comp reviewers have done, which is to publicly post their evaluation criteria for other people to mull over.

But I think it’s also the case that just doing it again will result in more people voting, just as the Comp itself has more voters than it did in 1995. If the review competition is something that everyone knows in advance will occur, people might (a) read reviews with an eye to voting, making it take less effort at the end; (b) have gone through the psychic adjustment of resolving their doubts by the time the Comp is over; and (c) be able to schedule for it better. I think 13 people voting the first time around is an encouraging start!

It might be helpful to have a breather in between the end of the Comp and the beginning of review voting: after two months of the Comp, it may be hard for people to rev up their enthusiasm for voting on reviews. Maybe a week for people to polish reviews up that they’ve written on Comp games and/or finish writing reviews for games they’ve played, while we wait for the official Comp results to be tallied and announced, would let people who just voted on Comp games relax a bit? On the other hand, maybe a break will cause people’s energy to dissipate and result in fewer votes because they wandered off and did the other things humans do in their human lives when their attention flags. It’s hard to predict.

I got some good, encouraging feedback from the results and would love to get more next year. One way to help accomplish this might be to have a “feedback for author” section under each voting entry, as in the Comp voting form. I don’t know how easy this would be to set up with the existing Google Forms-based setup, though.

7 Likes

I enjoyed this contest, as it gives reviewers a chance to give each other recognition.

I left the first question (about which reviewer provided the best coverage) unanswered, as I felt it was asking me to count how many reviews each reviewer posted and then choose the one with the highest number. And I didn’t want to do that homework!

I did answer the second question, but I only gave ratings to the reviewers I had read, which was just a few. (And the one I rated the highest, Anssi, didn’t win.) I’m not pointing out a flaw in the contest here, just sharing my experience.

If it were up to me, I would combine both questions into one, and add a second question to recognize one favorite review. So the questions would be like:

  1. Rate the reviewers, taking into account both quantity of coverage and quality of criticism.
  2. What’s the one review that really stood out to you? (There would be two drop-down menus here, one to choose the reviewer and one to choose the game.)
4 Likes

Let’s see, a couple of thoughts that may or may not be useful:

  • I definitely enjoyed the competition – I was trying to keep up with reviews as folks posted them, but also tried not to read reviews for games I hadn’t played yet, so it was nice to have a prompt to go back over the ones I’d skipped; I found more than a few gems. And having it run after the competition was over was nice too, since it helped pass the time between the Comp closing and the results being announced. So I’d definitely support it coming back next year!

  • I also thought the split between coverage and individual review was a little challenging, though I think it was ultimately helpful since it allowed folks like Patrick to get just recognition for their contributions (FriendOfFred I think wrote only one review, but it was one of my favorites!). Maybe this is somewhere a rubric would help? Not saying my approach was great, but I tried to start with a 1-10 rating based on how many reviews a person wrote (using a nonlinear approach, though, since most folks wrote between 5 and 40 and I wanted to differentiate them). Then I added or subtracted a point based on the diversity of the games they reviewed (including parser vs. choice, but also less-reviewed games or those by less-well-known authors), and then added or subtracted a point based on the overall robustness of the coverage. It felt a little more interesting than mechanically assigning a score based on the number in the spreadsheet, at least.

  • Speaking of the spreadsheet, I’m not sure how many folks had easy access to it, since it seems like some, including reviewers, missed it. In retrospect, pinning it to the top of the forum, and/or maybe linking to it in reviewers’ threads, might be helpful for highlighting it, since having it available makes reviewing the reviewers much easier!

  • Getting more feedback for reviewers I think is a worthy goal, and hopefully this competition helped foster it. I’m doing an in-depth review of one of the reviewers as part of a prize I offered, which hopefully will be fun (I know I’m looking forward to it!). That was actually inspired by Mike Sousa, who many years ago did his own review post of all the IF Comp reviewers. I think he might be considering doing something like that again this year, which would be awesome. But the competition was also good for pushing me to provide at least a couple sentences of feedback for the reviewers I was able to get to, which hopefully was interesting or rewarding for folks.

  • In terms of incentivizing more participants, I wonder whether taking a page from the Comp and explicitly asking folks to rate at least 5 (or however many) would help? That might help folks with less time to be comprehensive feel like their participation is invited and valuable too.

  • I’m sure others will have great ideas about better technological solutions for counting the votes and compiling the results, but that’s above my pay grade. I’m definitely willing to help out with any of the administration tasks or double-checking things if that’s helpful, though. If I don’t volunteer next year, feel free to recruit me while reminding me of this post :slight_smile:
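The ad-hoc rubric described in the second bullet above could be sketched roughly like this (the log scaling, the 5-to-40 range, and the clamping are all my illustrative assumptions, not an exact reconstruction of the scoring):

```python
import math

def coverage_score(n_reviews, diversity_bonus=0, robustness_bonus=0):
    """Hypothetical rubric sketch: a nonlinear 1-10 base score from
    review count (most reviewers wrote between 5 and 40 reviews),
    plus or minus a point each for diversity and robustness of
    coverage. All constants are illustrative assumptions."""
    # Map 5..40 reviews onto 1..10 on a log scale, clamping outliers.
    base = 1 + 9 * (math.log(max(n_reviews, 1)) - math.log(5)) / (math.log(40) - math.log(5))
    base = min(max(base, 1), 10)
    # Apply the two adjustment points, keeping the result in 1..10.
    return min(max(base + diversity_bonus + robustness_bonus, 1), 10)
```

For example, 5 reviews maps to the base score 1 and 40 reviews to 10, while 20 reviews lands at 7 before adjustments; the log scale keeps prolific reviewers from dominating purely on volume.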

Thanks again for running the competition, and to all the reviewers and voters!

4 Likes

Is there a way to see the feedback given?

I just got mine in a PM from @mathbrush here on the forums. Mine boiled down to “good reviews, wish there were more!” which is about the most encouraging thing I can think of.

2 Likes

Ooooh, in that case… if you’ve got something for me, @mathbrush , I’m definitely interested. :slight_smile:

1 Like

Another idea: Voting for reviews instead of reviewers. That way, it should be easy to vote for at least five. Coverage could be quantified without votes.

3 Likes

First, I’d like to publicly thank @mathbrush for running the review competition this year. I appreciate his hard work not only writing reviews himself but also encouraging other people to write them.

Regarding the competition itself, I’d like a better understanding of our collective goal.

  1. I figure that everyone wants more public discussion of IFcomp, and
  2. we’re assuming that more public reviews will encourage more public discussion

The review spreadsheet works to encourage discussion. When I was an author, it was fantastic to have a single place to look for updates. As a reviewer, putting links on the spreadsheet drove a substantial amount of traffic to my blog.

The competition encouraged people to keep the spreadsheet updated, which was good (even if it wasn’t perfect).

But does the review competition encourage more discussion and reviews? That’s a difficult question to answer.

6 Likes

I think the reviewer competition is a great idea, and getting more reviews for IFComp is a good thing. But the other competitions could also use some more reviews.

Lacking a better measure, and in order to compare the competitions, I took data from IFDB:

IFComp 2020 IFDB reviews so far: 3.5 reviews per game (many authors are still adding some)
Ectocomp 2020 IFDB reviews: 3.0 reviews per game
Spring Thing 2020 IFDB reviews: 1.8 per game

Add to this that almost no blogs (as far as I can tell) review Ectocomp or Spring Thing, and we must conclude that Spring Thing games in particular get much less attention.

At first, I was thinking that if the reviewer competition covered both IFComp games and Ectocomp games (which would be practical, as they overlap), this would help Ectocomp get more reviews. That would be good, even though it doesn’t help Spring Thing.

Not sure what to do about Spring Thing, though. It could perhaps have its own reviewer competition, if anyone would like to run one. An XYZZY Award for best reviewer could also be helpful.

4 Likes