IFComp 2019 Follow-Up Survey: Responses Requested by Feb 9th

Convincing people to submit “quality over quantity” is something of a moot point. 2019 was the year I can say there was very little chaff in the mix: everything was a competent release, nobody trolled, there were only a few compatibility issues, and only one game was released by accident, because the author had IRL issues and had completely forgotten that a test build uploaded to the site got entered and released.

My suggestion, at minimum, was to extend the voting/judging period to the end of the year - three months to play 80 games instead of six weeks.

I had also suggested releasing the games in “waves” of about 25 over the course of four months with release windows in October, November, and December, with voting concluding at the end of January.

The rush of activity comes at the reveal: witness the hot and heavy burst of excitement on game release, then review/discussion burnout around week 4 when the new shiny of Ectocomp happens. If the IFComp games are released in regular monthly waves, each one has a better chance of being noticed and receiving fresh attention against ~24 other games instead of the entire field of 70-odd.

My suggested tradeoff to quell “my game didn’t get as much time” is to implement a secondary author entry deadline window. The first 25 games that are submitted by the normal end of September deadline are guaranteed to be in the first wave release in October. Authors who experience catastrophe or who prefer it could submit by the second “final” deadline at the end of October when the second 25 games are added to the roster and released. All remaining games would then be released at the end of November in a third wave. (The original intent deadline remains the same - open July, closed end of August - so no unexpected entries would be considered.)

As someone who does lots of test uploads, perhaps the IFComp submission page could include a “my game is complete and ready for entry” checkbox, and the date that box is ticked is what sequences the entries. Or authors could specify a preference in a drop-down: “I’d like my game to appear in the first/second/third wave.”

The voting period would run until the end of January for all games, so everything gets at minimum two months of voting and play (longer than is given now).

The carrot for getting the game submitted by the first deadline is earlier release in what becomes a four-month playing/voting extravaganza. Each of three waves gets an entire month for consideration on its own, then there’s an extra month for people to evaluate the entire comp and all its entries as a whole with more perspective and adjust/curve their scores.


I think the main question should be not decreasing the number of games, but rather how we increase the number of players.


There is no problem with the amount of games. I will say it again, for emphasis: there is no problem with the amount of games in IFComp. There should be 100+ entrants. Give me 200+. To see so many people from such different backgrounds over the past three years making unusual, innovative, fun games is something to behold and celebrate. This year, approximately eighty people made games, art, experiences—eighty people made something and felt compelled to share it with the world. That’s cool. And for some folks this is a problem? We have to bring that number down? I find this baffling.

There is no problem with the structure of IFComp. The sheer variety of games on an equal slate with one another is part of the appeal. I’m thinking about this more as an entrant than a player: I have entered IFComp twice, and each competition was an incredible learning experience. There is no better way to learn how to make compelling games than to be in IFComp: alongside creators who have been writing for decades and new writers with innovative ideas. I think this should be emphasized and encouraged.

As far as I can tell, the problem is the same as it’s been for years: the number of judges / players. (I see Ade just posted this exact thought as I was typing. What a nifty forum.) As I was writing Animalia in 2017, I remember seeing coverage of IFComp in PC Gamer, Rock Paper Shotgun, etc., and, egotistically, I confess I was hoping that such coverage would continue and I’d see my lil’ ol’ name in a big city Internet magazine. That seems to have lapsed in recent years, and it might be worthwhile pursuing those avenues in the future. Not just to attract players with egos like me — I imagine it would be valuable for increasing the number of judges as well. Maybe Kotaku or RPS or something?

(If anyone from those outlets is reading this, I volunteer. Maybe Elizabeth and I can pitch VYE to one of them, ha.)


Most seem to agree that we don’t mind many games, but would like more judges. How about letting entrants judge too, just not for their own games?

I would also like to advocate the stance that it’s not important that the “best” game wins. If we dismiss that goal, there is less need to maximise fairness in the system, e.g. with multi-round voting or voting restrictions for entrants.


Really what I’d like to see is a pairwise voting system. You are presented with two games at a time. (Perhaps filtered by genre or type to avoid material you are not interested in.) Votes would then be counted using the Condorcet method. I think this would bring in more judges because you’d only have to play two games (but you could play more). Deciding which game is better is easier than assigning an absolute score on criteria no one is clear about.

I suggested this last year and there was some skepticism about it. My suggestion would not be to change the voting for the next comp, but to use each judge’s scores to rank their choices and apply Condorcet to that. In the case where a judge scores two games equally, you award a half vote to each. You could then publish an unofficial Condorcet ranking, see how it compares to the current method, and get feedback on what people think about it.
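As a back-of-envelope sketch of that offline experiment, assuming (hypothetically) that each ballot arrives as a dict mapping game names to ordinary 1–10 scores, the pairwise tally with half-votes for ties might look like this:

```python
from itertools import combinations

def condorcet_ranking(ballots):
    """Rank games by pairwise wins (a Copeland-style tally), derived
    from ordinary IFComp scores. `ballots` is a list of dicts mapping
    game name -> score; games a judge didn't play are simply absent
    from their ballot and contribute nothing to those matchups."""
    games = sorted({g for b in ballots for g in b})
    # wins[a][b] = votes preferring a over b (ties: half a vote each)
    wins = {a: {b: 0.0 for b in games if b != a} for a in games}
    for ballot in ballots:
        for a, b in combinations(ballot, 2):
            if ballot[a] > ballot[b]:
                wins[a][b] += 1
            elif ballot[b] > ballot[a]:
                wins[b][a] += 1
            else:  # equal scores: award half a vote to each game
                wins[a][b] += 0.5
                wins[b][a] += 0.5
    # Copeland score: how many matchups a game wins outright
    def copeland(g):
        return sum(1 for h in wins[g] if wins[g][h] > wins[h][g])
    return sorted(games, key=copeland, reverse=True)
```

This is only one of the Condorcet completion methods; the real experiment would presumably try a few of the standard variants on the same ballots.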


In your view, which game should win?

I thought VYE was awesome, and along with The Short Game podcast it has been very supportive of IFComp - especially when you brought in guests to help cover more of the games. I could totally see it as a seasonal feature on a gaming site.

I think IFComp got some extra news coverage when Emily Short was writing a guest column in one of the mainstreams. Possibly someone with connections (IFTF?) might be able to do a series of press releases that Big Magazines could slap on their site with very little work if they don’t have the staff to actually play and review. Not just a “hey, the Comp is happening” but “Here are some selected games you may be interested in…” throughout the duration. Every time IF is featured on a site I read, the comments always include at least one person proclaiming “I thought text adventures were dead! Thanks for letting me know!”

I tried to ping Danielle Riendeau (of several podcasts and game-news venues) about the Comp one year because she’d mentioned Twine works several times on a podcast. I know Vinnie Caravella and Abby Russell of the Giant Beastcast both enjoy old school and indie adventures, and there was a discussion during their Game of the Year deliberations about how few games incorporate meaningful player choice - perhaps they might at least shout out the Comp in their news segment if they were aware of it. We could all try our best to reach out to the community-at-large…

I even remember someone made a wiki-entry for my and other IFComp games one year that showed up on the frikking Giant Bomb website - which was awesome.


The one with the most/highest votes, obviously :wink:

I infer he meant other side-categories besides just the top ten that games could nab. Kind of like how @mathbrush gave every game an honorary yearbook-style “Best Dressed” type of award.

Perhaps IFComp could include similar recognition-only category ribbons - in the vein of the Golden Banana and Miss Congeniality - that even voters who don’t play five games could call out: “Most Fun,” “Creepiest Game,” “Best SciFi,” “Best Dialogue,” “Best Stylistic Presentation,” etc.

My point is that it’s possible to tune the system to best determine which games are democratically considered the best. It is also possible to tune it to make sure that as many games as possible are played and reviewed by as many players as possible. These two goals may conflict; if so, I would regard the latter as preferable.


I like the idea of a longer voting period. I’ve struggled myself with the short period, not always having time available, and feeling that I can’t judge all that I want to. I also feel under a huge amount of pressure as a judge during that period, which I don’t think helps encourage more judges.

I also think it should be made even clearer to potential voters that they need to play a minimum of only five games. I’ve had quite a few people tell me that they’d like to judge, but they couldn’t play all the games - thinking they’d need to play them all. And so they ended up playing none at all, because of this misconception.

I filled out the survey but I’m not sure how helpful my response was. I was sort of middling on how I feel about the number of games. In principle I’m in favour of the more the merrier. But as a judge I am concerned about judging enough, and getting through enough in the short time available - which hasn’t grown as the number of entrants has grown.

As a past - and likely future - IF Comp entrant I’m also concerned about putting too much of a quality bar on entrants. My game was my first completed game, and there are lots of things I’d do better and differently now. But taking part was a hugely valuable process for me, and one that I’ve grown from.

One thing I do find discouraging as a judge is when entrants haven’t specified how long a game will take to play. It’s surprising how many don’t. I am limited in judging time, and more likely to play games I can fit in. That can bias me towards shorter games, but that’s another issue…

Re quality, despite what I say above, I would like to see a bit more encouragement before entry - maybe a checklist for authors to work through. Has this game been playtested? Have you filled out its summary details, e.g. length, type, etc.? Have you tested that it plays OK online? It is surprising how many don’t do all of the above. There is guidance on the IFComp website re playtesting etc., but perhaps it needs to be more prominent for entrants before they submit a game. That said, I wouldn’t like to discourage new entrants.


I think a Condorcet approach would be really nice.

I don’t agree with those who think we “just” need more judges - or, to the extent that I agree, I think it’s appropriate to make some changes to the competition to make it more engaging for casual judges.

We already say “you just have to rate five games,” which seems easy enough, but I feel like I hear a lot from judges, “I feel uncomfortable trying to score just five games when I don’t know the full range of games that are out there. I can know that out of the five games I played, I liked game X the best, but how can I give game X a 10 if there’s another game in the competition that’s way, way better? I can only rate five games if I’m extremely well-versed in the IF scene in general; I’m not qualified to just rate five games.”

Condorcet solves that problem nicely. It only asks the judge to rank the games they played; they don’t need to know objectively whether all five of them are great or all five of them are terrible.

I also think casual judges are reluctant to wade into a large number of games and pick five random ones. “What if I get five bad games? I’ll be doing my part to make the competition a success, but I won’t have fun if I just have to slog through five games and give them all 3s.”

That’s where I think multi-round elections shine. “All 20 of these games are pretty good. You can play just five of them, or, heck, play all 20 of them, because they’re probably all worth your time.”

If we want more judges, strategies like these are what we should be thinking about.


“Top reviewer” is a great idea, especially if it encourages judges to review more games.

The competition gets approximately the same benefit from attracting one new judge as it does if we convince an existing judge to rate five additional games. (And I suspect it may be considerably easier to convince judges to rate more games than it is to acquire new judges.)


I really liked that the website randomized order by default instead of staying in alphabetical order in 2019.

What about an alternative voting method that waives “You must vote on five games”? Keep the normal ballot with a minimum of five, but also have an IFComp page that emphasizes “This will take two hours or less of your time.”

Send invites to noted journalists/authors/insiders/reviewers to participate publicly, linking to an IFComp page that says: “This website will show you five games. Pick one, play it two hours or less, then vote on it. Write a review or an article about the game and your process and/or IFComp if you’d like; the link will get widely shared by the community.”


I am sceptical about Condorcet but also curious (I admit it - I am a math geek! :grinning: )
I guess one drawback is that judges who don’t like math may find it less transparent; an average rating is simpler. I am also concerned that there would be too many draws, though I don’t really know much about it. It could be interesting to test, though. I don’t expect the IFComp organizers to implement it in the web application in the near future, but if someone volunteered to do the work offline, it could be a fun side-contest in addition to the Golden Banana and Miss Congeniality!

Or perhaps IFComp may even still have the raw data from 2019, so we could try the Condorcet method on that?


If the organizers made unaggregated voting data (anonymized of course) from the competition available (last year’s or from the upcoming comp), it would be easy enough to apply Condorcet to it. I’d be curious to see how far apart the two methods are in their rankings.


Sorry, I changed my mind a bit after reading the Condorcet article thoroughly. There is a problem; it is most likely solvable, but it must be addressed:
After reading about the Condorcet method, I am of the impression that the seven variants mentioned do not take into account that some games get played more than others.


To save time, candidates omitted by a voter may be treated as if the voter ranked them at the bottom

This is not a fair assumption. If a judge has not played a game, we can’t know whether it is better or worse than all those they played. Thus a game which doesn’t get played much will be ranked at the bottom by most judges simply because they didn’t play it. The right thing would be to assume that the games a judge hasn’t played are of medium quality, not lowest quality. That is of course possible, but this problem isn’t described in the article, even though it mentions seven variants.
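To make the distortion concrete, here is a small sketch (assuming, hypothetically, ballots as dicts of scores with unplayed games absent) contrasting the “omitted = ranked last” rule with one possible remedy: simply skipping any pair the judge didn’t play in full.

```python
from itertools import combinations

def pairwise_record(ballots, missing_at_bottom):
    """ballots: list of dicts mapping game -> score (unplayed absent).
    Returns wins[a][b] = number of ballots preferring a over b.
    If missing_at_bottom, an unplayed game is treated as ranked below
    every played one; otherwise that pair is skipped entirely."""
    games = sorted({g for b in ballots for g in b})
    wins = {a: {b: 0 for b in games if b != a} for a in games}
    for ballot in ballots:
        for x, y in combinations(games, 2):
            px, py = x in ballot, y in ballot
            if px and py:                    # judge played both games
                if ballot[x] > ballot[y]:
                    wins[x][y] += 1
                elif ballot[y] > ballot[x]:
                    wins[y][x] += 1
            elif missing_at_bottom and px:   # unplayed y counts as a loss
                wins[x][y] += 1
            elif missing_at_bottom and py:   # unplayed x counts as a loss
                wins[y][x] += 1
    return wins

# One judge played both games and strongly preferred the obscure one;
# two judges only played the popular game.
ballots = [{"Popular": 6, "Gem": 9}, {"Popular": 7}, {"Popular": 5}]
bottom = pairwise_record(ballots, missing_at_bottom=True)
skip = pairwise_record(ballots, missing_at_bottom=False)
```

Under “omitted = last,” Popular beats Gem 2–1 purely on exposure; if unplayed pairs are skipped instead, Gem wins 1–0 on the only actual comparison. Which rule is fairer is exactly the question raised above.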

IFComp’s average rating method doesn’t have this problem. A game may get 10 votes with an average of 8 points, and it will place better than a game with 50 votes of 2 points each. I think it is important that the best game wins, not the most accessible one. A less accessible game may become available for online play later, or it may be ported to another IF engine.


I was envisioning this as a common, global schedule. (Since nobody wants to scroll all the way back up to the post I’m replying to, “this” was the idea that a set of games should be “featured” each week, comprising all the games.) That way people would potentially be talking about the same games at the same time.

Hanon’s idea of releasing the games in waves would do what I’m aiming for here even more so!


I’m pretty sure this was mentioned, but the games aren’t scored against each other. Not voting for a game doesn’t hurt it. Judges shouldn’t be ranking games they haven’t played.

In that case, there’s a risk that the global order affects how the games are received. Earlier games will get much more attention than later ones. I presume this is the reason that the site now displays the entries in random order by default, to ensure a level playing field.

Here I think it’s important to specify that the date of submitting a finished version should be interpreted in the coarsest way possible: if it was before day X, the game appears in the first wave; before day Y, the second wave; else the third wave. If we allow the difference in timestamps of two games to affect the probability that one appears before the other, then we create an incentive for authors to hold back on bugfixes once they have ticked the “ready for release” box.
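The coarse bucketing described above is trivial to state precisely; a minimal sketch, with hypothetical cutoff dates, might be:

```python
from datetime import date

# Hypothetical wave cutoffs; only the bucket matters, never the exact
# timestamp, so a bugfix upload inside the same window cannot change
# a game's release order.
WAVE_CUTOFFS = [(date(2020, 9, 30), 1), (date(2020, 10, 31), 2)]

def wave_for(submitted: date) -> int:
    """Map a submission date to its release wave (1, 2, or 3)."""
    for cutoff, wave in WAVE_CUTOFFS:
        if submitted <= cutoff:
            return wave
    return 3  # everything after the second cutoff lands in the final wave
```

The point of the step function is exactly the incentive argument: two games submitted a minute apart within the same window are treated identically.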

One would also need to consider how this affects the ability to fix bugs during the judging period(s). Can updates be allowed at all, in a fair way, if there are multiple waves?