@Jacqueline the survey form seems to have been closed prematurely, and isn’t accepting new submissions/responses. Is that meant to have happened? I thought it was intended to stay open for some time to come. Thanks…
I see that there are 1843 followers for IFCOMP on Twitter, so what I really would like to understand is why can’t we even get 100 votes for a competition entry?
So what about most of those 1843 IFComp Twitter followers: have they quit the IF scene? Are they too busy? Disinterested? Something else?
I’m being serious when I ask why the IFCOMP can’t have 1800 people voting in the competition. I know that’s not realistic, but I’d be very interested to understand why Interactive Fiction seems to be barely engaging with its own community, and what can be done to remedy that.
I think a lot of people don’t feel qualified as judges. Even though you only have to play 5 games and rate them, many people feel that they shouldn’t judge because they haven’t played that many games. Or they may feel that they don’t have enough experience to tell what’s good and what’s not.
Also, voting hasn’t gone down all that much. At its former height, around 2000, Being Andrew Plotkin got 134 votes. This IFComp, +=X got 129, and there were significantly more games this time around.
Speaking for myself, I do find that I am in exactly the position Brian suggests. Although I know that in theory I can vote so long as I’ve played five games, that seems really unfair. Also, to be frank, I find the number overwhelming now.
Rating games on a 1-to-10 scale is difficult when you can only play a small fraction of the entries. I played a few games last year, but didn’t feel qualified enough to rate them. Am I being too hard on people? Not hard enough? I just felt I didn’t have a good enough frame of reference to be fair, and so I wound up not judging.
I think participation could be improved by changing how voting is done.
Everyone feels qualified enough to rank the games they’ve played relative to each other. Saying I liked A better than B, better than C, is easy. So, have people do that and then use pairwise vote counting to tally the results.
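As a minimal sketch of the pairwise counting idea described above (the ballot format and game names here are made up for illustration; this is one simple Condorcet-style tally, not an official IFComp procedure):

```python
def pairwise_tally(ballots):
    """Count pairwise preferences from ranked ballots.

    Each ballot is a list of game names, best first; games a judge
    didn't play simply don't appear on that judge's ballot.
    Returns a dict mapping (a, b) -> number of ballots ranking a above b.
    """
    wins = {}
    for ballot in ballots:
        for i, a in enumerate(ballot):
            for b in ballot[i + 1:]:
                wins[(a, b)] = wins.get((a, b), 0) + 1
    return wins

def condorcet_winner(ballots):
    """Return a game that beats every other game head-to-head, or None."""
    wins = pairwise_tally(ballots)
    games = {g for ballot in ballots for g in ballot}
    for g in games:
        if all(wins.get((g, h), 0) > wins.get((h, g), 0)
               for h in games if h != g):
            return g
    return None

# Three judges, each ranking only the games they played:
ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]
print(condorcet_winner(ballots))  # prints "A"
```

Note that a Condorcet winner doesn’t always exist (preferences can cycle), which is part of why real pairwise methods need tie-breaking rules on top of this basic tally.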
Speaking for myself I would be delighted to play and rate all of the entries, however as I understand it, I am prohibited from doing so because I am entering a game myself. If the number of entries is on the rise that could somewhat explain a shortage of judges. You’ve got me a little worried about finding enough playtesters now, though…
I will definitely play and vote, and my qualifications as a judge do not stretch further than six months of playing IF (meaning this will be my first IFComp!).
All methods of voting have faults, but here I don’t think it matters much. There will be reviews, criticism and praise, and a lot of good games. Whichever game wins will be good, and all good games will probably receive some worthy attention.
I don’t think the Condorcet method works if different judges rank different numbers of games. It works for elections because if you don’t rate Candidate X at all, it’s safe to assume you don’t want them to win. But probably most IFComp judges don’t play all the games, and would be happy for the winner to be a good game that they didn’t get round to.
Besides, personally I’d almost always find it MUCH easier to rate two excellent games as 10s than to choose which is better.
I won’t be entering the competition this year, so am really looking forward to playing and judging as many of the games as possible, and hopefully reviewing them here too.
Also to any entrants looking for playtesters I’m happy to test any games that I can run on my Mac (so either Mac native, Unix/Python, web-based or most game files, inc Inform and much TADS). Drop me a PM if you are looking for another tester. Happy to help!
I think someone with better math skills than I would have to analyze that. I guess what I’m really looking for is a crowd-sourced bubble sort of the entries. I was hoping that Condorcet would provide that.
I think the official way to address that would be to have runoff elections, narrowing the field for the second round of voting to a small enough number of candidates that you could reasonably try most or all of them, e.g. re-running the vote with the top 10 candidates as finalists.
I think many people would participate in the second round and not the first, and that they’d find time to play at least five of the top ten, which is at least a representative sample.
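The narrowing step suggested above is simple to sketch (a hypothetical illustration; the score data and the top-10 cutoff are assumptions, not an actual IFComp rule):

```python
def finalists(first_round_scores, top_n=10):
    """Pick the top-N games from first-round results as the
    finalist slate for a second round of voting.

    first_round_scores: dict mapping game name -> average 1-10 score.
    Returns the finalists, best first.
    """
    ranked = sorted(first_round_scores,
                    key=first_round_scores.get, reverse=True)
    return ranked[:top_n]

# Hypothetical first-round averages:
scores = {"A": 8.2, "B": 9.1, "C": 7.0, "D": 8.9}
print(finalists(scores, top_n=2))  # prints ['B', 'D']
```

Judges in the second round would then only need to rank a slate of ten, which is the “representative sample” point made above.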
Having said that, you can simulate this for yourself just by observing the reviews for games while the contest is running. After a couple of weeks, a few games will have a larger number of reviews, and if you look at the scores in those reviews, you can identify some front-runners. Then, try a few of those.
Once you’ve played a few possible front-runners, you can probably assume that any other game you try can be fairly compared against them, allowing you to more confidently rate a game from 1 to 10.
That’s the way I normally handle this for the SF Bay Area IF Meetup. During IFComp, but before IFComp is finished, I look for games with a large number of reviews and games that seem to have high averages. (A spreadsheet normally forms, which helps considerably.) Then I suggest those games at the meetup.