ParserComp 2022 - PLEASE VOTE!

Woohoo! Twenty brand-spanking-new parser games to gobble up. Go have fun!


Too tired now to do more than click through two or three of them…

Cool games here!

Tomorrow the playing begins…

Thanks for working so hard out there!


Yay, congrats to all the entrants! Lots of fun-looking games to dig into, in a variety of genres, so I’m looking forward to exploring them (I’m planning on doing a review thread, as is my wont).


Good luck everyone! 14 entries this year.

Very nice!


A nice number of new games! Thanks to the authors and organizers for the fun in these hot summer days.


Something strange happened last night. I waited until the submission time was up, and there were 17 entries. This morning there are 19 (last year there were 18 entries).
CONGRATULATIONS to the participants and the organizers… Now let’s play.
I know firsthand that there are very good games to play this year.



I think I see an issue with the voting sheet.
There is no BEST GAME category?
How are you going to add or subtract points for the 1-point items? This could distort the total points of many games! Furthermore, if you add up the points across all categories, many games could be downvoted depending on their genre if they lack NPCs or puzzles.

Itch’s voting system is unideal in many ways - but we just have to live with it. Their main issue (which they don’t seem bothered about changing) is that you can’t leave a voting category blank, ie you can’t vote zero stars for ‘not applicable’.

The guidance is to vote 1-5 stars in the main categories; in the supplementary categories, vote 1 star for ‘not applicable’ (if, for example, a game uses no graphics and sound, or has no feelies of any sort), and otherwise vote 2-5 stars.

Within the limitations of the system, this seems the best compromise for now. There’s no ‘best game’ category as that will be determined by the total of all scores (we’ll weight scores and calculate outside of itch).


Possibly another couple of late submissions on their way, from folks who screwed up the submission process (but got in touch with us just before the deadline). Stand by!


Yuhuuuu!!! ^.^


It’s really neat to see old names and new names and current names.

Neil DeMause is one name from way back. His last game was in 2002. My immediate question is, does anyone know the date of its release? (It’s TADS, so you can’t use the Inform trick of looking at the version number. Or, well, I can’t, or I don’t know.) It’d be a cool oddity to have a comeback after a 20+ year break.


Hmm, so if we have a game that we think is perfect and deserves a 5, we actually can’t give it 5.0 if it lacks graphics or some other form of multimedia. Which is a bit strange for text-based games. Under the current weightings, the best we can give a perfect game is:

- 4.8 if it has no multimedia
- 4.32 if it is puzzleless
- 4.12 if it is puzzleless and has no multimedia
- 3.72 if it is puzzleless and has no multimedia, no feelies, and no help or hints

Not because puzzles are bad, or because multimedia is bad, or because the extras, help or hints are poorly made, but just because these elements are not part of the game. And these elements are optional for parser games.

So, how do we vote without penalizing games for not having elements that are optional?
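As a sanity check, the figures quoted above can be reproduced with a short sketch. The weights here are assumptions inferred from those numbers (puzzles at 17%, and 5% each for the three supplementary categories), not anything stated officially by the comp:

```python
# Assumed category weights, inferred from the figures quoted above.
# A forced 1-star (instead of 5) in a category costs 4 * weight points.
ASSUMED_WEIGHTS = {
    "puzzles": 0.17,
    "multimedia": 0.05,
    "feelies": 0.05,
    "help": 0.05,
}

def best_possible(missing):
    """Best total (out of 5) for a perfect game whose `missing`
    categories get a forced 1-star rating."""
    penalty = sum(4 * ASSUMED_WEIGHTS[c] for c in missing)
    return round(5.0 - penalty, 2)

print(best_possible(["multimedia"]))                                # 4.8
print(best_possible(["puzzles"]))                                   # 4.32
print(best_possible(["puzzles", "multimedia"]))                     # 4.12
print(best_possible(["puzzles", "multimedia", "feelies", "help"]))  # 3.72
```

Under these assumed weights, the sketch reproduces every figure in the post, which suggests the penalties really do stack the way described.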


Yes, it’s a fudge whichever way you look at it. We’d be fine if 0 stars was an option, but it ain’t… so we proceed as best we can, under the assumption (probably faulty, but OK for our purposes) that all the games will feature the things in the main categories and that some might not feature the things in the supplementary categories. The weighting in the latter categories is only 5% each, so the effect of penalizing a game for not featuring one of those criteria will be minimal in any case. We have faith that it will all come out in the wash and the results will be an accurate reflection of people’s sympathies towards the games.

And afterwards we’ll look at it all and work out a better way to do things (possibly outside of itch) for next year.


Thanks for your work! All this voting stuff seems a bit messy : /

But, hey! We have games! Yipeeeee!!!


But 5% is not that small… if a game has none of those 3 optional elements, it is penalized 0.6 points, so the best such a game can get is 4.4, just for not having optional elements.

I really don’t know how to vote under this system, but I can’t just give a 1 to a game because it lacks an optional element. Giving a 1 in such cases was acceptable when there was a general Best Game category used for the ranking, but in this system those 1s will affect the ranking.


The organizer has written that they will calculate the ranking values outside of itch.io. This is a lot of work, but there seems to be no other possibility.


It just means that a perfect game that features all the main categories + all the optional categories will score higher than a game that features all the main categories and none of the optional categories. A game that scores poorly in all the main categories (85% of the total) but has top-notch graphics, help and feelies (15% of the total) [a fairly unlikely scenario] won’t beat an excellent game that doesn’t have those optional features.
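That comparison can be sketched in a few lines, assuming the 85%/15% split described above; the specific star averages below are made up purely for illustration:

```python
# Hypothetical comparison under an assumed 85% main / 15% supplementary split.
def weighted_total(main_avg, supplementary_avg):
    """Combine average star ratings using the assumed 85/15 weighting."""
    return 0.85 * main_avg + 0.15 * supplementary_avg

# A game weak in every main category but with perfect extras...
weak_main = weighted_total(2.0, 5.0)    # ~2.45
# ...still loses to an excellent game forced to take 1s on the extras.
strong_main = weighted_total(5.0, 1.0)  # ~4.40
assert strong_main > weak_main
```

Even in this lopsided scenario, the excellent game without optional features finishes roughly two full points ahead.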

(The proviso here being that I am not, by any stretch of the imagination, a statistician).

The idea of not having a bald ‘best game’ category is that the other categories contribute directly to that measure, ie the winning score.

Anyway - it is what it is, and we’ll see how things shake out.


Given the bitter partisanship, bad-faith rules lawyering, and hyper-competitive rancor that characterized last year’s ParserComp, this feels like it’s courting disaster — but I suppose we’ll just need to muddle through and hope the eventual human toll doesn’t wind up being too dear a price to pay.

(I am being silly, last year was lovely and I’m sure this will be fine)


We have been discussing the scoring issue for some time. We are going to use the raw scores, doing our best to normalize each game’s rating to minimize the effect of not having supplementary material. The ratings will be scored in at least two ways.

We may end up with a dual scoring system that has winners ranked in each track.

I am already working on a more effective scoring system for next year outside of the itch system.

Thank you for your input.


@fos1 and I have discussed this at length and decided that duelling by rapier (with successive rounds of elimination) is the only fair way to settle things if the scoring system proves inadequate.