[Following post my personal views only, not being said in mod voice–which is the default for all my posts, unless I say otherwise]
I’m pretty sure the reason ParserComp was a one-off was that the organizer had other commitments the next year and couldn’t do it, and no one picked up the ball. In fact, there are some posts from the next year where she was planning it, and got enthusiastic responses.
So if someone wanted to pick up the organizing mantle and do ParserComp again, I’m sure you’d get an enthusiastic response. Be the change you want to see in the world!
As for running down the top 10, it does seem like it’d be nice if the Results page included the story formats and play-online links instead of IFDB links, so you could easily see which stories were in which format and–perhaps more to the point–play the games directly from there. I had the same issue when trying to figure out which games were parser. Maybe someone should suggest this to the organizers! (Um, I guess I’m not being the change I want to see here. I’ll write the organizers myself.)
Another historical note is that a lot of IF discussion used to take place on a Usenet group called rec.arts.int-fiction, many of whose participants eventually came here because of familiar issues with Usenet… and, as I understand it, raif was started by aficionados of hypertext fiction, long before Twine, who were, much to their annoyance, completely swamped by people talking about parser games.
(FWIW I’m someone who basically only ever works in parser when he does anything, partly because I’ve never been able to figure out how to do anything in Twine beyond straightforward branching and checks based on whether a certain passage has been visited.)
Brian–who was that? I didn’t know the author of Diddlebucker was a previous entrant.
Author of one of those two parser games that placed in the top ten here. I also played and reviewed (on the authors’ forum) all the games in the competition.
If you’re searching for good parser games to play, I strongly recommend looking at the raw scores rather than the rankings: There are a lot more games in IFComp now, and the raw scores are more comparable across competitions. In 2013, Machine of Death had a 6.00 raw score and earned 8th place in the competition. In 2018, Diddlebucker! had a 6.00 raw score and earned 30th place.
To make it easier for those looking for top-scoring parser games, here’s a list of all parser games in IFComp 2018 with a 6.00 score or better:
Alias ‘The Magpie’
Basilica de Sangre
Terminal Interface for Models RCM301-303
The Origin of Madame Time
The Temple of Shorgil
Dynamite Powers vs. the Ray of Night!
If you like parser games, every one of these is worth playing. You can go even further down and find some good parser games, too. I could hardly put Six Silver Bullets down, for example.
(And, of course, there are plenty of truly excellent choice-based games in this competition.)
I agree that choice-based games have always been with us; I’m not saying they shouldn’t be part of the comp. What I am saying is that the two should be split into categories: one for parser and one for choice-based. For the first Interactive Fiction Competition, Inform and TADS were considered different enough to have their own categories. I would say parser and choice-based are different enough to rate their own categories.
It seems to me that many people (including myself) would like to see more good parser games.
It seems to me that many people (not including myself) want to do this by having fewer choice games or putting them in a corner.
But is this the best method? One of the absolute best parser authors of this decade is Chandler Groover, who produced Midnight. Swordfight., Eat Me, and many others.
He started in Twine, and began writing in parser once he saw that parser games were getting a lot of attention (he can correct me if I’m wrong). If there had been a division, he would be gone.
And as an author, there is very little difference in writing the two. The idea that one is more ‘low effort’ than another is silly. Lynnea Glasser, Emily Short, Hanon “The Hanon” Ondricek, Andrew Plotkin, CMG, and many others have moved seamlessly back and forth between the two genres. It inspires greater creativity and cross-pollinates between two fertile fields. Think of Detectiveland!
I think it would be a shame to try to grow parser games by decreasing choice. So why not find other ways to support the parser, like reviving parser comp? That would take just as much work as coming up with rules for splitting up the current comp.
Wait, what exactly is the objection here? A parser game won the Comp. The last time a non-parser game won was Detectiveland, and before that, uh… Photopia, in 1998? (Which obviously was a parser game, though choice-based “in spirit.”)
I’m as concerned as anyone that the Nelson-esque grand parser opuses are becoming a lost art. But I’m not seeing cause for panic in the results of this or previous years’ comp results.
I write parser games just because I get ideas for parser games, not because they get more attention. But you’re totally right that I wouldn’t be here if there were a division. I never played Infocom. I discovered IF through Porpentine. Choice brought me to parser.
I can tell you my problem with this. It centers on the game that won the competition. In this comp, the parser games I scored a 7 ended up down around 16th place. It used to be that if you took the 7s, you got 4th place. Because I knew something like this would happen, I didn’t even feel I could score Alias ‘The Magpie’. I played it for 2 hours 45 minutes, wrote a review, and still didn’t score it, because I knew that anything lower than a 10 would drop its average. I also knew it was good enough to win the competition. I didn’t want to hurt its chances because of the way things are weighted right now, so I didn’t vote on it, and I don’t like that. It kind of puts me in a bind; I’m not really sure what I should do going forward, and I might have to quit playing the comp.
Even if you had scored the game a 1, its score would only drop to an 8.13, which is still well above the #2 game at 7.82. In fact, you would need to add five additional 1 votes for it to lose the top spot.
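For anyone curious how that works, here’s a minimal sketch of the arithmetic. The winner’s actual vote count isn’t public in this thread, so the figures below (a mean of 8.20 over 102 votes) are assumptions, chosen only because they reproduce the 8.13 and five-more-1s numbers above:

```python
def mean_after_votes(mean, n, new_scores):
    """Average after appending new_scores to n existing votes with the given mean."""
    return (mean * n + sum(new_scores)) / (n + len(new_scores))

# Assumed figures: mean 8.20 over 102 votes (not confirmed in the thread).
print(round(mean_after_votes(8.20, 102, [1]), 2))      # one extra 1-vote
print(round(mean_after_votes(8.20, 102, [1] * 6), 2))  # six 1-votes in total
```

Under those assumptions, a single 1 only drags the average down to about 8.13, and it takes six 1s in total to dip below the runner-up’s 7.82.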
I suggest that going forward you vote your honest opinion.
If one set of voters plays only the parser games, and rates them according to a harsh scale, whereas a disjoint set of voters plays only choice games, and rates them generously, that is indeed a legitimate problem with the current voting system.
I don’t think that’s what’s going on, though. As I mentioned above, a parser game won the comp, and although other parser games placed significantly lower, there are reasons to expect them not to be crowd-pleasers: “Junior Arithmancer,” “The Temple of Shorgil,” and “Ailihphilia” are narrowly-focused puzzle boxes, “Terminal Interface for Models RCM301-303”, “Tethered,” and “The Origin of Madame Time” are very short, and “Diddlebucker!” has a coarser, older-school aesthetic than the very polished “Magpie” (which is not intended as disparagement of any of these games—I played them all, and enjoyed them.)
This one’s easy: play all of the games (or at least, a balanced selection of choice and parser games) and rate them on the same scale. Even if you don’t think you’ll enjoy the choice games, give them a try, and be open to being pleasantly surprised. Many choice games are not my cup of tea, but I quite enjoyed “Erstwhile” and “Cannery Vale” this year, for instance.
In the competition, I think “parser-based” really means “keyboard-driven”, and “choice-based” really means “mouse-driven”.
Unfortunately, I have a reflexive tendency to think “parser-based” means “adventure game”, and “choice-based” means “CYOA story”, but of course sometimes that is just wrong. Detectiveland is a command-based adventure game where you click to construct the commands. It has much more in common with a traditional parser adventure than it has with a CYOA story. But I think it gets classified with “choice” in the choice-vs-parser system. (This year, I felt the same way about Master of the Land and Adventures with Fido.)
I wonder if it would be more straightforward to classify “keyboard” vs. “mouse” instead of parser vs. choice. (Also, “parser” is such a jargony inside-baseball word – I wish it were only used in technical discussions.)
Really, I wish there was a good way to differentiate “adventure game structure” from “CYOA structure”, but I realize there are in-betweens, and entries that fall outside those categories.
Thanks! I knew my game might be limited in scope. But I wanted to make up for it in breadth (and totally not beating the concept into the ground, so stop saying that!), and I wanted it to be enjoyable for those who enjoy this sort of thing. Or at least, if anyone read the walkthrough, they’d find it amusing.
I’d like to echo evouga, so that you know it’s not just another judge saying these things, but a contestant. The short glib answer is “these things tend to even out, and you have to have faith and not overthink things. Use the energy you might’ve used to overthink to judge a few more games. You’ll be happier, and so will everyone in IFComp.” Besides, you can always normalize your own score ratings later.
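In case “normalize your own score ratings” sounds abstract: one simple interpretation is a linear rescaling that shifts your personal scores to whatever average and spread you intend. This is just a sketch of that idea; the target values and the sample scores are arbitrary placeholders, not anything IFComp uses:

```python
from statistics import mean, stdev

def normalize(scores, target_mean=6.0, target_sd=1.5):
    """Linearly rescale a judge's scores to a chosen average and spread.
    The default targets are placeholders, not official IFComp figures."""
    m, s = mean(scores), stdev(scores)
    return [target_mean + (x - m) / s * target_sd for x in scores]

# A hypothetical harsh judge whose raw scores cluster low:
harsh = [3, 4, 4, 5, 5, 6, 7]
print([round(x, 1) for x in normalize(harsh)])
```

In practice you’d probably also want to clamp the results to the 1–10 voting range before submitting anything.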
Don’t worry about the spotlight effect. OK, it’s not quite the spotlight effect. But don’t worry too much. It’s good that you care enough to realize your vote isn’t without consequences. There’s a tendency to say “I dunno, it might be just me,” but that is mitigated by others’ votes. Your vote will, long term, not leave contestants upset. Other people will see things differently, maybe even overrating things; you are part of the average, and you don’t need to feel guilty about shifting ratings. If your vote does have too much weight, then the people promoting and organizing the event need to work on getting more people to vote. That’s not abdication of responsibility. That’s just recognizing you can’t do it all, but you want to do what you can.
I say this as someone who read an (in my opinion) unfair “classic smackdown” review that gave me a 2/10 that may have cost me 3 places in the rankings. I got over it quickly enough and, now I think of it, I didn’t check how they rated the other three games I’d have leapfrogged without that 2. And it isn’t worth the energy to revisit the details.
Because I was pleased with what I did, and while I felt the review was out of line and overly magnified a small bug, it gave me something early to focus on for a post-comp release. And it underscored a general thing that’s popped up in some of my other games.
An (imperfect) corollary for me is: if I am testing a game and noticing something that looks odd, do I say nothing because there’s a possibility nobody saw it yet and it might waste the author’s time to bring it up? Everything is evidence that helps you rate a game. And with IFComp’s new feedback, you can just ping the authors with something you liked, or a small bug you found. In a way, that is more valuable to me than a high number. It gives me something to consider and fix and tinker with.
And speaking of testing: DrkStrr, you sound conscientious about how you go about evaluating things. I don’t know if you tested any games this year, but I bet a lot of writers could use your work next year, if you want to volunteer. I know I felt a lot more comfortable discussing a game’s faults, and understanding that mistakes happen, after testing a few games myself.
Also, let’s not overthink the IFComp ratings and placements! Just to take my own game as an example, it has a mean score of 6.96, a standard deviation of 1.58, and a total of 49 votes. The standard error of the mean is 1.58 / sqrt(49) ≈ 0.226. This means that the 95% confidence interval for the true mean – even assuming that all judges are ‘perfect’ and that there is no systematic bias of any kind – runs from 6.52 to 7.40. So even assuming that the current scores are a perfect measure of the game’s quality, Terminal Interface could still easily have placed anywhere between 9th and 24th place. That’s the kind of confidence we are talking about. So it’s really not worth worrying too much about a few places here and there!
(This is even more true for games with low vote totals. The mean score of Six Silver Bullets could easily have been anywhere between 4.88 and 6.82, which means it might well have placed as high as 19th or as low as 51st place!)
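The confidence-interval arithmetic above is easy to check yourself. A minimal sketch using the normal-approximation interval (mean ± 1.96 standard errors) and the Terminal Interface figures quoted above:

```python
import math

def confidence_interval(mean, sd, n, z=1.96):
    """Normal-approximation 95% CI for the true mean score."""
    sem = sd / math.sqrt(n)  # standard error of the mean
    return mean - z * sem, mean + z * sem

lo, hi = confidence_interval(6.96, 1.58, 49)
print(round(lo, 2), round(hi, 2))  # roughly the 6.52-7.40 range quoted above
```

The same function applied to a game with fewer votes gives a visibly wider interval, which is exactly the low-vote-total caveat above.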
Yeah, dichotomies tend to be too simplistic to capture reality, but trying to come up with one can lead to nice insights. I’ve been thinking along these lines:
On the one hand, we have stories with a single plotline, gradually discovered by exploring and navigating a geographic region. Worldbuilding happens mainly through setting. After reaching the single, successful end of the story, the entire plot has been laid bare, there are no significant secrets left to discover, and the replay value is akin to the experience of rereading a good book – you get to revisit something familiar that you like, and possibly discover more details. These are observation-centered interactive stories. Examples: Trinity, Alias ‘The Magpie’, Lux, Erstwhile.
On the other hand, we have stories where you navigate, not a geographic region, but a branching tree of plotlines. Reaching a cutscene or an ending is comparable to entering a room in a game of the former kind. There are many different endings. You have to replay several times, gradually learning new aspects of the story, finding more content that contributes to the worldbuilding. As you replay, you skim through paragraphs you’ve read before, just like you skim through the descriptions of already visited rooms when speeding across the sprawling map of an observation-centered story. These are action-centered interactive stories. Examples: Animalia, Bogeyman, Ostrich, Midnight. Swordfight.
My examples are subjective, of course, and the model is far from perfect. Lux and Terminal Interface are primarily observation-centered, I think, but each has two endings. And where on earth would you put Cannery Vale?
Suggestion: Start your own annual parser-only comp where you can set the rules you like and exclude anything you don’t feel belongs. Heck, I’d enter it, I bet others would too, and you might even manage to peel some important parser work out of IFComp!
It’s definitely worth a shot if you feel as strongly as you do.
You wouldn’t even have to start your own comp, you could just run your own voting procedure in parallel with IFComp, and collect votes for only the subset of IFComp games that you want to consider. (You could do something similar for the XYZZYs, too, which maybe makes more sense.)
Rather than dividing up IFComp, what is really needed is a competition specifically for BIG adventure games and simulations that take much longer than two hours to play.
IFComp always includes games that can’t be finished in two hours, simply because this is where all the prizes and reviews are, and authors would like to get some reward for the many hundreds of hours that they spend making their games, often over periods of years.
Many authors have an idea for a game that should play for longer than two hours, but are encouraged to cut it back to fit into the IFComp time limit to maximize their score.
To encourage authors to put in the effort needed to write a big game, you would need a reasonable prize pool, although not as big as the main IFComp’s, as there would be far fewer entries.
To keep the number of entries at a manageable level for judges, I would have an entry fee (that becomes part of the prize pool), as was done for Spring Thing a few years ago.
The competition should also allow entries that were entered in a shorter form in the main IFcomp and then later greatly expanded.
I like the idea of a separate comp focused on longer games – like Le Grand Guignol is to La Petite Mort in Ectocomp. Anything that gives authors more opportunities to release games and players time to experience them all – also, hopefully, not all packed at the end of the year.