I don’t think this is true. It was slow, but not that slow.
(Maybe someone with the actual hw can verify.)
I can’t speak for Zork specifically. I’m pretty sure I played it on a DOS PC, not a C64.
Of the many other adventure games I played on a C64, I can definitively say that none of them were that slow. Not even the ones I played on CompuServe over a 300 baud modem.
Yes, but you must remember we are talking about ParserComp here - that most improbably and consistently controversial of text game competitions!
Anyway, there’s quite enough food for thought in this thread as it is, so let us sleep on it / enjoy a leisurely breakfast (as your local time zone demands) and then we will work out a way forward.
(I had to Google ‘Kobayashi Maru’. It does feel rather like that, doesn’t it?)
Having internet wars about this does not help; let us look ahead to improving ParserComp 2026.
One way to minimize the effect of some people being much better at promoting their game outside the traditional IF circles would be to apply a much simpler calculation method for the results, just like IFComp: the score could simply be the average of all submitted scores. The way it has worked the last few years (this year too?), games with fewer players got penalized. That will not happen with the IFComp method. The drawback is that we then need to prevent games with VERY few ratings from winning the competition by accident, e.g. a game whose only score was a 10. This can be done with much simpler methods than the advanced statistics that are currently popular.
Just a very simple example (no method is perfect after all):
Games with fewer than 5 votes are automatically ranked lower than games with 5 or more votes. This is, after all, much more forgiving than IFComp’s (approximate) 10-vote rule, where games with fewer than 10 votes (approximately) are not ranked at all. Other simple measures could be applied instead. No statistics expert can prove that one method is fairer in real life than another.
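To make that concrete, here is a minimal sketch of the rule (the threshold, the data and the function names are all mine, purely for illustration):

```python
# Sketch of the proposed rule: plain average score, but games with
# fewer than MIN_VOTES ratings always rank below games that met
# the threshold. All numbers here are invented for illustration.

MIN_VOTES = 5

def rank(games):
    """games: dict mapping title -> list of scores (1-10)."""
    def key(item):
        title, scores = item
        avg = sum(scores) / len(scores)
        # Sort by (met threshold, average), both descending.
        return (len(scores) >= MIN_VOTES, avg)
    return sorted(games.items(), key=key, reverse=True)

example = {
    "Game A": [8, 9, 7, 8, 9],  # 5 votes, average 8.2
    "Game B": [10],             # 1 vote, average 10.0
}
for title, scores in rank(example):
    print(title, len(scores), sum(scores) / len(scores))
# Game A ranks first: a lone 10 no longer wins by accident.
```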
Just an idea to get rid of the enormous impact of wide game promotion…
To reiterate my previous post: these are extremely impressive stats, and I’m curious about which of your targets were most effective. I’d love to see the traffic breakdown on your itch.io dashboard.
As someone who does have a social media following from blogging about Japanese subculture media, and who has people play my games because of it, I agree with MathBrush and find it rather improbable that “wider game promotion” has much to do with what happened.
I don’t think linking games to other places is sketchy by itself. It’s a valid marketing strategy in many circumstances, and I can see potential confusion around what it means to “excessively self-promote” arising as an overcorrection to what allegedly happened. Notably, neither of the two games in question links to the voting page for ParserComp, so I’m skeptical that people who found a game via Reddit or elsewhere would even be aware it was submitted to ParserComp (unless they happen to be on desktop and notice the top-right tab), let alone know how to vote in the competition properly. This is what makes me skeptical of the claim that the games are bringing in new people in the first place.
And ParserComp games AFAIK don’t get that many votes in the first place, which can create new problems for the organizers.
So, I think the issue is more about potential vote duplication/manipulation. It is very easy to game the current systems because they aren’t tracking how many games the players played or some other metric that shows they actually care about the competition’s spirit. Even if we assume there’s no foul play in this year’s ParserComp, the concerns about the voting being rigged are still real and legitimate. If not this year, perhaps next year.
I like the idea of using IFComp math, where the score is the average of all ratings regardless of how many ratings a game receives, and I see no problem with setting a minimum number of votes to place and a minimum number of scored games to judge; i.e., you can’t just pop in and give one game five stars, because that’s how vote-flooding occurs.
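A rough sketch of what that judge-side minimum could look like, assuming scores are collected as one ballot per judge (the 3-game minimum and all names are invented for illustration):

```python
# Sketch: only count ballots from judges who rated at least
# MIN_GAMES_JUDGED entries, to blunt one-game vote-flooding.

MIN_GAMES_JUDGED = 3

def filtered_averages(ballots):
    """ballots: dict mapping judge -> dict of title -> score."""
    totals = {}
    for ballot in ballots.values():
        if len(ballot) < MIN_GAMES_JUDGED:
            continue  # drive-by ballots are ignored entirely
        for title, score in ballot.items():
            totals.setdefault(title, []).append(score)
    return {title: sum(s) / len(s) for title, s in totals.items()}

ballots = {
    "judge1": {"Game A": 7, "Game B": 6, "Game C": 8},
    "judge2": {"Game A": 8, "Game B": 5, "Game C": 7},
    "drive_by": {"Game B": 10},  # one five-star pop-in: dropped
}
print(filtered_averages(ballots))
# {'Game A': 7.5, 'Game B': 5.5, 'Game C': 7.5}
```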
I know everybody’s already weighed in here, but I think I have some relevant stuff to add.
Why me?
In 2010, I cofounded Choice of Games, a publishing house for interactive novels.
In 2012, this happened.
If votes for “Zombie Exodus” had been allowed to stand, we would have flooded the ballot box, sweeping every category, even categories that don’t really fit, like Best Puzzles. (“Zombie Exodus” barely has puzzles!)
ParserComp has a rule against soliciting votes. It doesn’t seem like you know why that rule exists. (I’m not even sure you care why that rule exists.)
So, I think it’s worth saying a few words about why ParserComp has a rule against “soliciting votes.”
That rule was not in place for XYZZY 2011, but it’s now commonplace among IF competitions, and totally unheard of in other elections. (Imagine telling political candidates that they aren’t allowed to ask people to vote for them!)
Why on earth would anyone have such a rule? Is it because we only want to limit the competition to card-carrying party members?
No. IF competitions are elections, but the spirit of these elections is that the entrants will compete on “quality” and not “awareness,” and that there’s a meaningful difference. We’re trying to elicit and discover the best games, not the best marketing.
More people were aware of “Last Audit of the Damned” than other games, but does that make your game better?
That is “the whole point of what we’re doing.”
The gap between awareness and quality matters more when fewer people are already aware of the candidates. When voters don’t know anything about the candidates, you can win an election just by making people more aware of you than of your competition.
The IF community and ParserComp are small enough that most people don’t know about us. So, if solicitation of votes is allowed, the election will be won and lost entirely based on who can do the most marketing.
Even if ParserComp’s entire purpose were to bring people to the IF community, that is not what you actually did.
The voters who voted for “Zombie Exodus” in 2012 mostly didn’t join the “IF community.” For the most part, they didn’t even try the other nominees. (Zombie Exodus is a great game, but it’s hard to imagine why anyone would vote that Zombie Exodus has better puzzles than PataNoir, which won “Best Puzzles” only when votes for “Zombie Exodus” were disqualified.) The voters for “Zombie Exodus” were part of the “Zombie Exodus” community, perhaps the ChoiceScript community, but not the IF community.
I think the same applies to “Last Audit of the Damned.” Instead of driving traffic to your game, you could have been driving traffic to ParserComp and the IF community at large. If you’d posted “Please vote in ParserComp!” instead of “Last Audit of the Damned, now live at ParserComp 2025,” all games would have received more ratings, and we would all be closer to knowing which game was actually best.
Other IF competitions ask judges to rate a certain minimum number of entries, strongly encouraging fans of a particular game to at least try other games, giving them a chance to join the broader community. (Or at least helping to detect when voters are voting in bad faith, e.g. when a voter gives “Last Audit of the Damned” a perfect score and gives “EYE” and “Swap Wand User” the worst possible score.) I think a minimum number of rated entries would be a good addition to ParserComp’s rules.
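As an illustration of that detection idea, here is a sketch that flags maximally polarized ballots; the 1–10 scale and the check itself are assumptions for the example, not anything ParserComp actually uses:

```python
# Sketch: flag ballots that give one game the maximum score and
# every other rated game the minimum -- a pattern suggesting
# bad-faith voting rather than honest judging.

MAX_SCORE, MIN_SCORE = 10, 1  # assumed rating scale

def looks_bad_faith(ballot):
    """ballot: dict mapping title -> score."""
    scores = sorted(ballot.values(), reverse=True)
    return (len(scores) >= 2
            and scores[0] == MAX_SCORE
            and all(s == MIN_SCORE for s in scores[1:]))

print(looks_bad_faith({"Game A": 10, "Game B": 1, "Game C": 1}))  # True
print(looks_bad_faith({"Game A": 10, "Game B": 6, "Game C": 7}))  # False
```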
As Denk points out, score voting can also help, as long as every game has “enough votes.”
But, as long as our goal is to find the “best game,” and as long as we’re doing it with an election where the vast majority of people have never heard of us, we’ll need to have rules against soliciting votes, both to avoid incentivizing solicitation, and to ensure that the best game wins.
You’ve quite clearly broken the rule against soliciting votes. I hope you can now see why IF competitions have that rule. I think it would be wise of you to acknowledge the rule and disqualify yourself, rather than asking us to “thank you” for what you’ve done.
For my part, in the years since 2012, I’ve done my best to support the IF community, not just my own company. I think you have an opportunity to do the same now, if you want to take it.
And… listen, as regards the community…
“Stultifyingly boring.”
Do you even want to be a part of the (small) IF community? Being part of that community means being among people who can at least respect Twine games.
Are you here to promote the community, and to help us build/find the best games, or are you just disappointed at how little the community benefited you?
Do you care whether your game was better or worse than any of the others? Have you developed a sense of taste in what makes good or bad interactive fiction?
Michael, did you even try the other ParserComp entries?
I’d like to clarify something here. The voting was done via a Google form. Anybody could vote; you did not need an itch.io account, but you did need to provide an email address. This was probably to ensure that the same person did not vote on the same game multiple times. However, it’s easy to create multiple email addresses. As the organisers have the email addresses, they could easily send an email to any voters who look suspicious to (a) check that the email address is legitimate, and (b) ask for some form of verification to confirm their ratings were legit.
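To illustrate how weak email-based deduplication is, here is a sketch of a normalizer that catches the most common aliasing trick (Gmail ignores dots in the local part and anything after “+”); it obviously cannot catch genuinely separate accounts:

```python
# Sketch: normalize emails before deduplicating votes. Gmail
# treats "j.doe+pc25@gmail.com" and "jdoe@gmail.com" as one
# inbox, so a naive exact-string check misses such duplicates.

def normalize(email):
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0]
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

votes = ["j.doe+pc25@gmail.com", "jdoe@gmail.com", "other@example.org"]
print(len({normalize(v) for v in votes}))  # 2 distinct voters, not 3
```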
Personally, I think using a Google form is a mistake. itch.io has a voting platform that, although far from perfect, has several advantages:
It is designed to minimise sock puppetry, and they take this quite seriously. If there are any doubts, you can ask them and they’ll investigate. They no doubt have further information at their fingertips, such as the date the account was created, activity on the account and IP addresses. They can’t check voting done via a Google form, as it’s not on their platform.
It has an option whereby you must vote on a minimum number of games before your votes are counted.
It has an option whereby you must first vote on games that are presented to you at random; from then on, you can vote on anything.
It has an option whereby you can create and vote on different categories (as in the ParserComp Google form).
When using multiple categories, you can nominate one of those categories as the one that determines the winner (as in ParserComp) or take an average of all the categories. I find the latter a much fairer way of voting, as it forces the judge to give more thought to the relative strengths and weaknesses of the game.
It does not allow the author to vote on their own game.
It has a system whereby games with a low number of votes have their scores downgraded, sometimes unfairly so, but you can see both the raw scores and the adjusted scores. (A sketch of how such an adjustment typically works follows this list.)
Results are available instantly as soon as voting ends.
The results show you the number of votes, the mean score in each category, the ratings in each category and so on.
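itch.io doesn’t publish its exact formula, but score adjustments of this kind are usually some form of Bayesian (weighted) average that pulls low-vote games toward the overall mean. A minimal sketch under that assumption, with an invented prior weight:

```python
# Sketch of a Bayesian-average adjustment: games with few votes
# are pulled toward the global mean, so one stray 10 cannot top
# the table. itch.io's real formula is not public; the prior
# weight C and all numbers below are invented.

C = 5  # "pretend every game starts with C votes at the global mean"

def adjusted(scores, global_mean):
    n = len(scores)
    raw = sum(scores) / n
    return (C * global_mean + n * raw) / (C + n)

global_mean = 6.5  # mean across all votes in the comp (illustrative)
print(adjusted([10], global_mean))                   # ~7.08, not 10.0
print(adjusted([8, 9, 7, 8, 9, 8, 9], global_mean))  # ~7.54, near raw 8.29
```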
In ParserComp this year, we had to wait a few days for the results and we haven’t yet seen a breakdown of the statistics in each category. As an author of one game, co-author of another three games and tester of a further three games, I’m very interested to see that breakdown so that I can see what players thought and where to improve.
The Google form also had two free-form questions: “What did you like most about the game?” and “How could the game be improved?” Will anyone, especially the authors, get to see the responses to those questions?
To be clear, I was referring to the comments on his entries (here and here), all of which are from new accounts that have done nothing but comment positively on his games.
Of course, Garry, of course. We just thought we’d iron out this little local difficulty first before publishing everything else - which we will do shortly (feedback and all).
I will point out that the organizers were forthcoming about voting totals at a couple of points during the judging period.
If, in fact, thoughtauction’s games ended up with substantially more players rating them than other games, it must have been quite late in the process, as interest in them was low-to-middling up until the last few days. It certainly doesn’t seem as if there was a committed, engaged audience voting on them from the get-go.
With three days left in the voting period, EYE had 10 out of their eventual 14 ratings, ~71%.
At the same time, Swap Wand User had 12 out of their eventual 15 total ratings, 80%.
At the same time, we were ~91% through the voting period (June 30th - August 3rd, or 32/35 days). Yes, there’s always a bump of last-minute voting, as seen in both EYE and Swap Wand User (and others), but consider this.
Meanwhile, Last Audit of the Damned had 6 votes of its eventual total of 22, or roughly 27%.
Mystery Academy also had 6 votes of its eventual total of 23, or roughly 26%.
That’s not just a last-minute bump in late voters, that’s a bona fide damn miracle.
Wild.
Thank goodness so many people came in to vote for those two immediately after the organizers publicly announced the vote total thus far on July 31st!
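For anyone who wants to check the arithmetic, here are the figures quoted above side by side (a trivial sketch, using only the numbers from this thread):

```python
# Share of each game's eventual ratings already cast with
# 3 of the 35 voting days remaining.

ratings = {  # title: (votes at day 32, eventual total)
    "EYE":                      (10, 14),
    "Swap Wand User":           (12, 15),
    "Last Audit of the Damned": (6, 22),
    "Mystery Academy":          (6, 23),
}
print(f"voting period elapsed: {32/35:.0%}")  # 91%
for title, (early, total) in ratings.items():
    print(f"{title}: {early}/{total} = {early/total:.0%}")
# EYE 71%, Swap Wand User 80%, Last Audit 27%, Mystery Academy 26%
```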
I have a hunch that certain moves in Zork Zero on an Apple IIc, such as ordering the knight around, may take more than 90 seconds. I don’t have the real hardware to prove it, though.
EDIT: Still, it was probably faster on average than playing an AI game on a modern computer.
If that’s the case, then why not just take a victory lap with the Reddit/Discord/indie gamedev communities? If that’s true, it’s not clear why you’d care about the ParserComp ratings or this discussion at all. (Unless you had something to prove?)
What makes you say that? I’d say no chance of this being correct. What makes me say that? The typical pause is a few seconds for drive access or data-crunching. The longest pause is loading the game at the start or performing the equivalent later on (e.g. maybe moving to a whole new world or disk side in a game that has one). On Wishbringer, for instance, the initial game load is only fifteen seconds. That’s as long as things get in that game on an Apple II.
I did an edit of my post but linked to the wrong video … so here’s the right one!
Hm, posting the right link gives a ‘video no longer available’ error, which isn’t right because I can play it in my browser fine. Might be a revamped forum thing? Just add dots after www and youtube to get the address:
www youtube com/watch?v=_IqtVcqAEME
If you have an idea of where the long moves are, you might be able to find them in this multipart video walkthrough of Zork Zero on the Apple IIc. Looking at this port, with its horrid double hi-res text and slow printout rate, I would now be less surprised to find that it really is slow. This video appears to be going at real speed, but doesn’t include the drive noise.