This is an interesting thread!
I didn’t go into my IF Comp responses with any real thought about what my relationship to other reviews would be (since, you know, I just dropped in without having experienced something like this before).
In hindsight, though, over the course of responding to the games I did organically develop a kind of strategy for interacting with other reviews.
Basically: if a (puzzle/parser) game didn’t have a walkthrough and I got completely stuck with no recourse, I would occasionally look to other reviews/transcripts for gameplay help so that I could get through more of the game before the 2-hour time limit expired. This did mean seeing spoilers, but that felt like the lesser evil (and honestly, when a game was underclued/buggy/messy, I usually wasn’t invested enough in the narrative to feel that bad about story spoilers).
Otherwise, my strategy was to loosely draft my response before reading other reviews. Once I had a rough draft, I would check the existing reviews to see whether I was duplicating what someone else had said or had missed some significant aspect of the game, and then revise/expand my comments with some influence from them before submitting (when any were available… I think there were a few games I happened to be the first public response/review for). I’m having a hard time thinking of a moment when I completely scrapped a major comment/section because of another review, though. I came closest, I think, with the Redjackets response, where I felt really exposed and out there compared to the other reviews posted at the time, but went ahead with it anyway… I’m still not sure how to feel about the results.
Anyway. I think narrating independent experiences at least somewhat siloed off from other reviews is preferable, but for my own purposes I don’t feel the need to be so strict about it that I won’t even glimpse another review before posting mine.
To say a bit more, this discussion reminds me a lot of conversations about polling firms “herding” their results. Essentially, if a firm gets an outlier result compared to what other firms have published, it is less likely to release that poll, even though the divergent result might actually be closer to the truth than the consensus. This creates a kind of collective action problem: if only one firm herded, it would benefit by protecting its reputation for accuracy, but if every firm does it, the published data becomes collectively less and less accurate. If every firm could agree to publish even their outlier results, the aggregate would be more accurate overall.
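If it helps to see the dynamic, here’s a toy simulation I sketched in Python (not real polling data; the true support level, the off-kilter starting consensus, the sample size, and the herding threshold are all numbers I made up purely for illustration):

```python
import random
import statistics

# Toy illustration of poll "herding" with made-up numbers.
random.seed(1)

TRUE_SUPPORT = 0.52      # the value every firm is trying to measure
PRIOR_CONSENSUS = 0.48   # suppose early published polls happened to run low
SAMPLE_SIZE = 1000       # respondents per poll
THRESHOLD = 0.02         # a herding firm sits on any result further than
                         # this from the current published average
NUM_POLLS = 200

def run_poll():
    """One firm's estimate: the support share in a simulated sample."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

def published_average(herding):
    """Average of all published polls, with or without herding."""
    published = [PRIOR_CONSENSUS]  # seed with the early consensus
    for _ in range(NUM_POLLS):
        estimate = run_poll()
        consensus = statistics.mean(published)
        # A herding firm withholds any result that looks like an
        # outlier relative to what everyone else has published.
        if herding and abs(estimate - consensus) > THRESHOLD:
            continue
        published.append(estimate)
    return statistics.mean(published)

print(f"true value:         {TRUE_SUPPORT:.3f}")
print(f"everyone publishes: {published_average(herding=False):.3f}")
print(f"everyone herds:     {published_average(herding=True):.3f}")
```

With honest publication the running average lands close to the true value; with herding it stays stuck well below it, because the outliers that would have corrected the stale consensus never get published. The details are contrived, but that’s the collective action problem in miniature.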
Unlike that example, though, I feel like the stakes here are low enough that I’m fine glancing at available reviews before finalizing mine. As a swarm of anxiety vaguely stuffed into the shape of a person, I find it helpful to take whatever opportunities I can to reduce my chances of making a complete ass of myself on a public forum, for the sake of my mental health.