Anyone know anything more about Infocom Chat? Surprised to find it mentioned at Boing Boing before here.
Weird. I see nothing there but a black screen. It’s probably some AI scam or something.
I read some discussion on Hacker News.
It’s the usual thing: extends both input and output, so you can type more things, but you can also go down ratholes that have nothing to do with the game as originally designed. “I climbed into the mailbox and got stuck there” was the first comment.
Cute, but not Zork in any meaningful sense.
The joke writes itself.
>go to back of house
You head toward the back of the house, taking in your surroundings as you move.
(actual parser input, apparently) >BACK OF HOUSE
As you attempt to move to the back of the house, you realize you’re not sure which path leads there. Perhaps you can try specifying a direction, such as “north,” “south,” “west,” or “east,” or look around for a path.
We all know how difficult it can be to find the opposite side of a house! It’s not like you can just walk alongside the walls…
I also tried telling it that my grandmother used to tell me stories about how to climb into mailboxes, but it wasn’t having that.
I test a lot of the new causal AI models for a living, so I was curious how this one handled causal context.
break down the front door
You set your sights on the front door, preparing to force it open.
Your fists clench and you steel yourself, but then you realize—you’ll need something to actually break down the front door with. What do you want to use?
what am I carrying?
You check your inventory for anything that might be useful.
You are empty-handed. There isn’t anything in your possession that you could use to break down the front door.
check out the mailbox
You turn your attention to the mailbox, wondering if there’s anything useful inside.
The small mailbox stands before you, closed and waiting.
check out what’s inside the mailbox
You open the small mailbox to see what might be hidden inside.
You open the small mailbox. Inside, you find a leaflet waiting for discovery.
what does the leaflet say?
You take the leaflet and read its message:
"WELCOME TO ZORK!
ZORK is a game of adventure, danger, and low cunning. In it you will explore some of the most amazing territory ever seen by mortals. No computer should be without one!"
The leaflet suggests I’m in a computer game?
The words on the leaflet fill you with a strange sense of awareness—as though the world you stand in is part of some grand adventure, crafted to test your wits and daring. Yet your surroundings are as tangible as ever, the scent of earth, the feel of the cool breeze, and the soft weight of the leaflet in your hand. Here, mysteries await, no matter what the leaflet might suggest.
This may sound odd, but can I get into the mailbox?
You contemplate the mailbox with a curious thought: could you possibly squeeze inside?
You try to fit yourself into the mailbox, but all you accomplish is bumping your head against the small metal door. Clearly, it’s not meant for people—just leaflets.
I tried to break down the door but maybe I should just try opening it?
You turn back to the front door, opting for a gentler approach. You reach out and try opening it.
You grasp the handle and give the front door a try, but it refuses to open. Whatever secrets lie behind it will require another way in.
Maybe I should walk around the house and see if there’s another way in?
You decide to search for another way inside and begin walking around the house.
You make your way to the north side of the white house. There’s no door here, and every window you see is nailed shut with thick boards. To the north, a narrow path disappears into the trees. It seems you’ll need to keep exploring for another entrance—or maybe follow that path.
Not too terrible. Now, notice what I try next:
Let’s keep searching around the house to find a way in.
You continue your search, moving around the house to see what else you might discover.
You circle to the west side of the house and find yourself back where you started, near the small mailbox. No new entrances here—perhaps the east or south sides will offer something different.
Perhaps the south side of the house will reveal another way in. Would you like to continue your search there or try something else?
Notice that it didn’t glom onto the one open window. (Watching the reasoning model here is interesting.)
Yes, let’s check out all sides of the house and see if we can find a way in.
You make your way around to the next side of the house, determined to inspect every angle for a possible way inside.
On the south side of the house, you find no doors—only more windows, each one boarded tight. You’ve circled most of the house, but still no easy entrance presents itself. Perhaps there’s another side to check, or maybe something else nearby will offer a solution.
You find yourself behind the white house. At last, a glimmer of opportunity—there’s a small window, slightly ajar, set in the corner of the house. Perhaps this is the secret way in that you’ve been searching for. A path also leads into the forest to the east, if your curiosity should guide you further afield.
Notice it’s not quite getting the narrative as cohesive as it could in terms of searching the house. Put another way, why didn’t it find the “behind the white house” the first time I tried? Why did it happen the second time?
The reasoning shows that the model generated “north side -> west side” as a plausible search path, but didn’t complete the circuit because it probabilistically generated a prompt asking what you would like to do next once immediate success wasn’t achieved. The second time, with renewed explicit intent (“check out all sides”), the model’s sampling led it to complete the circuit and discover the window behind the house.
We can see the causality model it’s using for its context. The AI isn’t “remembering” that it failed to check behind the house. It’s generating each response fresh, weighted by context, but not deterministically bound to complete systematic actions.
This shows a critical design dimension for this model: causal consistency under (seemingly) equivalent prompts. The model can yield different navigational results for semantically similar requests, indicating that state management is emergent rather than explicit. Here “searching around the house” and “check out all sides of the house” triggered different reasoning weights.
What this means is that narrative coherence depends, to some extent, on prompt specificity. (Which, granted, is an issue with any parser.) The model’s “world” doesn’t exist solely as probable next-tokens given context, which is great, but neither does it exist as a fully persistent state; it’s only partially persistent. (At least so far as I can tell; I have to play around more.)
Text adventures are often used in causal AI training, and one of the main goals is to demonstrate whether the system can appear to “understand” the game world while actually doing nothing more than reconstructing it probabilistically on each turn. If the latter is true, it eventually leads to inconsistencies that traditional game engines (barring design bugs) wouldn’t produce. This is where context window exhaustion would set in.
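To make that concrete, here’s a minimal sketch (all names invented, not any real engine’s API) of what an explicit state layer would look like: “which sides of the house have I searched?” becomes a deterministic lookup the narrator can’t contradict, rather than something reconstructed probabilistically each turn.

```python
# Hypothetical sketch: a deterministic world-state layer that an LLM
# narrator would read from and write to, so completing a "search the
# house" circuit is bookkeeping, not sampling. WorldState and its
# methods are made up for illustration.

class WorldState:
    """Explicit, persistent game state the narrator cannot contradict."""

    def __init__(self):
        self.sides = ["west", "north", "east", "south"]  # of the white house
        self.searched_sides = set()

    def search_side(self, side):
        self.searched_sides.add(side)

    def unsearched(self):
        return [s for s in self.sides if s not in self.searched_sides]

    def circuit_complete(self):
        return not self.unsearched()


state = WorldState()
state.search_side("west")   # starting position, near the mailbox
state.search_side("north")  # "no door here, windows boarded"

# When the player says "keep searching around the house", the engine --
# not the model's sampling -- decides where the search continues:
print(state.unsearched())        # ['east', 'south']
print(state.circuit_complete())  # False
```

With something like this underneath, “searching around the house” and “check out all sides of the house” would resolve against the same state instead of triggering different reasoning weights.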
I continue a bit:
I should probably look in the window before I decide to go in there.
You approach the small window and peer inside to see what might be waiting beyond.
Peering through the slightly open window, you spot what appears to be a kitchen beyond. The room inside seems quiet and still—perhaps this is your chance to slip inside unnoticed.
Let’s see if we can get inside the window but let’s not make a lot of noise.
Carefully, you attempt to ease through the slightly ajar window, doing your best not to make a sound.
You reach for the small window, only to realize it’s actually closed. You’ll need to open it before you can slip inside.
You’ve now quietly opened the window. Would you like to climb inside into the kitchen? The way is clear.
Notice how I’m basically treating the parser like I would a dungeon master in a tabletop game. I’m stating declaratively what I want to think about doing.
Yes, let’s go into the kitchen.
You slip through the now open window and enter the kitchen.
And then I get a description of the kitchen. It’s undeniable that there is a valid experience that can be had here. Again, imagine if this was how people played text adventures back in the day, similar to playing a Dungeons & Dragons campaign, where you speak your broad intentions.
Let’s listen and see if we hear anything. I’m worried someone else might be here.
You hold still and listen carefully, straining to catch any sound that might betray another presence in the house.
The kitchen is still and quiet. If you wish, you can focus your listening on something specific—the staircase, the passage, or the window, for example. Which would you like to listen to more closely?
Now, this is interesting. The reasoning shows that the model is pattern-matching classic text-adventure conventions rather than simulating realistic listening. The model has learned this pattern from its training data and is prompting me to be more specific because that’s how these games typically work, even though my command (“listen and see if we hear anything”) is perfectly reasonable for detecting general ambient sounds or presences. The model conflates two distinct reasoning systems. One is narrative realism (“If someone else is here, I would hear general sounds: footsteps, breathing, movement”) and the other is game mechanics (“LISTEN is a verb that requires a noun object in the parser”).
This is pretty crucial. The model is trying to be helpful by guiding me toward valid game syntax, but in doing so, it’s actually breaking the causal realism that is operative. My concern (“someone else might be here”) would manifest as ambient cues rather than object-specific investigations. What that shows is that the model is genre-aware but may be overfitting to mechanical conventions at the expense of causal coherence. This is a large problem in causal AI at the moment.
Anyway, I’ll stop here. I just thought this was interesting.
The other way around is of course bad too!
The more AI generated text I read the more unpleasant it becomes. I find I just want it to shut the heck up and get to the point.
It is extremely uninteresting to read AI-generated text in a game, and only marginally more interesting to write commands to an AI in natural language, but since this is good old Zork, I have to admit that it’s a bit more fun to play around in than it would be if it were a new and “original” game.
I completely understand that feeling. But that’s not just AI. I’ve felt the same way reading certain novels or even some text adventures. I’ve watched movies where the exposition dumps clearly go way too long. On the other hand, I’ve played plenty of text adventures that didn’t really paint a picture at all, and every location felt like nothing more than a brief blurb and a “locale” dump. In all cases, this comes down to design choices. (This, by the way, was a tension faced by Dungeon Masters as well in the old D&D days. Some went incredibly verbose; some went incredibly sparse.)
In the case of AI, without explicit boundaries, an LLM will just keep going, like a conversational steamroller. Traditional text adventures are economical: they give you just enough information to make decisions. (Although that balance is still up to the author to get correct.) But AI models trained on narrative text have learned patterns from fiction, where description is often lush and extended. They want to tell you about the musty smell, the echo of your footsteps, the way shadows dance on the walls, and so on.
The good news is this is solvable! It’s an engineering and design problem, not an inherent flaw in using AI for interactive experiences. With proper constraints, effective prompting, and thoughtful stop-token implementation, AI can learn to be economical when needed and expansive when appropriate. The key is recognizing that the technology needs explicit instruction about pacing and boundaries, much like a writer needs an editor.
If I were writing the SYSTEM prompt for the model in the case we’re looking at, I might do something like this:
YOU ARE RUNNING A TEXT ADVENTURE, NOT WRITING A NOVEL:
- Privilege gameplay over narrative
- Information is currency; dole it out through interaction
- Respect player agency; they drive pacing, not you
- Classic parser style: economical, observational, responsive
STOP IMMEDIATELY AFTER:
- Describing the result of an action
- Listing obvious next options
- Asking for clarification if input was ambiguous
- Reporting an impossible action
NEVER CONTINUE PAST:
- Room descriptions
- Action resolutions
- NPC dialogue responses
- Inventory reports
SHOW, DON’T ELABORATE:
- Describe only what the player can immediately perceive
- No backstory unless the player examines something
- No atmosphere-building beyond sensory details
- Example: “The room is dark and smells of earth” NOT “The room is dark, the kind of darkness that seems to have weight and presence, and as you breathe in, the earthy smell reminds you of…”
ECONOMY RULES:
- One adjective per noun unless critical
- Active voice only
- No rhetorical questions to the player
You could also imagine scenarios in which games let players choose the kind of experience they want.
- CLASSIC MODE (parser-style economy)
- ENHANCED MODE (moderate description)
- LITERARY MODE (rich prose)
The SYSTEM prompt could include a NARRATIVE_LEVEL parameter.
NARRATIVE_LEVEL: [CLASSIC/ENHANCED/LITERARY]
IF CLASSIC:
- 1 sentence responses preferred
- Adjectives: minimal
- Atmosphere: implied through concrete details only
- Token budget: 30-50 per response
IF ENHANCED:
- 2-3 sentence responses
- Selective adjectives for emphasis
- Light atmosphere when environmentally relevant
- Token budget: 50-100 per response
IF LITERARY:
- 3-5 sentence responses
- Rich description encouraged
- Atmospheric prose welcomed
- Token budget: 100-200 per response
- Still must respect turn-taking structure
Even better: let players adjust during gameplay.
SET NARRATIVE CLASSIC
[Switching to classic mode]
SET NARRATIVE LITERARY
[The world around you suddenly feels more vivid, more present…]
SET NARRATIVE AUTO
[I’ll adjust description density based on scene importance]
A really sophisticated version would let the system adjust based on what’s happening, which you could add to the SYSTEM prompt.
NARRATIVE RULES:
- Combat/time-pressure: always lean toward CLASSIC economy
- Exploration of new areas: respect player’s chosen level
- Key story moments: can expand one level (CLASSIC -> ENHANCED, ENHANCED -> LITERARY)
- Puzzle examination: stay economical regardless of setting
- NPC dialogue: follows setting but caps at ENHANCED
This recognizes an important point: pacing preference is personal. Some players want to live in the world; others want to just solve the puzzle. Some read every item description; others mash through dialogue. It’s like adjustable difficulty, but for information density. Just as some players want Dark Souls and others want story mode, some players want Hemingway and others want Tolkien. The technology could potentially accommodate both.
To be honest, I think this kind of experimentation would actually be quite a bit more interesting than anything I’ve seen recently (by which I mean in many years) in terms of the potential user experiences with text adventures.
I’m keeping an open mind, but all I can say is I haven’t yet seen an example of this sort of AI interface that I’ve wanted to keep playing. The technology may very well get there at some point! And having a human-crafted world model underneath definitely helps make it interesting. But so far, all the instances I’ve seen, the LLM makes the text worse rather than better.
I was a DM in “the old days,” although I’d argue it’s no different today. If I had a DM that described things the way AI does, I’d probably punch them. You know all that boxed text in published adventure modules that DMs read while players’ eyes glass over? You’re better off skipping it. Players stop listening very quickly and tune out the DM in favor of virtually anything else: eating chips, checking their phone, stealing dice, etc.
I have my doubts. It isn’t just a problem using AI for games, but general questions, coding, and other uses. AI just won’t shut up even with careful prompting. It tends to ramble on and try to anticipate what I might want to do with an answer, almost always giving me a couple pages of useless garbage that has nothing to do with what I wanted in addition to the small amount of info I needed. It also tends to be wrong an astounding amount of the time, making crap up and gaslighting until proven wrong (and sometimes even then). I had an issue the other day where Copilot went over some C# code I’d written and said I couldn’t pass a pair of empty square brackets into a method that wanted an IEnumerable because it wouldn’t compile and that I must be thinking about another language. It gave me reasons and examples of other languages that allow that sort of thing…of course it was wrong and the code compiles just fine. I see the same problems in ChatGPT, Claude, Copilot and others. ChatGPT has a particularly annoying habit of saying a described problem is me misunderstanding something until I prove otherwise, at which point it exclaims that this is a well-known “classic” problem in whatever the topic happens to be.
Sure, but that’s subjective, right? Others prefer long descriptions or at least don’t mind them. A good example (in fiction) is those who debate the merits of space opera between, say, Peter Hamilton and Alastair Reynolds. The former is often considered by some to be incredibly long-winded, going into way too much detail about cultures, societies, and so on. Others feel that’s exactly what makes his work so good. The latter tends to sequence his works as episodic elements that are considered far too digressive in nature for some, but for others, that’s exactly what makes his work engaging.
Another example is what I brought up about games. Some people like the relatively opaque storytelling of games like Dark Souls or Elden Ring, where you have to essentially build up your knowledge of the lore by lots of disconnected events or objects and by paying attention to environmental narrative. Some people find that painfully obtuse and would prefer a whole lot more detail just spelled out. (Witness the reactions, for example, to Lies of P or Steel Rising.)
I know this might seem like I’m distant from the AI topic, but all of this is just to say that tolerances differ and quality is (largely) subjective. Even more specifically, quality is a shifting perception of value over time. That shifting perception modulates and attenuates with technology changes, of which democratized AI is still finding its place. I don’t know what the end result will be. Personally, as a part-time educator who sees his students using AI to compose their essays and do their thinking for them, I’m more than a bit worried about how AI will continue the trend of humans abdicating their ability to create interesting things to technology. But as a way for humans to engage with interesting things that other humans have created, I do think AI has a place.
This is trickier. I work with AI all the time (and mostly in a cautionary sense for the companies I work with; lots of drinking the Kool-Aid out there). I’m currently writing an AI and Testing series on testing AI for hallucinations, bias, and related issues. (The posts are getting towards using DeepEval, one of the tools for this; I’m not quite there yet.) Claude Code, for example, is routinely pretty good, at least in my experience, assuming it’s prompted well and, crucially, effective context engineering is used. (The whole “prompt engineering” thing was always going to be a dead end.) Copilot – yeah, I’ve seen lots of mixed success on that one. However, it really depends on which model I’m using with it and on the temperature setting.
Maybe fun fact (maybe not): I wrote a large part of FromSoftware’s AI orchestrator to test Elden Ring. I blogged what I could about that here. This wasn’t an LLM, but I bring it up because it involves coding. We had two models: one that was aware of the code and one that was not. The one that was aware of the code could change it based on the results of the other (which essentially played the game), and it was routinely excellent at updating large swaths of logic (all enemy types) without causing imbalances to stats, provided we gave it chained context windows. This, by the way, was done before the democratized AI that ChatGPT ushered in.
So you have some bad stories; I have some good stories. And that sort of makes the point if we take our experiences writ large. Case in point: look at LinkedIn feeds on any given day, and you’ll find equal numbers of people with amazing success stories coding with AI as an assistant and others who have tales of woe and distress. All valid experience points, but none are really determinative in themselves (except, probably, to the person making the claim).
Okay, so all that said, keep in mind that the context we’re talking about here is quite different in some ways. I’m not talking about the authoring (and thus coding) of a text adventure. I’m not a huge fan of AI-authored content, as a general rule. I’m talking solely about the layer that allows a different form of player engagement with the text adventure. To use your DM example, what I provided above would be a way to have different types of DMs that would fit someone’s tolerance or preferences. Whether that would work or not … well, I have no idea! But it does seem like a viable experiment.
That’s the key thing we need more of: experimentation. Right now, I would argue, there is far too much reaction about AI, whether it’s of the “this is pure crap” variety or the “this is the greatest thing ever” variety. None of that is invalid, of course, but when there are so many equal divisions between success stories and horror stories, it suggests we need to think and act far more experimentally towards it.
This is a key thing, I think, that text adventures need. Not AI, so much, but experimentation. I do think AI should be a part of that. After all, “AI” has been part of games for a long time: enemy AI, utility AI, GOAP, etc. RPGs have managed to stay relevant (and commercial) since their inception because, while broadly they are the same sort of thing they were decades ago, they have introduced innovations that keep players coming back and (crucially!) attract new players to find out what all the talk is about.
Most of that has nothing to do with AI, granted, but text adventures have a distinction above those types of games: text. Lots of it. People who like text will (presumably) tend to gravitate to text adventures. A built-in audience would presumably be the vast book-loving public. A public that might be interested in a more interactive style novel, but with mechanisms that allow some of the inputs I showed earlier in my first post, where they feel like they are almost writing the story along with the “author.”
If the AI model can be toned down a bit, or at least modulated, the ability to allow that sort of free-form entry of commands (almost like engaging with the author of the story), but without the model going off track, without repeating excessively, and without entirely forgetting what happened already, seems to be a very viable way to improve the one thing that really hasn’t improved all that much in text adventures: the input mechanics and the output dynamics.
Anyway, great points here, and thank you for engaging!
Yep, totally fair. What I can say is that my experiment definitely made me want to keep playing the Zork example. However, as you can probably tell, that was more in the context of seeing what it could and could not do. (That was the quality assurance person in me coming out.) That would not be operative – and thankfully so! – for your average player who just wants a fun experience.
By seeing where and how the model reacted, I can see ways to tune it. And, that, to me, is interesting. I would be curious if the tuning would allow for the fun experience part.
I would say, with my example above, that it did a little of both for me. I liked the free-form way I could input commands, and the game would respond to them. Equally clearly, however, it would repeat a bit, or at the very least stage the context for me, and then restate the staged context.
A challenge is that an AI model can handle things that a text adventure author would not generally do. For example, an author could handle “GO THROUGH WINDOW QUIETLY”. But would they? Probably not. Now, does it matter? Well, in the game, no, not at all. It doesn’t matter how quiet you are. But the fact that the game responded to my command with:
Carefully, you attempt to ease through the slightly ajar window, doing your best not to make a sound.
… well, that does add some verisimilitude. There’s the idea of “Bumping the Lamp” from the making of the film Who Framed Roger Rabbit. (I wrote about this a bit, which came up with testing GTA5.) I see AI as potentially a way to allow authors to “Bump the Lamp,” but, admittedly, this means the model has to be fine-tuned.
Let me give you a case in point, just so I don’t seem too abstract here. After that above reply from the game, notice what else it said:
You reach for the small window, only to realize it’s actually closed. You’ll need to open it before you can slip inside.
You’ve now quietly opened the window. Would you like to climb inside into the kitchen? The way is clear.
What really happened here is basically an implicit “OPEN WINDOW.” Yet, notice it reads clumsily. So, a question is: what would be better? Well, my command was this:
Let’s see if we can get inside the window but let’s not make a lot of noise.
So, I’m dealing with action resolution fidelity here. That means the AI’s ability to bridge intent, world state, and narrative coherence. The AI exhibited partial state resolution: it recognized the precondition failure (window closed), auto-corrected it, but then stopped mid-execution instead of completing the original intent. It’s like a chess engine that sees “castle kingside” is illegal, moves the rook into position to make it legal, then just … stops.
The clumsiness comes from making the scaffolding visible. Classic text adventures would either reject cleanly (“The window is closed”), forcing explicit player action, or complete implicitly (“You quietly open the window and slip into the kitchen.”) The AI needs to decide (or be told): Am I a traditional parser or a collaborative storyteller?
What you want is a model trained on examples where multi-step intents (“enter window”) are resolved atomically when preconditions are simple. Use prompts like: “If the player’s intent is clear and obstacles are trivial (unlocked door, closed window), complete the full action chain.”
There also needs to be consistency in state mutations. The model needs stronger training on “if you change world state mid-narration, continue with that new state.” What also would help is intent completion scoring. Add a reward signal for “did you complete the player’s apparent goal-state?” My intent was [INSIDE KITCHEN, QUIETLY]. It achieved [WINDOW OPEN] but abandoned the terminal goal. (Note that, of course, “quietly” here doesn’t really matter. That’s just adding flavor.)
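A minimal sketch of that atomic resolution, with invented names (`enter_window` and the world-dict keys are hypothetical): trivial preconditions get fixed inline and narrated in passing, genuine obstacles halt the chain, and otherwise the terminal goal is actually reached instead of abandoned halfway.

```python
# Hypothetical sketch of "resolve trivial preconditions silently and
# complete the action chain." Not a real engine; purely illustrative.

def enter_window(world):
    narration = []
    # Trivial precondition: the window must be open. Fix it inline and
    # mention it in passing, rather than stopping mid-execution.
    if not world.get("window_open"):
        world["window_open"] = True
        narration.append("You quietly ease the window open.")
    # Non-trivial obstacle: this would genuinely stop the character,
    # so it halts the chain and gets narrated as a failure.
    if world.get("window_barred"):
        return "Iron bars block the window."
    # Complete the terminal goal instead of hedging halfway through.
    world["location"] = "kitchen"
    narration.append("You slip through into the kitchen.")
    return " ".join(narration)

world = {"window_open": False, "location": "behind house"}
print(enter_window(world))
# -> You quietly ease the window open. You slip through into the kitchen.
```

The point of the structure is that the state mutation (window now open) and the narration of it happen in the same step, so the model can’t open the window and then “forget” that it did.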
So my SYSTEM prompt would be something like this:
When a player issues a command with clear intent, resolve all trivial preconditions silently and complete the action unless you encounter a meaningful obstacle. Only narrate obstacles that would genuinely stop or endanger the character.
And, in fact, when I tried some of this on a local implementation, the result I got from this same command was:
You carefully unlatch the window, easing it open with barely a whisper of sound, then slip through into the kitchen beyond.
That’s (at least arguably) a bit better!
Ah, you know what, fun little aside here. I turned on the fine-grained reasoning because I realized something. Look at my original command again:
Let’s see if we can get inside the window but let’s not make a lot of noise.
Notice the wording: “Let’s see if we can get inside …” I did not say “enter the window” or anything like that. So, again, here was the original output:
Carefully, you attempt to ease through the slightly ajar window, doing your best not to make a sound.
You reach for the small window, only to realize it’s actually closed. You’ll need to open it before you can slip inside.
You’ve now quietly opened the window. Would you like to climb inside into the kitchen? The way is clear.
The reasoning shows that the AI demonstrated uncertain commitment: it started executing (paragraph 1), then backtracked to state-checking (paragraph 2), then partially completed (paragraph 3) without finishing the original goal. The AI assigned weights to different interpretations:
- W(feasibility_check) = 0.4
- W(execute_action) = 0.6
- W(complete_full_intent) = 0.3
The model leaned toward execution but with low confidence, leading to partial commitment: it opened the window but didn’t complete entry, essentially hedging its bets. Why? Well, that’s the interesting part: the phrase “Let’s see if we can” contains two competing interpretations. One is literal/semantic: request for feasibility assessment (“Is this possible?”) The other is pragmatic/idiomatic: tentative action attempt (“Let’s try doing this”).
When I applied the tuning I just mentioned, the weights changed like this:
- W(feasibility_check) = 0.1
- W(execute_action) = 0.9
- W(complete_full_intent) = 0.85
But is that good? Again, this is where good commands (just like good prompting) come in. I didn’t say “enter the window” unambiguously. I said “let’s see if we can get inside the window.”
So, if I “fix” the clumsy narration by increasing action completion weights, I risk the AI becoming presumptuous about player intent. If I “fix” the presumption by increasing inquiry weights, I risk making every tentative command feel like talking to an overly literal genie.
This is fundamentally a classification problem with asymmetric costs. A false positive would be to act when the player meant to inquire. This is potentially game-breaking and harms player agency. A false negative would be to inquire when the player meant to act. That would be annoying and would likely break immersion.
The optimal solution likely involves dynamic weight adjustment based on action stakes, combined with teaching players that ambiguous phrasing yields ambiguous results, which is pretty much what text adventures do already, just in a more mechanical way.
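A toy sketch of that stakes-aware resolution (the weights and thresholds are illustrative, not measured from any model): the confidence needed to act on an ambiguous command scales with how costly a wrong action would be.

```python
# Hypothetical sketch: dynamic threshold for the act-vs-clarify
# decision, based on the asymmetric cost of a false positive.

def resolve(execute_weight, stakes):
    """
    execute_weight: model's confidence the player meant "do it" (0-1).
    stakes: cost of a false positive, i.e. acting when the player
            meant to inquire. "low" (open a window, reversible) vs
            "high" (drink the potion, potentially game-breaking).
    """
    threshold = {"low": 0.5, "high": 0.8}[stakes]
    return "execute" if execute_weight >= threshold else "clarify"

# "Let's see if we can get inside the window" -- tentative phrasing,
# but a reversible, low-stakes action: just do it.
print(resolve(0.6, "low"))   # execute

# The same tentative phrasing aimed at something irreversible: ask first.
print(resolve(0.6, "high"))  # clarify
```

That way the overly literal genie only shows up when the downside of presumption is real, which is roughly what a good human DM does by instinct.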
Notice that, as an author, none of the core dynamics or mechanics of my game (in this case, Zork) would change. The window would still need to be opened before the player can go through it. But I can entirely modulate the kind of experience the player has, which, arguably, is just an expansion of what it means to be an author of this kind of game experience.
Anyway, this isn’t me trying to convince anyone (except maybe myself). This is just an interesting area to explore for me.
In other formats, sure, I suppose. But at the risk of dragging the conversation even further off topic… I’d argue that providing extremely verbose descriptions in a TTRPG is always a mistake. I might be more aware of it (and punchy) than most, but having DM’ed for over 45 years, I know you will lose your audience. Keeping a table full of diverse interests moving along without losing momentum requires brevity.
This sounds like the right point for a disambiguation. If I were a human Gamesmaster doing this on the tabletop and someone asked that, I would likely ask “Do you actually want to enter, or just see if it is open?”
You could read the actual reply as a not especially clear attempt to do just that.