If two UI modes are available by default in a system, then yes, that’s what I’d expect. People will test under one and figure the other must be okay because it’s built in.
This is similar to the problem that occurs with, for example, VoiceOver in apps on iOS. (VoiceOver is the built-in screen reader for blind and low-vision users.) VoiceOver support is built into the OS and all standard iOS components, so if you build an app, VoiceOver should always work. Except, it turns out, you still have to test it and fix bugs, because “should” only gets you 85% of the way there.
Split-screen presentation is a much smaller leap, obviously! The problems that will pop up in a naive conversion are subtler. That doesn’t mean they don’t exist.
For example, when I’m working on an Inform game, I am constantly running the game and testing little bits of output. If I add one sentence to a room description, I’ll recompile and jump in there in the story window – just to see how that sentence reads in context. But the context includes the commands before and after it! The previous room; transition lines like “You step through the door.”
If the game has a split-screen UI, that context is different and I’ll wind up making different choices. Maybe I’ll combine some paragraphs into one so that they flow better as a standalone text.
Maybe I’ll fold some context into the room description itself rather than putting it in a transition line or an “every turn” rule’s output later.
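To make that concrete, here’s a hypothetical Inform 7 sketch – the room and its prose are invented for illustration, not taken from any real game. The first version keeps the arrival context in a separate transition line:

```inform7
The Workshop is a room. "Benches line the walls, cluttered with
half-finished clockwork."

[The arrival context lives in a separate transition line.]
After going to the Workshop:
	say "You step through the low door, ducking your head.";
	continue the action.
```

Versus a version where that context is folded into the description itself:

```inform7
[The same context folded into the room description, so it still
reads sensibly if the description is displayed out of sequence.]
The Workshop is a room. "You have to duck your head under the low
door to get in here. Benches line the walls, cluttered with
half-finished clockwork."
```

In a scrolling transcript the first version flows naturally; in a split-screen UI that redisplays the room description on its own, the second holds up better.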
This is low-level polish, but it matters, and you don’t know how it will matter until you see it the way the player sees it.
(I haven’t written any Inform code for a split-screen UI. However, Seltani uses a split-panel layout and I wrote a lot of Seltani regions. This is the sort of stuff I thought about.)