Hey all – hopefully this is the right forum for this kind of question.
I’m currently lead game designer at the GlassLab, a nonprofit game development studio with foundation funding to make educational games. (There’s a Time article out today about our work modifying SimCity: nation.time.com/2013/11/06/makin … deo-games/ )
I’m leading the development of a tablet-based adventure game intended to teach ELA argumentation (the Common Core “reasons and evidence” standards, if you follow such things). It’s set on Mars and is being made in collaboration with NASA. While we were in our concepting phase we had difficulty recruiting a lead programmer, so I built our initial prototype in Inform 7 (it’s here, and absurdly simple: erinhoffman.com/hiro/Archive-Concepting/Release/ ).
This was supposed to be a temporary measure, but (and this will likely be no surprise to you, given the education links on the Inform site) the kids responded so well to it in playtesting that we were honestly taken aback. They don’t have the context to find the text interface “old” – to them it’s completely new – and the effect was just magical, even with all the flaws in the prototype. Ten out of ten kids we tested it on did not want to stop playing, and all of them cited “the typing” as one of the most interesting things about the game. Unlike other games in their lives, it made them feel empowered over their environment.
They did, however, want graphics. The original design for the game was more in the Monkey Island tradition, and we want to stick with the basics of that (touch items to interact with them). But I had always planned a natural-language “talk to the ‘AI’ city computer” section, based on observations we made at the Exploratorium of the magnetic draw the Daisy chatbot had for middle school students in particular. (Again with the typing.)
This is all a long way of saying that what I’d really like to do at this point is lay Cocos2d on top of Inform 7 and use Inform for the entire game state machine. There would still be touch-based interactions, but these would essentially be button shortcuts that fire commands to the Inform parser. Our world would be pulled from the Inform build, and all objects would be instantiated in the Inform database and then drawn in 2D.
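To make that concrete, here’s a rough C++ sketch of how I picture the touch layer reducing to the parser. The names (ParserBridge, Hotspot, EchoBridge) are placeholders I’ve made up, not anything from Inform or Cocos2d; the real bridge would presumably wrap a Glulx interpreter such as glulxe, but the stub below is enough to show the shape:

```cpp
// Sketch of the "touch shortcuts fire parser commands" idea.
// ParserBridge stands in for whatever actually runs the Inform story
// (e.g. a Glulx interpreter linked into the app or run as a child
// process); EchoBridge is a stub so the sketch runs on its own.
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Interface the touch layer talks to; a real implementation would
// hand the command text to the Inform parser and return its response.
class ParserBridge {
public:
    virtual ~ParserBridge() = default;
    virtual std::string send(const std::string& command) = 0;
};

// Stand-in implementation so this compiles without an interpreter.
class EchoBridge : public ParserBridge {
public:
    std::string send(const std::string& command) override {
        return "[story response to: " + command + "]";
    }
};

// A tappable hotspot is just a sprite id plus the command it shortcuts to.
struct Hotspot {
    std::string spriteId;   // which drawn object was touched
    std::string command;    // the exact text fed to the parser
};

class TouchLayer {
public:
    explicit TouchLayer(std::shared_ptr<ParserBridge> bridge)
        : bridge_(std::move(bridge)) {}

    void addHotspot(const Hotspot& h) { hotspots_[h.spriteId] = h.command; }

    // In Cocos2d this would be called from the touch callback after
    // hit-testing which sprite was tapped.
    void onTap(const std::string& spriteId) {
        auto it = hotspots_.find(spriteId);
        if (it == hotspots_.end()) return;
        std::string reply = bridge_->send(it->second);
        std::cout << "> " << it->second << "\n" << reply << "\n";
    }

private:
    std::shared_ptr<ParserBridge> bridge_;
    std::map<std::string, std::string> hotspots_;
};

int main() {
    TouchLayer layer(std::make_shared<EchoBridge>());
    layer.addHotspot({"rover_sprite", "examine rover"});
    layer.addHotspot({"airlock_sprite", "open airlock"});

    layer.onTap("rover_sprite");    // tap -> "examine rover" -> parser
    layer.onTap("airlock_sprite");  // tap -> "open airlock" -> parser
    return 0;
}
```

The point of the design is that the touch UI carries no game logic of its own: it only generates command text, so the Inform world model stays the single source of truth.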
Have you ever heard of this being done? Are we crazy to attempt it? Is there anything we should watch out for? Or, rather than using the full Inform source, should we bite off one of the open-source components for just a segment of what we’re trying to do?
The main thing I’d like to capture and preserve, if possible, is the way the kids felt so empowered to try ‘anything’ on the objects in the world and get a scripted response. I could mimic this on a case-by-case, typed-command basis, but as a designer I’m also excited about the prospect of using Inform as a level-building tool.
If you were going to do this, how would you go about it? Where would you start?
Any advice or thoughts EXTREMELY appreciated!