He’s talking about procedurally generated narrative, and advocating for the idea that, instead of trying to bend simulations into narratively interesting paths, we should simulate lots of characters and then (possibly automatically) curate and present the stories that turn out to be interesting.
He goes into more depth about his projects here than I’ve seen elsewhere, covering three simulation/story-product pairs: World & Diol/Diel/Dial, Talk of the Town & Bad News, and the in-progress Hennepin & Sheldon County. Worth at least a quick look through the table of contents. It seems fairly modular so you can drop in and read sections that look interesting without having to digest the whole thing.
An open world IF with emergent gameplay can work, but it needs a metric ton of content and a lot of “directed non-direction” planning on the part of the author.
It sounds from @JoshGrams’ recap, though, that the intention isn’t to turn the player loose in an unscripted world to see what happens, but rather for the author to play with the procedurally generated content and use that to inspire plots and throughlines that the player is directed to. I’m all for this in theory…if one is looking for inspiration and has that metric ton of time to accomplish it… Procedural generation can work to inspire that feeling of a living, breathing world, but all the content to seed the world still has to be created by the author, and one needs to avoid the “10,000 Bowls of Oatmeal” problem.
It’s probably not as random a procedure as the “million monkeys with typewriters” kind of thing, but the noise-to-content ratio seems as though it could be really high without intelligent input and planning by the author. Sometimes a random monkey will happen upon something brilliant by accident, but a lot more of the time it won’t.
I think procedural generation is a great tool, but by no means a shortcut to world creation.
Yeah, it’s a huge amount of work: he spends nearly twice as much time talking about the simulations (70–100+ pages giving a brief overview of each of the three examples) as about the “games” built on top.
His primary argument is about using curation (human or automatic) to avoid the oatmeal problem. Bad News, for example, is a human-curated and human-narrated game in which the player must find the next of kin of a deceased person and notify them of their relative’s death. A “wizard” searches the generated town history for interesting stories and communicates with an “actor” who plays the part of all the townspeople, and the player talks to the actor to describe what her character is doing. Basically, the simulation is designed to produce a history that is consistent and contains “an overabundance of narratively potent situations […] misanthropes, recluses, social status, the interpersonal circumplex, unrequited love, asymmetric friendship, internal conflict, love triangles (polygons), extramarital obsessions, business rivalries, town institutions, family feuds, family drama, and forbidden love” which the performers can improvise upon.
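To give a flavor of what automatic curation over a generated history could look like, here’s a minimal sketch that scans a simulated event log for one of the patterns listed above (love triangles). Everything here – the event fields, names, and pattern – is invented for illustration; it isn’t Ryan’s actual data model.

```python
# Hypothetical sketch: scan a simulated town history for one
# "narratively potent situation" (a love triangle). The Event
# structure and event kinds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str       # e.g. "falls_in_love", "starts_feud"
    subject: str
    object: str
    year: int

def find_love_triangles(events):
    """Return sets {a, b, c} where a loves b, b loves c, and c loves a."""
    loves = {(e.subject, e.object) for e in events if e.kind == "falls_in_love"}
    triangles = set()
    for a, b in loves:
        for c in (o for s, o in loves if s == b):
            if (c, a) in loves and len({a, b, c}) == 3:
                triangles.add(frozenset((a, b, c)))
    return triangles

history = [
    Event("falls_in_love", "Ada", "Bert", 1901),
    Event("falls_in_love", "Bert", "Cora", 1902),
    Event("falls_in_love", "Cora", "Ada", 1903),
    Event("starts_feud", "Bert", "Dan", 1904),
]
print(find_love_triangles(history))  # one triangle: Ada, Bert, Cora
```

A real curator would presumably run many such pattern-matchers over decades of simulated events and rank the hits, but the principle – mechanically surfacing dramatic structure for a human (or another program) to narrate – is the same.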
In his final work-in-progress experiment, he extended the simulation to have a more narrative focus and is working (was working? I can’t tell if it’s dead or not) on a system to have a computer automatically select stories and generate a podcast all on its own. He has a couple of short proof-of-concept audio samples on SoundCloud.
Anyway. I didn’t find the whole thing hugely compelling, but there’s some interesting stuff in there.
As a writer, I’m going to have to say this sounds like a dreadful idea. It’s the antithesis of creativity, and it’s soulless. In addition, we should absolutely not consider such an idea viable or meaningful unless and until it’s proposed by someone who has written one or more successful novels in the ordinary way and can therefore be presumed to know what the heck he’s talking about when he uses words like “interesting.”
When it comes to determining what’s interesting, I think it’s the reader, not the writer, that is the most qualified to decide.
I think it is quite possible to get a dramatic story from a simulated story world. Consider a written transcript of a game of Pac-Man: lots of life-and-death situations and reversals of fortune. One can imagine some sort of algorithm to identify a story arc and to prune away the repetitive and irrelevant bits, leaving a decent story (The Perils of Blinky) in its wake.
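That pruning idea can be sketched very crudely: given a transcript of events with rough “tension” scores (all invented here), collapse the repetitive low-tension runs and keep the dramatic beats. This is a toy, not a real story-arc detector.

```python
# Toy sketch of pruning a simulation transcript into a story arc:
# keep high-"tension" events, summarize runs of boring ones.
# The events and tension scores are made up for illustration.

TENSION = {"eats pellet": 0, "ghost approaches": 2,
           "narrow escape": 4, "eats power pellet": 3,
           "turns the tables": 5, "caught by Blinky": 5}

def prune(transcript, threshold=2):
    """Keep events at or above the tension threshold, plus one
    summary line for each skipped run of low-tension events."""
    story, skipped = [], 0
    for event in transcript:
        if TENSION.get(event, 0) >= threshold:
            if skipped:
                story.append(f"({skipped} uneventful moments pass)")
                skipped = 0
            story.append(event)
        else:
            skipped += 1
    if skipped:
        story.append(f"({skipped} uneventful moments pass)")
    return story

transcript = ["eats pellet"] * 30 + ["ghost approaches", "narrow escape"] \
           + ["eats pellet"] * 12 + ["eats power pellet", "turns the tables"]
for line in prune(transcript):
    print(line)
```

Of course, recognizing an actual arc (setup, escalation, reversal, resolution) rather than just spikiness is the hard part, and the part Ryan’s curation work is really about.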
Is it soulful? I don’t know. I haven’t read it yet. I do find attempts at generative text fascinating though and designing simulations is its own art form.
Haven’t figured out how to quote a message that I’m replying to…
I disagree with your first statement. If the writer doesn’t KNOW what’s interesting (or meaningful, which I would say is a much better word), no meaning will have been transmitted in the written work. None. The work will be meaningless. It may appear meaningful, but that will be an illusion – and worse. It will be a con job.
Your example, using pac-man, actually makes my point. If you feel that would be an interesting story, we can safely judge that you don’t understand what constitutes a meaningful story. I’m sorry, but there’s no way to sugar-coat it. I’m not trying to insult you here, I’m just trying to be clear. I would invite you to compare your imagined pac-man story to ANY story written by ANY real flesh-and-blood writer who is decently in control of his or her work. Pick any writer you like – Faulkner, Ian Fleming, Stephen King, Terry Pratchett, pick any three.
The meaning of a story is NOT in “life and death situations and reversals of fortune.” The meaning of a story is the impact that those situations have on a character in the story. And the impact on a character simply cannot be modeled or counterfeited by an algorithm. It can’t be done. I’ve written novels. I’ve had novels published by actual New York publishers. I do know the difference.
Let me begin by saying that I understand the points you’re making, and I can certainly empathize with them to a certain degree. Your arguments are succinctly put, relatable, and many people would agree with them. I will now proceed to pick them apart.
First, a matter of terminology. A thing can be interesting without being viable or meaningful. Just to pick a starting point, Lewis Carroll’s poem Jabberwocky is arguably interesting because many of the words are meaningless. Stories that involve time travel can be interesting, but time travel isn’t compatible with physics, and therefore neither viable nor meaningful. Conversely, a formulaic episode of a TV drama might be thematically meaningful, commercially viable, yet utterly uninteresting.
But my main objection to your argument as quoted above is the notion that “we should absolutely not consider such an idea […] until it’s proposed by someone who […]”. That’s an argument from authority. A central tenet of the Enlightenment is that an idea should be judged on its own merits, regardless of who is speaking.
Sure, it makes sense to weigh in an advocate’s background, merits, incentives, and biases in order to see a bigger picture. But to “absolutely not consider” an idea is to deliberately put on blinders, and lock oneself into an intellectual dead end. It is also disrespectful. It would be equally unfair for me to dismiss your points simply because you are a writer, and therefore have an economic stake in the outcome of the discussion.
Let’s go with “meaningful” then, veering somewhat from your original statement. The above paragraph reminds me of the classic Chinese Room argument against strong artificial intelligence. I’m not trying to build a strawman here, but one important insight from that debate, in my view, is that it’s not strictly necessary for a word wrangler to know what’s meaningful in order to produce meaningful text. Meaning can emerge from other processes, such as random variation and selection. And selection is right there in Ryan’s title, “Curating…”.
This is an unfair comparison, because you only mention exceptionally good writers. If, as you argue, there’s a strict, qualitative, causal relation between knowing what’s meaningful and being able to produce good stories, then that relation would be independent of the level of technical refinement. If your line of reasoning holds true, then an advanced, highly polished algorithm would never be able to beat even the most unskilled amateur writer of self-insert fan-fiction, who, by virtue of being human, “knows what’s meaningful”.
Just to be clear, Ryan does not in any way claim that his current implementation is anywhere near the levels of those internationally acclaimed writers. Nor do you attempt to judge his current implementation; you are categorically opposed to the very notion of machine-generated stories. In fact, let me reiterate your claim that “we should absolutely not consider such an idea”. On the contrary, I think the idea is viable, but that it needs to be refined and improved.
Nevertheless, let me proceed for a while on your terms. All bestselling authors were once unskilled beginners. As machine-generated stories become increasingly sophisticated, I predict that they will eventually reach a level of refinement where human experts will find them difficult to distinguish from the early, lesser-known works of Faulkner, Fleming, King, and Pratchett.
To back up this claim, let me refer to David Cope’s work on artificial creativity in music. His computer programs generate music in the style of composers such as Mozart or Chopin, based on statistical analysis of existing works, in combination with hand-written algorithms. These programs clearly don’t know what’s “meaningful”, yet they produce new music that is sufficiently similar to human-made works that experts have failed to tell them apart in blind tests.
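The statistical side of such systems can be caricatured with something as simple as a first-order Markov model over a note sequence: tabulate which symbol follows which in existing material, then walk the table to produce new material. Cope’s actual programs (and any serious generator) are vastly more elaborate; this only illustrates how new sequences can emerge without the model “knowing” anything at all.

```python
# Crude caricature of style-imitation by statistics: a first-order
# Markov chain over note names. Melody and note names are invented.

import random
from collections import defaultdict

def train(sequence):
    """Record, for each symbol, the symbols observed to follow it."""
    model = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, seed=0):
    """Walk the follower table to produce a new sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return out

melody = ["C", "E", "G", "E", "C", "G", "A", "G", "E", "C"]
model = train(melody)
print(generate(model, "C", 8, seed=1))
```

Every transition in the output was observed in the source, so short stretches sound plausible, yet the program has no notion of phrase, cadence, or meaning – which is precisely the point under debate.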
Being intimately familiar with one particular way of doing things does not improve one’s ability to assess entirely different approaches. If anything, it has the opposite effect, locking you into parochial, rigid patterns of thinking. Don’t walk into that trap!
“Heavier-than-air flying machines are impossible.” (Lord Kelvin)
“There is not the slightest indication that nuclear energy will ever be obtainable.” (Einstein)
“While theoretically and technically television may be feasible, commercially and financially I consider it an impossibility, a development of which we need waste little time dreaming.” (De Forest, inventor of the vacuum tube)
My only comment is that I think this type of thesis is more “inside baseball” research for education and learning purposes, and for the benefit of an audience like us who routinely write IF and sometimes use proc-gen text. I don’t believe the author of this piece is intending “this is the entire method to create an IF game/story…” Sort of how proc-gen novels are usually experimental thought pieces without expectation that it will become a bestselling novel.
Yes, I think this is very much not supposed to be something that eventually replaces or provides a convincing simulation of novels written by humans. That would probably be mistaken and bad.
(For some examples of insidious writing that’s supposed to simulate real stuff: some minor-league baseball games at least used to have automated recaps that just extracted information from the box score, and would surely have left out anything interesting that actually happened on the field. And there’s this Jamie Zawinski post about an AI that plays Frogger and generates explanations for its decisions–except the explanations seem to be generated completely after the fact, without any connection to how the AI made the decisions. Which, as Zawinski says, is absolutely horrifying, since the AI’s decisions remain totally inscrutable, but now it’s learning to lie to you. And as he says, that may also be how our actual consciousness works.)
Anyway, a machine-generated text like I Waded in Clear Water (PDF link!) isn’t meant to be like something a person wrote, and if you read it that way it would be a massive failure. I’ve never read it all the way through (I have skipped to the end, which is great), and I’m not sure I could. But it’s beautiful and funny. The very strangeness of the results of an algorithm that doesn’t understand what it’s saying (look at the business about the ruler in the first chapter) create something that I don’t often get from authored prose. And of course it’s not a substitute for normal writing. It’s a different thing, which is the point.
As for Ryan’s simulated storyworlds, it’s clearly not meant to produce imitation novels. Emily Short’s initial post that Josh G. linked talks about how the appeal of simulated storyworlds is that things happen spontaneously, unlike novels where the story is crafted from the beginning, and that for Ryan an interesting storyworld is one that generates stories people want to tell–the curation is meant to find those stories, I think. Which doesn’t mean they’re like stories about real people. When people are touched by Alice and Kev, the story of the homeless Sims in Sims 3, or Emily Short’s Doofus and Delores, it’s not because they think they’re stories about actual people. But that doesn’t mean they’re meaningless, or fakes of stories about actual people.
About Ryan’s work, I liked the Sheldon County excerpts a lot, though I suspect the parts I liked most were largely hand-authored rather than generated. Unfortunately the biggest thing anyone seems to have written about a performance of Bad News was deleted off the internet when Variety took over Glixel and took down its archives for some reason, which seems like an almost shocking act of vandalism.
Not in a narrow sense, no – but in a broad sense, yes. That is, it can certainly be helpful in interpreting or understanding a work to know not only the general thrust of the author’s oeuvre but also quite a bit about both the genre in which the author was working and the culture in which the author lived. As a simple example, one need not know what Fitzgerald intended in writing The Great Gatsby – he might never have said, or he might conceivably have said something but lied. However, having some knowledge of his work as a whole will certainly help one understand the novel; and in addition, having some knowledge of the culture of the times will enable one to overlook his casual racist mentions of Jews.
The idea that the text of a novel, any novel, can be understood simply by reading the text itself is shallow and silly. This applies with some force to the question of whether AI can ever write a novel, because by definition when one reads (or would read, if such a text is ever produced) a novel written by an AI, there would be NO context – no possibility of understanding the author’s human concerns (because there were none) or the author’s culture (because there was none). For that reason, any reading of a novel written by an AI would, indeed, be a shallow, silly reading.
Well, accepting for the moment that the AI has no culture, the reader also brings their own culture to the work and because of that can have meaningful interactions with the text.
I would also argue that AIs do indeed have culture, which comes from the training data (derived from human culture!) fed in and algorithms encoded by humans (with their own values/biases.) AI works are human artifacts.
By giving us a glimpse into the uncanny valley, AI works can help us explore what it means to be human.