Artificial Intelligence and Creative Works?

Haven’t the folks around here watched Star Trek: Voyager? Both the Doctor and Lieutenant Paris were accomplished holo-novelists, apparently by issuing prompts to the computer.

1 Like

The economic system and driving forces of software development and artistic expression in Star Trek: Voyager are extremely different from what we experience on our planet today, and we will continue to experience that difference as these technologies are perfected.

This technology does not exist in a vacuum, and it is not what brings us to realizing the world of Star Trek. There are a lot of other systems interacting in that world.

2 Likes

Story #2 is a valid story, in that it has a beginning, middle and an end, with cause and effect. It’s just unsatisfying*.

    • For me, it is unsatisfying because it had a load of ideas and literally threw them in a hole. Other people can and will find it unsatisfying on other grounds. None of these make it not a story.

Theme, conflict and the other tools of literature analysis don’t have to be followed in order to write a story, although they are common features in the stories that do well commercially. AI can’t make a distinction that doesn’t exist (unless coded to recognise such an artificial distinction), and of course it has its own opinion of what’s satisfying if someone’s coded a starting point for how to form it. The AI’s answer to what is and isn’t satisfying cannot, of course, be assumed to correspond with what any particular human would want, especially if no effort was made to limit the AI’s definition, but it is still an answer.

If there was a genuine valid and non-valid option among the two stories, it would have to be possible to write an algorithm to distinguish them, since that’s how humans distinguish between multiple possible states of things. Of course, people aren’t bothered about writing out such a complex nest of terms for the purposes of genuine literature analysis (as opposed to for pretending to analyse in order to cheat at coursework, of which more later), which is one of many reasons neural net AI is catching on: give it a framework and it’ll automate the fiddlier parts of programming - perhaps not as expected and perhaps not well, but it will do so.

My current biggest concern with AI is with the extent to which it is reliant on stolen material. This means it is unsafe to use it for any commercial concern (I’ve already seen a case in Norway where an artist lost copyright over their own work because a court decided an AI had taken copyright of it).

I’ve not heard as much about AI text relying on stolen material, due to a combination of a vast array of public domain work and numerous sources of non-commercial writing scraped from online sources (though some of the latter sources may themselves only be legal due to dubious contracts between writer and internet publisher). However, I’ve not heard of any AI developer who put significant effort into ensuring that only legally “clean” source material is used.

It’s also important to note that text AI is not simply pattern-matching; it’s pattern-iterating. The primary uses for AI text I’ve seen are:

  • cheating in school/university assignments (16% of students surveyed at UK universities admitted to paying for an essay; nowadays it’s much cheaper for the essay mills to have an AI draft an answer and quickly finesse it than to write one from scratch, and fewer than 1% of those essays get caught, assuming students only did it once per academic career, which makes it very rewarding in a way that actual creative and intellectual effort often isn’t. It’s also an increasing problem in scientific journals, where dozens of AI-written papers have been found and many more are suspected; there’s a crisis of confidence in some sectors of Alzheimer’s research, as a combination of AI-generated papers and old-fashioned questionable human research practices may have sent the field down a wrong alley. This is an example of serious consequences ensuing from people being unable to differentiate between the AI-written paper, the fraudulent human paper and the genuine human paper).

  • political trolling and marketeering, especially to get round filters (iteration-based text AI is particularly good at this).

  • near-snowclones of advice sites, which lead to the first two pages of many searches having identical advice in different, slightly awkward phrasing in order to chase advertiser money. The sheer number of them means they crowd out many of the more originally-written sites - which is a problem when the latter often have the actual correct answer and the AI-written ones sometimes do not! (I object to this, not because I expect the “AI in error” issue to persist, but because making the first page near-clones implies nothing more is to be said on the topic, which seriously damages the utility of search and decreases the degree to which people are informed).

  • having fun and experimenting with words (I like this reason for text AI, and this last one appears to be rather more prominent than for art AI).

AI is already capable of maintaining a context, provided it is sufficiently concrete and appropriate limitations have been placed on its domain (AI is much better at interpolation than extrapolation). It can definitely remember things (most of modern digital computing relies on memory) and it can definitely reason (again, most of modern digital computing relies on specific categories of calculation, which are the digital equivalent of logic-based reasoning). It can reasonably do certain types of problem-solving (that’s how researchers produced an AI for Civ II that improved the computer player’s performance by 47% by doing the equivalent of teaching it how to read the manual).

I will grant that the last 4 items on @jkj_yuio 's list (adaptation, origin, thinking and problem-solving) have a long way to go for AI.

2 Likes

I love your beetle example, and I can’t wait to see the new generation of beer bottle jewel beetles. :grinning:

This is a complicated topic, so for purposes of clarity, I shall confine my discussion to the preparation of fiction for commercial purposes. After all, that is the primary concern of readers here. My point here is that parrots cannot create stories, no matter how big their vocabulary. LLMs cannot create stories, either, for the same reason. Merely stringing words together in grammatically correct sentences is not good enough for storytelling. Even stringing sentences together to make a decent paragraph, while better than what many people can do, is nowhere near sufficient to create a commercially viable story.

Let’s consider the degree to which any sentence in any story must take into account the contents of every other sentence in the work. Let’s call this concept “literary consistency”. If Elizabeth in Pride and Prejudice declares to Mr. Darcy late in the story “Baby! Let’s get down and dirty!”, this would be highly inconsistent with most of the other information in the novel. Yet the algorithms used in LLMs are only aware of local consistencies; they are not capable of comparing sentences in Chapter 11 with sentences in Chapter 3. It is theoretically possible that an LLM capable of such a feat could be built, but such an LLM would require the entirety of human literature in its database and would need more RAM than could be manufactured with all the silicon in all the quartz in the world, and more machine cycles than could fit into the lifetime of the universe.
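To make concrete what even a crude version of that cross-chapter bookkeeping involves, here is a toy sketch (the names, traits and “contradiction” table are all invented for illustration; a real system would need vastly more than this):

```python
# Toy "literary consistency" ledger: facts established in one chapter,
# checked against sentences proposed in a later one. This only catches
# direct contradictions of explicitly recorded traits.

ledger = {}  # character -> set of established traits

def establish(character, trait):
    """Record a trait for a character (say, in Chapter 3)."""
    ledger.setdefault(character, set()).add(trait)

def consistent(character, trait, contradictions):
    """Check a proposed trait (say, in Chapter 11) against the ledger.

    `contradictions` maps each trait to the set of traits it rules out.
    """
    established = ledger.get(character, set())
    return not any(t in contradictions.get(trait, set()) for t in established)

# Invented example loosely echoing the Pride and Prejudice scenario:
establish("Elizabeth", "formal Regency diction")
rules = {"modern slang": {"formal Regency diction"}}

print(consistent("Elizabeth", "modern slang", rules))    # False: clashes
print(consistent("Elizabeth", "witty repartee", rules))  # True: no recorded clash
```

The hard part, of course, is that a real novel’s “ledger” is implicit, enormous, and mostly unstated, which is exactly the point being made here.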

Hence, my case is not based on some foggy notion of “originality” – it is based on the fact that stories are immensely complex logical structures with myriads of unstated logical dependencies. LLMs succeed only because they are computationally brute-force techniques for which we have enough data, enough RAM, and enough computing power – but they succeed only with tasks requiring far fewer resources than are required for creating valid stories. I refer you back to my earlier example of the two stories, one valid and one not valid. I challenge you to write a program capable of determining, from fundamental principles of storytelling, which of the two stories is valid. I’ve been working on similar problems for decades, and that task is completely beyond my capacity.

The fundamental question here turns on the use of data as a proxy for process. Human minds see reality as a collection of objects, not a system of processes, which is why they are attempting to use LLMs to create story structures using data rather than principles. We are only at the very beginning of the computer revolution; it will take us a long, long time before we learn the true lesson of computers: that we need to learn to think in terms of processes rather than data. Science and technology are the direct result of literacy, but look how long it took us to follow that evolutionary path to its current state.

For more on data versus process, see Object Versus Process | Interactive Storytelling Tools for Writers | Chris Crawford

2 Likes

Fair enough. Perhaps I shouldn’t have used the word “valid”. There are a host of other words that might fit, although I’m not sure about “satisfying”. Perhaps “adequate” or “acceptable” or “respectable”. In any case, your definition of what constitutes a story is extremely broad. “It rained.” has a beginning, a middle, and an end, and so, while it might be a valid story, I don’t think that many people (except perhaps some academics) would be willing to accept it as a story.

You appear to imply that conflict is merely an artificial dimension of evaluation in a story. Am I misinterpreting your statements?

What exactly do you mean by “stolen”? Normally we think of theft as the illegal transfer of possession of an item from one owner to another, but information can be copied without being stolen. Yes, we can clearly establish possession through copyright, but the use of LLMs to create material does not violate copyright – at least not yet – nor does it seem to violate copyright in principle. The output of the AI is most definitely NOT recognizable as the original work.

I was unable to locate the Norwegian court case to which you refer. Could you direct me to it?

Indeed, artists have always learned from each other and incorporated other artists’ ideas and techniques into their own work. What’s wrong with a computer doing exactly the same thing?

2 Likes

Yes, the assumption in this thread is that Machine Learning will automate authorship, and therefore kill off human Art.

But that’s not how ML is being used. The bots are there to give the illusion of consensus. They aren’t automated authors, they are scripted critics, artificial amplifiers. They are bogus fans.

1 Like

Why would that be true? Is it your belief that human writers need a catalog of the entirety of human literature in order to be “better” than a “parrot”?

Anyway, your argument appears to be that current AIs largely lack a world model. That’s true, in general, and I agree it’s one of the things that limits the “quality” of AI-generated art (both large language models like you’re talking about and elaborate denoising models like Stable Diffusion). But I don’t know why building a model of, for example, the structure of a novel like Pride and Prejudice should be understood to be more intractable than building a model of a sentence or paragraph on an arbitrary given subject. They’re both just elaborate state transition models. It so happens that it’s more convenient to do the latter than the former because the internet provides a conveniently large corpus of already-classified training material on one but not the other. But that just tells us what the low-hanging fruit currently are. I don’t think we can infer anything about the inherent complexity of early 19th Century novels or anything like that from it. Like I suspect if there was a publicly available archive of show bibles for television sitcoms, for example, then (because the actual scripts are invariably available) we’d already have arbitrarily-good generative models for sitcoms.

This is already starting to happen in image generation. ControlNet for Stable Diffusion was released in the last couple weeks, and it is already starting to bridge the gap between latent noise/diffusion generation and transient-but-durable modelling—that is, doing things like taking a source image, building a 3D model of the pose, and then using the model to create an image of a new subject in the same pose. Or generating a depth map from a subject image and using the depth map to produce a new image with similar geometry. This is not yet quite going from prompt to a reusable model, and being able to do that doesn’t by itself get you an art generator with a consistent world model. But each makes the next much more plausible, and at this point I find it difficult to imagine a compelling reason to believe we won’t get there in the foreseeable future.

3 Likes

We do, but they’re not good enough, even when human writers are involved.
Modern sitcoms suck, because the models are degenerate. Hollywood movies use entirely generic dialogue. “Let’s do this” and many other such cliches are recycled, not because they are powerful and emotive, but because of a superstitious fear that the screenplay would be deficient without them.

1 Like

My point is that the task of creating a proper novel of, say, 100,000 words is immensely greater than the task of creating a proper paragraph of, say, 100 words. That’s because, roughly speaking, each word must comport with every other word. Thus, the effort to create a work of literature is, VERY roughly speaking, proportional to perhaps the square of the number of words. On that reckoning, the novel requires a million times more effort than the paragraph. Again, I emphasize that this is very much a seat-of-the-pants calculation.
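The back-of-the-envelope arithmetic is easy to check; a minimal sketch of the square-law scaling, using the word counts assumed above:

```python
# Back-of-the-envelope check of the square-law claim: if the effort of
# keeping every word consistent with every other word scales as n^2,
# the jump from a 100-word paragraph to a 100,000-word novel multiplies
# the effort by the square of the length ratio.

paragraph_words = 100
novel_words = 100_000

effort_ratio = (novel_words / paragraph_words) ** 2
print(f"{effort_ratio:,.0f}x")  # 1,000,000x under a pure square law
```

A strict square law yields a factor of a million; getting to a billion would need something closer to a cube law, so any such figure should be read as seat-of-the-pants.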

It’s the difference between building a paper airplane and building a Boeing 747. The 747 is a billion times more complicated than the paper airplane. To put it another way, a competent writer can write a proper paragraph in much less than an hour, but might need several years to write a proper novel.

There’s a crucial difference between an image and a novel that deserves our consideration. Let’s talk about the information content of each. At first glance, we are tempted to claim that the information content of each is directly measured by the size of the file. By this measure, a jpg image of ten million pixels might weigh in at a couple of megabytes, whereas a novel of 100,000 words would come down to a few hundred K at most.

But I don’t think that this metric is adequate to the needs of this discussion. Let us imagine changing a tiny fraction of the bits in each file. In the novel, let’s change a sentence in which the parson says “I’d certainly like to meet young Ms. Clark.” Let’s replace the word “meet” with the four-letter word for “fornicate”. That represents a change of just 0.001% of the raw content of the novel, yet has a huge impact upon the content of the novel.

Now let’s change just 0.001% of the pixels in the jpg image. That’s just 100 pixels. I doubt that changing 100 pixels in a ten megapixel image would even be noticeable to the viewer. This reasoning strongly suggests that the information content in the novel is immensely greater than the information content in the image. That’s because each byte of information in the novel is somehow connected to many of the other bytes in the novel, whereas a byte of information in the image is connected only to a few bytes adjacent to it.

If we really wanted to do this analysis properly, we’d use Shannon’s definition of information content, but that is not readily applicable to our problem.
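For what it’s worth, Shannon’s zeroth-order measure can at least be computed directly over the raw symbol stream; this sketch counts only character frequencies, so it is blind to exactly the long-range connections discussed above:

```python
import math
from collections import Counter

def char_entropy(text):
    """Zeroth-order Shannon entropy in bits per character.

    Measures only how uneven the character frequencies are; it says
    nothing about word order, let alone dependencies between chapters.
    """
    counts = Counter(text)
    n = len(text)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

print(char_entropy("aaaa"))            # 0.0: one symbol, no surprise
print(round(char_entropy("abab"), 1))  # 1.0: two equally likely symbols
```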

4 Likes

I’m going to say again what I said in another thread. AI today is not just progressing rapidly, it’s routinely doing things that were unthinkable a decade ago. What was impossible yesterday is achieved today and will seem unremarkable tomorrow.

There’s a certain bias about AI which is very clearly on display in this thread, whereby it’s assumed that anything AI has actually achieved is by definition unimpressive. The goal posts can be moved endlessly while people recite the same lists of cognitive tasks that AI supposedly can’t do, remaining unconcerned even while the length of the list dwindles down towards zero.

However, the faith that AI will never do “creative” work is not just a faulty view of AI, but of the human brain and creativity itself. I’m not here to invalidate anyone’s religious beliefs, but I personally don’t believe that the soul or the “divine spark” or what you will exists above and external to the human brain. All the richness of our thought and creativity takes place in the brain, and the brain is a material object doing computations just as a computer is. Creativity involves recombining, pattern-making and pattern-matching, simulating—these are, fundamentally, things that AI can do.

Now, it’s true, as I understand it, that enlarging the context window of an LLM has greater than linear computational cost, so we don’t yet know how to generate coherent long texts. I would be astounded if, after breaking through the barriers of generating long complex sentences, coherent images on arbitrary prompts, or chatbots with as much contextual awareness as ChatGPT already has, producing something the length of a novel proves to be the rock that AI founders on, never to recover. Humans manage to write novels even though they can’t remember the whole thing at once; AIs will eventually do the same, possibly using basically similar strategies to compress information. (An LLM’s training data is much larger than the model itself, just as our “training data”—the information we take in throughout our lives—is more than we can store in our brains.)

5 Likes

Agreed, but this does not seem to connect back to your assertion that an LLM would require a full catalog of human literature to write a worthwhile novel, which @jbg was puzzled by. That assertion also suggests that novel-writing is simply a process of replication, and not a process of creativity (which you do not seem to be supporting here, as you outline the relative complexities of the creation process, and not the replication process).

Complexity is also a solvable problem. Since humans can write novels, then the complex creativity algorithm of our brains—which we pride ourselves on—must be attainable. After all, it’s not magic, and we are using it.

An LLM doesn’t need to parrot human creations to succeed; it just needs to parrot human creativity.

For anything else, see @FriendOfFred, who said the rest better than I could.

3 Likes

Under the condition that I’m well fed, have a roof over my head and some free time to read, yes. No doubt.

4 Likes

It would be nice if the last sentence were true, but I don’t believe it is; and even if it were, it would be irrelevant to the question at hand.

If it were true, then it would actually make algorithmic generation easier, not harder. What you’re describing is, to a first order approximation, a hidden Markov model. And we’re pretty good with those these days.
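For readers unfamiliar with the idea, even a plain (non-hidden) Markov chain over words shows the “each word conditioned on its predecessor” structure; this is a toy with a made-up one-line corpus, not a serious model:

```python
import random
from collections import defaultdict

# Toy first-order Markov chain over words. A hidden Markov model adds a
# layer of unobserved states on top; this visible-state version is
# enough to show the word-to-word transition structure.

def train(text):
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    while len(out) < length:
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed successor
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical one-line training corpus:
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train(corpus)
print(generate(model, "the", 8))
```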

But even if it were true, it seems apparent that readers don’t, as a rule, approach texts as if they’re flawless mosaics of perfectly-fitting words. Or at least textual analysis is full of fairly complicated efforts to reconcile alternate variants of a given text, and if there were some sort of mathematically perfect relationship between every word then it seems like this would be a more tractable problem. Shakespeare scholars struggle with over a thousand words in King Lear alone, for example, that don’t, as you say, comport with each other across the various versions of the source material. How convenient it would be if we could just do the math or whatever to evaluate whether a particular word, or line, or in fact several hundred lines is “really” part of King Lear or not instead of it remaining a centuries-old open question.

So even if your proposed structural interdependence of every word in a text did exist, then it wouldn’t be necessary (to produce one of our notional “non-parrot” texts) unless you could somehow or other prove that some “economically relevant” portion of novel-readers could do the Princess and the Pea thing with individual word displacements and the like. Which they very apparently do not. Anyway, so much for comporting with every other word.

As for the claim that a “proper novel” is more complex than a paragraph:
you’re confusing the complexity of the thing (e.g. an English sentence) and the complexity of the specification for constructing the thing (e.g. an algorithm for generating English sentences). If you’re talking about a model to produce, e.g., random English sentences or random English novels, then (ignoring any “merely practical” problems, like the availability of adequately tagged training data) it is trivially true that the latter is a smaller problem than the former, based entirely on the size of each respective corpus. That is, there are far more English sentences than there are English novels.

If we want to confine ourselves to, for example, just 19th Century English sentences and 19th Century English novels (which both have the advantage of being static datasets), then even if we want to believe, for some reason, that the structure of each 19th Century English novel is unique and irreducible…a palpable falsehood…then it’s still a smaller problem because the amount of complexity is necessarily no greater than the amount of information in the texts and, again, the size of the corpus of 19th Century English sentences in general is vast compared to the corpus of 19th Century English novels. So unless you can come up with some compelling argument why we should believe that the former corpus is just far more reducible (despite being broader in subject matter, diction, grammar, and so on) than the latter, then purely from information theory novels are a smaller problem.

So if you’re arguing that the complexity of any one specific English sentence or paragraph is less than any one specific English novel then yeah. But that’s not the relevant proposition to consider in this context.

We can also argue about how general a general language model is and the practical difficulties of producing sufficiently tagged and categorized training data and so on, but none of that really changes the fundamental point: building a method to solve a general problem is almost always harder than building a method to solve a specific problem or a specific subset of problems.

2 Likes

Van Gogh was financially successful. He just happened to be dead at the time. Not shifting units while he was alive was an admittedly innovative strategy, but look at the payoff!

-Wade

3 Likes

Wow! Lots of great points being made here, too many to respond to in detail. I will make an exception for jbg’s confident assertion that novels are simpler than sentences because there are fewer novels than sentences. I find the assertion that the whole is less than any single part, much less the sum of the parts, to be absurd. I suggest that jbg rephrase this claim, as it is obvious that jbg knows the subject well and is attempting to express a truth.

I can offer four general reactions to many of the points made here recently:

First
Some authors greatly underestimate the complexity of stories. Stories constitute, in my opinion, one of the basic data structures of human cognition. Most sentences are structured in a story-like form. Thus, it is certainly possible for a computer to compose a one-sentence story – that was accomplished long ago. But the kind of story that we are concerned with here – a novel – is immensely more complex than a single sentence.

I suspect that some of the underestimation of the intellectual content of a story arises from unfamiliarity. Since everybody experiences zillions of stories in their lifetime, it is all too easy to overestimate one’s own grasp of the medium. But experience with a construction does not confer knowledge of how to construct it; an office worker who uses the computer all day long does not automatically know how to program it. In the same fashion, consumers of stories are not automatically qualified to create stories. I was not trained in the writing of fiction, so it took me a long time to learn the subject. I’ve spent forty years learning about narrative theory, and I have the highest respect for good storytellers and a great deal of humility in discussing matters with such people.

I urge readers to avail themselves of the vast literature about narrative. I confess that a frustratingly large portion of this literature is dreck. There’s academic stuff that seems to me to address issues of concern only to other academics. There’s a mountain of “how to write a best-selling novel” crap out there. But there’s also some really good stuff.

Second
Some of the comments here grossly underestimate the complexity of the human mind, claiming that AI will surely equal its performance someday. They do not realize just how complicated the mind is. A neural net does not come close to mimicking the performance of the human mind. Note that I use the word “mind” instead of “brain”. That’s because the mind consists of much more than the brain. Almost everything that goes on in the body has some influence upon the mind. Subtle changes in the chemical composition of the blood can dramatically affect mental behavior. If you doubt this, try injecting some testosterone into the bloodstream of any male and watch his behavior change.

The social and emotional universe in which the human mind operates is immensely complex. Consider, for example, what is required to understand the following sentence:

“I was so excited to see my wife while driving home at 3 AM after concluding a long business trip a few days early, but when I saw my best friend’s car parked in our driveway, I didn’t know whether to feel fury or grief.”

No AI can understand this without first grasping a great deal of knowledge about human sexuality, daily routines, and a myriad of other considerations. Language mirrors our perceived reality, and so in order to understand language, an AI must understand the entirety of human reality. Have you any idea of how big that reality is?

Third
Nearly sixty years ago, Joseph Weizenbaum reported on a program he wrote called “ELIZA”. This simple program executed the basic principles of Rogerian therapy in a conversation with the user. The effect on society was electric: the news media gushed over the prospect of computers replacing psychotherapists. We look back and laugh at the simple naïveté of those people – just as people fifty years from now will look back and laugh at our simple naïveté.
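ELIZA’s core trick really was that simple. A minimal sketch in the spirit of Weizenbaum’s pattern-and-reflection approach (the rules below are invented stand-ins, not his original script):

```python
import re

# ELIZA-style rules: match a pattern, reflect pronouns, and turn the
# user's statement back as a Rogerian question. Weizenbaum's actual
# script had many more rules; these few are illustrative.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def respond(statement):
    for pattern, template in RULES:
        m = pattern.match(statement)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default non-committal prompt

print(respond("I feel ignored by my computer"))
# -> Why do you feel ignored by your computer?
```

Even this handful of rules produces passably “therapeutic” replies, which is precisely why the reaction was so outsized.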

Fourth
A common error is to assume that a model that produces kinda-sorta good results will, with some tuning, produce perfect results. For example, the Ptolemaic model of the solar system, positioning the earth at the center of the solar system, had some serious flaws, but with just two basic modifications (the epicycle and the equant) it was able to produce predictions that pretty closely matched observations. People spent the next thousand years fiddling around with additional patches to improve the model, but just couldn’t seem to get it right. (Ironically, the Copernican model generated results that were no better than those of the full Ptolemaic model, because Copernicus assumed orbits that were perfect circles.) So here we are with AI systems that produce kinda-sorta good results, and optimists are confident that, with a few tweaks here and a few adjustments there, we’ll nail it. Ptolemy would be cheering them on.

Conclusion: Some intellectual humility is called for.

3 Likes

I actually have been reading quite a few of these books lately. I’m certainly interested in getting a few more recommendations, if you don’t mind. [1]

A lot of gems in that post, but must AI really understand the entirety of human nature? There are a whole lot of narrative works that I end up disliking because the author doesn’t seem to care.

This even includes TV shows and movies, where the broad strokes of the narrative beats may be acceptable (Blake Snyder’s Save the Cat) but the implementation was absolute junk. For example: The Rings of Power.

Or maybe the writing is fine, but the whole work feels episodic and disjointed. For example: Eragon.

If there were an easy solution to these kinds of problems, there’d be a lot more successful authors in this world, Racter style.

As far as weighting algorithms are concerned, IBM Watson and Google AlphaGo seem to be heading in the right direction. The problem is artistic style. “Can a computer do art?” is a much easier question to answer than “Is the resulting art any good?”

Yes, the computer can do art, but the resulting art isn’t any good, nor will it be anytime soon. But the claim that the AI must understand the whole human spectrum ignores the fact that even human authors specialize in various genres. [2]

[1]
My latest acquisition is a bunch of DM books of Random Encounters. I’ll probably layer it with narrative elements/techniques to enrich the templates.

[2]
I assume there are different levels of difficulty according to genre. For example: romance novels feel formulaic to me. Can a computer pick out patterns in works of that genre?

1 Like

There isn’t any single book that I can wholeheartedly recommend; the best I can do is point to some books that have useful ideas. However, most of these are rather old, as I did my most intense research during the 1990s.

Hamlet’s Hit Points, by Robin D. Laws. Has a lot of interesting ideas, but does get bogged down. I recommend.

Hero with a Thousand Faces, by Joseph Campbell. Something of a classic; a bit overdone, but still worth reading.

Hamlet on the Holodeck, by Janet Murray. A classic. Definitely worth reading.

The Literary Mind, by Mark Turner. Rather abstract. I don’t know whether to recommend.

Morphology of the Folktale, by Vladimir Propp. A classic, nearly a century old. Illuminating but unnecessary.

Computers as Theatre, by Brenda Laurel. Another classic, but short on technical content. Recommended.

On the Origin of Stories, by Brian Boyd. Evolution, cognition, and storytelling. Dense. I loved it, but other people may find it too indirect.

The Dynamics of Folklore, by Toelken. More anthropology than narrative.

The Storytelling Animal by Gottschall. Primarily about the connection between stories and psychology. Probably worth reading.

The 36 Dramatic Situations, by Polti. Old, but offers some useful ideas. Probably worth reading.

From Topic To Tale, by Vance. Not recommended. About storytelling in the Middle Ages.

The Folktale, by Stith Thompson. About folktales. These are important because they constitute a form of “genuine” story, not clever art but reflective of natural impulses. Definitely recommended.

The Singer of Tales, by Lord. How bards are able to tell long stories without strict memorization. Not directly pertinent, but interesting.

On Narrative, by Mitchell. A collection of essays, academic in style but containing some good ideas.

Narrative: A Critical Linguistic Introduction. Very academic and tedious. Almost worthwhile.

The Uses of Enchantment: The Meaning and Importance of Fairy Tales, by Bruno Bettelheim. Primarily about the psychological value of fairy tales for children, hence probably not of immediate value to you.

Interactive Storytelling, by Glassner. Meh.
Tell Me a Story, by Schank. Meh.
Character Development and Storytelling for Games, by Sheldon. Meh.
Interactive Storytelling for Video Games, by Lebowitz and Klug. Crap.

I’m sure that I’ve left out some of the books in my library, but my cataloging system is a bit off.

3 Likes

Yes, the Gothic Romance is a highly standardized format for which hundreds of titles are ground out every year. Because it is so standardized, I would expect it to be one of the first genres to be tackled by AI. Soap operas are another likely genre, largely because so many of them have been created all over the world for more than sixty years. Mysteries might seem to be equally accessible, but I have doubts arising from the highly specialized nature of their solutions. How can a system figure out the significance of the fact that the dog didn’t bark (Sherlock Holmes)? Or that the lady mentioned that she doesn’t like horses (Poirot)?
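At the surface level, picking out a genre’s stock phrases is indeed straightforward; a toy sketch counting word trigrams that recur across a made-up mini-corpus:

```python
from collections import Counter

# Count word trigrams that recur across several texts: a crude way to
# surface a genre's stock phrases. The "novels" below are invented
# one-liners standing in for a real corpus.

def trigrams(text):
    words = text.lower().split()
    return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

corpus = [
    "her heart pounded as the storm raged over the moor",
    "the storm raged and her heart pounded in the dark hall",
    "alone on the moor her heart pounded once more",
]

counts = Counter(t for text in corpus for t in trigrams(text))
stock_phrases = [t for t, c in counts.items() if c >= 2]
print(stock_phrases)  # the phrases shared across "novels"
```

Of course, this only finds the surface formula; spotting that the dog didn’t bark is another matter entirely.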

2 Likes

Individual sentences are less complex than individual novels. But the set of all English sentences contains more irreducible complexity (or information, or entropy, or however you wish to speak about it) than the set of all English novels.

Imagine you have an exam tomorrow. Would it be easier if you were to be tested on: the contents of every novel by Jane Austen; the contents of every 19th Century English novel; or the contents of every statement made in English during the course of the 19th Century (including all the aforementioned novels, plays, poetry, song, personal letters, telegrams, military dispatches, penny dreadfuls, newspaper columns, execution broadsheets, &c., &c., &c.)?

That’s a different proposition than if you were given the choice of being tested on one specific paragraph versus, say, Northanger Abbey.

2 Likes

I thought about the two stories for a while, and I concluded that the main difference between them (relating to the topic being discussed) is the level of sophistication and description used in their construction. It is this that potentially makes one story more “satisfying” than the other.

However, I believe that the levels of sophistication and/or description needed to tell a “satisfying” story greatly depend on who the reader is.

e.g. a 5-year-old isn’t likely to find <insert heavy literature masterpiece> very “satisfying”, nor is a lover of such heavy literature masterpieces likely to find titles aimed specifically at that 5-year-old that “satisfying” either. :slight_smile:

2 Likes