I’m very interested in Andrew Piper’s new work on the “conversional novel”:
> My approach consisted of creating measures that tried to identify the degree of lexical binariness within a given text across two different, yet related dimensions. Drawing on the archetype of narrative conversion in Augustine’s Confessions, my belief was that “conversion” was something performed through lexical change — profound personal transformation required new ways of speaking. So my first measure looks at the degree of difference between the language of the first and second halves of a text and the second looks at the relative difference within those halves to each other (how much more heterogeneous one half is than another). As I show, this does very well at accounting for Augustine’s source text and it interestingly also highlights the ways in which novels appear to be far more binarily structured than autobiographies over the same time period. Counter to my initial assumptions, the ostensibly true narrative of a life exhibits a greater degree of narrative continuity than its fictional counterpart (even when we take into account factors such as point of view, length, time period, and gender).
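Piper's actual feature set and distance metric are his own; but as a rough illustrative sketch (not his implementation), the two measures he describes can be approximated with plain cosine distance over raw word counts, like so:

```python
# Illustrative sketch only: Piper's published method differs in its
# preprocessing, segmentation, and choice of distance metric.
from collections import Counter
from itertools import combinations
import math

def _cosine_distance(a, b):
    """Cosine distance between two Counter word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb) if na and nb else 0.0

def cross_half_distance(words):
    """First measure: how different is the vocabulary of the text's two halves?"""
    mid = len(words) // 2
    return _cosine_distance(Counter(words[:mid]), Counter(words[mid:]))

def half_heterogeneity(words, n_chunks=5):
    """Mean pairwise distance among equal-sized chunks of one half.
    Comparing this value across the two halves gives the second measure:
    how much more internally varied one half is than the other."""
    size = max(1, len(words) // n_chunks)
    chunks = [Counter(words[i * size:(i + 1) * size]) for i in range(n_chunks)]
    pairs = list(combinations(chunks, 2))
    return sum(_cosine_distance(a, b) for a, b in pairs) / len(pairs)
```

On this toy metric, a text whose vocabulary pivots at the midpoint scores near 1 on the cross-half measure, while a text that repeats the same vocabulary throughout scores near 0; a "conversional" text in Piper's sense would show both a high cross-half distance and an asymmetry in heterogeneity between the halves.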
Now — and here’s the really noteworthy part — Piper defines “conversional” largely in terms of lexical binaries. So a novel in which a religious conversion takes place — e.g., Heidi — is no more conversional than a novel whose pivot is not religious but rather involves a move from nature to culture or vice versa — e.g., White Fang.
As I say, I’m interested, but I wonder whether Piper isn’t operating here at a level of generality too high to be genuinely useful. Computational humanities in this vein strikes me as a continuation — a very close continuation — of the Russian formalist tradition: Vladimir Propp’s Morphology of the Folktale — especially the section on the 31 “functions” of the folk tale — seems to be a clear predecessor here, though perhaps the more radical simplifications of Greimas’s semiotic square are an even more direct model.
Two thoughts come to mind at this point:
- I would love to see what would happen if some intrepid computational humanists got to work on filtering some large existing corpus of texts through the categories posited by Propp, Greimas, et al. Were the ideas that emerged from that tradition arbitrary and factitious, or did they draw on genuine, though empirically unsupported, insights into how stories tend to work?
- A good many people these days emphasize the differences between traditional and computational humanists, but the linkage I have just traced between 20th-century formalists and Piper’s current work makes me wonder if the more important distinction isn’t Darwin’s opposition of lumpers and splitters — those who, like Piper in this project, are primarily interested in what links many works to one another versus those who prefer to emphasize the haecceity of an individual thing … which of course takes us back to the ancient universals/particulars debate, but still….
UPDATE: via Ted Underwood on Twitter, I see some people are already moving towards Propp-inspired ways of doing DH: see here and here.