The Computational Literary Studies Debate: Thoughts from the Fall 2019 Class, Writing in Digital Humanities

Computational Literary Studies: Fresh Perspective or Potential Threat?

CLS, short for Computational Literary Studies, is, as aptly defined by Nan Z. Da, “the statistical representation of patterns discovered in text mining fitted to currently existing knowledge about literature, literary history, and textual production” (602). Despite this seemingly useful definition, the field carries a heavy stigma within the literary community: most literary scholars assume that CLS can only harm the overall integrity of literature. However, CLS’s capacity to analyze enormous numbers of texts may benefit the field by offering additional perspectives, such as predictive and topic modelling. In this paper, I will argue that it is integral to the future of literary studies to consider CLS as a tool when analyzing works of literature.

Ted Underwood’s discussion in “The Impact of Computational Methods in Literary Studies” explores the seemingly endless capabilities of computational analysis. Underwood argues that the evaluation of literature should shift toward, or at least explore, additional methods in order to keep up with the times. He also illustrates the value of computational analysis for discovering widespread patterns across thousands of books and for providing another perspective on classic literature. Countering the assumption that “these sorts of objective data models won’t capture the blurriness and ambiguity of literary concepts,” Underwood asserts, “I think these methods can be really good at blurriness and ambiguity” (6). One of the models he uses as an example is topic modelling, a method of text mining that dissects literary structure in order to discover hidden themes amongst a group of texts. He also introduces predictive modelling, a way to foresee themes or outcomes that may arise in a text based on the word patterns in the work. CLS’s statistical reasoning is simply another way of presenting evidence, and evidence is essential to the integrity of a literary analysis. Throughout the article, his emphasis lies on the abilities of computational methods that the general public has yet to take advantage of.

Da’s essay “The Computational Case Against Computational Literary Studies” systematically analyzes CLS and expresses her view that computerized interpretation of literature is overwhelmingly ineffective. Da states that CLS’s interpretational abilities are far too limited to truly understand a text, thereby invalidating any computational method’s legitimacy as a way of reviewing literature. “CLS’s processing and visualization of data are not interpretations and readings in their own right,” she writes; “To believe that is to mistake basic data work” (Da 606). For Da, CLS cannot understand on its own; its output is inauthentic and derivative. Her main argument against computational methods is the lack of originality in CLS analysis, which she considers mere regurgitation of the original text. Da repeatedly circles back to this concept as her focal point, interspersing data to back her claims.

One of the controversial facets of CLS is its ability to understand and authentically summarize a text. Nan Z. Da claims that computational methods are plainly inferior, stating, “Computational literary criticism is prone to fallacious overclaims or misinterpretations of statistical results…, making claims based purely on word frequencies without regard to position, syntax, context, and semantics” (Da 610). Her strong stance towards the sphere of CLS reflects a well-established stigma that is constantly circulated in the literary community.

The most commonly cited method of computational studies is the word counting model, which counts the repetition of words across a text. Underwood acknowledges the seeming oversimplification associated with the field. He contrasts previous notions of CLS with its abilities today: “The methods we’re drawing from machine learning can model a phenomenon that is as complicated as say a literary genre. But in acknowledging the complexity of the problem you tend to lose your ability to draw simple causal conclusions” (Underwood 8). Underwood has found that topic modelling and predictive models, which are new to the world of CLS, have brought new meaning to how text is computationally analyzed. With these new modelling approaches, however, comes the complexity of mastering a new type of literary comprehension, which poses an interesting question: have we, as literary consumers, overcomplicated our notions of texts and literature? The sphere of computational literary studies is attempting to broaden its ability to make connections and understand more complex data while keeping up with today’s technology.
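To make the word counting model concrete, here is a minimal sketch in Python (a language commonly used for text mining, though neither author prescribes a particular tool). It illustrates exactly the property Da criticizes: the count records word frequencies while discarding position, syntax, context, and semantics.

```python
import re
from collections import Counter

def word_frequencies(text):
    # Lowercase the text and split it into words, ignoring punctuation.
    # The result is a "bag of words": order and grammar are thrown away.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

freq = word_frequencies("It was the best of times, it was the worst of times.")
print(freq.most_common(3))  # the three most frequent words and their counts
```

Running this on the opening of A Tale of Two Cities counts “it,” “was,” “the,” “of,” and “times” twice each, but nothing in the output distinguishes “best of times” from “worst of times”; that contrast is precisely what a human reader supplies and a bare frequency table cannot.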

A popular debate amongst users and critics of computational methods is the idea of man versus computer. Da writes, “The CLS papers I studied sort into two categories. The first are papers that present a statistical no-result finding as a finding; the second are papers that draw conclusions from its findings that are wrong” (Da 607). As this statement reflects, Da finds computational methods wholly detrimental to the study of literature. Her harsh words also depict two common worries amongst literary experts: the fear that machines are inadequate to understand classic literature, and the fear that, one day, they might replace human analysis. Ted Underwood disagrees, stating, “The point is not that computers are going to give us perfect knowledge but that we’ll discover how much we don’t know” (9). In order to remain current with the pace of today’s technology, the literary world needs to at least consider this method of understanding. The importance of computational literary studies is not to invalidate the human perspective but, perhaps, to form another pathway of comprehension. CLS does not have to be the be-all and end-all of literary critical analysis; its models can instead be respected as an additional lens.

There is still much work to be done to fine-tune the functionality of computational literary studies. Although it can connect thousands of texts at once in a way humans cannot, the field is hindered by its inability to engage in qualitative reasoning. Many argue that this limitation alone should exclude it from consideration among perspectives on literary texts. Another claim against computational methods is that CLS models, like word counting and data mining, oversimplify the understanding of literature and, in turn, threaten the future of classic literary analysis. Despite the stigma surrounding Computational Literary Studies, the field should still be considered a useful extension when attempting to understand literature.

Works Cited

Da, Nan Z. “The Computational Case against Computational Literary Studies.” Critical Inquiry, vol. 45, no. 3, 2019, pp. 601–639, doi:10.1086/702594.

Underwood, Ted. “‘They Have Completely Changed My Understanding of Literary History.’ Ted Underwood on the Impact of Computational Methods in Literary Studies.” Textpraxis, vol. 8, no. 3, 2017, doi:10.17879/00289544084.
