In the Margins: Lexos Manual
Scott Kleinman, Mark D. LeBlanc, Michael Drout
The term “lexomics” was originally used to describe the computer-assisted detection of “words” (short sequences of bases) in genomes,* but we have extended it to apply to literature, where lexomics is the analysis of the frequency, distribution, and arrangement of words in large-scale patterns. Using statistical methods and computer-based tools to analyze data retrieved from electronic corpora, lexomic analysis allows us to identify patterns of vocabulary use that are too subtle or diffuse to be perceived easily. We then use the results derived from statistical and computer-based analysis to augment traditional literary approaches including close reading, philological analysis, and source study. Lexomics thus combines information processing and analysis with methods developed by medievalists over the past two centuries. We can use traditional methods to identify problems that can be addressed in new ways by lexomics, and we also use the results of lexomic analysis to help us zero in on textual relationships or portions of texts that might not previously have received much attention.
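As a minimal sketch of the kind of word-frequency counting that lexomic analysis begins with (this is an illustration only, not code from Lexos; the tokenization rule and sample text are assumptions):

```python
import re
from collections import Counter

def word_frequencies(text):
    """Count how often each word appears, ignoring case and punctuation.

    Lowercases the text and treats any run of letters as a word; real
    lexomic workflows would use more careful, language-aware tokenization.
    """
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words)

# Hypothetical sample text for demonstration.
sample = "We then use the results; the results augment the analysis."
freqs = word_frequencies(sample)
print(freqs.most_common(3))
```

Frequency tables like this, computed for successive segments of a text, are the raw material for the statistical comparisons described above.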
More information can be found on the Lexomics website.