This page provides definitions for the terms used within the Lexos suite and disambiguates terms drawn from natural language, programming languages, and linguistic analysis. New entries are added on an ongoing basis.
Agglomerative Hierarchical Clustering
Agglomerative Hierarchical Clustering is a bottom-up method of analysis wherein each document begins as its own cluster; the closest clusters are then merged step by step until a single cluster contains all documents.
Average Linkage
Also called UPGMA (Unweighted Pair Group Method with Arithmetic mean), this linkage defines the distance between two clusters as the mean distance between all pairs of points across them, making it an unweighted hybrid of the complete and single linkages.
Bray-Curtis Dissimilarity
The Bray-Curtis dissimilarity is a standardized form of Manhattan distance. It is not a true metric; rather, it is the proportion of values not shared between two points, or, equivalently, the sum of absolute differences divided by the sum of all values. Points farther from the origin have a greater effect on the proportion.
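As a minimal sketch (assuming NumPy and SciPy, which are not part of the glossary itself), the dissimilarity can be computed by hand and checked against SciPy's built-in braycurtis; the vectors are arbitrary illustrations:

```python
import numpy as np
from scipy.spatial.distance import braycurtis

u = np.array([1, 5, 3, 3, 1], dtype=float)
v = np.array([0, 2, 1, 0, 1], dtype=float)

# Sum of absolute differences over the sum of all values.
manual = np.abs(u - v).sum() / (u + v).sum()
print(manual, braycurtis(u, v))  # the two values agree: 9/17
```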
Canberra Distance
Canberra distance is a weighted version of the Manhattan distance. Instead of summing the absolute differences directly, it sums each absolute difference divided by the sum of the corresponding absolute values. This weighting makes the metric very sensitive to differences between points near the origin.
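A minimal sketch along the same lines, checked against SciPy's canberra; the vectors are arbitrary, and components where both values are zero are treated as contributing zero:

```python
import numpy as np
from scipy.spatial.distance import canberra

u = np.array([1.0, 5.0, 3.0, 0.0])
v = np.array([2.0, 4.0, 3.0, 1.0])

# Sum of |u_i - v_i| / (|u_i| + |v_i|); 0/0 components count as 0.
with np.errstate(invalid="ignore"):
    ratios = np.abs(u - v) / (np.abs(u) + np.abs(v))
manual = np.nansum(ratios)
print(manual, canberra(u, v))  # both give 1/3 + 1/9 + 0 + 1
```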
Character
A character is any individual symbol. The letters that make up the Roman alphabet are characters, as are non-alphabetic symbols such as the Hanzi used in Chinese writing. In Lexos, the term character generally refers to countable symbols.
Chebyshev Distance
The Chebyshev distance ignores all but the greatest component difference between two vectors. It can be compared to Euclidean and Manhattan distances: where Euclidean distance permits movement in any direction and Manhattan distance permits only square-grid (orthogonal) movement, Chebyshev permits eight directions of movement (orthogonal and diagonal), like a king on a chessboard. It sees reliable use only in very niche circumstances.
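A minimal sketch, assuming NumPy and SciPy; only the largest component difference survives:

```python
import numpy as np
from scipy.spatial.distance import chebyshev

u = np.array([1, 5, 3, 3, 1])
v = np.array([0, 2, 1, 0, 1])

# Only the single greatest component difference matters.
manual = np.max(np.abs(u - v))
print(manual, chebyshev(u, v))  # both give 3
```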
Complete Linkage
Also called the ‘farthest neighbour’ algorithm, this linkage defines the distance between two clusters as the greatest distance between any pair of points across them. It tends to produce spherical clusters of similar diameter.
Correlation Distance
Correlation distance is equivalent to Cosine distance after the vectors have been shifted by their means. Correlation distance metrics perform well in very high dimensions with few null values.
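The equivalence with Cosine distance can be verified directly with SciPy (an illustration, not Lexos code):

```python
import numpy as np
from scipy.spatial.distance import correlation, cosine

u = np.array([1.0, 5.0, 3.0, 3.0, 1.0])
v = np.array([0.0, 2.0, 1.0, 0.0, 1.0])

# Correlation distance equals cosine distance of the mean-centered vectors.
print(correlation(u, v))
print(cosine(u - u.mean(), v - v.mean()))  # same value
```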
Cosine Distance
Cosine distance measures the angle formed by two vectors drawn from the origin; it judges only the orientation of points in space, not their magnitudes. It is related to Euclidean distance through the dot product: for vectors of unit length, the squared Euclidean distance is exactly twice the cosine distance. Cosine distance is best for working in very high dimensions, especially if there are many null values in the vectors.
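A minimal sketch of the definition, checked against SciPy's cosine; the vectors are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import cosine

u = np.array([1.0, 5.0, 3.0, 3.0, 1.0])
v = np.array([0.0, 2.0, 1.0, 0.0, 1.0])

# One minus the cosine of the angle between the two vectors.
manual = 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(manual, cosine(u, v))  # the two values agree
```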
Cutting
The process of creating multiple documents/segments from a source file.
Dendrogram
The dendrogram is a method of visualizing how closely related documents are in a hierarchical clustering analysis. The name derives from ‘dendron’, Greek for ‘tree’, and dendrograms are indeed rooted trees (a type of mathematical object). Each document, or leaf, of the tree is connected to every other by a series of branches, and the height of each branch corresponds to the distance between the two clusters it joins. One popular use for dendrograms is the so-called ‘tree of life’, which shows how various species, genera, families, etc. are related.
Distance Metric
The distance metric is the method used to compare two documents. Document data is stored in vectors, where each index contains the number of times a specific term appears. For example, the sentence “The buffalo from Buffalo who buffalo buffalo from Buffalo buffalo buffalo from Buffalo” would have the vector <The:1, buffalo:5, from:3, Buffalo:3, who:1>. Comparing this against a ‘sentence’ with no terms is equivalent to finding the distance between the vectors <0,0,0,0,0> and <1,5,3,3,1>. The way the comparison is measured (distance between two points, number of differing words, etc.) is the distance metric.
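As a sketch of how such a term vector might be built (standard-library Python, not Lexos's own code), using the sentence from the definition above:

```python
from collections import Counter
import math

sentence = ("The buffalo from Buffalo who buffalo buffalo "
            "from Buffalo buffalo buffalo from Buffalo")

# Case-sensitive term counts, matching the vector in the definition.
counts = Counter(sentence.split())
print(counts)  # buffalo: 5, from: 3, Buffalo: 3, The: 1, who: 1

# Distance from the all-zero vector under the Euclidean metric.
print(math.sqrt(sum(n * n for n in counts.values())))  # sqrt(45)
```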
Divisive Hierarchical Clustering
This top-down clustering method begins with all documents in one cluster, then uses an algorithm to divide them until each document is in its own cluster.
Document
In Lexos, a document is any collection of words (known as terms in Lexos) or characters that together form a single item within the Lexos tool. A document is distinct from a file: the term document refers specifically to the items manipulated within the Lexos software suite, whereas file refers to the items that are either uploaded from or downloaded to a user’s device.
Euclidean Distance
The Euclidean distance is the 'intuitive' way of measuring distance: the length of a straight line between two points. This metric is one of the most widely used due to its reliability and simplicity.
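A minimal sketch, checked against SciPy's euclidean; the vectors are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import euclidean

u = np.array([1.0, 5.0, 3.0, 3.0, 1.0])
v = np.array([0.0, 2.0, 1.0, 0.0, 1.0])

# Length of the straight line between the two points.
manual = np.sqrt(np.sum((u - v) ** 2))
print(manual, euclidean(u, v))  # the two values agree
```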
Exclusive Cluster Analysis
File
File refers to items that can be manipulated through the file manager on a user’s computer (e.g. Windows Explorer, Archive Manager). File is only used in the Lexos suite when referring to functions that involve the user’s file system, such as uploading or downloading.
Flat Cluster Analysis
Hapax Legomenon
A term occurring only once in a document or corpus.
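A sketch of how hapaxes might be found with the standard library (an illustration, not Lexos code):

```python
from collections import Counter

tokens = "the cat sat on the mat and the dog sat too".split()

# A hapax legomenon is any term with a count of exactly one.
counts = Counter(tokens)
hapaxes = [term for term, n in counts.items() if n == 1]
print(hapaxes)  # ['cat', 'on', 'mat', 'and', 'dog', 'too']
```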
Hamming Distance
Hamming distance is similar to Jaccard, but it also considers shared absences, and it ignores abundance entirely in favour of presence or absence. Its primary function is to determine the number of components that differ in being null or valued. This is the best test for determining the similarity between two vocabularies. In Lexos, the Hamming distance is treated as a proportion instead of a raw count.
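A sketch under the reading given above, in which vectors are first reduced to presence/absence; SciPy's hamming then returns the differing proportion directly:

```python
import numpy as np
from scipy.spatial.distance import hamming

u = np.array([0, 5, 3, 0, 1])
v = np.array([0, 2, 3, 1, 1])

# Reduce each vector to presence/absence, then take the proportion of
# components where one vector has a term and the other does not.
present_u, present_v = u != 0, v != 0
print(hamming(present_u, present_v))  # 0.2 (only the fourth component differs)
```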
Hierarchical Cluster Analysis
Hierarchical Clustering is a bottom-up method of analysis wherein the distance between each pair of documents is calculated and stored in a matrix, which is then reduced by iterating a linkage algorithm. This reduction yields the branch heights and divisions that are represented by a dendrogram. This method of analysis is deterministic, producing consistent results for the same input.
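The procedure can be sketched with SciPy (an illustration, not Lexos's internal code); the document vectors are invented, and pdist and linkage perform the matrix construction and reduction described above:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Rows are documents, columns are term counts (invented for illustration).
docs = np.array([[1, 5, 3, 3, 1],
                 [0, 2, 1, 0, 1],
                 [1, 4, 3, 2, 1],
                 [5, 0, 0, 1, 4]], dtype=float)

# Pairwise distance matrix (condensed form), then iterative linkage.
distances = pdist(docs, metric="euclidean")
merges = linkage(distances, method="average")  # or "single", "complete"

# Each row of `merges` records one merge: the two clusters joined,
# the distance (branch height), and the size of the new cluster.
print(merges)
# scipy.cluster.hierarchy.dendrogram(merges) would draw the tree.
```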
Jaccard Distance
Derived from Bray-Curtis, the Jaccard distance is the ratio of the size of the symmetric difference to the size of the union of the two points’ non-zero vector components. Unlike the Bray-Curtis dissimilarity, Jaccard is a true metric. Its primary use is to measure the dimensions unique to each vector.
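A sketch under the presence/absence reading above, checked against SciPy's jaccard; the vectors are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import jaccard

u = np.array([0, 5, 3, 0, 1])
v = np.array([0, 2, 3, 1, 1])

# Symmetric difference over union of the components that are
# non-zero in at least one of the two vectors.
present_u, present_v = u != 0, v != 0
sym_diff = np.sum(present_u != present_v)
union = np.sum(present_u | present_v)
print(sym_diff / union, jaccard(present_u, present_v))  # both 0.25
```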
Keepwords
Keepwords are the opposite of stopwords. When scrubbing with the keepwords option, all terms except keepwords will be deleted. See stopwords.
Lemma
The dictionary headword form of a word. For instance, “cat” is the lemma for “cat”, “cats”, “cat’s”, and “cats’”. Lemmas are generally used to consolidate grammatical variations of the same word as a single term, but they may also be used for spelling variants.
Lexomics
The term “lexomics” was originally used to describe the computer-assisted detection of “words” (short sequences of bases) in genomes,* but we have extended it to apply to literature, where lexomics is the analysis of the frequency, distribution, and arrangement of words in large-scale patterns. Using statistical methods and computer-based tools to analyze data retrieved from electronic corpora, lexomic analysis allows us to identify patterns of vocabulary use that are too subtle or diffuse to be perceived easily. We then use the results derived from statistical and computer-based analysis to augment traditional literary approaches including close reading, philological analysis, and source study. Lexomics thus combines information processing and analysis with methods developed by medievalists over the past two centuries. We can use traditional methods to identify problems that can be addressed in new ways by lexomics, and we also use the results of lexomic analysis to help us zero in on textual relationships or portions of texts that might not previously have received much attention.
Manhattan Distance
The Manhattan, or Taxicab, distance is so named because, unlike Euclidean distance, which 'goes as the crow flies', it defines length as the shortest path between two points along a grid, making it more comparable to a taxicab's route through Manhattan. This metric is equivalent to measuring the area between two distribution curves (cf. Riemann sums). Manhattan distance is well suited to points with fewer dimensions.
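A minimal sketch, checked against SciPy's cityblock (its name for Manhattan distance); the vectors are arbitrary:

```python
import numpy as np
from scipy.spatial.distance import cityblock

u = np.array([1, 5, 3, 3, 1])
v = np.array([0, 2, 1, 0, 1])

# Sum of absolute component differences: the 'taxicab' path length.
manual = np.sum(np.abs(u - v))
print(manual, cityblock(u, v))  # both give 9
```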
N-gram
An n-gram is a sequence of n consecutive tokens. The tokens can be characters or larger units (e.g. space-bounded strings, typically equivalent to words in Western languages). An n-gram of length one is described as a 1-gram or unigram; there are also 2-grams (bigrams), 3-grams (trigrams), 4-grams, and 5-grams. Larger n-grams are rarely used. Using n-grams to create a sliding window of characters in a text is one method of counting terms in non-Western languages (or DNA sequences) where spaces or other markers are not used to delimit token boundaries.
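A sketch of the sliding-window idea; char_ngrams is a hypothetical helper written for illustration, not a Lexos function:

```python
def char_ngrams(text: str, n: int) -> list[str]:
    """Slide a window of n characters across the text."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

print(char_ngrams("lexos", 2))  # ['le', 'ex', 'xo', 'os']
print(char_ngrams("lexos", 3))  # ['lex', 'exo', 'xos']
```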
Overlapping Cluster Analysis
Partitioning Cluster Analysis
Rolling Window Analysis
Segment
After cutting a text in Lexos, the separated pieces of the text are referred to as segments. However, Lexos treats segments as documents, and they may be referred to as documents when the focus is not on their being part of the entire text.