Systematic reviews were undertaken using key-word searches in two databases: Lafayette College’s electronic library database and Google Scholar. Key words were originally derived from pre-conceived notions of what was integral to the subject matter, and then iterated as I learned more.[ii] Articles were then included in reviews based on several criteria: being peer-reviewed, being a literature review of the sub-disciplines this project includes, or being a paradigmatic case of the types of theories these analyses will treat. A high citation count was also an important factor, as citation counts are connected with impact in the academic field and act as a significant consideration in Google Scholar’s ranking algorithms (Orlikowski, 2007, pp. 1439); a low citation count, however, was not sufficient in itself to disqualify a source.
Section One: Sentiment Analysis and Opinion Mining:
A Literature Review[iii]
Rigorous study of public moods by the Digital Humanities is intimately intertwined with the study of public marketplaces (Medhat et al., 2014 pp. 1094; Chen & Chen, 2017 pp. 1-2; Gopaldas, 2014 pp. 1010). In recent years “sentiment analysis” or “opinion mining,” a form of natural language processing designed to extract and aggregate individual sentiments from large collections of source data, has become integral to public market analysis due to a recently discovered, reliable correlation between social media sentiment and marketplace sentiment (Kaur & Gupta, 2013 pp. 367; Liu and Zhang, 2012 pp. 415-416; Vinodhini & Chandrasekaran, 2012 pp. 282). This automated processing can take three forms: lexical, machine-learning, or hybrid processing (Kaur & Gupta, 2013 pp. 368; Heimann et al., 2014 pp. 83-88; Chen & Chen, 2017 pp. 3).
Lexical analysis involves using a pre-generated dictionary of terms or text features with already established sentiment values as a baseline for interpreting the meaning or sentiment of the subject media; by picking out and aggregating the dictionary terms present in a social media data set, one can begin to extract meaning from it (Liu & Zhang, 2012 pp. 429-431; Heimann et al., 2014 pp. 82-83). Machine learning approaches, on the other hand, involve a computer picking out and grouping words or features of a media source based on any number of grammatical or linguistic relations to other words or features; an algorithm or human then assigns these groups topics or sentiments and thereby interprets the social media (Medhat et al., 2014 pp. 1095; Vinodhini & Chandrasekaran, 2012 pp. 283-284). Hybrid models combine aspects of both procedures (Heimann et al., 2014 pp. 85-86; Cambria et al., 2013 pp. 18-19; Medhat et al., 2014 pp. 1098-1100).
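The lexical procedure can be sketched in a few lines of Python. The mini-dictionary and the sample post below are hypothetical stand-ins for the pre-generated lexicons and social media data a real system would use; this is a minimal illustration, not any particular published implementation.

```python
# Hypothetical toy lexicon; real systems use large pre-built resources
# with established sentiment values for thousands of terms.
SENTIMENT_LEXICON = {
    "good": 1.0, "great": 2.0, "love": 2.0,
    "bad": -1.0, "terrible": -2.0, "hate": -2.0,
}

def lexicon_score(text):
    """Pick out known dictionary terms in a post and aggregate
    their sentiment values (mean of matched terms; 0.0 if none)."""
    tokens = text.lower().split()
    hits = [SENTIMENT_LEXICON[t] for t in tokens if t in SENTIMENT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

# Example: "I love this great product" matches "love" (2.0) and
# "great" (2.0), so the aggregate score is 2.0.
```

A machine learning approach would instead learn which features predict sentiment from labeled examples, rather than consulting a fixed dictionary.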
It is also worth noting that at any level—creating the pre-existing dictionary or topic algorithm, applying it, and interpreting the results—neural network learning algorithms can be, and are, used to improve the accuracy of overall results (Chen & Chen, 2017 pp. 3; Heimann et al., 2014 pp. 87-88). However, since neural networks are black boxes whose generated precepts cannot be verified, they are difficult to apply to distant reading techniques (Shwartz-Ziv, 2017, pp. 1). The incredible complexity—and therefore variability—of the linguistic communication of affect makes it hard to train neural networks for generalized use, a difficulty compounded by the difficulty of generating a concrete taxonomy for the very same subject matter.
Each of these methodologies has problems. The most important problem for lexical analysis is “sentiment classification”: getting a computer to recognize which sentiment a particular piece, be it a word, group of words, sentence, or entire corpus, evidences (Vinodhini & Chandrasekaran, 2012 pp. 288; Kaur & Gupta, 2013 pp. 368-369; Liu & Zhang, 2012 pp. 418). The complementary problem, recognizing when combinations of words change the sentiment of one particular word, is similarly large (Kaur & Gupta, 2013 pp. 368-369; Heimann et al., 2014 pp. 66-67). Machine methods, on the other hand, have trouble moving beyond the so-called “bag-of-words” assumption. The crudest topic modeling algorithms simply treat words as unrelated things: they group the words that appear with the greatest frequency and leave the classifying to you (Heimann et al., 2014 pp. 86; Chen & Chen, 2017 pp. 2-3). This is obviously sub-optimal; however, due to the complexity of meaning and the prevalence of widespread disagreement within semantics, accuracy can only be improved so far beyond bag-of-words in contemporary models (Vulic et al., 2015 pp. 136; Wallach et al., 2006 pp. 8; Arora et al., 2013 pp. 7-8). Contemporary hybrid models, in turn, must logically inherit at least one of these problems, assuming that (a) lexical and probabilistic models cannot solve each other’s problems without begging the question and (b) no spectacular application of neural nets exists to solve either problem outright. None of the literature surveyed disqualifies these assumptions.
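The “bag-of-words” assumption criticized above is easy to make concrete. The sketch below (with an invented toy corpus) discards word order and inter-word relations entirely, leaving only frequency counts for a human to classify; note that it cannot tell that “not happy” negates “happy.”

```python
from collections import Counter

def bag_of_words_topics(docs, top_n=3):
    """Crude 'topic' extraction: pool all words across documents,
    ignore order and relations entirely, and return the most
    frequent terms for a human to interpret."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return [word for word, _ in counts.most_common(top_n)]

# Toy corpus: the negation in the second document is invisible
# to the model, since only raw frequencies survive.
docs = ["markets are happy", "traders are not happy", "happy markets rally"]
```

Real topic models (e.g. the probabilistic models Arora et al. discuss) are more sophisticated, but this captures the assumption the cited literature is pushing against.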
Preliminary Motivation of Epistemology as a
Potential Contributor to Sentiment Analysis Problems
These problems are permutations of a single problem: how to extract social mood from the target media without close reading, given the complexity of language and human emotion. More specifically, they are both questions about how social mood might be instantiated, and how fine- or coarse-grained this instantiation might be, given the medium of language and how humans use it. Answering them seems to beget two further questions: what is a social mood, and how is it instantiated in social media? These are questions we, as human observers, can answer readily in practice without appealing to explicit decision procedures. However, as noted in the introduction, computers do not seem to have an analogous capacity. Generating axiomatic answers to these questions will therefore be of primary importance to researchers.
Said answers would also help researchers by potentially providing increased information about different types of public moods (long-term vs. short-term, or high-amplitude vs. low-amplitude, for example) as well as how they are distributed and interact across different social media (academic social media vs. more popular platforms, for example). Not only would this provide a way to program computers to accurately retrieve the information researchers desire, it would also provide a principled way to make the questions of sentiment analysis research explicit and cash out their implications by disambiguating previously polysemic terms. Plausibly, ontological analysis also provides utility complementary to linguistic analysis by giving us the basis to discover information about idiosyncratic use of interesting terms within populations (for example, how usage of the same academic terms differs between academic circles and the news outlets that report on them) and their variations of possible sentiment spectrums (for example, high affect may have a different implication on Twitter or Facebook than in business correspondence).
Section Two: The Ontology and Instantiation Questions Problematized
The problem with this analysis, of course, is that the axiomatizations we need are not immediately forthcoming. The ontology question seems to map roughly onto a subset of the sociology of groups. The instantiation question, in turn, seems to have a home in linguistics and semantics. However, a brief literature review of these corpora will show that no such answers immediately present themselves.
Question One: Ontology of Public Moods
Indeed, there is a surprising dearth of materials that deal directly with the ontology of “public mood,” “social mood,” or “social sentiment.” The literature that was found exhibits two apparent trends: social mood is universally taken to be a causal entity, and there is debate about the extent to which social mood is affective, which factors most strongly influence it, and just how much causal power it has. Within the literature this assumption of causality is often motivated by the failures of modern economic doctrine to make accurate predictions (Olson, 2006 pp. 193; Baddeley, 2010 pp. 282-283; Nofsinger, 2005 pp. 7-8, 15, 18-19, 23, 28).
Nofsinger and Durr both support the idea that social mood is meaningfully causal. Nofsinger does this by providing two sets of empirical studies: one supporting the idea that individual emotions change microeconomic outcomes and a second arguing that there are affective components to corporate maneuvers as well as stock market behavior (Nofsinger, 2003). Durr, on the other hand, creates his own domestic policy sentiment index and uses a longitudinal study to compare it with domestic policy outcomes, ultimately correlating certain types of affect with success in certain types of policies (Durr, 1993). However, while both studies label social mood “important,” their lack of interest in the constituents of social mood prohibits them from making any more precise claims.
Gopaldas provides a more in-depth theory. He argues that sentiments are collectively shared emotional dispositions that can “energize” practices and discourses (Gopaldas, 2014 pp. 1007, 1009). This, in turn, may legitimate practices or materialize sentiments (ibid.). He argues these sentiments are a synthesis of activist, brand, consumer, and “other” (of which he provides examples) expressions (Gopaldas, 2014 pp. 999). On his view, this analysis of public sentiments implies that sentiment analyses must disambiguate between different good and bad sentiments (Gopaldas, 2014 pp. 1010).
Baddeley provides a similarly detailed analysis, but from an evolutionary perspective. She argues that herding instincts meaningfully affect rational behavior and are an integral part of studying emotion in decision-making (Baddeley, 2010 pp. 281, 283). Further, she asserts that the study of affect in decision-making should not be confined to so-called “visceral” affective phenomena, but should extend to more subtle affective phenomena like herding, which influences iterated decision-making and observation (ibid.). Baddeley also provides a literature review of theories of the nature and motivation behind herding behavior, as well as suggestions for observable neurological correlates of these phenomena (Baddeley, 2010 pp. 283-285, 286). Finally, she summarizes the “rationale” for adopting herding as an evolutionary strategy—emphasizing increased social learning capabilities and cooperation—and posits that the use of these evolutionary drives in non-primitive settings is a possible reason for observed departures from idealized rationality (Baddeley, 2010 pp. 286).
One might note at this point that these analyses aren’t exactly helpful: they hint at sources for ontologies but provide nothing concrete. Indeed, Olson, who provides his own literature review of social mood, asserted as recently as 2006 that there was no rigorous social-scientific formula for the study of public mood (Olson, 2006 pp. 193). The first eight paragraphs of his publication support this conclusion by listing the failures of modern economic predictions and implying that they came to pass because modern economics does not take into account social mood, and emotion more generally (ibid.). To remedy this, Olson provides his own taxonomy of research questions for the study of public mood, together with a literature review of each category enumerated: emotion as a discrete entity, positive and negative affect, valence-arousal dimensions, neural systems in emotion, emotional contagion, the function of emotions, motivation, and personality traits.
Rahn et al., a source not cited by Olson, provide a complementary analysis. In particular, Rahn asserts that public mood is not usefully conceived as the average affect of a population but instead as the parts of a person’s phenomenology that arise from being part of a group (Rahn et al., 1996 pp. 33). To this end Rahn lists several factors they believe give rise to public mood judgments: positive and negative affect, personality traits, and motivation (Rahn et al., 1996 pp. 33). They then endeavor to create a survey that investigates these factors (Rahn et al., 1996 pp. 35). Ultimately, they cash out several concrete factors psychologists could measure and conclude that mood, in particular negative mood, is a strong and independent source of collectively motivated judgments (Rahn et al., 1996 pp. 48).
Question Two: Instantiation of Public Moods in Social Media
Linguistics is a massive field. Consequently, a full review of its contents is impossible here. What I will instead attempt is a highlight of theories that may provide axiomatizations amenable to computational analysis. The most straightforward example of this is “psycholinguistics,” which explicitly attempts to tie linguistic findings to their psychological underpinnings (Osgood & Sebeok, 1954, pp. 4). This discipline incorporates the paradigmatic linguistic data of phonemes and morphemes with psychological theories of learning such as classical and operant conditioning, semantic theories such as association and sign-gestalt theories, and probabilistic information theory (Osgood & Sebeok, 1954, pp. 10, 20, 22, 28, 29, 38). Due to the complexity of its inputs, several theories of the proper psycholinguistic methodology exist. These are theories of terms; within them there are many ways to solve the problems of language acquisition and meaning. However, since this is a preliminary investigation, there is no need to go into these particulars.
First among the ontological theories is the “synchronic” school, which argues that speech communities are knit together based on a mutual interpretation of taxis from so-called “bands” (Osgood & Sebeok, 1954, pp. 74). One band, for example, is the vocal-auditory band, which couples the semiotics of speaking with hearing (ibid.). Another is the gestural-visual band, which does the same for movement and visual stimuli (ibid.). In essence these bands are supposed to be blocs of information processing that the brain uses to derive semiotic types (Osgood & Sebeok, 1954, pp. 74). A second theory is sequential psycholinguistics, perhaps the most familiar form of linguistics, which attempts to understand linguistic behavior in terms of the inter-relations between linguistic categories (Osgood & Sebeok, 1954, pp. 93). In other words, sequential psycholinguistics attempts to understand when and why one particular type of word or sign occurs in a particular relation to another sign. This can be done from the top down or from the bottom up. Top-down analyses theorize using linguistic hierarchies, for example “in X language noun precedes verb because in X language the semiotics of nouns and verbs take the configuration of A and B” (Osgood & Sebeok, 1954, pp. 94). Bottom-up analyses reverse the process, analyzing a large corpus and deriving organizational semantics from empirically observed correlations (Osgood & Sebeok, 1954, pp. 95). Finally, there is diachronic psycholinguistics, which attempts to derive semiotics and linguistic ontologies by comparing subjects at different levels of language comprehension (Osgood & Sebeok, 1954, pp. 126). Its goal is to understand language through the lens of a more general learning theory (Osgood & Sebeok, 1954, pp. 126); ontologies and semiotics are therefore derived from a pre-existing set of ontologies taken from learning theory.
The literature reviewed, driven by an expectation of falsifiability, leans towards statistical sequential psycholinguistics, also known more simply as corpus linguistics (Gilquin & Gries, 2009, pp. 2; Arppe et al., 2010, pp. 1; Locke, 2009, pp. 37). Corpus linguistics treats data that is machine-readable, representative, and produced in a “natural communicative setting” (Gilquin & Gries, 2009, pp. 6; Arppe et al., 2010, pp. 3). Unfortunately, little headway has been made towards unifying corpus linguistics with cognitive linguistics (Gilquin & Gries, 2009, pp. 14; Arppe et al., 2010, pp. 16; Locke, 2009, pp. 37). Indeed, there appears to be active resistance to this unification.
Stokhof and van Lambalgen, for example, argue that a “naturalization” of linguistic theory would cripple linguistics itself because a great number of important linguistic concepts have been generated through “idealization” (Stokhof & van Lambalgen, 2011, pp. 21, 25). This process of idealization, according to the authors, is at odds with the natural method because it incorporates hardly any experiments, relies on interpretative explanation, and admits no strict laws (Stokhof & van Lambalgen, 2011, pp. 16). Converting linguistics to a natural theory would therefore drastically reduce its explanatory power.
Poeppel and Embick also highlight arguments against unification. They reiterate the argument above and introduce another problem they call the “granularity mismatch problem” (Poeppel & Embick, 2005, pp. 2-3). The granularity mismatch problem is this: there doesn’t seem to be any way to reconcile very fine-grained linguistic analysis with the broad conceptual distinctions made by neuroscience (ibid.). In essence, neuroscience is currently unable to distinguish between the physiological traces of different but categorically identical taxis—for example, a memory drug might be able to make you recall happy memories, but it seems implausible it could force you to remember a specific happy memory (Poeppel & Embick, 2005, pp. 6). Poeppel and Embick nevertheless hold out hope that reconciliation might be possible (Poeppel & Embick, 2005, pp. 12-14).
Despite this resistance, there is no dearth of work towards unification. Locke provides the beginning of such a program with his “Evolutionary Developmental Linguistics,” which functions mostly by providing evolutionary explanations for language competency. He also emphasizes the need for more study of infant development and fine-grained neuroimaging. Additionally, Casey and Long provide a literature review of meaning-making along this evolutionary bent, though they stop short of drawing implications. The vast majority of these frameworks, however, remains speculative.
Semantics is dominated by metaphor theory, which is in turn dominated by three views. The first is the analogy view. This view has a very strict definition of metaphor as something that discusses the relations between entities—essentially a colloquial permutation of a : b :: x : y (Holyoak & Stamenkovic, 2018 pp. 645; Glucksberg, 2011 pp. 7). The analogy theory is highly useful because of its amenability to different predicate and lambda calculi (Holyoak & Stamenkovic, 2018 pp. 646). It also provides for fairly easy third-party interpretation of conversations (Moser, 2000 pp. 4). Further, there is some correlation between proficiency in analogical reasoning and metaphor comprehension (Holyoak & Stamenkovic, 2018 pp. 646).
The second position is the categorization view, which also construes metaphor narrowly, arguing that metaphors are simply category statements (Glucksberg, 2011 pp. 3, 4; Holyoak & Stamenkovic, 2018 pp. 646-647). Evidence for this theory rests most prominently on the idea that metaphors are not reversible; for example, “my job is like a jail” makes sense while “my jail is like a job” does not (Glucksberg, 1990 pp. 7). On this view, therefore, metaphors are assertions of class-inclusion rather than simple similarity, and hence of categorization (ibid.). Because support for the theory rests almost entirely on this reversibility argument, the position remains tenuous.
If these analyses seem cursory, it is because the final view—mapping theory, pioneered by Lakoff—is by far the most commonly represented: of the twelve articles on metaphor surveyed, nine dealt with it. Metaphor, on this view, creates a mapping between two disparate ontological categories—it allows for the picking out of similarities between things not obviously similar to sense experience (Lakoff, 1993 pp. 2, 41-42; Holyoak & Stamenkovic, 2018 pp. 648). This is at least provisionally evidenced by the popularity of the Zaltman Metaphor Elicitation Technique (ZMET), which supposedly operates on these concepts (van Kleef et al., 2005 pp. 190). ZMET has seen much use in marketing since it was developed in the 1990s, but I was unable to find any evidence of its success in an ad campaign not authored by Zaltman himself, though it has been successful insofar as it has been adopted as a scholarly tool (see, for example, Christensen & Olson (2002) and Coulter et al. (2001)).
Lakoff has, in recent years, updated his own theory, creating the “neural theory” of metaphor. This theory takes his original account and draws analogies between the processes of mapping he articulated and the processes of neurons as described by spreading activation theory (Lakoff, 2009 pp. 4). In particular he argues that metaphor is instantiated in the brain as unidirectional linking between multiple circuits, and that these circuits are used to “run a simulation” of the relations between two disparate objects (Lakoff, 2009 pp. 6-7). I was unable to find any empirical verification of these claims.
Gibbs provides a very recent (2011) evaluation of mapping theory. In it he asserts that mapping theory accommodates important insights from cognitive linguistics, particularly about the usage of polysemic words and conventional expressions (Gibbs, 2011 pp. 532-533). Further, Gibbs notes there is evidence coming to light within cognitive linguistics for Lakoff’s “simulation” idea (Gibbs, 2011 pp. 551). On the other hand, a significant problem he points out with mapping theory is that its definition of “metaphor” is worryingly broad—not only is it possible that the things it calls “metaphor” are actually several different systems, but such breadth also leads to the possibility of disparate usage of the same terms in academic literature (Gibbs, 2011 pp. 534, 552). Additionally, mapping theory makes no recommendations as to why certain words are more apt for metaphor than others (Gibbs, 2011 pp. 536).
Another, entirely separate, form of evidence for Lakoff’s theory is its vast utility as a scholarly tool (Gibbs, 2011 pp. 531). Goatly and Wallington et al. provide prime examples of this. Goatly is able to create a concept map that purports to explain the conceptual underpinnings of all English emotive judgment. In the same vein, Wallington et al. are able to create a framework for explaining the transfer of affective information between conversational agents, and subsequently an automated program that records such transfers when they occur. Each of these analyses is based on the domain-mapping procedure Lakoff first set out (Wallington et al., 2012 pp. 55-56; Goatly, 2011 pp. 14, 15, 16, 18).
In fact, mapping theory has been so popular that it is rapidly spreading to other disciplines. Two frameworks particularly useful relative to public moods are those of Schmitt and Moser, each of which attempts to create a psychological underpinning for mapping theory. Schmitt begins with a literature review of how metaphor is used in the social sciences and then attempts to use Lakoff’s understanding of metaphor to derive a rigorous methodology for metaphor analysis (Schmitt, 2005 pp. 359-366, 368-374). Ultimately, he purports to generate a pseudo-falsifiable account of the semantic efficacy of metaphor (Schmitt, 2005 pp. 383).
Moser, in the same vein as Schmitt, attempts to bridge the gap between cognition and metaphor. She draws conclusions very similar to those of Lakoff’s “neural theory” iteration, though she precedes that iteration by eight years and is not cited by Lakoff. Ultimately, she argues for five connections between metaphors and cognition: metaphors influence information processing, metaphors are reliable and accessible operationalizations of tacit knowledge, metaphors are holistic representations of understanding and knowledge, conventional metaphors are examples of automated action, and finally metaphors reflect social and cultural processes of understanding (Moser, 2000 pp. 4, 5).
Section Three: Lack of Communication as a Potential Reason for the Lack of Answers
The Case for Lack of Communication
And Its Proxy Measure
As we can see, there are significant shortcomings, relative to our goals, within the above theories. One possible explanation for the lack of axiomatic generalizations applicable to Digital Humanities models is that the experts within the Digital Humanities are not talking to the experts within linguistics/semantics and the sociology of groups. Further, sociologists and linguists/semanticists may not be talking to each other. Since ontology cannot be understood without recourse to instantiation, Digital Humanists would then have no reason to talk to either discipline. To investigate this possibility I undertook a bibliometric analysis of publications in each corpus.
Communication was estimated through the proxy measure of how many journals include papers from more than one body of literature. The justification for this is twofold. First, there is a robust body of literature supporting the idea that scholars regularly skim the entire contents of journals when they pursue scholarly reading; journals that incorporate multiple bodies of literature are therefore likely to be fertile ground for knowledge transference (King et al., 2009, pp. 131, 136; Tenopir et al., 2009, pp. 19-20). And second, the majority of scholarly reading is done to support research, and incorporating interdisciplinarity into one’s research is a viable strategy for generating productive conclusions; interdisciplinary articles are therefore viable for use (Tenopir et al., 2009, pp. 19; King et al., 2009, pp. 126; King et al., 2003, pp. 263).

Procedure and Results

The procedure for my investigation was as follows. First, the above literature review was performed. Based on this review a second set of key-words for each sub-discipline was generated.[iv] These key-words were then used to search Web of Science and generate a representative data sample for each sub-discipline. Search results were manually checked for relevance using the same criteria as in the literature review. Subsequently, the bibliographic data for relevant pieces was collected in CSV files using Web of Science’s “citation report” function. 6648 citations were generated in total: 410 for the ontology aspect,[v] 2171 for instantiation, and 4076 for sentiment analysis.
All columns except the title and the publication in which each work appeared (what Web of Science calls the “source”) were then removed. Afterward, the data was imported into Gephi to eliminate redundant entries. This was done in two passes: first by creating adjacency matrices and summing parallel edges, and second by manually combing the data. Redundancies within individual disciplines were analyzed first, to remove redundant entries produced by key-word searches within the same corpus—this left 6594 entries. Those remaining 6594 entries were then checked for inter-corpora redundancies, ultimately yielding 6020 individual sources (or 574 redundant entries) published in 2975 unique journals.
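The deduplication passes were performed in Gephi, but their logic can be illustrated with a short Python sketch. The (title, source) row layout and the sample rows below are assumptions for illustration only, not the study’s actual data.

```python
def dedupe(entries):
    """One deduplication pass: keep the first occurrence of each title
    (case-insensitive), analogous to summing parallel edges so that
    duplicate retrievals collapse into a single entry."""
    seen, unique = set(), []
    for title, source in entries:
        key = title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append((title, source))
    return unique

# Hypothetical rows: (title, journal "source") pairs, where the same
# paper was retrieved by two different key-word searches.
rows = [
    ("Opinion Mining Basics", "J. of NLP"),
    ("opinion mining basics", "J. of NLP"),
    ("Public Mood Ontology", "Sociology Rev."),
]
within = dedupe(rows)  # first pass leaves 2 entries
```

Running the same pass on the pooled output of all three corpora would then expose the inter-corpora redundancies counted above.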
Of these 574 redundant entries, 184 provided duplicate source entries.[vi] Therefore, of the 2975 unique journals retrieved, 390 contained works from two or more corpora. This amounts to 13.1% knowledge transfer between the three disciplines. Or, to put it another way, 13.1% of the retrieved journals for these disciplines incorporate knowledge from at least one of the other two. When compared to other publications that measured interdisciplinarity, the difference is striking: according to a two-tailed one-sample Z test, the difference is statistically significant (p < .01), at 2.2 standard deviations from the mean. Raw data can be found in the appendix.
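For transparency, the headline figure can be reproduced directly from the counts reported above. The Z computation is shown only schematically: the benchmark mean and standard deviation come from the comparison publications and are represented here by placeholder parameters, not actual values.

```python
# Counts taken directly from the text above.
redundant_entries = 574
duplicate_source_entries = 184
unique_journals = 2975

cross_pollinating = redundant_entries - duplicate_source_entries  # 390 journals
share = cross_pollinating / unique_journals  # ~0.131, i.e. 13.1%

def z_score(observed, benchmark_mean, benchmark_sd):
    """One-sample Z: distance of the observed rate from a benchmark
    mean, in benchmark standard deviations. The benchmark arguments
    are placeholders; the study's comparison values are not shown."""
    return (observed - benchmark_mean) / benchmark_sd
```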
Implications, Further Research, and Limitations
Ultimately, then, if one accepts my proxy measure, we have a preliminary case that lack of communication exacerbates the problems of creating a sentiment analysis program with human-like accuracy. The statistically significant dearth of communication implies that otherwise useful cross-pollination is not occurring and that current progress towards human-like accuracy is therefore sub-optimal. However, these conclusions should be taken only as preliminary, since this study has several limitations.
First, the characterization of sentiment analysis, the sociology of groups, and linguistics/semantics was fairly superficial. Though no literature was found on the compatibility of the three disciplines, it is possible that upon deeper philosophical analysis they are in fact mutually incompatible, or at least only partially compatible. In that case the lack of communication between disciplines would not be problematic, because communication would be unproductive. Second, my proxy measure for communication is imperfect. The simple measure of appearance, while indicative of at least a baseline level of knowledge transference, says nothing about the scholarly impact of particular cross-pollinating papers. For example, a cross-pollinating paper could have been vastly influential and yet not appear in this analysis because it generated its own key-words that were not captured in my key-word list. This also exposes the third limitation: key-word analysis only goes so far. Without iterating a very extensive list of key-words for each corpus it is hard to capture the complexity of the discourse within it, so my selection may not be entirely representative. Finally, I did not retrieve each citation individually. In checking for relevance before and after the retrieval of citations I was tolerant of a certain amount of noise which, though it appears to me to be only a few papers per thousand, has no rigorous measure.
Therefore, further research should aim to generate a more thorough philosophical analysis of these three bodies of literature, not only to investigate any in-principle incompatibilities but also to provide data for a more thorough key-word search. It should also be investigated whether disambiguating the percentage of cross-pollination has any conceptual utility. Personally, I believe that overcoming the shortcomings enumerated in the limitations section above is logically antecedent (i.e., no disambiguation could be informatively carried out without first answering the limitations questions), but I am not sure of this conclusion.
Arora, S., Ge, R., Halpern, Y., Mimno, D., Moitra, A., Sontag, D., Wu, Y., & Zhu, M. (2013). A Practical Algorithm for Topic Modeling with Provable Guarantees (p. 9). Presented at the International Conference on Machine Learning, Atlanta, Georgia.
Arppe, A., Gilquin, G., Glynn, D., Hilpert, M., & Zeschel, A. (2010). Cognitive Corpus Linguistics: five points of debate on current theory and methodology. Corpora, 5(1), 1–27. https://doi.org/10.3366/cor.2010.0001
Baddeley, M. (2010). Herding, social influence and economic decision-making: socio-psychological and neuroscientific analyses. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1538), 281–290. https://doi.org/10.1098/rstb.2009.0169
Cambria, E., Schuller, B., Xia, Y., & Havasi, C. (2013). New Avenues in Opinion Mining and Sentiment Analysis. IEEE Intelligent Systems, 2(28), 15–21. https://doi.org/10.1109/MIS.2013.30
Casey, B., & Long, A. (2003). Meanings of madness: a literature review. Journal of Psychiatric and Mental Health Nursing, 10(1), 89–99.
Chen, M.-Y., & Chen, T.-H. (2017). Modeling public mood and emotion: Blog and news sentiment and socio-economic phenomena. Future Generation Computer Systems. https://doi.org/10.1016/j.future.2017.10.028
Christensen, G. L., & Olson, J. C. (2002). Mapping consumers’ mental models with ZMET. Psychology & Marketing, 19(6), 477–501. https://doi.org/10.1002/mar.10021
Coulter, R. A., Zaltman, G., & Coulter, K. S. (2001). Interpreting Consumer Perceptions of Advertising: An Application of the Zaltman Metaphor Elicitation Technique. Journal of Advertising, 30(4), 1–21. https://doi.org/10.1080/00913367.2001.10673648
Dey, L., & Haque, S. M. (2009). Opinion mining from noisy text data. International Journal on Document Analysis and Recognition (IJDAR), 12(3), 205–226. https://doi.org/10.1007/s10032-009-0090-z
Ding, X., & Liu, B. (2007). The Utility of Linguistic Rules in Opinion Mining. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 811–812). New York, NY, USA: ACM. https://doi.org/10.1145/1277741.1277921
Durr, R. H. (1993). What Moves Policy Sentiment? American Political Science Review, 87(1), 158–170. https://doi.org/10.2307/2938963
Gibbs, R. W. J. (2011). Evaluating conceptual metaphor theory. Discourse Processes, 48(8), 529–562. https://doi.org/10.1080/0163853X.2011.606103
Gilquin, G., & Gries, S. T. (2009). Corpora and experimental methods: A state-of-the-art review. Corpus Linguistics and Linguistic Theory, 5(1), 1–26. https://doi.org/10.1515/CLLT.2009.001
Glucksberg, S., & Keysar, B. (1990). Understanding metaphorical comparisons: Beyond similarity. Psychological Review, 97(1), 3.
Glucksberg, S. (2011). Understanding Metaphors: The Paradox of Unlike Things Compared. In Affective Computing and Sentiment Analysis: Emotion, Metaphor and Terminology (Vol. 45). Springer Science & Business Media.
Goatly, A. (2011). Metaphor as Resource for the Conceptualisation and Expression of Emotion. In Affective Computing and Sentiment Analysis: Emotion, Metaphor and Terminology (Vol. 45). Springer Science & Business Media.
Gopaldas, A. (2014). Marketplace Sentiments. Journal of Consumer Research, 41(4), 995–1014. https://doi.org/10.1086/678034
Haddi, E., Liu, X., & Shi, Y. (2013). The Role of Text Pre-processing in Sentiment Analysis. Procedia Computer Science, 17, 26–32. https://doi.org/10.1016/j.procs.2013.05.005
Heimann, R., Danneman, N., & Wood, M. G. (2014). Social media mining with R: Deploy cutting-edge sentiment analysis techniques to real-world social media data using R. Birmingham, England: Packt Publishing.
Holyoak, K. J., & Stamenković, D. (2018). Metaphor comprehension: A critical review of theories and evidence. Psychological Bulletin, 144(6), 641–671. https://doi.org/10.1037/bul0000145
Kaur, A., & Gupta, V. (2013). A Survey on Sentiment Analysis and Opinion Mining Techniques. Journal of Emerging Technologies in Web Intelligence, 5(4), 367–371. https://doi.org/10.4304/jetwi.5.4.367-371
King, D. W., Tenopir, C., Choemprayong, S., & Wu, L. (2009). Scholarly journal information-seeking and reading patterns of faculty at five US universities. Learned Publishing, 22(2), 126–144. https://doi.org/10.1087/2009208
King, D. W., Tenopir, C., Montgomery, C. H., & Aerni, S. E. (2003). Patterns of Journal Use by Faculty at Three Diverse Universities. D-Lib Magazine, 9(10). https://doi.org/10.1045/october2003-king
Lakoff, G. (1993). The contemporary theory of metaphor. In Metaphor and thought (pp. 202–241). Retrieved from https://ci.nii.ac.jp/naid/10029590582/
Lakoff, G. (2009). The Neural Theory of Metaphor (SSRN Scholarly Paper No. ID 1437794). Rochester, NY: Social Science Research Network. Retrieved from https://papers.ssrn.com/abstract=1437794
Liu, B., & Zhang, L. (2012). A Survey of Opinion Mining and Sentiment Analysis. In Mining Text Data (pp. 415–463). Springer, Boston, MA. https://doi.org/10.1007/978-1-4614-3223-4_13
Locke, J. L. (2009). Evolutionary developmental linguistics: Naturalization of the faculty of language. Language Sciences, 31(1), 33–59. https://doi.org/10.1016/j.langsci.2007.09.008
Medhat, W., Hassan, A., & Korashy, H. (2014). Sentiment analysis algorithms and applications: A survey. Ain Shams Engineering Journal, 5(4), 1093–1113. https://doi.org/10.1016/j.asej.2014.04.011
Moser, K. S. (2000). Metaphor Analysis in Psychology--Method, Theory, and Fields of Application. Forum: Qualitative Social Research, 1(2), 87–96.
Nicholas, D., Huntington, P., & Watkinson, A. (2005). Scholarly journal usage: the results of deep log analysis. Journal of Documentation, 61(2), 248–280. https://doi.org/10.1108/00220410510585214
Niu, X., Hemminger, B. M., Lown, C., Adams, S., Brown, C., Level, A., … Cataldo, T. (2010). National study of information seeking behavior of academic researchers in the United States. Journal of the American Society for Information Science and Technology, 61(5), 869–890. https://doi.org/10.1002/asi.21307
Nofsinger, J. R. (2005). Social Mood and Financial Economics. Journal of Behavioral Finance, 6(3), 144–160. https://doi.org/10.1207/s15427579jpfm0603_4
Olson, K. R. (2006). A Literature Review of Social Mood. Journal of Behavioral Finance, 7(4), 193–203. https://doi.org/10.1207/s15427579jpfm0704_2
Orlikowski, W. J. (2007). Sociomaterial practices: Exploring technology at work. Organization Studies, 28(9), 1435–1448.
Osgood, C. E., Sebeok, T. A., Gardner, J. W., Carroll, J. B., Newmark, L. D., Ervin, S. M., ... & Wilson, K. (1954). Psycholinguistics: a survey of theory and research problems. The Journal of Abnormal and Social Psychology, 49(4p2), i.
Pang, B., Lee, L., & Vaithyanathan, S. (2002, July). Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, Volume 10 (pp. 79–86). Association for Computational Linguistics.
Poeppel, D., & Embick, D. (2005). Defining the relation between linguistics and neuroscience. In Twenty-first century psycholinguistics: Four cornerstones (pp. 103–118).
Porter, A. L., & Chubin, D. E. (1985). An indicator of cross-disciplinary research. Scientometrics, 8(3–4), 161–176. https://doi.org/10.1007/BF02016934
Porter, A. L., Cohen, A. S., Roessner, J. D., & Perreault, M. (2007). Measuring researcher interdisciplinarity. Scientometrics, 72(1), 117–147. https://doi.org/10.1007/s11192-007-1700-5
Rahn, W. M., Kroeger, B., & Kite, C. M. (1996). A Framework for the Study of Public Mood. Political Psychology, 17(1), 29–58. https://doi.org/10.2307/3791942
Schmitt, R. (2005). Systematic Metaphor Analysis as a Method of Qualitative Research. Qualitative Report, 10(2), 358–394.
Shwartz-Ziv, R., & Tishby, N. (2017). Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810.
Stokhof, M., & van Lambalgen, M. (2011). Abstractions and idealisations: The construction of modern linguistics. Theoretical Linguistics, 37(1–2), 1–26. https://doi.org/10.1515/thli.2011.001
Tenopir, C., King, D. W., Edwards, S., & Wu, L. (2009). Electronic journals and changes in scholarly article seeking and reading patterns. Aslib Proceedings, 61(1), 5–32. https://doi.org/10.1108/00012530910932267
van Kleef, E., van Trijp, H. C. M., & Luning, P. (2005). Consumer research in the early stages of new product development: a critical review of methods and techniques. Food Quality and Preference, 16(3), 181–201. https://doi.org/10.1016/j.foodqual.2004.05.012
van Leeuwen, T., & Tijssen, R. (2000). Interdisciplinary dynamics of modern science: analysis of cross-disciplinary citation flows. Research Evaluation, 9(3), 183–187. https://doi.org/10.3152/147154400781777241
Vinodhini, G., & Chandrasekaran, R. (2012). Sentiment Analysis and Opinion Mining: A Survey. International Journal of Advanced Research in Computer Science and Software Engineering, 2(6), 282–292.
Vulić, I., De Smet, W., Tang, J., & Moens, M.-F. (2015). Probabilistic topic modeling in multilingual settings: An overview of its methodology and applications. Information Processing & Management, 51(1), 111–147. https://doi.org/10.1016/j.ipm.2014.08.003
Wallach, H. M. (2006). Topic Modeling: Beyond Bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning (pp. 977–984). New York, NY, USA: ACM. https://doi.org/10.1145/1143844.1143967
Wallington, A., Agerri, R., Barnden, J., Lee, M., & Rumbell, T. (2011). Affect Transfer by Metaphor for an Intelligent Conversational Agent. In Affective Computing and Sentiment Analysis: Emotion, Metaphor and Terminology (Vol. 45). Springer Science & Business Media.