
Using NVivo: An Unofficial and Unauthorized Primer

Shalin Hai-Jew, Author


What is NVivo?





NVivo is a software tool that complements the work of researchers conducting qualitative, mixed methods (and multi-methodology), and quantitative research. It enables the ingestion of various types of digital data, the coding of that data, and then a range of queries and analyses of that coding and data. The tool may be used for one part or a few discrete parts of a research project, for an entire research project, or even across multiple research projects (such as with the same dataset). The tool has a wide range of functionalities, but users may select which features they want to use.
 





Different "use cases". The software may be deployed in various ways for different types of use cases. The classic use case involves qualitative and mixed methods research: coding the data, analyzing that coding, and outputting data visualizations. A highly complex project could be achieved with multiple datasets as different .nvp project files. In this scenario, NVivo would be used for data management, source annotation, coding, analysis, and data visualization.

Beyond traditional applications, the tool may be used for parts of a project. For example, NVivo (with NCapture) may be used to extract a Tweetstream from a Twitter account for content analysis. It may be used to store all contents of a literature review for easy search and retrieval. It may be used to create visualizations for "visual interest" in slideshows and presentations related to particular research. An .nvp (NVivo project) file may be created as a training set for students to practice coding. NVivo may be used to conduct basic exploratory text analyses for digital humanities projects. [NCapture no longer works on Internet Explorer for Twitter or Facebook because the two platforms have not been optimized for IE, which is being phased out by Microsoft.]



NCapture and Web and social media data. NCapture, a browser plug-in for recent versions of the Google Chrome web browser, enables the extraction of social media platform data for analysis. The tool can capture text and imagery from any website on the Surface Web. For social media, it enables captures from Facebook, YouTube, and Twitter. NCapture is part of the NVivo feature set in both the Windows and Mac installations. (The Mac capability was released in Fall 2014.)

NCapture works only on the Google Chrome and Edge web browsers. This means that social media platforms that do not function in later versions of the software will not be accessible.

At present, Facebook is inaccessible via the NCapture web browser add-on. NCapture still works on Twitter accessed via Chrome or Edge. Many dependencies exist when using application programming interfaces (APIs) for the "shadow datasets" made available in part by the social media companies. (To acquire an N = all, there are commercial data providers, but the cost is not negligible for researchers.)


Research approach "agnosticism". Ideally, software programs used in research should be theory- and research-"agnostic," meaning the tool may be applied in a range of research contexts. Some say that NVivo reflects a grounded-theory emergent approach (which begins with data, bottom-up coding of that data, and informed observations, and ends with post hoc hypothesizing, theorizing, and research); others suggest the opposite, arguing that the software tool benefits an a priori or pre-defined theoretical research approach.

Regardless, how researchers deploy and wield the software tool will affect the research findings. A software tool should not determine any research approach; rather, the researcher and/or research team should drive the work. The researcher(s) decide when and how software complements human capabilities. Also, the use of the software does not mean that researchers somehow bypass theorizing or following the research practices of particular academic disciplines, professional fields, or over-arching domains.


* Sometimes, beginning researchers will reach out and say, "Here is what I did, and I got this visual output. What does it mean?" As with other data analytics software programs, it is easy enough to run a process and acquire a result...but not know what one is seeing, much less how to make a supportable claim from what one has seen. The following is de rigueur:
  1. Know your data intimately (how you acquired it in alignment with IRB support; where it came from/its provenance; how it has been handled; how it has been cleaned; known gaps; etc.)
  2. Know the computational analytics process that you are applying to that data (its strengths and weaknesses) 
  3. Know what is assertable from the findings (and assert no more than you can rationally and reasonably claim...and add considerations for the weaknesses of the assertions) 
  4. Know the reasonable variations on how the findings may be interpreted and viewed, and give other perspectives fair consideration

Without those fundamental elements, do not make public assertions in presentations, in messaging, or in publications. Work with trusted colleagues to review your data and your analyses.






Going paper-and-pen manual or computer-assisted manual. Some researchers suggest that findings arrived at through manual coding with butcher paper and pens will differ from manual coding through a computer. This may be so to some degree, but it's also true that manual coding on paper cannot afford the types of analytics that may be done by machine, which include sophisticated word frequency counts, text searches, and other tools. The computer-assisted version also enables a broad range of data management that is more efficient and accurate than manual-only methods.
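To make the contrast concrete, a word frequency count of the kind NVivo automates can be sketched in a few lines. This is a minimal illustrative sketch, not NVivo's implementation; the interview excerpts and stopword list are invented for the example.

```python
from collections import Counter
import re

# Hypothetical interview excerpts (invented for illustration).
excerpts = [
    "The training helped me feel more confident in the lab.",
    "I still lack confidence when the lab equipment changes.",
    "More hands-on training would build confidence faster.",
]

# A tiny stopword list; real tools use much longer configurable lists.
STOPWORDS = {"the", "me", "in", "i", "when", "would", "more", "still"}

def word_frequencies(texts, stopwords=STOPWORDS):
    """Tokenize, lowercase, drop stopwords, and count word occurrences."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in stopwords:
                counts[token] += 1
    return counts

freqs = word_frequencies(excerpts)
print(freqs.most_common(3))
# [('training', 2), ('lab', 2), ('confidence', 2)]
```

Doing this by hand over hundreds of pages of transcripts is impractical; by machine it is instantaneous and repeatable.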

 
Going fully automated.  NVivo also enables various automated analytics, such as for sentiment analysis, topic modeling (topic and subtopic extraction from natural language documents), and so on.  
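Sentiment analysis of this kind is typically lexicon-based. The following toy scorer shows the general idea only; it is not NVivo's actual algorithm, and the lexicon and example sentence are invented for illustration (real lexicons are far larger and weighted).

```python
import re

# Toy sentiment lexicon (invented; real tools use large weighted lexicons).
LEXICON = {"good": 1, "helpful": 1, "confident": 1,
           "bad": -1, "confusing": -1, "frustrated": -1}

def sentiment_score(text):
    """Sum lexicon weights over tokens; the sign gives a coarse polarity."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(LEXICON.get(t, 0) for t in tokens)

def label(score):
    """Map a numeric score to a coarse polarity label."""
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

s = sentiment_score("The workshop was helpful and I feel confident.")
print(s, label(s))  # 2 positive
```

Automated scoring of this sort is fast but blunt (it misses negation and sarcasm, for instance), which is one reason human review of machine-coded sentiment remains important.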


Human and machine coding.  There is a powerful approach that builds on human manual coding but extends that through automation.  Said another way, a human-made codebook (built to saturation) may be expanded to apply to uncoded text. 
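One common way to extend a human-made codebook to uncoded text is supervised classification: treat each human-coded passage as a training example and assign new passages to the most similar code. The sketch below uses a nearest-centroid, bag-of-words approach as one illustrative technique; the codes ("barriers", "support") and passages are invented, and NVivo's own pattern-based autocoding may work differently.

```python
import math
import re
from collections import Counter, defaultdict

def bag(text):
    """Bag-of-words representation of a passage."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Human-coded passages (codes and texts invented for illustration).
coded = [
    ("barriers", "tuition costs and fees are a barrier to enrollment"),
    ("barriers", "transportation costs keep students from attending"),
    ("support", "my advisor and mentors offered constant support"),
    ("support", "peer mentors made the program feel supportive"),
]

# Build one centroid word-count vector per human code.
centroids = defaultdict(Counter)
for code, text in coded:
    centroids[code] += bag(text)

def auto_code(text):
    """Assign the human code whose centroid is most similar to the passage."""
    b = bag(text)
    return max(centroids, key=lambda code: cosine(b, centroids[code]))

print(auto_code("the fees were a real barrier for me"))    # barriers
print(auto_code("my mentors gave me support every week"))  # support
```

In practice the human-coded set should be built to saturation first, and a sample of the machine-assigned codes should be spot-checked by the human coders.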


NVivo and… NVivo may be used as a complementary tool to a range of other analytical tools. For example, if more sophisticated text analysis, social network analysis, quantitative analysis, or other types of work are required, then additional software should be brought to bear. Some common software tools include the following, some of which are open-source and free and others which are commercial:  
  • Microsoft Visio Professional for diagramming 
  • Adobe Photoshop for image editing and handling 
  • Microsoft Excel for statistical analysis and evocative data visualizations 
  • Pajek for graph visualizations 
  • UCINET for graph analysis and basic graph visualizations
  • AutoMap for the extraction of relational data from texts and ORA NetScenes for the visualization of that data 
  • Tableau Public for mapping and the creation of data-rich Web dashboards (there is a for-cost commercial version) 
  • Maltego for Surface Web exploration and data visualizations 
  • SPSS (Statistical Package for the Social Sciences) for statistical analysis 
(Note:  Please read the fine print before making any commitments to any software program. Those in higher education or any sort of accredited educational institution may have access to some pretty impressive discounts.) 


There is nothing to suggest that research work done in NVivo has to be done exclusively within this tool. (All text, datasets, node queries, and other reports are easily exportable from NVivo in common file format types. There is an integration with SPSS, for both input .sav data and output .sav data.)

Of course, SAS, R, Python, RapidMiner Studio, and other software tools can be brought to bear on the research data for various analytics...and if done very cleanly, data from the various processes may be re-analyzed in NVivo for additional insights.

Other software tools provide some similar functions to NVivo, but not all and not of a piece. It is beyond the purview of this e-book to cite them. Also, unless the author has had direct experience with a tool, it would not be fair to elaborate on it based on second- or third-hand information.