By Tamara Fudge, Purdue University Global
Figure 1. Percent Trouble Board (by Tamara Fudge)
Professors often grapple with student plagiarism, spending an inordinate amount of time cross-checking sources and determining courses of action, which may include submitting a statement to an honor board or other administrative office. Computers have brought new cheating opportunities, as it is easier to locate and copy work quickly (Heckler, Rice, & Bryan, 2013). Fortunately, computers have also brought new detection tools. Programs such as Turnitin produce a report that includes the percentage of wording in the student's work found in common with other sources. This quantifiable value can help determine whether plagiarism has occurred, but it raises the question of whether a percentage threshold should be set for reportable offenses.
Background: This is Not New
While it might be considered mostly anecdotal, it is helpful to know that the idea of plagiarism – literary theft – has long been a topic of debate. It has its roots in ancient Rome during the first century A.D., when the poet Martial accused another man of stealing his work. Interestingly, the term plagiarius indicated a person engaging in the enslavement of another, or in the theft of a slave (Posner, 2007). The problem of plagiarism is not new, but the methods of cheating and of detecting literary theft have, of course, drastically changed.
Penalties for plagiarizing in today's universities are determined by each institution, as it is an ethical problem, not a legal one; potential sanctions include receiving a zero grade, being required to rewrite the assignment, flunking the class, or even being dismissed from the college or university (Mozgovoy, Kakkonen, & Cosma, 2010). It is how these charges are determined that presents the question of percentages.
Problem 1: Too Many Types of Plagiarism
In determining whether plagiarism has occurred, it is first important to understand that there are many kinds of illicit writing practices. In addition to direct copying, Bradford University's website identifies the purchase or sharing of papers and the reuse of work previously submitted to another course as plagiarism ("Types of Plagiarism," n.d.). Turnitin mentions "mosaic" copying, also known as patchwork, in which a student has copied from multiple sources; other types include citations that do not lead to the correct source, lack of quotation marks or attributions for quotes, and proper citations for quotes but little of the student's own work in a paper. The practice of replacing only selected words (called "synonymizing" by some sources) while retaining the original sentence structure is also considered dishonest. In all, Turnitin considers ten kinds of plagiarism along a spectrum ("The Plagiarism Spectrum," 2016).
There may be patterns to consider as well, and Eret and Ok (2014) suggest that students in different countries might even plagiarize more commonly in one way or another.
These facts make detection more difficult, as not all of these fraudulent writing practices will appear on a computerized plagiarism report.
Problem 2: Intended or Not
Related to the methods students use to plagiarize is the intent, or lack thereof. There are many reasons students might intentionally cheat, including the strength of institutional policies and the attitudes of individual professors, as well as poor prior training, time constraints, and social pressures; even the fear that their own work will signal failure can push a student toward cheating (Eret & Ok, 2014). Students may weigh the chance of getting caught against the time and effort saved by cheating, although a significant deterrent is said to be knowing that submitted work will be checked by a plagiarism detection tool (Heckler et al., 2013).
Intent is sometimes considered but presents its own problems. Stuhmcke, Booth, and Wangmann state that "most Australian universities adopt penalties seen as appropriate to the level of intention" (2016, p. 984). While this may be the case in some universities, intent may not always be verifiable; it would be impossible, for example, to determine if a student's patchworked paper was written poorly as a result of cheating on purpose or simply because the student did not know how to adequately paraphrase.
It can also be argued that a rule once broken is indeed broken, no matter the intent. Bowdoin College's website clearly states, "Lack of intent does not absolve the student of responsibility for plagiarism" ("The Common Types of Plagiarism," n.d., para. 5), a sentiment echoed on many other American schools' websites and in student handbooks.
Whether officially supported by institutional rules or not, faculty may perceive various plagiarism practices differently. For example, buying a paper, sabotaging another student's work, and theft of test answers (all of which would be clearly intentional) rank high as "serious" offenses, whereas failing to place copied words in quotation marks (which might be unintentional or uninformed) is seen as less serious, according to a study published in the Journal of Higher Education (Pincus & Schmelkin, 2003).
Even with intent and the methods of plagiarizing set aside, the issue of percentage as a marker is still unresolved.
Figure 2. A Plagiarism "Cocktail" (by Tamara Fudge)
Problem 3: Inaccuracy of Computer-Generated Reports
Another concern in considering a potential numeric threshold is that the percentages generated by plagiarism software are unreliable. There have been several studies of the precision of these scores, but no consensus regarding their true accuracy (Heckler et al., 2013). This author has seen student work with low Turnitin scores that, after careful investigation, ultimately proved to contain copied material in the 80-90% range. Similarly, some false positives have also been noted.
Some courses are designed for students to input solutions in a template; plagiarism detection tools will then identify the professor's original template content as copied. Lukashenko, Anohina, and Grundspenkis (2007) affirm that this "distorts" the verification of illicit student content and "makes difficult to reason about [the] real amount of plagiarism in the document" (p. 52).
Mozgovoy et al. (2010) explained that the power of a plagiarism detection program lies in its ability to detect different kinds of plagiarism, and "not all types of plagiarism are equally challenging" (p. 514), an allusion to the aforementioned varieties. They further divide programs into two categories: hermetic programs, which look for commonalities within a stored database, and web searches, which check wording posted on the internet; the stronger programs combine elements of both (Mozgovoy et al., 2010).
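Although the matching methods of commercial tools such as Turnitin are proprietary, the general idea of scoring textual overlap can be sketched with a toy example. The following Python snippet is purely illustrative; the function names, the three-word "shingle" size, and the single-source comparison are arbitrary choices for demonstration, not anything a real detection program is known to use. It computes the percentage of a student text's word sequences that also appear in one source text:

```python
# Illustrative sketch only: a toy similarity score based on overlapping
# word 3-grams ("shingles"). Real detection tools compare against large
# databases with far more sophisticated, proprietary matching.

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word sequences (shingles) in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def match_percentage(student_text: str, source_text: str, n: int = 3) -> float:
    """Percent of the student text's shingles also found in the source."""
    student = shingles(student_text, n)
    if not student:
        return 0.0
    source = shingles(source_text, n)
    return 100.0 * len(student & source) / len(student)

source = "the quick brown fox jumps over the lazy dog"
copied = "we saw that the quick brown fox jumps over a fence"
print(round(match_percentage(copied, source), 1))
```

Even this toy example hints at why the resulting number must be interpreted: a properly quoted and cited passage and a patchwork of stolen phrases can produce exactly the same percentage.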
The table below provides an example of the variety of sources checked by a detection program and the growth in the detection industry.
Table 1. Turnitin growth
Sources for comparison: books and periodicals; 170 million journal articles.
Sources for this information: Oseland (2012); "Turnitin Overview" (n.d.)
Oseland (2012) also indicated that there were many partnerships with book and periodical publishers. In October 2018, the company announced that Gradescope had been acquired, allowing additional focus in STEM (Science, Technology, Engineering, and Math) areas as well as business and related topics (Hand, 2018).
Since each detection program relies on its own methods and compares work to potentially quite different sets of sources, the results will vary from one program to another. A report will not include copied material missing from its program's database or search methods, but at the same time could tag common phrases and templated content, or misidentify a source (Jaschik, 2009). The databases and comparison methods change over time as well, so plagiarism reports can be wildly inconsistent.
Problem 4: Determining How Much is Too Much
The companies developing detection programs are hesitant to declare their products faultless. Turnitin is considered by many to be one of the strongest programs (Mozgovoy et al., 2010), but despite this and its color indexing of numeric results, the software company plainly states that "there is no score that is inherently 'good' or 'bad'" ("Does Turnitin Detect Plagiarism," n.d., para. 3). A numeric threshold does not seem to be recommended even by the programs producing these percentages.
The publishing world shares the academic world's concerns. The publisher Elsevier reminds authors and editors that high scores are not necessarily an indication of illicit work, and that the numeric value could mean different things: the words could all be copied from one source, or gathered in many small bits from a number of sources ("Plagiarism Detection," n.d.). Lykkesfeldt (2016) cautions that these reports must be interpreted and, even after examining gathered plagiarism data, does not recommend a set percentage when considering professional articles for publication.
Numeric principles already in existence are difficult if not impossible to apply to cases of plagiarism. If the Pareto Principle were loosely applied, for example, it might be argued that at least 80% of a paper needs to be original. Equally arguable, however, is that 20% of a paper stolen from sources is far too high, thus negating Pareto as a potential marker. The method of plagiarizing, the amount of quoted material (even when correctly cited), and – even though not formally sanctioned by many schools – the intent are factors that may be considered beyond any numeric value.
Problem 5: Potential for a Learning Event
In some cases, faculty may see an instance of plagiarism as a learning opportunity for the student. Perhaps the student cited incorrectly, but at least attempted to do so; perhaps the course is a first-term introductory class and the student's previous background did not instill good writing practices.
If, as suggested by Heckler et al. (2013), "prevention is about changing behavior" (p. 243), then knowing a plagiarism detection tool will be used, while deterring some from copying, may not be sufficient to change student plagiarism behaviors. Allowing a student to rework a poorly written paper in some circumstances may be the key to changing those behaviors.
Notably, Bruton and Childers (2016) indicate the faculty they surveyed largely reviewed potential plagiarism issues on an individual basis. By doing so, they make choices regarding plagiarism charges versus a learning event by weighing the many factors explained above instead of relying on an automated score.
There are far too many ways for students to plagiarize, detection tools are not always accurate, and the professor must take the time to carefully review and interpret any computer-generated reports. While a designated percentage may make plagiarism charge decisions far easier, such decisions would still not be fair considering the many variables. In the best interest of the student, some instances can be used as learning events and others are more appropriate for applying punitive point losses or official charges. This requires a judgment call that cannot be quantified by a predetermined "magic" percentage.
Bruton, S., & Childers, D. (2016). The ethics and politics of policing plagiarism: A qualitative study of faculty views on student plagiarism and Turnitin. Assessment & Evaluation in Higher Education, 41(2), 316-330.
Eret, E., & Ok, A. (2014). Internet plagiarism in higher education: Tendencies, triggering factors and reasons among teacher candidates. Assessment & Evaluation in Higher Education, 39(8), 1002-1016.
Heckler, N. C., Rice, M., & Bryan, C. H. (2013). Turnitin systems: A deterrent to plagiarism in college classrooms. Journal of Research on Technology in Education, 45(3), 229-248.
Lukashenko, R., Anohina, A., & Grundspenkis, J. (2007). A conception of a plagiarism detection tool for processing template-based documents. ICTE in Regional Development: 2007 Annual Proceedings, 51-57.
Lykkesfeldt, J. (2016). Strategies for using plagiarism software in the screening of incoming journal manuscripts: Recommendations based on a recent literature survey. Basic & Clinical Pharmacology & Toxicology, 119(2), 161-164. doi:10.1111/bcpt.12568
Mozgovoy, M., Kakkonen, T., & Cosma, G. (2010). Automatic student plagiarism detection: Future perspectives. Journal of Educational Computing Research, 43(4), 511-531.
Pincus, H. S., & Schmelkin, L. P. (2003, March/April). Faculty perceptions of academic dishonesty. The Journal of Higher Education, 74(2), 196-209.
Posner, R. A. (2007). The little book of plagiarism. New York: Pantheon Books.
Stuhmcke, A., Booth, T., & Wangmann, J. (2016). The illusory dichotomy of plagiarism. Assessment & Evaluation in Higher Education, 41(7), 982-995.
About the Author
Dr. Tamara Fudge is a graduate of the Indiana University School of Music, where she earned a bachelor's and two master's degrees. She subsequently earned a doctorate in music from Florida State University. As a lyric mezzo-soprano, she sang opera, oratorio, and in recital, preferring the latter and specializing for a while in music of the Americas. Her music compositions (mostly chamber music, songs, and choral pieces) have been heard on Public Radio, featured at a state choral convention, and performed at several colleges and universities. Fudge has taught over two dozen different courses at the college/university level in vocal and choral music, foreign language diction, vocal pedagogy, song and choral literature, opera techniques, theory and aural skills, and music composition and arranging.
Fudge's career then saw several changes. She survived a brief stint as an agent and registered representative for major insurance/investment companies, selling life and health insurance and variable products. Her writing was put to the test working as a weekend correspondent for seven years with the Quad-City Times (Davenport, IA), writing articles that covered a myriad of topics including Civil War reenactments, small town festivals, symphony reviews, tractor shows, fundraising events, and other varied local topics. While still writing for the newspaper on the weekends, Fudge joined the staff at a local college to teach writing, critical thinking, culture and diversity, and communication classes.
Ultimately, a certificate in Web Development from Black Hawk College and an MSIT from Kaplan University led her to online teaching in the realm of Information Technology. Now teaching fully online for Purdue University Global, she has taught many courses for technology students, from first-term undergraduate experience to end-of-graduate-program courses. Web development, interface design, systems analysis and design, and communication and organizational skills are some of her teaching specialties. Fudge has received outstanding professor awards, has won fellowships for innovation and teaching, and is a frequent collaborator, writer, and presenter.