
C2C Digital Magazine (Fall 2018 / Winter 2019)


Providing Peer Review for Top-Tier Academic Publishers

By Shalin Hai-Jew, Kansas State University 




Recently, I was invited to review a research paper for a top-tier academic publisher.  I logged on to their website, read the work, and about an hour and a half later, provided feedback.  During the submittal of the review, I noticed that I was the third reviewer, which could indicate that either one or both of the initial reviewers did not respond, or that I was brought in to break a tie.  A day or two later, I received a follow-on automated email from the dedicated publication system with an update on the editor’s final decision, as a courtesy:  outright rejection of the reviewed work.  (My recommendation was for a major revision based on the comments made, but it is clear that the editor was responding to other information in addition to what I submitted.)  

This experience inspired me to think about what goes into invited double-blind peer reviews:  the expertise expected of reviewers (the skills, the mindset), the stresses of the work, and the expectations of researchers, editors, and publishers.  

What is a Top-Tier Publication?  

Generally speaking, a top-tier publication has an earned reputation for quality, and as such, it can attract big-name talent to serve as its editorial advisory board members, editors, and contributors.

A top-tier publication is elitist and accepts only a small number of submitted works for publication.  In terms of publication metrics, its works are highly cited:  a high “impact factor” (IF), strong eigenfactor and article influence scores, and strong altmetrics (which include references in social media).  In various fields, the idea is to be able to attract and identify novel works that contribute powerfully to a field and beyond.  Another important metric is the acceptance rate of a publication, with a lower rate signaling a more selective (and more elitist) publication.  And the higher the number of readers, particularly “important” readers, the better.
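As a rough illustration of one of these metrics (using hypothetical figures), a journal’s two-year impact factor is conventionally computed as the number of citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items it published in those two years.  A journal whose 2016 and 2017 articles drew 500 citations in 2018, across 100 citable items, would carry a 2018 impact factor of 500 / 100 = 5.0.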

The popular saying goes:  “If you have to ask (the price), you can’t afford it.”  A similar corollary exists in publication.  If a publication had to commission a paid work to stand in for open submissions, there really is no way it could afford the work; it would be out of the game.  The reason is this:  most of the research underlying research articles costs in the tens to hundreds of thousands of dollars to actualize, and that is before the actual work of writing the paper and creating the data visualizations.  The time investments of the researchers and research teams raise the costs further.  The publication has to be a desired destination site, and its only currency is its integrity, its earned reputation, its leadership, and its real-world contributions to the field.

The work of a publication, then, is to be competitively acquisitive of works to publish, within the limits of the field, but also protective of its reputation (avoiding lower-quality works, works that have already been published elsewhere, fraudulent works, machine-generated text, and so on).  A well-led publication has leadership that can separate “wheat from chaff,” see the present with clarity, and also understand (a little of) what may be coming in the field.  The limits of space per issue are not technological but editorial, convention-based, and reader-based.  Going with lesser works dilutes the reputation and the brand.

Another quality feature involves the inputs that the publication invests in its works and, by extension, in all who contribute to them:  the editors, the authors, the reviewers, the graphic artists, the typographers, the printing press operators, and all the rest.  A quality publication will hire professional editors to go through the work.  They will ensure that the data look correct and that typographical errors have been removed.

For Potential Reviewers

How do you get invited to peer review?  There is no fixed formula for how to appeal to editors and publishers.  In a practical sense, it helps to have peer reviewers with a wide range of deep expertise, so that different constructive critiques may be made of the submitted draft manuscripts.  Peer reviewers do well to be well read in the field (and closely related domains), research methods, data analytics methods, statistical analyses, and the named publication.  At a minimum, peer reviewers have to be ethical in how they handle manuscripts (and never misuse them outside of their role as peer reviewer).  They should have the ability to make hard decisions (critiques and suggestions) and to fully articulate their stances while being respectful and polite (no ad hominem attacks, for example).  Ideally, a peer reviewer would have a “default setting” of “yes” or “maybe” rather than “no”; they should give the researchers-authors the benefit of the doubt.  They should be able to handle the stress of making a case for acceptance, acceptance with major revision, resubmittal, or outright rejection, and they should have the adaptability to accept the editorial decision even if it does not align with their own point of view.

Initially, those who are practiced researchers and authors (and editors) with name recognition will likely be more easily identifiable as possible recruits to serve as peer reviewers.  Many publishers also put out open calls for those who may be interested in reviewing manuscripts from particular domain areas, and once reviewers are signed up on a manuscript-management system, the system itself helps make them more easily findable to editors and publishers.

There may be ineffable elements of the person as well, such as the “personality frame” and how the peer reviewer may carry the brand forward (even when reviewing from the shadows in a double-blind peer review context).

Then, after the first few invited reviews, the editors (or editorial board) may get a sense of the person’s responsiveness, thoroughness, professionalism, and other features, and those features may determine whether the reviewer is re-invited for future draft manuscripts.  After all, serving as a peer reviewer the first time (as a one-off) does not mean that there will be other asks.  And don’t count on being told what the editors or publishers want per se.  If a reviewer understands the space, the expectations become clear over time.  If editors and peer reviewers can form a constructive and respectful working relationship, it is quite likely that such arrangements can last for many years.  

For Researchers-Writers

What does it take for a manuscript to pass muster?  A common strategy in academic publishing is to start high (with high-prestige publications and publishers) and move lower if rejections mount.  All researchers-authors work against time since they are not supposed to engage in multiple submissions (having one manuscript out for consideration by multiple publishers at once).  Works do age out.  If researchers over-estimate the value of their work, they can be left holding an unpublished work that may never see the light of publication day…or they may end up releasing it in an unedited open-publishing context.  Or, if they publish with a lower-level publisher, they may have “left cash on the table,” so to speak.  (They could have gotten more mileage out of it than they ultimately did.)

Beyond getting works into publication, researchers-writers should benefit from the discipline of going through peer review and receiving relevant and substantive feedback.  Researchers-writers can toughen up in this process and de-personalize the critiques of their works, in order to hone their skills more objectively.  Writers have something of an ego, but the point is to have less ego and more talent and skill.  The peer review process helps with that.  (While “ego” is generally negative and off-putting, some of it is necessary for achievement...to think that one may have something of value for others’ consideration and possible learning.)

Respective publishers usually have templates for the feedback.  These include a fixed set of options for the recommended disposition of the manuscript.  There may be directive questions to address.  To get a sense of what a peer review looks like, some of the common questions and considerations follow, in the order of a basic read-through.  Typically, the reviewer takes notes along the way for later reference.  (Of course, a second read-through is de rigueur.)

  • Does the title point to a relevant topic based on the terms used in the field?  Is it grammatical?  
  • Does the abstract describe the research clearly?  Does it convey the findings with clarity?  Does the abstract make sense?
  • Are the keywords relevant ones?  Are they listed in descending order (from most important to least important)?  
  • Is the introduction substantive, and does it introduce the topic well?  Is the introduction engaging?  Does the introduction relate to the paper’s title and the abstract?  Is the introduction right-sized, with the proper amount of focus on the particular issues?  
  • Are there logical transitions?  
  • Is the research methodology fully described?  Is it practicable?  Have the biases of the research method been acknowledged and worked out?  
  • Is the data accurately captured and recorded?  
  • Is the data properly analyzed?  
  • Does the discussion relate the new research to the existing literature in logical ways?  In practical ways?  
  • In terms of the Future Research section, are the ideas reasonable?  Do the ideas advance the field?  
  • Is there a sense of an author’s hand?  A sense of multiple authors’ hands?
  • What is the quality of the writing?  The revision?  The editing?  
  • What sorts of assertions are being made?  How well supported are these assertions?  How tight are the logic chains?  If there are gaps, where are the gaps, and what do these gaps suggest about the author’s(s’) research work?  
  • How accurately are sources cited?  Has the author read the original sources, or are the sources merely cited in a pro forma way (in the body text)?  
  • How is the research designed?  What research methodologies are used?  Are there biases to the research methods?  In terms of how the research is operationalized, was it done in an unbiased way?  
  • If there is human subjects research, what steps did the researcher or research team take to ensure ethical approaches?  How ethically aligned is the research?  Or is the human subjects review not addressed at all?
  • What sorts of data are collected, and is the data credible or not?  Why or why not?  
  • How are the data analyzed?  Is data analysis done in alignment with standards and conventions?  When claims are made from the data, is there over-reach?  Under-reach?  Or an accurate level of assertions?  
  • If data visualizations are included, when are they included and to what ends? Do the data visualizations follow proper conventions—such as with proper naming, proper data labeling, proper alignment of data visualizations with the underlying data?  Are the data visualizations properly integrated with the paper? Are they used judiciously to add value to the overall paper?  
  • If technologies are mentioned, how accurately are the technologies cited?  Has the researcher used the technologies appropriately?  How deep or superficial are the descriptions?  
  • What is the analytical depth of the work?  Does the author(s) consider a wide range of possible interpretations of the data?  Is there over-reach (incaution, ambition) or under-reach (timidity, lack of interpretive range)?  Does the work suggest that the author(s) have preconceptions about the research topic?  Are biases addressed and neutralized?
  • What is the apparent relevance of the research?  Is the research derivative of others’ research approaches?  
  • What is the quality of the research source citations?  How thorough is the bibliography?  How recent are the most recent sources? Are there source citations from before the contemporary time period?  How closely do the citations adhere to the required citation method?  [In terms of in-text source citations, how accurately and skillfully are the sources integrated into the prose?  Does the author(s) show that he/she/they actually read the original works thoroughly?]  

These heuristics can be helpful in approaching a draft, but it can also help to create unique heuristics for a particular work based on the draft itself.  In addition to the sequential read-through, it helps to understand which elements of a research article are the most critical.  These are the can’t-fail elements that have to be in place for a work to advance.

Theoretically, if the research was conducted correctly and rigorously, it is possible to address gaps in the writing; it is possible to redraw data visualizations; it is possible to re-write interpretations of the research findings and data.  Some aspects of research are not fixable, though.  Indeed, there are flaws from which a work cannot recover with all the revision and editing in the world, such as original research that was done incorrectly or incompletely.

Dealbreakers:
  • For example, if research was done on humans but did not align with human subjects research standards, that would be a dealbreaker.  No self-respecting editor or publication will take on a work that comes with ethical lapses and legal liabilities.  (Research works can serve as models for others’ emulation, so an unethical work should not be held up as the basis for anyone else’s work.)
  • If a research work includes over-assertions, that is often a dealbreaker.  Overreach can come from reaching beyond one’s expertise, which is risky; in some cases, the novice researcher may not realize he or she is in over his or her head.  Overreach can come from running data through a software program without truly understanding the data, the data analytics processes, or the results.  Overreach can come from a misinterpretation of the research findings.  If the researchers-writers do not see that they are over-asserting, that is a problem of expertise and/or training.  If they do see the mistake but do not address it, that is a problem of apathy, laziness, and/or unprofessionalism.  If they are purposefully covering over their own research weaknesses, that is potentially a problem of deception, which is fatal not only to the work but to the researcher(s) and their reputation.  People constantly have to work against their own gullibility and cognitive biases.  (When an author or authoring team tends to overreach, that tendency shows up in the types of publications they will submit a work to…  Over-confidence often means that the author or team does not perceive the mismatch between the target publication and their own work.  This situation can resolve positively if they know to take the peer reviews, sift for what is valuable, and move forward to revise the work and submit again or submit elsewhere; however, if the individual or team becomes defensive and angry, they will burn time without making progress.)
And what is not there...

Peer reviewers need to see what is present in the work and also what is not.  They have to be able to see non-obvious potential.  They need to read into a work and try to help the researchers get to a place where their best work can be burnished and brought forth, and where that is not possible, to help them know when to withdraw a work (or to have it rejected).

Finally, do peer reviewers try to read a work for the author’s hand and the person(s) behind the writing?  Simply, yes.  After all, all researcher-writers have tells.  They have favored turns of phrase.  They will cite themselves, in lead-up research.  They will show their hand in terms of research obsessions.  That said, in a field as large as educational technology, for example, unless an author or authoring team self-cites extensively, it is rare to be able to attribute authorship to a particular person or persons.




About the Author

Shalin Hai-Jew works as an instructional designer at Kansas State University.  She reviews for a number of publishers, including Wiley, Routledge, Elsevier, and others.  Her Publons profile is available at https://publons.com/author/1268346/shalin-hai-jew#profile.  The profile includes only partial review information; book chapter reviews are not currently included.

Dr. Hai-Jew is editing a book titled "Form, Function, and Style in Instructional Design," due out in mid-2019.  She is also editing a text titled "Profiling Target Learners for the Development of Effective Learning Strategies," due out in 2020.  She is inviting chapter proposals for both works. 

She has an ORCID identifier for her verified publications:  

ORCID iD:  orcid.org/0000-0002-8863-0175

(ORCID identifiers are free.)  

Discussion of "Providing Peer Review for Top-Tier Academic Publishers"

Prestige vs. Open-Access Open-Source Publishing

Today at my university, the head of the libraries and some librarian administrators presented on an article titled “Death by 1,000 Cuts | Periodicals Price Survey 2018” (https://www.libraryjournal.com/?detailStory=death-1000-cuts-periodicals-price-survey-2018).  The survey results apparently suggest that serial subscriptions are rising in cost at about 6% annually, which, in combination with budget cuts at institutions of higher education, means much less purchasing power for content at libraries.
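As a back-of-the-envelope illustration (the starting figure here is hypothetical), a 6% annual increase compounds quickly: a $1,000,000 serials budget line would need roughly $1,340,000 after five years (1.06^5 ≈ 1.34) just to keep the same subscriptions, even before any institutional budget cuts.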

Why is it that academics provide their research work, writing, and datasets for free to publishers that are making off with double-digit profit margins?

To address this, academic libraries are looking to freeze content subscriptions, to collaborate in entities pushing for open-access publishing, and to get into the publishing game themselves (disintermediating the third-party publishers).

I had a few thoughts:

(1) Given how much of academia is about prestige, how can such open-access publishers win over a tough audience of researchers? Prestige comes from earned elitism and actual performance. Why would anyone give away advantage by going with lesser publishers than they can afford?
(2) Given how much academia expects to be paid for work, how can open access stay low-cost? Or if such endeavors are kept low-cost, how can the publishers attract researchers and data analysts? (I’m thinking of grants on which I’ve been co-PI…and even for open-access books, academics expect multiple years of pay…and perks. The costs go up, not down, when universities become their own publishers.)
(3) Then, given people’s cognitive biases and their ego protections, how can up-and-coming researchers and authors hone their skills to the level needed with open-access publishing?

What I’m seeing is a two-track evolution…with high-end prestige publishers and low-end open-access, open-source publishers that will take (virtually) anything and use free peer reviewers…and the problems will continue with the rising costs of academic research content. (In actuality, the cost of paying for content is a small percentage of what the original research itself may have cost…because the U.S. government and private industry subsidize the research through grants.)

Posted on 12 February 2019, 8:44 pm by Shalin Hai-Jew
