
The Art of Academic Peer Reviewing

Shalin Hai-Jew, Author

Case 6: William H. Hsu


Q: You recently edited a book titled “Emerging Methods in Predictive Analytics: Risk Management and Decision-making” (2014). In this book, you worked with a number of colleagues who not only authored chapters but also served as peer reviewers. What sort of insights were the most helpful for your authors in this text? Why?

A: The main advice that authors received in preparing chapters had to do with clarity: bibliographic references that needed to be added, explanations that assumed too much specialized background, glossaries or tables of notation that were needed, figure and equation captions that could be improved, and certain figures and tables that ought to be added. This was a very technical book, consisting of chapters that relate research methods and key novel findings, intended as a reference for more experienced readers and a tutorial for newer researchers seeking to enter the field. As a result, clarity and accessibility were challenging issues for an audience with diverse backgrounds and disparate levels of expertise. The most helpful insights addressed this by guiding authors to write chapters that were more inclusive of less experienced readers: citing more seminal work and presenting some rudimentary background material to reach the right level of accessibility.


Q: You’ve said that it can be quite difficult eliciting peer reviews from busy colleagues. What tactics did you use to acquire the critiques?


A: In general, I used multiple channels of communication and modalities of reminders. Repeat reminders on an irregular schedule (rather than at the same time every week) sometimes helped. Direct phone calls and instant messages were also helpful. Explicitly stating revised and updated deadlines in case of schedule slippage underscored to reviewers that their reviews were still consequential. Maintaining an "estimated date of review" spreadsheet helped me track reviews and reminded me (and sometimes a reviewer) how many weeks a review was past its originally scheduled date. Putting dates in the titles of e-mails, instead of just "URGENT," helped get those e-mails read and answered.
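The spreadsheet logic described here is simple enough to sketch in code. The following is a minimal Python illustration of the lateness arithmetic, not Hsu's actual tracker; the reviewer names and dates are hypothetical.

```python
from datetime import date

# Hypothetical entries for an "estimated date of review" spreadsheet:
# each reviewer is mapped to the date the review was originally due.
review_due = {
    "Reviewer A": date(2014, 3, 1),
    "Reviewer B": date(2014, 3, 15),
}

def weeks_late(due: date, today: date) -> int:
    """Whole weeks elapsed past the originally scheduled date (0 if on time)."""
    return max(0, (today - due).days // 7)

today = date(2014, 4, 10)
for reviewer, due in sorted(review_due.items()):
    late = weeks_late(due, today)
    if late:
        # Dated reminders, per the interview, get read faster than "URGENT".
        print(f"{reviewer}: review due {due.isoformat()} is {late} week(s) late")
```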

Finally, some reviewers never return a review, so I tried to have at least one backup reviewer for each, sometimes two. Multiple reviews could produce split decisions, which were hard to resolve, especially if I myself had a conflict of interest, so I planned in advance which editorial advisory board member would break ties if needed (even if he or she did not write a review).
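As an illustration of that tie-breaking arrangement, here is a minimal Python sketch; the "accept"/"reject" labels and the single pre-designated tie-breaker vote are assumptions for the example, not a description of Hsu's actual workflow.

```python
from collections import Counter

def decide(recommendations: list[str], tie_breaker_vote: str) -> str:
    """Majority decision over reviewer recommendations ("accept"/"reject");
    a pre-designated advisory board member's vote breaks an exact tie."""
    tally = Counter(recommendations)
    if tally["accept"] != tally["reject"]:
        return "accept" if tally["accept"] > tally["reject"] else "reject"
    return tie_breaker_vote  # split decision: defer to the board member

# Two reviews split 1-1; the designated board member breaks the tie.
print(decide(["accept", "reject"], tie_breaker_vote="accept"))  # accept
```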


Q: One of your areas of specialty is data visualization. It can be hard to get visualizations right. What are some common critiques of the data visualizations submitted in draft articles and chapters? What advice would you give to authors who are including data visualizations in their work?

A: I got some visualizations that had typos or were unclearly motivated in the text. Beyond this, there were critiques that fall under Tufte's rubric of "graphical excellence" in _The Visual Display of Quantitative Information_: labeling axes, using the minimum number of dimensions needed to convey the information, maximizing the data-ink ratio (the share of ink devoted to meaningful data), and showing variation in the data rather than in the presentation method. Some helpful ideas from Tufte's _Envisioning Information_, _Visual Explanations_, and _Beautiful Evidence_, and from Wilkinson's books on statistical graphics, were to make visualizations self-explanatory; to capture the story of processes and systems (for predictive analytics) in system block diagrams that were more than just data models; and to use black and white to convey the data separation and layering that the author(s) were accustomed to depicting in color.
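To make a few of these principles concrete, here is a minimal matplotlib sketch on made-up data, illustrating the guidelines above rather than any figure from the book: labeled axes, line styles and marker shapes in place of color for data separation and layering, and removal of non-data ink.

```python
import matplotlib.pyplot as plt

# Made-up data for two model variants; only the shape of the comparison matters.
weeks = list(range(1, 9))
baseline = [0.61, 0.64, 0.66, 0.67, 0.69, 0.70, 0.70, 0.71]
proposed = [0.63, 0.68, 0.72, 0.74, 0.77, 0.78, 0.80, 0.81]

fig, ax = plt.subplots()
# Black-and-white layering: line style and marker shape separate the series,
# so the figure survives grayscale printing.
ax.plot(weeks, baseline, "o--", color="gray", label="baseline")
ax.plot(weeks, proposed, "s-", color="black", label="proposed")
ax.set_xlabel("Training week")        # always label axes
ax.set_ylabel("Predictive accuracy")  # name the measure (and units, if any)
for side in ("top", "right"):         # drop non-data ink ("chartjunk")
    ax.spines[side].set_visible(False)
ax.legend(frameon=False)
fig.savefig("accuracy.png", dpi=150)
```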


Q: It can be very hard to elicit full chapters from researchers in the field. There is a lot of publisher competition over talent. It takes a lot of work to create a full chapter that meets requirements for publishing. How do you work to acquire new works from researchers? What strategies work? Which ones don’t?

A: Many chapters that I solicited were from colleagues whose work I had read and knew to be fresh and not previously published, but which I knew they were eager to get into print in a timely way. Thus, I sometimes consider the obsolescence curve of the work and solicit work that might otherwise be stuck for a couple of years in the reviewing pipelines of good but slower-moving journals; this is the fast-track niche. Other chapters are based on core parts of theses and dissertations that are amenable to conversion into book-chapter form but less suitable as standalone monographs; this is the thesis-section niche. Still others are expanded from conference papers that are mature enough to be journal papers but contain more background and survey material than a journal readership wants; this is the "half-tutorial" niche. In all of these cases, the work should cover cutting-edge or late-breaking research and have sufficient technical merit in its own right (e.g., more publishable results than a technical report) while being much more suitable as a book chapter than as a paper or an entire book on its own.


Q: When you assess a piece of writing, what are deal breakers for you? Why? How do you communicate the rejection without “burning bridges,” so to speak?

A: Deal-breakers include, but are not limited to, work that is: completely off-topic; far too short (less than half the length of any other chapter); far too long (more than double the length of any other chapter); not yet mature enough for publication (e.g., still in progress); or too poor in writing quality to be published as a special-issue article in any reputable journal.

I try to decline articles that are clearly off-topic as early as possible, without wasting my time, the reviewers' time, or the author's. Some chapters are obviously off-topic to anyone in the field who reads the call for papers and compares it with the abstract of the submission. I may get a second opinion on these, but it tends to be confirmatory.

For chapters that are too short, I ask the authors to consider adding to them if time permits, but I also make clear why brevity does not always correlate positively with quality, and that they are free to withdraw the submission if the brevity reflects genuinely thinner material. In the one case where this arose, the author added good-quality, relevant background material. By the same token, I had an author split a chapter submission that ran to more than 20,000 words and had two coherent main themes; the two halves were then reviewed separately.

As for communicating rejections, I have two types of rejection letters: outright "deal-breaker" letters, which are rare; and "revise and resubmit, possibly to another venue" letters, which for conferences amount to outright rejections (because of the short revision time frame) and for journals are equivalent to revise-and-resubmit decisions (below the "accept with major revisions" line). As an editor, I am willing to talk with both reviewers and authors of rejected papers on the phone, preserving reviewer anonymity while clarifying what was said between me and the reviewer.


Q: I know you encourage your graduate students to publish. What sorts of advice do you give them to enhance their chances of passing muster with reviewers and editors? What has their success rate been like?


A: I encourage my graduate students to aim for the best conference papers they can get their work into (conferences count in my field, and conference papers are more numerous for starting researchers than journal papers). Often they either submit to conferences that are too competitive (under a 20% accept rate) and have their papers rejected, or they become intimidated about submitting at all. In both cases I try to work with them to submit to a more suitable conference (e.g., one with a 20-40% accept rate, or a special issue of a journal if the work is mature). They will sometimes prefer to keep resubmitting to top conferences, which I allow; however, I also remind them to submit to specialized workshops, which have much higher accept rates, whose participants are good to excellent researchers, and which will give their paper greater visibility and more citations.

I work with my master's and Ph.D. students by looking at examples of good (and sometimes bad) publications, giving them papers to read and review, encouraging them to participate in both local and international student poster sessions, having them present work to peers, discussing their current writing, and editing work interactively with them. I also encourage them to attend workshops on improving their writing, thesis defenses (to get an idea of how to present their work and to find published theses and dissertations they can read and cite), and conferences, even without a publication of their own. That latter set of activities tends to dominate one-on-one advising meetings by the time a student is near graduation. My students have a mixed success rate on submissions, depending on the quality of the target publication: a bit less than half for top-tier conferences, but better than half for second-tier ones.


Q: You’ve said that it’s important to be at the right place at the right time to have a deeply cited work. You yourself have some papers that have been cited hundreds of times. Is there a way to position oneself to be highly cited, or is this more a factor of talents, education, interests, and workplace contexts?

A: Getting cited, for me, is half content and half everything else, including positioning, knowing the state of the field well enough to choose good dissemination channels, and self-promotion. By "positioning," I mean attending conferences to talk with people who are doing similar work and sharing enough information to get an idea of what has been done and what has yet to be done. Our field is a pretty open one, and although one still has to be careful about how much work-in-progress to share, most people in the field are highly scrupulous; in contrast with the natural sciences, the ideas and theory are as significant as, and sometimes more significant than, the execution. Thus, owning and funding a lab, and even staffing it with research associates, graduate research assistants, and programmers, is secondary to getting a proof of concept done, getting the idea written down, and presenting it coherently to others who can give it the recognition it deserves. I don't know how true this is for other branches of computer science, but it is probably the case throughout artificial intelligence, machine learning, and data mining.

Credentials matter, but talent and actual knowledge matter more. In my field, reviewers care about results, and especially about impact: scientific significance trumps everything. Reproducibility is key, but a cool application is just this year's novelty, and incremental improvements may well be obsolete in two years. An algorithm or mathematical theorem, even if it does not last forever, will stay relevant or important much longer, and a mathematical or statistical model that represents a fundamental leap is far more significant than many or most fielded systems. The exception in computer science and engineering is industry giants, who have both the applications and the fundamental advances but may publish only one of these, or neither, in order to keep their competitive edge. Interest and diligence matter more than credentials as well, for these same reasons.


Q: Anything else you may want to add?


A: I think a lot of student researchers underestimate the importance of visibility and what it can do for them. Visibility is a two-way street; it can help you get noticed and earn face time with prominent researchers, and with some who are simply very helpful people, who can supplement the advice you hear weekly from your advisor. Almost all of the students I have sent to conferences have thanked me for the chance to get fresh perspectives and talk to new people about ideas, and they have come back with new ideas that eventually led to new publishable work. As Mark Twain wrote in _The Innocents Abroad_, "Broad, wholesome, charitable views of men and things cannot be acquired by vegetating in one little corner of the earth all one's lifetime."

It's also important for students to be bold about doing research. Paulo Coelho says, "If you think adventure is dangerous, try routine. It is lethal." The best advice I ever got on this was from Robert Hecht-Nielsen, who told me that a good dissertation doesn't just get one airborne; it launches the graduate researcher's career into orbit.

 

William H. Hsu Professional Biography

William H. Hsu is an associate professor of Computing and Information Sciences at Kansas State University. He received a B.S. in Mathematical Sciences and Computer Science and an M.S.Eng. in Computer Science from Johns Hopkins University in 1993, and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 1998. His dissertation explored the optimization of inductive bias in supervised machine learning for predictive analytics. At the National Center for Supercomputing Applications (NCSA) he was a co-recipient of an Industrial Grand Challenge Award for visual analytics of text corpora. His research interests include machine learning, probabilistic reasoning, and information visualization, with applications to cybersecurity, education, digital humanities, geoinformatics, and biomedical informatics. Published applications of his research include structured information extraction; spatiotemporal event detection for veterinary epidemiology, crime mapping, and opinion mining; and analysis of heterogeneous information networks. Current work in his lab deals with data mining and visualization in education research; graphical models of probability and utility for information security; and domain-adaptive models of large natural-language corpora and social media for text mining, link mining, sentiment analysis, and recommender systems. Dr. Hsu has over 50 refereed publications in conferences, journals, and books, plus over 35 additional publications.