
ENGL665: Teaching Writing with Technology

Shelley Rodrigo, Author


Amy Reading / Thinking Notes Week 4 (9/17)

This week’s readings are all about "research on research": qualitative studies of student composing and research practices and of the faculty who teach them. The key to all of them seems to be assessment. Even Scholarship Assessed, assigned to the 800-level readers, continues this trend, outlining in very specific ways how varying types of faculty scholarship can be assessed by identifying common elements. All of these selections ponder the difficulties and shortcomings of past methods of assessment, all the while commenting on how circumstances and definitions have “broadened” or shifted with time, necessitating that we (our field) adapt.



But first: the student work.



Park & Stapleton “How the Views of Faculty Can Inform Undergraduate Web-Based Research: Implications for Academic Writing.” Computers and Composition 23 (2006): 444-461.



The article’s authors outline their study of faculty perspectives on the research sources commonly used by students. Park and Stapleton argue there is a real need for a uniform assessment system with which to evaluate web sources, for both students and faculty. Their premise is that the web and related technologies have broadened what we think of as literacy (444). They do point out that some tools are available, but these have largely been directed at K-12 audiences rather than at higher education. Further, they assert that part of the dilemma is that instruction regarding source research methods and qualifications is, at best, vague, which contributes to students' persistent practice of resorting to their own evaluation tools (rather than to those that meet the standards of college faculty). The authors offer several suggestions for dealing with these circumstances, although there is a pointed observation that faculty are heavily biased toward print-based texts, both in the evaluation criteria they employ and in their assessments of acceptability.



The article outlines the methodology of the survey conducted, including the actual survey tools and the results. The authors' conclusions point to two “gaps” open to additional research: (1) the differing perspectives toward sources held by faculty versus students, and (2) the identified “idiosyncrasies of conducting research through new electronic media” and the fact that, as of 2006, very little scholarship had been published on the use of web text and graphics as a means of assessing scholarly worth. Many of their observations reflect a disconnect between what faculty expect and value from a research project and what students actually DO when they research. Interestingly, one of their student respondents makes a remark reminiscent of the Citation Project’s findings: “[s]tudents do not know how to properly utilize the material” (451), and therefore resort to sentence-level mining of material.

The authors point to the types of web sources students often resort to using (like blogs or commercial sites) as an illustration of the need to help students assess “the quality of Web sources” with some “formal mechanism” (452). Thus, they provide a “prototype of a Web checklist” (455) that is very much like that suggested by the article in Digital Scholar.



McClure, Randall and James P. Purdy, eds. The New Digital Scholar. Medford, NJ: American Society for Information Science and Technology, 2013. 109-87.

Jamieson, Sandra and Rebecca Moore Howard. “Sentence-Mining: Uncovering the Amount of Reading and Reading Comprehension in College Writers’ Researched Writing” (Chapter 5); Silva, Mary Lourdes. “Can I Google That? Research Strategies of Undergraduate Students” (Chapter 7).



Reading Chapter 7, “Can I Google That? Research Strategies of Undergraduate Students” by Mary Lourdes Silva, and Chapter 5, “Sentence-Mining: Uncovering the Amount of Reading and Reading Comprehension in College Writers’ Researched Writing,” led me to Amazon to purchase this book in its entirety. Chapter 5 is written by two of the key researchers responsible for the Citation Project (CP), Sandra Jamieson and Rebecca Moore Howard (of interest to me because my institution is one of the 16 that contributed student papers). Their opening anecdote recalls a mid-20th-century composition textbook that emphasized conversation’s role in research, an observation the two authors build on throughout the chapter as they present the conclusions they reached through the CP. The primary focus seems to be students' ability to summarize, to “digest” as one would in a conversation, the ideas gleaned from sources. They make a distinction between composing from sources and copying from them. Too often, their research found, students' versions of summary were closer to sentence-level quote mining. They point to three practices that became the focus of their research (paraphrase, summary, patchwriting), meant to study “student understanding of texts” by exploring “what they do with their sources” (111).



The chapter outlines their methodology and results meant to illustrate “patterns across institutions” which contributed student papers (including AUM). They also raise a concern that faculty are too quick to charge “plagiarism” for student writing that is, more realistically speaking, simply “misus[ing]…sources” (123). The key to this is a measure of student engagement with a source, and their findings may suggest “that students do not know how to read academic sources or how to work with them to create an insightful paper” (126).



Silva’s chapter builds on this but, like Park and Stapleton's article, focuses on the “how” of student research in terms of Information Literacy (IL) (161). She makes several observations about factors that seem to contribute to her conclusions, including the assumptions teachers make about students’ internalization of navigational skills when using electronic search technologies.

She also points out that there are gaps in our field’s research on IL and writing, gaps which may be successfully filled by turning to research in fields like “educational psychology…and cognitive science,” which she suggests are finding that “conducting research online is a far more complex sociocognitive technical activity than suggested by existing methodological approaches” (163). To that point, she calls for a “multiliteracies approach as a pedagogical framework for research-writing instruction” to make up for what we currently overlook (163). Most interesting was the distinction between “navigational technique” (how we search) and “information literacy” (how to fulfill an informational need) (163-64). She wonders whether we are successfully making this distinction part of our instruction. I wonder too.


Much of the chapter is devoted to explaining the research design and methodology for a study focused on three students’ research practices and the instructor’s interventional activities. Several recommendations emerged, and one in particular interests me as a teacher: “training on how to interpret a citation” from a web source (173) and “mining a reference” (175). Silva surmises that “[t]hese findings suggest that students have a limited understanding of the relational, hierarchical, and technological structure of electronic information systems and require further instructional support” (176).



The Appendix offers several key activities/handouts used in the course of the study, several of which I hope to implement in my classroom this semester!



(800 level) Glassick, Charles E., Mary Taylor Huber, and Gene I. Maeroff. Scholarship Assessed: Evaluation of the Professoriate. San Francisco: Jossey-Bass, 1997.



The book is founded on, and continues, Boyer’s Scholarship Reconsidered, published just seven years earlier. This time the focus is on assessment. The table on page 36 offers a useful summary of the six key standards recommended for such assessment, whether applied by institutions, by disciplines, or by faculty themselves when developing their plans for academic engagement. The authors state that “the new hierarchy of academic tasks” produces “a crisis of purpose,” a crisis that results not only in “costs to undergraduate education” but in a “weakening of general education” as well (viii).



The book draws from Boyer the four types of scholarship defined as a new paradigm in his earlier report: (1) discovery, (2) integration, (3) the application of knowledge, and (4) the scholarship of teaching. These four categories create the framework upon which the authors build key questions of assessment designed to address many of the concerns raised in Boyer’s earlier report about academia and scholarship. They point out that Boyer’s report was widely well received, even prompting changes. Despite this acceptance of expanded notions of scholarship, there remained no clear guidelines on how to assess them (12). They provide a brief overview of past means of assessment, beginning with the “good ol’ boy” network method used through the 1970s, in which evaluations were based more on personal recommendations than on proof of scholarly endeavor. By the 1980s, they write, a more structured system had begun to take hold, one that seemed more “obsessed with numbers,” thereby “shortchang[ing] teaching and service and research” (20). This approach led, in their words, to an “undervaluation of professorial service” because no one knew how to quantify it (20).



Their second chapter gets into specifics, suggesting methods for defining standards. They argue that we need the same standards for all four types of scholarship, which is difficult because the types are inherently different. Further, there is the quandary of what “quality” means (22). The “fragmented paradigm” that marked the earlier 1980s-era model “helps perpetuate the hierarchy” that places research above all else, to the detriment of students and teaching (22).


Their recommendations for change start with establishing a “vocabulary to define the common dimensions of scholarship” (24). To create this, they gathered documents from several institutions on the subject of assessment criteria and found that these had a lot in common. They present an outline of six criteria:

- Clear goals
- Preparation
- Methods
- Results
- Presentation
- Critique/reflection



For each criterion, they offer a list of three questions that can provide a more concrete evaluative approach.



As I finished this second chapter, I could see the merit of the reflective nature of these questions, a quality the authors argue leads to the kind of creativity lauded in our recent readings from Jenkins, New Learning, and Barr and Tagg, as well as in the standards promoted by our discipline (such as the presence of creativity among the eight habits of mind in the WPA’s Framework for Success). But I wondered: does the system currently allow for these? Or does disciplinary territoriality prove too heavy a counterweight? I’d like to explore this in our class discussion.

Discussion of "Amy Reading / Thinking Notes Week 4 (9/17)"

sorry

I was cheap and swiped up ODU's copy of Purdy & McClure's book (of course, after I told them to purchase it). ;-)

I am happy to see you weaving a story together with the readings over the week.

Posted on 25 September 2014, 5:28 am by Shelley Rodrigo  |  Permalink

