Andrea Lunsford
As best I can remember, I got my first computer in the winter of 1985. The big grey IBM squatted on my desk like a huge toad, taking up almost every square inch; I felt it was looking at me, reproachfully but also expectantly. What would I do with it?
Well, I did what I could: I studied the manual, I taped a little guide onto the keyboard, I memorized commands. Eventually I managed to produce a word-processed letter (it would be months before I could use the computer to type out an address and print an envelope to put it in). Slowly, I began to be less intimidated by this newcomer, and I even imagined sometime in the spring that perhaps here was the answer to writing teachers’ prayers. Maybe we wouldn’t need to work so hard at teaching writing anymore: The computer would do it for us. Maybe computers could even evaluate student writing, using our comments and responses to arrive at a letter or number grade. I should have remembered to be careful what I wished for. Some 30 years later, we are in the midst of an international debate over the efficacy of computer scoring of student writing, even as we are in the midst of a fundamental revolution in what writing is and what it means to write.
During these 30 years, scholars of composition and rhetoric have led the way, first in introducing computers into classrooms and developing robust pedagogies for computer-enhanced instruction, and second in learning how best to assess writing development. This early work helped me understand that the computer, no matter how new and revolutionary it seemed, was simply the latest communication technology in a lineage that stretched all the way back to writing itself—which I think of as one of the Western world’s oldest technologies. Just as we had used other tools (pencils, typewriters, tape recorders) to help us write and to assess and respond to writing, so we would use this one. And computers have been extraordinarily useful in assessment: They can help us aggregate large bodies of student writing and parse it for quantitative differences. Indeed, some computer programs—I’m thinking specifically of DocuScope, which was very helpful to me and my colleagues in analyzing data from the Stanford Study of Writing—can identify qualitative differences. (For a moment in the late 1970s and early 1980s, literary critic E. D. Hirsch argued that computers could help establish what he called “relative readability” in student texts and thus speed evaluation along: I wonder if any readers of this book remember Hirsch’s highly controversial 1977 book, The Philosophy of Composition?)
Computers have not only been useful in assessment; they have ushered in new and revolutionary understandings of writing itself. About a year after I got my first computer, when I was completely immersed in learning to use it and in trying to theorize its place in our curriculum, I wrote about what this new tool might offer. The use of computers, I suggested in a chapter for Karen Greenberg, Harvey Wiener, and Richard Donovan’s (1986) Writing Assessment: Issues and Strategies, has the potential for
...helping us understand the connections among speaking, reading, and writing; for encouraging interaction and dialogue; for bringing distant group authors together; for fostering a constructive view of error; and for speeding the transmission of information. (p. 9)
In the years since these musings, we have indeed come to reexamine our concept of writing, as words leapt off the page and fairly danced before our eyes, in dazzling color, on our computer screens. We have also come to redefine the relationship among the communicative arts, realizing that reading, writing, and speaking are completely intertwined, shading in and out of one another in contemporary scenes of discourse. As for “encouraging interaction and dialogue” and “bringing distant group authors together,” well... I couldn’t have imagined just how true this would be or the degree to which computer-related technologies would eventuate in social media, crowdsourcing, new understandings of textual production, and, especially, transformations in textual ownership. As I write these words, I am sitting at my desktop, opening and closing windows, checking email and Facebook postings, looking up information (what year was E. D. Hirsch’s book published?), my iPad and iPhone at the ready. In my rearview mirror, 1986 might as well have been 1906, or 1876. Computers and accompanying digital technologies have surely fulfilled my long-ago expectations; indeed they have done so beyond my wildest dreams. But, at least to date, they have not answered my early prayer for help in evaluating student writing. In the same chapter quoted above, I wrote as much, saying that although computers “will not be the answer to our prayers in terms of assessing student writers,” what they may do instead is “alter our concept of assessment” (1986, p. 10).
So now, in 2013, I have an opportunity to read and contribute to a volume that aims to do just that—to alter our concept of assessment and to provide guidance as we negotiate the latest wave of changes in writing. Heidi McKee and Dànielle DeVoss have gathered here extraordinarily prescient, provocative, and practical explorations of the kinds of digital writing students are doing today and, moreover, of responding to and assessing such writing in ways that do justice to it and to its expanding and evolving communicative goals. Best of all, they have done so in a Web-based publication that allows the authors not only to talk the talk but also to walk the walk of digital writing.
In my view, the need for this collection of essays could not be greater: As colleges and universities rush to embrace MOOCs and other forms of online learning, and as writing teachers across the country invite students to produce digital texts of all kinds, we are very much in need of understanding these texts, of thinking through their characteristics (especially those that move far beyond print), and of developing commensurate forms of assessment and evaluation.
Toward that end, McKee and DeVoss offer their readers 14 essays organized into four sections: Part I, on Equity and Assessment, comprises essays by Mya Poe and Angela Crow, each of which addresses the ethics of assessment, a topic that has been of ongoing concern to McKee and DeVoss. While Poe very helpfully considers the “fairness guidelines” in the 1999 Standards for Educational and Psychological Testing in relation to evaluating digital writing, Crow takes on the issue of how huge databases and “text mining” may pose dangers to privacy as well as access, arguing that because we are inevitably participating in systems of surveillance, we must “find our way to strategically press the surveillance system toward the care end of the continuum.”
In Part II, on Classroom Evaluation and Assessment, the essays move from broad ethical concerns to the specific assessment of multimodal texts. The section opens with Charlie Moran and Anne Herrington’s investigation of the difficulty and complexity of assessing composing in mixed modes, which offers practical advice derived from four sources; the most helpful advice comes in their case studies of two thoroughly contextualized examples of the assessment of digital projects (a digital picture book and an online blogging project). Colleen Reilly and Anthony Atkins describe what they call an “aspirational assessment process,” and Emily Wierszewski reports on her study of eight teachers’ responses (gathered through oral protocols) to the multimodal work of their students. Ben McCorkle, Catherine Braun, and Susan Delagrange close this section with three takes on assessing digital projects, including McCorkle’s fascinating “fair use integrated assessment process.”
Part III focuses on Multimodal Evaluation and Assessment, beginning with an essay by the National Writing Project Multimodal Group that identifies five domains (context, artifact, substance, process management and technique, and habits of mind) that the authors see as linking the assessment of multimodal writing to the acts that create it, with a focus on the importance of contextualization. You’ll just have to read ahead to learn about assessment in relation to the Google Earth Project or the Digital Mirror Camp! The three other essays in this section provide additional valuable insights, from Kathleen Yancey, Stephen McElroy, and Elizabeth Powers’ overview of the use of ePortfolios for assessment and their analysis of three distinct ways of reading such work, to Crystal VanKooten’s development of three criteria for assessing multimodal projects and her application of these criteria to student projects, to Meredith Zoetewey, Jeffrey Grabill, and W. Michele Simmons’ analysis of civic web sites and how best to assess their usefulness. By the time I finished reading this section alone, I had amassed a pile of notes I plan to bring to my own teaching and assessment in the future. I felt fired up and ready to go!
Part IV, on Program Revisioning and Program Assessment, brought me down to earth again as I read about attempts to develop computer programs that would make the teaching and assessing of writing more efficient and cost effective. Beth Brunk-Chavez and Judith Fourzan-Rice describe the development of UTEP’s “Miner Writer,” a digital distribution system that they show as having a positive effect on student feedback, professional development, and programmatic assessment. In Tiffany Bourelle, Sherry Rankins-Robertson, Andrew Bourelle, and Duane Roen’s “Assessing Learning in Redesigned Online First-Year Composition Courses,” we learn of the work these scholars and their colleagues have done, in the face of fairly massive budget cuts, to engage larger numbers of students through “multimodal instruction and learner-centered pedagogies.” Karen Langbehn, Megan McIntyre, and Joe Moxley gave me more food for thought in their “The Value Add: Re-Mediating Writing Program Assessment.” At the University of South Florida, these scholars have developed another digital assessment tool, My Reviewers, that allows them to carry out evaluation of and response to student writing and programmatic assessment at the same time, creating a feedback loop for ongoing program assessment based on what they learn from aggregating teacher responses to individual students. The result is a very efficient, streamlined system of assessment. Finally, Anne Zanzucchi and Michael Truong share the insights gained from studying how ePortfolio assessment can lead to faculty development as well as to faculty engagement in refining and redesigning assignments and even entire curricula.
I found this collection of essays so provocative that I’ll admit to reading it straight through, and then to picking and choosing essays and parts of essays to re-read, and re-read again. Engaging with the voices in this volume was like taking my own special seminar in digital writing assessment. The authors and editors of this text provide lessons that can be savored slowly—or used as a crash course for thinking through a pressing assessment issue; in short, they have given an enormously rich and useful gift to those of us grappling with logistical, pedagogical, and ethical issues of assessing digital writing.
As I looked at the table of contents for this volume, I was also struck by the deeply collaborative nature of the work reported on here; 10 of the 14 chapters are collaboratively authored, and the four single-authored essays touch on collaborations with students and/or other teachers. It strikes me that this is a simple but important lesson in terms of assessment, and especially digital writing assessment. Indeed, it is increasingly difficult to conceive of such assessment as the task of an individual teacher or scholar, and increasingly evident that exemplary assessment, whether of individual student work or of writing programs, calls for the kind of careful, thoroughgoing, meticulous—sometimes even tedious—work that is best accomplished by a good, strong team. In Digital Writing: Assessment and Evaluation, Heidi McKee and Dànielle DeVoss have gathered such a team, and readers will benefit from the lessons they share with us here.