PREFACE
Heidi A. McKee and Dànielle Nicole DeVoss
Web sites, blogs, tweets, text chats in massively multiplayer online games, status updates, remix videos, collaborative mashups, cloud computing, and content crowdsourced across continents: digital tools, technologies, and networks have transformed the processes and products of writing. So, too, have the contexts in which we engage in the teaching and learning of writing. Within this dynamic landscape—where established and emerging practices and processes interanimate and remediate—we as writing instructors and administrators face some challenging questions, particularly around the assessment and evaluation of student writing, including, but of course not limited to, questions like these:
- How do different approaches to assessing traditional writing (8 1/2” x 11” word-centric texts) port—or not—to the assessment of digital writing? What challenges and opportunities for assessment do multimodal, networked texts present?
- What heuristics might we use to help new instructors learn to evaluate and grade multimodal texts?
- How might the multimodal, networked affordances of digital writing affect issues of equity and access? How might groups often disenfranchised by more traditional assessment be impacted by digital writing assessment?
- How might assessors of large eportfolios make sense of and find pathways for reading such complex texts?
- By what criteria should program administrators and instructors assess and select course-management and/or eportfolio systems?
- How might digital technologies be used to deliver and assess first-year composition in ways other than one-instructor to 20-some-student models and in ways that directly connect classroom evaluation and program assessment?
These are just some of the questions contributors to Digital Writing Assessment and Evaluation address.
There are, of course, many ways to consider the intersections of the terms digital, writing, assessment, and evaluation. See, for instance, the work of Cheryl Ball (2012), Patricia Ericsson and Richard Haswell (2006), and Anne Herrington, Kevin Hodgson, and Charlie Moran (2009). See, too, the National Writing Project book Because Digital Writing Matters (NWP, DeVoss, Eidman-Aadahl, & Hicks, 2010), Michael Neal (2011), Diane Penrod (2005), Jody Shipka (2009), and Madeline Sorapure (2006). See the forthcoming special issue of Computers and Composition on multimodal assessment practices, and listen to the conversations happening in hallways and faculty lounges, on professional email lists, and at conferences.
In this collection, we have two primary emphases. First, we emphasize the assessment and evaluation of digital writing—the pedagogical, methodological, technological, and ethical approaches for and issues involved with assessing and evaluating multimodal, networked texts and the student learning they represent. Of course, nearly all writing today—even word-processed, alphabetic text—is digital, because it exists as pixels and bits on a computer at some point in the composing process. Our focus with digital writing is on multimodal and/or networked texts, to which essayistic (Hesse, 1999) approaches to the assessment and evaluation of writing cannot necessarily port over seamlessly. Second, we emphasize the use of digital technologies to change how writing (both digital and traditional) and writing instruction in large-scale programs are delivered and assessed. As we discuss below, we do not focus on the computer-scoring/machine-scoring of writing, but rather on digital distribution and collection systems like MinerWriter and My Reviewers.
Contributors to Digital Writing Assessment and Evaluation provide research- and practice-based discussions of these aspects of digital writing assessment from diverse perspectives, including local and broader considerations of massive program revisioning, the economic pressures on writing classes and programs, classroom-based evaluation practices, national standards, multimodal service projects, equity and fairness, and the ever-changing digital landscape of eportfolios. As the first comprehensive collection to focus on digital writing assessment and evaluation primarily at the post-secondary level, Digital Writing Assessment and Evaluation will be, we hope, a significant resource for teachers and administrators, helping to strengthen practice and to generate further scholarship.
No matter the assessment questions—whether classroom-based or program-level; whether in first-year writing, technical communication, or writing-across-the-curriculum; whether formative or summative; and whether for purposes of placement, grading, self-study, or external reporting—we hope that you will find guidance and inspiration from the contributors to this collection.
OVERVIEW OF THE COLLECTION
We present to readers 14 chapters authored by 38 contributors from 20 different institutions. Chapter contributors include curators of the National Writing Project’s Digital Is web site; members of national committees on assessment; directors of first-year, technical, and professional writing programs; podcasters and creators of online resources; and new and experienced researchers and instructors—all of whom approach digital writing assessment and evaluation with keen attention both to work already done in the field and to where the field is heading. As editors, it has been thrilling and thought-provoking (to say the least!) to engage with the depth and richness of perspectives shared in this collection, and we hope readers will feel the same.
We have structured the book to begin with a discussion of some broad, fundamental considerations for digital writing assessment, including issues of fairness and concerns related to surveillance and data mining. We then move to two sections that focus on classroom and multimodal assessment and evaluation, where chapter authors provide detailed approaches for assessing student digital writing projects; chapters in these sections also provide sample student projects for analysis and discussion. In the last section, we consider program-level assessment and the approaches programs have taken both to include digital writing in their assessments and to use digital technologies in the assessment process.
Even in this somewhat lengthy collection, there is, of course, much about digital writing assessment and evaluation that is not discussed—or not discussed in depth. After overviewing the chapters below, we address these omissions in hopes of encouraging important research and scholarship in those areas.
Part I: Equity and Assessment
In the opening chapter, “Making Digital Writing Assessment Fair for Diverse Writers,” Mya Poe discusses the issue of fairness in relation to digital writing assessment. Drawing from the Standards for Educational and Psychological Testing, she directs readers to consider several of the 12 fairness guidelines and argues that “digital writing assessment must go beyond the inductive design of rubrics to include contemporary, theoretically informed assessment inquiries of fairness if we are to ensure that we are working to make digital writing assessment equitable for all students.”
From discussions of fairness we move to considerations of online surveillance and privacy issues in Angela Crow’s chapter on “Managing Datacloud Decisions and ‘Big Data’: Understanding Privacy Choices in Terms of Surveillant Assemblages.” As Crow notes, WPAs have always needed to address privacy concerns in relation to assessment, but, in the era of “big data,” these issues become more complex. Crow provides a useful heuristic for program administrators to apply when considering various online assessment systems, and she suggests that the field of writing studies may be best served by creating its own digital platform as a means of protecting student writers and the writing they produce.
Part II: Classroom Evaluation and Assessment
In “Seeking Guidance for Assessing Digital Compositions/Composing,” Charles Moran and Anne Herrington open the section by posing a key question: “Where are the criteria to be found for assessing digital texts?” They then explore resources to which instructors may turn for guidance, including trade articles, resource-oriented web sites, and organizational standards. They conclude by arguing that “teachers’ actual practice, directly observed in context and in tandem with specific classroom materials, is the best source for that guidance,” and they provide a detailed discussion of two teachers’ classroom practices and materials.
This close focus on classroom practice continues in the subsequent chapters. In “Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects,” Colleen Reilly and Tony Atkins consider the paralysis that comes over some writers when they work with new technologies, and describe how some students choose to play it safe: doing what they know rather than aspiring to push their writing and their use of digital writing technologies further. Reilly and Atkins propose and provide rich artifacts for an aspirational assessment process “designed to motivate and reward student willingness to grow in their use of digital media by providing them with instruction in assessment and involving them in development of the assessment criteria in order to create a classroom environment in which students maximize their acquisition of expertise while engaged in the production of digital compositions.”
Whereas Reilly and Atkins are experienced instructors of digital writing—as are many of the contributors to this collection—much of the assessment and evaluation occurring in classrooms and institutions today is conducted by instructors with little or no formal experience with digital writing technologies. So how do instructors more familiar with traditional writing approach the assessment of multimodal texts? This is the question Emily Wierszewski takes up in the detailed research study reported in her chapter, “‘Something Old, Something New’: Evaluative Criteria in Teachers’ Responses to Student Multimodal Texts.” She presents the results of a study of eight teachers responding to their students’ multimodal work.
Of course, even experienced digital writing instructors face the new when assessing digital work, as is the case for Susan Delagrange, Ben McCorkle, and Catherine Braun, who examine various approaches for evaluating student remix projects. In “Stirred, Not Shaken: An Assessment Remixology,” they provide extensive case studies of responses to student remixes to illustrate possible frames (rhetorical, cultural criticism, design-based, legal) for the evaluation and assessment of emerging digital genres. Specifically, Delagrange offers a process of collaborative rubric design; McCorkle discusses how Fair Use guidelines can be deployed to inform a student remix assignment from inception to final assessment; and Braun considers “the broader institutional and programmatic contexts within which our courses are embedded and which often put forth their own assessment criteria that may or may not encourage remix assignments.”
Part III: Multimodal Assessment and Evaluation
Multimodality—the integration of audio, video, and still images in texts—is one of the key features of some digital writing genres, and we are pleased to be able to offer an array of chapters addressing multimodality. We open Part III with a chapter co-authored by the Multimodal Assessment Project (MAP) Group, a group charged by the National Writing Project and funded by the MacArthur Foundation to develop approaches for evaluating multimodal writing. In “Developing Domains for Multimodal Writing Assessment: The Language of Evaluation, the Language of Instruction,” the authors present five domains—context, artifact, substance, process management and technique, and habits of mind—“that link the language of assessing multimodal writing with acts that drive the creation and reception of digital texts.” Significantly, the authors created these domains from the bottom up, building them from observation and engagement with how writers approach composing multimodal texts.
In the next chapter, “Composing, Networks, and Electronic Portfolios: Notes toward a Theory of Assessing ePortfolios,” Kathleen Blake Yancey, Stephen McElroy, and Elizabeth Powers note that eportfolios have evolved from collections of individual writings to complex, networked artifacts that offer writers multiple pathways for composing and readers (and assessors) multiple pathways for reading. Through detailed examination of one student eportfolio, they offer new criteria, new practices, and new theories for eportfolio assessment.
Crystal VanKooten also provides new approaches for assessing multimodal texts. In “Toward a Rhetorically Sensitive Assessment Model for New Media Composition,” she draws from the assessment frameworks of Paul Allison, Eve Bearne, and Michael Neal to offer a model that addresses the process and product of new media compositions and reflective engagement of composers. She provides several sample student videos and accompanying reflective writings to illustrate the use of her approach for assessment.
Community-based projects are often an important part of multimodal writing classrooms—where students compose web sites, videos, and other mediated content for clients, often with the aim of engaging the public around issues and resources. Figuring out how to assess this public work can be tricky, as Meredith Zoetewey, Michele Simmons, and Jeffrey Grabill show in “Assessing Civic Engagement: Responding to Online Spaces for Public Deliberation.” As they explain, “an important component of civic web sites is their usefulness. However, the need to evaluate civic web sites designed for usefulness comes prior to instructors having the information required to gauge how audiences make use of the web site.” To address this complexity—how to grade something before you know whether it achieves its purpose—they draw from the concepts of productive usability and catalytic validity to offer an evaluation framework that can scale in time and space to enable course-based and community-based judgments.
Part IV: Program Revisioning and Program Assessment
Assessment doesn’t, of course, just happen in classrooms. Engaging in program-level assessment is an essential, and often required, component of our work as writing instructors and administrators. In this last section of the collection, authors provide compelling institutional narratives of how, why, and to what effect they used digital technologies to change program-wide curriculum, instructional practice, and assessment. In particular, a number of authors describe changes to classroom structures so that student work is no longer read by just one faculty member but rather by committee, a change in distribution enabled by digital technologies.
Beth Brunk-Chavez and Judith Fourzan-Rice offer “The Evolution of Digital Writing Assessment in Action: Integrated Programmatic Assessment,” in which they describe the University of Texas at El Paso’s move to integrate digital writing in first-year composition and the program’s implementation of MinerWriter, a digital distribution system that enables student feedback and evaluation by scoring committees.
Evaluation by committee is part of the massive revision of first-year composition at Arizona State University, as reported by Tiffany Bourelle, Sherry Rankins-Robertson, Andrew Bourelle, and Duane Roen. In response to intense budget pressures, they revised the two-semester sequence of first-year writing to be delivered entirely online in a studio model where students work at their own pace among a cohort of 20 to 150 students to produce multiple drafts of multimodal writing projects and a final eportfolio. Students receive feedback and evaluation from multiple instructors and peer instructional assistants. In addition to detailing how their online Writers’ Studio model works, they aim to encourage scholarly conversations about alternative models for first-year composition in which digital technologies enable different approaches for evaluation and assessment of student writing.
The use of digital technologies for linking classroom assessment and large-scale program assessment is the focus of Karen Langbehn, Megan McIntyre, and Joe Moxley’s chapter “Re-Mediating Writing Program Assessment.” They explore how an online writing collection, distribution, and feedback system—My Reviewers—closes the assessment loop. Students in all sections of first-year composition use the online system to turn in their work to their instructors, who then evaluate and comment on it in My Reviewers. My Reviewers thus becomes a living, real-time repository of student work and instructor evaluations that can be analyzed by both classroom instructors and program administrators. Langbehn, McIntyre, and Moxley argue that, “by using My Reviewers to analyze how students and teachers are responding to a curriculum in real time, the WPA and teachers work with one another to fine-tune assignments and crowdsource effective teacher responses, thus improving program assessment results at the same time as we clarify teacher responses to student writers.”
In the final chapter of the collection we return to eportfolios, but from a faculty development perspective. In “Thinking Like a Program: How Electronic Portfolio Assessment Shapes Faculty Development Practices,” Anne Zanzucchi and Michael Truong focus on the process and impact of their program’s transition from paper to digital repositories and from an alphabetic, text-based curriculum to a multimodal one. In a chapter particularly useful for programs on the cusp of change, they offer guiding principles and examples of best practices for moving toward more multimodal approaches in eportfolio development and assessment and for providing faculty support during transitions.
CONTINUING THE CONVERSATION
For readers ranging from experienced digital writing instructors and administrators to those new to the field or to a particular aspect of digital writing, this collection offers research- and experience-based analyses on a wide variety of topics. However, as with any collection, there is, obviously, a great deal not included or not discussed in enough depth.
A hugely important area to consider for digital writing is disability studies and issues of access. We need to design our writing projects, systems, and assessments for all students. Originally, in Part I of the collection (Equity and Assessment), we planned to include a chapter addressing the role of Universal Design in digital writing assessment, but, as happens to us all as writers and researchers at some point in our careers, situations arise and timelines change, and we were not able to include such a chapter. This is a gap that must be addressed by further research and future publications. Although the broad field of writing studies has a rich and growing body of literature on disability studies, the field of computers and writing needs much more work in this area, particularly in relation to assessment and evaluation. Pioneers in the field include Terrance Collins (1990), who wrote “The Impact of Microcomputer Word Processing on the Performance of Learning Disabled Students in a Required First-Year Writing Course,” and John Slatin (2001), who authored “The Art of ALT: Toward a More Accessible Web.” These two articles, however, were published a decade or two ago and are among only a handful of articles on disability published in Computers and Composition. Other important work on disability and digital writing includes the Kairos special issue on “Disability—Demonstrated By and Mediated Through Technology,” edited by Beth Hewett and Cheryl Ball (2002), but that, too, was published more than a decade ago. Melanie Yergeau’s work (e.g., her 2011 dissertation) is well-situated to transform our field and our approaches to abilities. Our field does have scattered chapters and articles in which authors address disability and writing assessment and evaluation (e.g., Carmichael & Alden, 2006; Meyer, 2013; Walters, 2011), but it needs much more extended and comprehensive scholarship on disability and digital writing assessment.
Whereas we had hoped to include more discussion of disability in Digital Writing Assessment and Evaluation, we intentionally excluded the computer scoring/machine scoring of writing when designing the call for this collection. We chose to focus on human readers’ experiences assessing and evaluating digital writing. However, the challenges—and, yes, dangers—that machine scoring poses to writing, writers, and writing instruction are so real and so widespread that we hope more extended work, like Patricia Ericsson and Richard Haswell’s (2006) Machine Scoring of Student Essays: Truth and Consequences, is forthcoming. As scholars, teachers, and citizens who care about writing and all that writing does in our society, we need to educate ourselves and others about machine scoring, and we need to take action to defend the rhetorical process of people writing to people. To that end, the petition “Professionals Against Machine Scoring of Student Essays in High-Stakes Assessment” (Haswell & Wilson, 2013) is a good start. If you haven’t had a chance to read the petition and review the list of references provided, we suggest you do.
Pairing—perhaps, in the future—with machine scoring is the rise of massive open online courses (MOOCs). The assessment and evaluation of MOOCs, and of the writing pedagogies, processes, and products in those courses, are further issues not addressed in this collection, in part because writing MOOCs were just emerging in 2011 when we started this project. At the time of this writing, there are many MOOCs in process, including a first-year composition course offered by Duke University that started with more than 60,000 students enrolled: “English Composition I: Achieving Expertise” (a rather problematic title, as “expertise” is certainly not gained in a one-semester, first-year course). As is the case with all technological and pedagogical innovations, the deployment and use of writing MOOCs is ahead of the existing scholarship on them. Key questions around assessment that MOOC researchers, instructors, and administrators are just beginning to address include: How do our understandings of what it means to teach and evaluate writing need to change, especially when many institutions seek to implement MOOCs with the lowest possible investment of faculty resources? What sorts of MOOC models are emerging that reflect strong institutional, infrastructural, and intellectual investments? How do instructors of writing assess the compositions of tens of thousands of students in an online course, especially when most of the participants are not enrolled for credit?1
From issues of access and considerations of student abilities to questions raised about assessment within MOOCs, there is clearly much, much more to be researched and learned about digital writing assessment and evaluation. As we develop new technologies for writing and for writing evaluation and assessment, and as these new technologies evolve; as our pedagogies mediate and remediate; and as writers creatively integrate, challenge, and invent anew writing in digital spaces, we need to be just as flexible and adaptable in our approaches to assessment and evaluation. The specific technologies and digital writing discussed in this collection will someday seem dated—snapshots from another era, like DOS prompts or HyperCard constructions. But what we hope will endure from this collection are the dynamic approaches for thinking about and engaging with digital writing assessment and evaluation. Amid the ever-changing landscape of digital writing, we hope Digital Writing Assessment and Evaluation will serve as a resource, a touchstone, and, importantly, a springboard for further research and conversations.
NOTES
1. Interestingly, a form of credit is an option available for $190 through Coursera’s (Duke’s MOOC provider) “Signature Track.” Yet, as Steve Krause (2013) astutely noted, because Duke itself will not recognize that credit but offers it to other institutions, “it seems a little shady to me that places like Duke are perfectly happy to offer their MOOC courses for credit at other institutions but not at Duke.”
REFERENCES
Ball, Cheryl E. (2012). Assessing scholarly multimedia: A rhetorical genre studies approach. Technical Communication Quarterly, 21, 61–77.
Collins, Terrance. (1990). The impact of microcomputer word processing on the performance of learning disabled students in a required first-year writing course. Computers and Composition, 8 (1), 49–67.
Ericsson, Patricia, & Haswell, Richard. (Eds.). (2006). Machine scoring of student essays: Truth and consequences. Logan: Utah State University Press.
Haswell, Richard, & Wilson, Maja. (2013, March 12). Professionals against machine scoring of student essays in high-stakes assessment. Retrieved from http://humanreaders.org/petition/
Herrington, Anne; Hodgson, Kevin; & Moran, Charles. (2009). Teaching the new writing: Technology, change, and assessment in the 21st-century classroom. New York: Teachers College Press.
Hesse, Doug. (1999). Saving a place for essayistic literacy. In Gail Hawisher & Cynthia L. Selfe (Eds.), Passions, pedagogies, and 21st century technologies (pp. 34–48). Logan: Utah State University Press.
Hewett, Beth L., & Ball, Cheryl E. (2002). Disability and technology [special issue]. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 7 (1). Retrieved from http://english.ttu.edu/kairos/7.1/binder2.html?coverweb/bridge.html
Krause, Steve. (2013, February 9). A few thoughts on MOOC credit (and life-credit). Retrieved from http://stevendkrause.com/2013/02/09/a-few-thoughts-on-mooc-credit-and-life-credit/
Meyer, Craig A. (2013). Disability and accessibility: Is there an app for that? Computers and Composition Online. Retrieved from http://www.bgsu.edu/departments/english/cconline/spring2013_special_issue/Meyer/index.html
National Writing Project; DeVoss, Dànielle Nicole; Eidman-Aadahl, Elyse; & Hicks, Troy. (2010). Because digital writing matters. San Francisco: Jossey-Bass.
Neal, Michael. (2011). Writing assessment and the revolution in digital texts and technologies. New York: Teachers College Press.
Penrod, Diane. (2005). Composition in convergence: The impact of new media on writing assessment. New York: Routledge.
Shipka, Jody. (2009). Negotiating rhetorical, material, methodological, and technological difference: Evaluating multimodal designs. College Composition and Communication, 61 (1), W343–W366.
Slatin, John. (2001). The art of ALT: Toward a more accessible web. Computers and Composition, 18 (1), 73–81.
Sorapure, Madeline. (2006). Between modes: Assessing student new media compositions. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 10 (2). Retrieved from http://english.ttu.edu/kairos/10.2/binder2.html?coverweb/sorapure/index.html
Walters, Shannon. (2011). Autistic ethos at work: Writing on the spectrum in contexts of professional and technical communication. Disability Studies Quarterly, 31 (3).
Yergeau, Melanie. (2011). Disabling composition: Toward a 21st-century, synaesthetic theory of writing. Unpublished doctoral dissertation, Ohio State University, Columbus, OH.