Chapter 4
Rewarding Risk: Designing Aspirational Assessment Processes for Digital Writing Projects
Colleen A. Reilly and Anthony T. Atkins
ABSTRACT
We propose an aspirational assessment process designed to motivate and reward student willingness to grow in their use of digital media by providing them with instruction in assessment and involving them in developing assessment criteria. Doing so creates a classroom environment in which students can maximize their acquisition of expertise while engaging in the production of digital compositions. We piloted this assessment process in the fall of 2011 and found useful evidence that it aids students in setting goals and engaging in work that exceeds minimum standards and helps them gain new proficiencies. By embedding instruction in assessment into each phase of the digital composing process, we found that we can break down the distinction between formative and summative assessment, prompting students to use the assessment criteria that they develop to guide the iterative process of creating their digital compositions.
The challenges of incorporating digital writing projects into classes at all levels are often linked to the anxiety experienced by instructors and students alike regarding the processes for assessing those projects. Instructors point to the difficulties involved in assessing work whose form lies outside of the traditional paper essays to which they have been taught to respond, while students may feel daunted by learning to work with unfamiliar digital applications to produce products worthy of the high marks they seek. Focusing specifically on classroom assessment, our chapter addresses the tension between the play and experimentation needed to learn to work productively with digital media and the potentially chilling effects that assessment processes and criteria can have on student willingness to reach beyond their current expertise. Although our chapter centers on responding to student work within classroom contexts rather than in larger, programmatic contexts, we, like Brian Huot (2002b), use the term assessment because it better describes our proposed process—one that seeks to blur boundaries between formative and summative feedback and between instruction and evaluation. To these ends, our chapter proposes assessment processes designed to motivate and reward student willingness to grow in their use of digital media, risking imperfection and even failure, by teaching them about assessment and involving them in the assessment process to create a classroom environment in which students are motivated to maximize their acquisition of expertise while engaging in the production of digital compositions.
In their article about assessing visual compositions, Lee Odell and Susan Katz (2009) called for assessment language to be both “generalizable and generative” (p. 204), applicable to a range of media productions, and designed to assist students in cultivating expertise that is useful in a range of contexts. To generalizable and generative, we add aspirational, prompting students to move past the skills they have already learned to bravely take on unfamiliar tasks and work with new tools and applications that may cause them to re-vision their composing practices. We advocate approaching the development of expertise when working with unfamiliar digital media through a process of deliberate practice, which involves engaging in extended periods of learning and attempting increasingly difficult tasks. Such an approach to learning incorporates trial and error and may result, especially at first, in less-than-expertly constructed products. Our proposed aspirational assessment processes attempt to create a space for the risk-taking that is a part of engaging in deliberate practice by involving students in the process of creating assessment criteria for the projects that they design, applying those criteria to example and peer projects, and using them formatively to guide the production of their projects, all of which, as we demonstrate below, serve to incorporate assessment into writing instruction (Huot, 2002b) and to break down the distinction between formative and summative assessment.
Involving students in creating an assessment process for a particular project also makes the process and criteria more localized and contextual (Broad, 2003; Huot, 2002a; Sorapure, 2006), and, we would argue, aspirational. Like those of Odell and Katz (2009), our recommendations focus on classroom-based assessments of digital writing projects: “the assessment that serves the day-to-day work of helping students improve their ability to compose” (p. 199; see also Huot, 2002b). Throughout our time teaching in the same program, we have discussed our frustrations in attempting to motivate students while providing them with clear and appropriate assessment criteria. Our conversations and classroom experiences have helped us to recognize that there is no easy answer or predetermined rubric that can be deployed to motivate students to engage in the uncomfortable work of learning through composing with digital media. Thus in this chapter, we outline a flexible process for aspirational assessment that can be adapted to different classroom and institutional contexts. In explaining this process, we first demonstrate the importance of aligning our assessment processes with our other pedagogical values that, for example, emphasize process over product. We also explore the importance of deliberate practice as a means of conceptualizing the learning of new digital composing proficiencies. Our proposed process maintains existing valued elements, such as the use of student reflections as central artifacts for assessment, and we demonstrate how we incorporate those elements into our revisioned approach. Finally, we close with our recommended assessment process, which adapts primary trait scoring, through a Freirian approach, to the design of an assessment architecture created in conjunction with students. This process carves out a space in which students can be motivated to take risks while learning new proficiencies necessary for effective digital composing.
ALIGNING ASSESSMENT PROCESSES AND PEDAGOGY: CULTIVATING EXPERIMENTATION AND RISK
Scholarship on assessment of all types of writing, including digital compositions, stresses that assessment practices should be directly supportive of the other pedagogical strategies employed in a particular course or experience. This alignment, while seemingly obvious, is difficult to achieve even in relation to courses focused on traditional forms of print-based writing (Wilson, 2006). As Huot (2002a) explained, writing assessment at both the programmatic and classroom levels is often treated separately from writing instruction, and “in fact, assessment has often been seen as a negative, disruptive feature for the teaching of writing” (p. 9). Elsewhere and with specific reference to classroom-based assessment and evaluation, Huot (2002b) has explained that “assessing student writing is often framed as the worst aspect of the job of teaching student writers” (p. 166). Such attitudes highlight the need to revision assessment practices to better reflect our values and shape those we want to cultivate: “We can, by changing assessment, change what we will ultimately value” (Huot, 2002a, p. 8). If we as writing teachers value constructivist practices in our teaching, these values should be obviously discernible in our assessment practices (Wilson, 2006).
To maximize the usefulness of classroom assessment practices and best align them with our values as writing teachers, many scholars argue—through the use of slightly different terminologies—that we need to focus on the instructive aspects of our assessment processes, practices, and discourse. Huot (2002a; 2002b) proposed instructive assessment—where “all procedures used to assess writing would also contain properties that work toward the improvement of the teaching and learning of writing” (2002a, p. 18). Huot (2002b) advocated accomplishing this by teaching students about and engaging them in the practice of assessment within writing classes, particularly in terms of evaluating texts for rhetorical and not merely conventional elements. Similarly, Michael Neal (2011) argued that we can best “assess our assessments by continuing our rich dialogue about the purposes of composition and what students should be able to know, think, and accomplish after taking our classes” (p. 133). Additionally, Odell and Katz (2009) highlighted the role that assessment should play in providing students with feedback that will prepare them for future endeavors, arguing that “the language of assessment must be both generative and generalizable” (p. 200). Their perspective is echoed in Huot’s (2002a) understanding of validity in assessment, which, for him, emphasizes the importance of considering the results delivered by assessment processes: “Validity centers not on the measurement itself but on the ‘adequacy’ of the decisions and ‘actions’ that are taken based on the assessment” (p. 50).
Digital compositions further complicate the process of assessment for both students and instructors, posing problems that bring into relief conflicts between approaches to assessment and instructional methods. The National Writing Project’s (2010) Because Digital Writing Matters provides a definition of digital writing that emphasizes both how texts are produced as well as how they are disseminated: “we define digital writing as compositions created with, and oftentimes for reading or viewing on, a computer or other device that is connected to the Internet” (p. 7). This definition usefully encompasses the ever-expanding range of media that people now have with which to produce and disseminate content. In writing classes of all sorts, students have access, through online and often free applications, to multiple digital tools and spaces for responding to assignments; these tools and spaces, in turn, prompt instructors to institute novel instructional methods and to locate relevant approaches to assessment.
WRITING ASSESSMENT AND TECHNOLOGIES
As a number of scholars have noted, approaching the assessment of digital compositions is often daunting for instructors. In fact, two recent articles assert that some instructors elect not to assess the digital or multimodal portions of assigned projects; instead these instructors consider only the textual essay or reflective portion of the project in their evaluations (Murray, Sheets, Hailey, & Williams, 2010; Sorapure, 2006). Furthermore, as Elizabeth Murray et al. (2010) explained, some instructors are required to use standard rubrics for classroom assessment of all writing projects; instructors are often uncertain how to adapt such rubrics to digital compositions, which may discourage them from assigning such projects or evaluating the digital portions if they do. Other obstacles to the assessment of digital compositions include the lack of confidence of instructors, especially those outside of professional and technical writing, in their ability to evaluate a design that requires them to focus on multiple modes at once (Murray et al., 2010; Sorapure, 2006), particularly when they lack training in design and feel that they thus also lack the vocabulary necessary to formulate a cogent assessment. Finally, as the National Writing Project (2010) explained, as new technologies are developed or become accessible, “some elements of accepted standards of performance are being reinvented” (p. 106). This poses a particular problem for formulating large-scale assessment standards for digital writing at all levels of the educational system; however, shifts in appropriate performance standards present difficulties for classroom assessment as well, especially when instructors lack the proficiencies themselves and/or are teaching students to compose with applications and in spaces with which they are relatively unfamiliar.
In the spring of 2010, for example, Reilly was asked by a colleague, Diana Ashe, to include a book design project in her ENG 319: Document Design class. The book would contain the reflective and creative writing completed by students in linked English and philosophy courses about the art of living taught in the previous semester and would be entitled With Our Words, We Make the World: Reflections on the Art of Living by UNCW Students. As we had access to a new version of Adobe InDesign (an incredibly robust, industry-standard page-layout and document-design application), we elected to use this application for the book design. However, this meant that Reilly not only had to quickly learn to use an updated version of InDesign, but also had to be able to teach students to effectively use master pages to create consistent layouts for their collaborative designs. While the clients (Ashe and several of her students) selected the winning design, Reilly still had to create appropriate assessment criteria for use in evaluating the results within the context of her course. Determining appropriate expectations was a challenge as Reilly was unsure about what she could expect of her students based on the risks she was asking them to take in using complex features of a powerful and multilayered application with which she was not expert. Furthermore, the students were responsible for creating a design to encompass texts written by others, some of which needed significant editing. As a result, her criteria had to reward them for the design skills they had acquired, their increased ability to function within InDesign, and their attention to the editorial details involved in producing a clean text originally written by others. A version of the winning design may be seen here:
Figure 1. With Our Words, We Make the World cover design
As Reilly’s experience in this document design class illustrates, approaches to assessing digital writing have to be nimble, able to adapt to the varieties of work students do when designing and distributing digital compositions, and able to account for their production with unfamiliar applications. These new types of work challenge instructors to weigh students’ abilities to deploy the affordances of digital compositions effectively against the types of milestones assessed in print-based writing. As the authors of Because Digital Writing Matters (2010) astutely wondered, “what is, for instance, the technical as well as rhetorical value in being able to include a hyperlink as compared with the ability to craft an effective thesis statement?” (p. 107). Such questions inevitably lead to complicated negotiations for instructors in terms of what should be valued in a particular project. For example, in a document design class, should Reilly have weighed the visual rhetoric more than the ability of students to edit the texts or effectively use InDesign? Or are all of these proficiencies necessary for the process of developing successful compositions and, therefore, equally relevant for assessment?
Newness and New Opportunities
While digital compositions, particularly those incorporating multiple media or those created through applications unfamiliar to both instructors and students, present challenges to our current assessment strategies, as Neal (2011) argued, their newness also provides us with unique opportunities to revision our assessment processes in productive ways. Neal reminded us that as technologies become more familiar, they grow more invisible or transparent; therefore, he advised us to embrace the current “kairotic” moment to develop generative assessment practices:
We have this opportunity, while the texts and technologies are relatively new, to reframe our approaches to writing assessment so that they promote a rich and robust understanding of language and literacy. If we do not, we will follow the path to assessment technologies that was mapped out in the 20th century, in which assessment largely promoted reductive views of language in favor of the modernist agenda of efficiency, mechanization, and cost effectiveness. (p. 5)
Useful recommendations for connecting the assessment of digital compositions productively to the fostering of student learning are found in earlier scholarship as well as in contemporary work, indicating that the moment for this reconsideration of digital text assessment began some time ago and is still ongoing. Writing in the late 1990s, for example, Kristine Blair (1997) recommended responses to digital composition that “acknowledge collaborative writing, revising, and responding within an electronic medium as well as the ability to integrate visuals, texts, and sound, in order to address the shifting definitions of literacy fostered by electronic writing classrooms” (p. 3). The National Writing Project (2010) likewise asserted that “the act of writing has changed with the introduction of digital tools and standards that ask for collaboration, creativity, and effective design” (p. 105). Blair (1997) also highlighted the importance of using the peer review of digital texts to enhance the authority of student voices and downplay the role of the instructor as the “ultimate reviewer” (p. 11). Finally, Blair advocated the use of revision plans to help students concentrate on process and usability in responding productively to feedback from both peers and instructors regarding their digital compositions.
Much more recently, Neal (2011) outlined four pertinent guidelines tailored to enhance any assessment processes used to respond to digital compositions or, as he termed them, hypermedia: “develop criteria for hypermedia that communicate meaningful formative and evaluative feedback, allow for flexibility and multiple ways to successfully combine media and modalities, (re)center hypermedia on rhetorical principles, and direct students toward meaningful reflection at multiple stages in the composing process” (p. 99). In explaining how to develop assessment language that is generative and generalizable, Odell and Katz (2009) gave earlier voice to some of the rhetorical principles to which Neal referred when they recommended that assessments be designed to aid students to understand “four basic conceptual processes: moving from given information to new, creating and fulfilling expectations, selecting and encoding, and identifying logical/perceptual relationships” (pp. 204–205). These processes highlight student abilities to account for the needs of their audiences and to concentrate on usability issues when creating effective and visually appealing digital compositions.
Although these suggestions for improving our approaches to assessing digital compositions are certainly useful and productive, they do not directly address a significant area of misalignment between the pedagogical approaches to instruction versus the approaches to assessment in courses that include digital compositions: namely how to encourage risk-taking and experimentation in conjunction with or through assessment processes.
Deliberate Practice as Process
Creating digital texts often requires that students learn new skills, which simultaneously requires that they take risks and even experience failure. Deliberate practice—one manner in which the acquisition of expertise has been theorized—overtly requires a process that includes trial and error, the experience of which leads to expanding proficiencies and developing expertise. We first learned of the concept of deliberate practice through reading Moe Folk’s (2009) excellent dissertation, in which he highlighted it as a productive approach to realistically calculating the level of sustained effort required to cultivate expertise. K. Anders Ericsson (2003) explained that deliberate practice involves engaging in highly focused and targeted activities designed to aid “aspiring expert performers. . . to avoid the arrested development associated with automaticity that is seen with everyday activities and instead acquire cognitive skills to support continued learning and improvement” (p. 113). These focused activities must take place over an extended period of time to result in expertise, which often requires 10 years or more (Ericsson, 2003; Ericsson, Krampe, & Tesch-Römer, 1993). Thus, within a semester-long course, students can only expect to begin a process of lifelong learning in the methods and applications important for digital composing, especially if they are new to the practices required. As the authors from the National Writing Project (2010) astutely observed, composing in digital spaces requires truly novel approaches to composition: “Digital writing is not simply a matter of learning about and integrating new digital tools into an unchanged repertoire of writing processes, practices, skills, and habits of mind. Digital writing is about the dramatic changes in the ecology of writing and communication and, indeed, what it means to write—to create and compose and share” (p. 4). Assessment practices, therefore, must help to inspire students to move beyond their current level of skill in writing with technologies and embark on the necessary processes that deliberate practice recommends for the development of expertise. This will be even more important for students who seek careers related to writing and design and who must learn to work in an environment where the tools and distribution mechanisms related to the composing process are continually changing. As Folk (2009) explained, “deliberate practice involves reaching impasses, constantly re-negotiating what your comfort zone is” (p. 133). Creating digital compositions certainly requires this ability to adapt to rapidly evolving environments for the production and publication of texts.
Significantly, scholars studying deliberate practice from a psychological standpoint highlight the importance of assessment in supplying motivation for and directionality to the efforts of those pursuing expertise: “The subjects should receive immediate informative feedback and knowledge of results of their performance” (Ericsson et al., 1993, p. 367); such feedback shapes future efforts. Because deliberate practice involves reaching beyond the current level of understanding, it provides a useful way to conceptualize working with constantly evolving digital media and motivating risk-taking; setbacks, hurdles, and even momentary failures are an integral part of deliberate practice, as K. Anders Ericsson (2003) foregrounded in his definition:
The principle challenge for attaining expert performance is that further improvements require continuously increased challenges that raise performance beyond its current level. The engagement in these selected activities designed to improve one’s current performance is referred to as deliberate practice. Given that these practice activities are designed to be outside the aspiring experts’ current performance, these activities create mistakes and failures in spite of the performers’ full concentration and effort—at least when practice on a new training task is initiated. Failing in spite of full concentration is not viewed as enjoyable and creates a motivational challenge. (p. 116)
Based on this definition, deliberate practice is a long-term activity requiring effort and motivation to overcome obstacles. This approach to the acquisition of new skills reminds instructors to use assessment not only motivationally but also realistically. Assessment processes should challenge students to go beyond what they already know while stressing the acceptability or even the expectation of imperfection. The language and processes of assessment should emphasize the attainment of additional knowledge and proficiencies over the production of technically perfect products. Finally, the process of assessment should prompt students to embark on the path of lifetime learning required for the true acquisition of expertise.
While some critics of deliberate practice fault this perspective for downplaying innate abilities and for blaming individuals for their lack of success by attributing it to a lack of effort (when that may not be the case), we highlight its potential as democratizing and aspirational. We have all had students who claim an innate deficiency when working with digital applications to communicate or design texts. Deliberate practice emphasizes that sustained and directed efforts matter more than innate talent in developing expertise. As Ericsson (2003) explained:
Hence, the old assumption that expert performance is acquired virtually automatically by “talented” individuals has been replaced by the recognition of the complex structure of expert performance and the complexity of the necessary learning activities that build the required mediating mechanisms to support expert performance. (pp. 117–118)
Additionally, deliberate practice helps to put the pursuit of expertise in a realistic framework, emphasizing the great amount of effort required for its attainment. Thus, although most undergraduate students may not become experts in digital composing, each can make strides in advancing their current level of proficiency within a given semester.
From Rules to Risk-taking
As noted above, assessment practices play a significant role in the development of proficiencies from the perspective of deliberate practice. However, as Maja Wilson (2010) argued, many of the elements of classroom-level assessment, including basic assignment criteria, function to discourage any risk-taking and experimentation, instead providing a set of rules to which students conform in order to achieve high marks. When viewed as a checklist (such as in this sample that Reilly used in the document design class), assessment criteria discourage the deviation and innovation essential to engaging in deliberate practice and embarking on the process of developing expertise, a point Reilly realized after the course was over (see grading criteria below, along with audio commentary by the authors).
GRADING CRITERIA
The parameters of your design, and hence the criteria for grading, are as follows:
Content
- The book must include all articles contained in the zip file.
- The images in the image zip should be used as necessary.
- The book should include a cover and title page.
- A table of contents should be included at the beginning.
- Credits for all writers and images should be incorporated into the design.
Format
- The book should be in a multi-column format that includes sidebars, images, pull-quotes, and headers and footers.
- A consistent color scheme should be used.
- The design should be readable online and in print.
- The design should include an identifying title/logo (Tidelines) and the correct UNCW logo in appropriate places (for approved UNCW logos, see http://www.uncw.edu/ba/campus_services/licensing-logos.htm).
Audio reflection (transcript)
In the remainder of our chapter, we provide strategies, based on our classroom experiences in teaching writing courses, for correcting the misalignment between our assessment processes and our pedagogies, in which our assessments too often focus on product while our pedagogies emphasize process. We offer an assessment approach aligned with our pedagogy, one that prompts students to focus on process over products and to grow as designers of digital compositions.
PREVIOUS ATTEMPTS TO DEVELOP ASPIRATIONAL PROCESSES OF ASSESSMENT
As both of us teach many of the same courses within our department, we have traded ideas over the years about how to deal with conflicts between the processes, language, and tools of assessment and the need to encourage students to take risks that would help them to develop increased proficiencies when creating digital compositions. We are aware that students seek assignment criteria when engaged in digital composing to help them to focus their efforts, but we also have each experienced teaching projects in which these same criteria serve to constrain student activities, operating as restrictions rather than as a framework within which a variety of responses are possible. As a result, we have employed a number of strategies in an attempt to encourage risk-taking and make learning through imperfection palatable or even appealing.
Creating Open-ended Assignment Criteria that Require Independent and Collaborative Exploration
As Ericsson et al. (1993) argued, deliberate practice works best within an instructional context in which teachers guide students through activities of increasing complexity and difficulty that challenge them to grow in their abilities and acquire new proficiencies. In our previous attempts to bring the assessment process for a particular project in line with our pedagogical practices, we composed assignment criteria designed to motivate students to play with the applications used to create digital compositions and to learn some of their more complex features. This accomplished two things from our perspective: It encouraged the experimentation we saw as essential in learning to compose using digital media and it provided an explicit motivation for students to move beyond the basic activities necessary to produce the digital compositions.
One way we used these aspiration-oriented assessment criteria in conjunction with previous projects was to provide general guidelines that specified the features of a digital composition that would qualify as more advanced or expansive, demonstrating a higher level of proficiency. For example, when Reilly asked her senior seminar students to design game interfaces in Macromedia Director (now Adobe Director, an industry-standard interactive-application development tool), she gave them two levels of assessment criteria in the assignment description: required elements and attributes of more sophisticated effort. The required criteria included guidelines such as:
- provide a sample interface that has visual appeal;
- incorporate elements that reflect how users would interact with the interface; and
- create a rich environment through the integration of graphics and sound.
Accomplishing these goals in completing the group project assured the students of earning a solid score, equivalent to a grade of B, on the project. However, to move beyond the baseline, students had to demonstrate that they had attempted, though not necessarily completely succeeded in, several of the following types of activities:
- allow users to experience some portion of game play (be interactive);
- provide a non-interactive video simulation of some portion of game play;
- demonstrate some proficiency with using Lingo to script interactivity/movement;
- demonstrate that the game contains an obvious application of James Gee’s (2003) learning and literacy principles; and/or
- go beyond the basic requirements in some other way not mentioned.
These criteria suggest features that could be included in the game designs, but avoid specifying precisely how these features should materialize. They also prompt students within the groups to experiment with the application they are using and teach their peers how to work with unfamiliar affordances.
In contrast, in some assignments, Reilly provided more concrete suggestions within the assessment criteria of what would constitute a more sophisticated effort. In her recent assignment for ENG 501: Introduction to Research Methods, which asked students to represent their research processes using Prezi (a web-based zooming-presentation application), her aspirational criteria specified features such as developing a complex and layered visual design, embedding links to videos, and/or editing the Prezi’s default theme. Students were encouraged to take on one or more of these challenges but were not expected to attempt them all.
While both of these approaches have resulted in students developing digital compositions for which they had to go beyond their current knowledge of the applications, the instructor’s vision still determines what the resulting production should include. In our discussion below we reflect on our most recent efforts to make assessment processes aspirational using a more organic approach that involves students in writing the criteria for assessment and determining their goals for growth in expertise.
Using Reflection to Incorporate Student Voices
Recommendations to use student reflections about digital writing assignments as artifacts that inform or are factored into the assessment of their projects have become commonplace (Hess, 2007; Huot, 2002b; Odell & Katz, 2009; Remley, 2012; Shipka, 2009; Yancey, 2004). For student project reflections to contribute productively to the assessment of a project, they need to be broad in scope and afforded significant weight by instructors as part of project submissions. As Jody Shipka (2009) noted, prompts for student project reflections can encourage substantive and rhetorically sophisticated responses, allowing students to demonstrate their knowledge of course concepts and ability to articulate project goals, discuss rhetorical choices, and constructively evaluate their work. Similarly, Michael Neal (2011) asserted that student reflections about their digital compositions should involve rhetorically oriented rationales of content and design choices: “The why questions suggest to students that the choices they make should have a reason that can be articulated within a larger vision for the project, whatever it may be. I find this type of reflection to be more useful with high-tech writing than pure self-assessment” (p. 87). The body of scholarship on critical reflection makes similar recommendations about the content of student reflections.
Sarah Ash and Patti Clayton (2009) argued, like Neal, that well-designed critical reflections provide opportunities for students to demonstrate an understanding of course concepts and deepen their learning through analyzing their experiences:
When understood in this light and designed accordingly, reflection becomes ‘critical reflection.’ It generates learning (articulating questions, confronting bias, examining causality, contrasting theory with practice, pointing to systemic issues), deepens learning (challenging simplistic conclusions, inviting alternative perspectives, asking “why” iteratively), and documents learning (producing tangible expressions of new understandings for evaluation). (p. 27)
When submitting their game designs for the senior seminar project described above, Reilly’s students also submitted 300- to 400-word individual reflections that detailed the ways in which the project’s content and structure reflected the acquisition of transferable writing and design skills, explained what the student might do differently if completing the project again, and evaluated the collaborative efforts of the group. The last two elements of the reflection relate to Huot’s (2002a) recommendation that we teach students to assess their own writing. In the reflection paper that accompanied the submission of web-based learning modules created for high school science students in the fall of 2012, the students were asked to detail the design principles that they employed and explain how they used course concepts to inform the development of their projects.
Excerpts from student reflections on the game interface
Reflections on students’ web-based learning modules
In Atkins’s ENG 314: Writing and Technology class, students also used reflection pieces in the assignment described below. Reflections for Atkins’s assignment required that students write 3–4 pages about the kinds of software they learned to use, the challenges they encountered, and how they addressed them. Students were also prompted to think critically about the assignment in a manner similar to Reilly’s students; asking students to analyze the assignment and explain how they might change their responses if they had the opportunity to complete it again provides them with a space to evaluate the assignment while illustrating awareness of the ways in which they might have fallen short of their own expectations. Such reflective examinations of their work can earn students credit and enhance their ethos with their instructor, mitigating the deleterious effects of imperfect responses to the project.
In addition to using reflections to provide a space for students to demonstrate their knowledge of course concepts and rhetorical skills, we also afforded such student reflections significant weight in assessing digital writing projects, following Odell and Katz’s (2009) advice that student reflective statements about their projects should “figure into the overall judgment of students’ writing” (p. 202). In writing reflections, students work in the familiar medium of prose; written reflections therefore provide them, in the context of a digital project, with a space to justify their choices and evaluate their efforts in a more comfortable form. This allows more space for risk-taking to occur in the production of digital compositions, as the writing of a successful reflection can offset imperfections in the digital portions of the assignment.
REVISED APPROACHES: ADAPTING PRIMARY TRAIT SCORING FOR DIGITAL WRITING ASSESSMENT
Keeping in mind the calls of Huot (2002b) and Neal (2011) to develop locally relevant assessment processes and to incorporate the practice of assessment into our writing instruction, during the fall of 2011 we sought to pilot an assessment process that we hoped would operate aspirationally. In this process, we adapted primary trait scoring to the assessment of digital writing and grounded our approach in Freirian pedagogy. We chose to pilot this process in Atkins’s ENG 314: Writing and Technology class because of the focus on digital composition and because Atkins had already begun developing an extensive service-learning project that we determined would provide an appropriate context for exploring our revised assessment approach. This project had several features useful for piloting our assessment process:
- the project involved students in service learning for a local client (a tavern owner);
- the project asked students to create a variety of texts using digital applications, including traditional print forms like letterhead, envelopes, table tent cards, placards, menus, and flyers, and electronic texts incorporating words, images, and videos, including a website for the business, a Facebook page, and a Twitter account;
- because so many different types of applications needed to be used to create the texts, all students were likely to work in at least one unfamiliar digital environment; and
- not all student groups would develop the same materials for the client as the work would be dispersed among groups.
What we offer below is a framework for revisioning our assessment practices to foster student motivation; this framework can be customized for a variety of courses and institutional contexts. We first discuss why we based our process on primary trait scoring informed by Freire. We then describe how Atkins incorporated the development of assessment criteria into the instruction and work of the project, and, lastly, we demonstrate how well this pilot process worked to create the space and environment for student risk-taking and engagement in deliberate practice.
Primary Trait Scoring and Aspirational Assessment Criteria
Primary trait scoring—when used in conjunction with a Freirian pedagogy—can serve as a way to involve students in building assessment criteria that account for the risks they need to take to complete a project successfully while simultaneously blurring distinctions between formative and summative assessment and making assessment part of the writing process, informing the development, production, and revision of digital compositions. Primary trait scoring can aid students in analyzing a writing project and parsing out primary traits of successful responses to it. Students can also be prompted during this process to imagine a hierarchy of traits, including those characterizing an acceptable response to the project and those characterizing an outstanding response. The latter category of traits constitutes the aspirational criteria, and these criteria, while capturing elements that reflect excellence in a response to the project, exceed the basic requirements. Therefore, students might select one subset of aspirational criteria to guide their efforts, while other students work to achieve a different subset within the context of the same project.
The primary trait scoring method was developed by Richard Lloyd-Jones for the National Assessment of Educational Progress (NAEP) more than 40 years ago and involves the development of a rubric for writing assessment that addresses the primary traits of a given writing assignment. When primary trait scoring is used to evaluate traditional writing assignments, raters score the essays according to the primary traits of the assignment, noting where those traits appear in each essay. For example, according to Odell and Cooper (1980), the assessment procedure of primary trait scoring begins with an analysis of the assignment rather than with an analysis of specific papers. Raters ask the following sorts of questions:
What is the rhetorical context for this writing? What assumptions can we make about the knowledge/values/personality of the reader for whom it is intended? Even more important: What is the purpose that the writing is supposed to accomplish? Is it to persuade, to influence the reader’s thoughts and actions? Is it simply to express the writer’s own thoughts without attempting to change the audience’s thoughts? Or is the purpose to explain, to present comprehensive, reliable information about a topic? (Odell & Cooper, 1980, p. 39)
Put simply, the primary trait scoring procedure includes analyzing the writing task, analyzing the writing performance, and formulating primary traits (Saunders, 1999, p. 4).
A primary trait scoring guide is meant to represent “the over-riding features that enable the writer to meet the purpose of the specific writing task. For example, if ‘coherence’ were the primary trait, a coherent paper would achieve a high score despite major problems with grammar, mechanics, and usage” (Saunders, 1999, p. 3). If the assignment calls for the design of a newsletter, for instance, one primary trait of this assignment might be the “integration of images.” If the student makes effective use of images and formats the images properly, the student would get a high score even if other components of the newsletter were less than perfectly formatted—contrast, repetition, alignment, and proximity, for instance, would matter less in this case because they are not the primary traits of the assignment. Of course, assignments can have multiple primary traits as well as secondary traits. The scoring scale, importantly, focuses on the central rhetorical tasks that the assignment has asked the student to perform. Primary trait scoring also focuses on audience and avoids measuring student papers against one another; instead, writing is measured against specific criteria.
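To make the weighting logic of the newsletter example concrete, the brief sketch below computes a primary-trait score for an imagined submission. It is a minimal illustration of the scoring arithmetic only; the trait names, the 0–4 scale, and the weights are our own hypothetical assumptions, not values drawn from Lloyd-Jones, Saunders, or any published scoring guide.

```python
# A hypothetical primary trait scoring sketch for the newsletter example.
# Trait names, the 0-4 scale, and all weights are illustrative assumptions.

# One rater's judgments for an imagined newsletter submission, scored 0-4.
ratings = {
    "integration of images": 4,  # the primary trait of this assignment
    "contrast": 2,               # secondary traits
    "repetition": 2,
    "alignment": 3,
    "proximity": 2,
}

# The primary trait dominates the weighting; secondary traits count far less.
weights = {
    "integration of images": 0.6,
    "contrast": 0.1,
    "repetition": 0.1,
    "alignment": 0.1,
    "proximity": 0.1,
}

def primary_trait_score(ratings, weights):
    """Return a weighted score on the same 0-4 scale as the ratings."""
    return sum(weights[trait] * rating for trait, rating in ratings.items())

print(round(primary_trait_score(ratings, weights), 2))  # 3.3
```

With these weights, the imagined submission scores 3.3 of 4 despite weak secondary traits; with equal weights of 0.2 across all five traits, the same ratings would average only 2.6, approximating the flattening effect of a more holistic judgment.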
These attributes of primary trait scoring make it a useful approach for our purposes. We like that primary trait scoring helps us to focus on the assignment and the rhetorical situation to which students respond. We also appreciate that each assignment can be approached separately, allowing new criteria to be developed and making those criteria paramount in the assessment. However, to make this approach function aspirationally, we developed our application of it in line with Paulo Freire’s concept of critical pedagogy. This problem-posing pedagogy requires that instructors involve the students in the development and organization of the course, asking them to take ownership of their own educations, a position echoed by scholars such as Huot (2002b), who encourage student involvement in instructional aspects of the course by making assessment something students learn and practice as part of the process of learning to write. In our case, the problem posed to students was to identify the primary basic and aspirational traits of particular responses to writing projects and to learn to use the language of assessment to articulate these criteria as goals to which they could aspire. For Freire (1970), “problem-posing education bases itself in creativity and stimulates true reflection and action upon reality, thereby responding to the vocation of men [people] as beings who are authentic only when engaged in inquiry and creative transformation” (p. 71). All problem posing begins with a dialogue about aspects of the communities of which the citizens are a vital part. As a result, service-learning projects, like the one that Atkins’s students had taken on, prove to be an especially good fit for this assessment process because determining the primary traits of the project necessitates considering the needs of the client and the community of which the client is a part.
However, one challenge of primary trait scoring is that it operates on the assumption “that we may reasonably speak of types of writing, that diverse responses to a given assignment have more in common with each other than with responses to apparently different types of assignments” (Odell & Cooper, 1980, p. 42). Therefore, identifying primary traits for an assignment as a whole may seem impossible if students can respond to it through different and, perhaps, unanticipated genres. As we demonstrate below, however, Atkins negotiated this by encouraging students to consider traits that related not only to the products composed for the client but also to the process of creating those products as documented in student reflections and blogs.
Primary trait scoring was more appropriate for our purposes than holistic forms of assessment for several reasons. First, as the title of a recent article written by Bruno Latour and his colleagues (2012) suggests, “the whole is always smaller than its parts.” Although they are talking about sociological analyses, we found this concept to be generative when conceiving of writing assessment as well. In the case of our assessment process, we attempt to use the baseline and aspirational assessment criteria to motivate students to take risks and experiment with digital compositions. Their success—as measured by the criteria they develop—may not be reflected in the product they create, which may fall short of even some basic criteria while achieving some number of the aspirational items. Their accomplishments may be much greater than the product they submit and may include learning to edit video or audio effectively for the first time or using an image editor to touch up and resize an image. So much of what is really important in their process of pursuing expertise cannot be captured in an analysis of the submitted project; it can only be captured through a review of other project-related texts such as reflections or blogs. Furthermore, within our schema, not all of the criteria for a project are equally relevant to all students. With regard to the aspirational criteria, students or groups can select several on which to focus, and these will differ for each of them, based on where they wish to concentrate their growth in expertise. Thus, the project outcomes as described by the criteria are not fixed; they shift from one student or group to another, making identification of a whole difficult. Holistic assessment would have difficulty accounting for the incremental and variable nature of the process we propose. As Nancy Schullery (2003) succinctly explained, holistic assessment centers on judging student projects against general characteristics determined by the instructor, such as context, audience, language, and organization. Such concepts are too connected to an examination of the final product and too general for our purposes, as we seek to have students set defined goals for their work that are contextually determined.
Making Assessment Local: Assessment as Part of the Pedagogy
When students participate in or develop their own assessment criteria, this process brings the assessment into the instructional part of the course and blurs the distinction between formative and summative assessment by encouraging students to determine that their work meets the goals and standards they have set for themselves before submitting their work to us and to their clients. This brings us closer to the instructional assessment practices outlined by Huot (2002b): “Instructive evaluation requires that students and teachers connect the ability to assess with the necessity to revise, creating a motivation for revision that is often difficult for students to feel” (p. 171). The remainder of this section discusses how our process of assessment was enacted through Atkins’s service-learning project.
As noted above, the assignment that Atkins presented to his students required them to participate in assisting a community-based client, a local business owner who had just purchased a tavern, thus presenting them with an opportunity to complete work for the class that would have an external impact on their community. To provide context for the project, Atkins invited the business owner to the class to discuss her writing and communication needs and to provide information about the context in which the tavern operated. After meeting with the owner, the students and Atkins brainstormed about the best ways to assist the client; they began by listing all of the things that the owner wanted and discussed what could be accomplished, what could not, and the best ways to proceed. Because the owner wanted both print documents and electronic publicity, Atkins decided to create two assignment descriptions, put students into groups, and separate the workload. The next step was to articulate assessment processes for the assignment. Atkins initially presented students with assessment language that he had used in the past:
For each item you are missing, you will lose one letter grade. If the package is not consistent in color, logo, and design, you may also lose points. It is a package—everything should work together. The last issue is working in your groups. Since there are so many items to complete, each member should be able to contribute appropriately. If your group complains about you, you miss lots of class, or simply do not contribute, your grade may be different from the rest of your group. Please take this seriously and put forth a genuine effort. We have an opportunity to help a real business and a real client in our local area.
Atkins’s previously used assessment criteria account mainly for the end product and are focused on negative consequences rather than positive rewards for excellent performance. After presenting this sort of assessment scheme to the class, Atkins asked students directly about how to develop a better way to assess their projects. Students wanted credit for trying out new things. Many of them had never used Photoshop, for example, and they felt uncomfortable about their work being assessed in light of the steep learning curve they faced. As noted above, our writing pedagogy values process; therefore, the process of developing, planning, and executing a digital project should be weighed along with the final product. In the case of completing digital writing projects, this process generally involves risk-taking and experimentation, for which our assessment practices should also account.
Approaching the assignment from a Freirian perspective, Atkins involved the students in the development of the assessment criteria. Atkins used a whole-class discussion format to collaborate with students in creating an assessment vocabulary for their projects, beginning with a review of the aims of the assignment. The class then discussed the baseline expectations and primary traits of the assignment. For example, students suggested that in completing the assignment they should meet deadlines, create persuasive texts, use color well, and adhere to general design principles. Such criteria were certainly reasonable and in line with those that an instructor might propose. Atkins emphasized during the discussion that he thought it important for students to engage in technological activities for which they had little to no prior experience and pushed the discussion to a conversation about how students might be rewarded should they go beyond the baseline criteria for the assignment.
In response, students articulated the types of activities for which they ought to be recognized and made suggestions about how those activities might be documented and, thereby, assessed. The discussion helped students understand how assessment criteria could be formulated. Here is a brief list of students’ initial comments about assessing their digital projects from an aspirational perspective:
- “I want to get some credit for trying something new, if I am going to be forced to try something new”
- “If I already know the program, I should get some credit if I learn something new about it”
- “If I am helping out others with technology, then I must know something about it and I should get some credit for that”
- “If I keep an additional blog about my experiences that should also count as part of the assignment”
- “Maybe you could interview each student before the project and after the project and compare where they started to where they ended”
After the discussion, Atkins built assessment guides based upon what students had identified as the baseline and aspirational primary traits of the assignment. Using student input, he developed primary traits for assessing the projects that accounted not only for the final products but also for the processes students used to complete the assignment and the documentation of their extraordinary efforts to increase their expertise. These traits included the following:
- Student attempted new (useful and relevant) software.
- Student gained strides in improving a technological skill.
- Student aided another student in learning a technological skill.
- Student articulated and documented challenges and successes.
- Student exceeded the requirements/expectations for the assignment.
Taking the above advice offered by one student, Atkins asked students to document via a blog and a final project reflection the challenges and successes they experienced throughout the assignment. Students could also explain how they went above and beyond the expectations of the assignment, documenting the new applications they used, the new processes they learned, and the ways in which they sought to exceed the basic primary traits of the project.
Outcomes
Allowing students to participate in formulating project criteria and determining which characteristics are indeed primary traits of a given assignment is part of what transforms the assessment process, making it aspirational. However, we also recognize the time-intensive nature of this approach; thus, documenting outcomes is paramount. For this assignment, the blog and the final project reflections completed by students represent primary artifacts useful in assessing how they were challenged, how they worked to overcome any hurdles, and to what degree the end products they developed fell short of their initial plans for the project. Such information proves to be invaluable in gaining insights into how much the students learned and how much effort they put into the project. Below are a few examples from student blogs.
Student 1
[From Blog postings]
Almost every program I used for this class was a program that was new to me. I learned a lot about how to use Microsoft Publisher, which seems pretty basic, but I had never encountered it during all of my years of getting educated. This will certainly be a good tool to know how to use in the future. From the project with Sydney’s, I learned how to work with others to complete a professional package of written materials. I also learned the importance of delegation, as well as knowing when you need help. At first, within my group, I was responsible for creating the business card and the letterhead and envelope. However, after a week or two of putting in serious effort, I realized I was much farther out of my expertise than I had originally thought. One of my team members realized I was having a problem and asked if I’d feel more comfortable creating the newsletter instead. This helped me a lot and made me realize it’s important not to bite off more than you can chew, because in another situation my team might not have been as willing to adapt. I am also beginning to figure out that doing team projects or working in groups is a reality that I’ll probably always have to deal with for the rest of my professional life. As much as I don’t like group projects in school, it makes me feel better knowing that whatever kind of negative experience I’ve had I can learn from and not repeat later on at a future job or something. I also learned quite a bit while working with Jing and Camtasia. I started with zero experience and now I could make a video using either program if I needed or wanted to. I’m even incorporating my newly acquired knowledge of the programs in other classes now to help with final presentations and projects.
The student clearly articulates that she did not know much about any of the programs; although we as teachers may have recognized this in class as we watched her work, we would not have been able to account for the fact that everything she did in class was completely new to her. The most rewarding part is that she learned how to use multiple applications and tools, and she received credit for that learning. It is also easy to see the aspirational motives at work. When she ran into challenges, her team members aided her, and she was not afraid to try anything new. In fact, she was so motivated that she began to use the new tools and applications in other classes to develop presentations.
Another student mentioned similar experiences.
Student 2
[From Blog postings]
I learned how to use a lot of different software in this class. Jing might be one of my new favorite things, if only because my mother always asks me silly questions about things she sees on the internet. Because of all of the document package work we did, I was inspired to teach myself more tools on Photoshop outside of class time and really put a lot of time into designing my documents. Even though our package didn’t win, I still feel like we had a strong showing and would have been even stronger had we had a little bit more time to bring all of our documents together to ensure high quality.
Student 2 also learned by taking risks with newfound tools that could help her with her projects. In this case, it was easy to see in class that the student already knew how to use a lot of software, yet she felt free to try new tools rather than rely consistently on skills and tools she already knew. What she was learning even had an impact on her life at home: she could now record videos and send them to her mother to show her how to do things on the Internet. The assignment itself “inspired” her to learn more about a program with which she already had some previous experience. The fact that her team did not “win” (receive the designation as best project from the owner) did not seem to bother her. In fact, because the project involved a contest, she paid specific attention to what the other teams were doing and was motivated to move beyond the basic criteria.
Student 3 offered more comments about what she learned.
Student 3
Another thing that I learned from this course is that I am a lot better at design software than I originally thought I was. I found out that I enjoy creating documents with InDesign and Microsoft Publisher. They can be intimidating at first but after toying around with them for a little bit, it is easy to get the hang of this type of software.
Student 3 articulates the fear and intimidation that some students may feel when approaching a technological task or a digital writing assignment. The idea of “play” and spending time “fiddling” around with the program seemed to help her overcome her fears; once she felt more comfortable with the software, she became more motivated.
Aspiration and Motivation
The students in this particular course were professional writing majors who were also very active on the university campus; they included a campus newspaper editor, a student body vice president, and the sports editor for the campus newspaper. The class was full of ambitious students, which made things slightly easier for Atkins. Competition also seemed to motivate students, even though this was an unplanned effect of our discussions. Equally important in motivating students were the frequent visits to class by the tavern owner, who had a direct stake in what the students were designing. With the tavern owner around, students felt compelled to raise the stakes on their own. As the projects began to take shape, it was easy to see that students were taking it upon themselves to exceed the baseline criteria of the assignment and to risk failure by adding complicated design elements to their document packets. For example, Group 1 not only designed a website but also put much effort into including the videos, images, and other links that the local tavern owner wanted; we present the site as a static screen capture here to provide a sense of it.
Figure 2. Sydney's Tavern home page screenshot
One outcome that clearly illustrated the success of our aspirational assessment process in motivating students was that three groups designed new logos for the tavern, despite the fact that a logo redesign had been removed from the official assignment guidelines. Many students had no experience in graphic design or other artistic endeavors, so for students to take it upon themselves to redesign the logo showed that they were willing to reach beyond the baseline expectations of the assignment when afforded the opportunity. Two groups also created additional documents to include in the packet of materials. For example, some students created business cards, and still others shot video to post on the bar’s Twitter feed and Facebook page. The concept behind the videos turned out to be more elaborate than the owner or Atkins anticipated. We expected simple, unedited video of the tavern to provide a sense of the layout, but students went to the business and taped patrons eating and playing games, which gave the impression that the business was busy with people enjoying themselves. Indeed, we did not expect that students would go to such lengths to capture the video and then edit it before uploading it. The more the local tavern owner became involved and the more students felt free to take chances with software and design, the better the projects became and the more the students learned about collaboration, software, design, and serving a local client. To see another group’s design package, please see: student design package (PDF). It was easy to see that students were trying out software they had never encountered before. During class, they would sometimes exclaim to Atkins and the other students, “Hey, I’m gonna try this, but someone better help me if I mess it up!” Statements like these illustrated their growing comfort level and their willingness to give new things a try without the stress of worrying about the final assessment of the product.
By the end of the assignment, it was also clear that some students experienced increased motivation and put forth greater effort once they felt that they would be rewarded for taking risks with technology and with the assignment as a whole. The reflection pieces, the judges’ opinions, and the presence of the owner all seemed to motivate students to take chances with the project in the hope that theirs might be better than some of the others. The reflection pieces indicated that students were willing to share where things went wrong and how they exceeded the baseline criteria. For example, one student wrote in her reflection:
I learned how to use both Jing and Camtasia from this assignment. At the start of the project I had absolutely no experience with either program. Getting started was probably the hardest part, because it took me a long time to figure out what I was doing and how to do it, despite watching lots of tutorials. At first, I was using Jing, but then later I switched to Camtasia, because I found it to be easier to incorporate audio files with. In the process, though, I learned not only how to use one program, but both. I also learned how to convert files into other kinds of files as well (seems pretty elementary, but I feel more electronically challenged than most people my age) because of the way my audio files were saved. The type of file they were saved as was incompatible with both Jing and Camtasia, and after some slight struggling, I realized if I could turn the files into a compatible file type, then I could adapt the files to the program instead of trying to manipulate the programs (which wouldn’t have worked anyway, now that I think about it). I also learned what it means to render a file, which actually makes a lot of sense to me now that I understand it.
As her reflection explains, this student took some risks in using unfamiliar applications for the project. Camtasia is a screen-recording and video-editing application that can be challenging to use effectively; for a student to try using it without being required to do so illustrates a desire to exceed the basic expectations of the assignment.
CONCLUSION
While we have seen improvements in student willingness to reach beyond their current proficiencies and the basic requirements of the assignment as a result of implementing our proposed aspirational assessment process, we recognize that this process has some significant limitations. As mentioned above, it is a time-intensive process that requires much planning to generate the primary traits and to select the aspirational criteria and assessment artifacts to be used for the project. We have determined that such an investment of time is certainly worthwhile because students demonstrate a greater investment in the project and a deeper understanding of the expectations and requirements when they have participated in determining them. But we recognize that such a time commitment might still be perceived as onerous by some instructors or students. Additionally, the process we propose would not work for program-level assessment, which generally requires instructors to generate student learning outcomes and assessment criteria that can be applied to all student work products, regardless of form, and that remain constant over time.
To examine the efficacy and sustainability of our process, Reilly incorporated aspirational assessment for all major projects in her section of ENG 314: Writing and Technology, taught in the spring of 2012; these projects included technological literacy narratives created in Prezi and computer-based training modules developed in Camtasia and designed to teach elementary students in Ohio how to use NoodleTools to evaluate and correctly cite web sites in their research and writing. Based on her repeated implementation of the process, which uses instruction and participation in assessment to motivate students to engage in deliberate practice, Reilly developed the following guidelines, which seemed to help the process function effectively:
- Allow time for play, exploration, and gains in proficiency prior to the discussion of assessment for a particular project.
- Look at (preferably externally identified) examples of excellent projects.
- Develop criteria in groups after reviewing the project description, client needs (if relevant), and the course student learning outcomes pertinent to the project.
- Allow student criteria to stand even if you, as the instructor, would have chosen other items on which to focus.
- Make room for peer review and revision time following the development of the assessment criteria.
Reilly found that one or two weeks into a project, depending on its scope, was the optimal time to request that students create the assessment criteria. This provided time for them to become familiar with the project, play with the applications to be used, and investigate examples of the types of texts they were to produce. For some projects, like creating Prezis or writing for Wikipedia, excellent examples have been identified by users of the application or media. For example, Wikipedia designates some entries as featured articles.
Early in the writing for Wikipedia project, Reilly asked students to evaluate several featured articles and determine what made them worthy of this designation. For each project, Reilly asked students to work in groups to create two levels of assessment criteria: one for a baseline or “good” response to the assignment and one for an aspirational or “excellent” response. As these example criteria indicate, students were quite capable of distinguishing two levels of response to the assignment and provided rigorous, appropriate criteria for all three projects. Even though Reilly might have created slightly different criteria for each project, thus asserting alternate values, she resisted deviating from what the students developed. In using this process, student articulations of the primary traits for a particular project must be paramount, as they are the expressions of what students see as their goals for the project. For these criteria to function aspirationally, they must be privileged over those favored by the instructor. Finally, Reilly also found that peer review and a contest for extra credit functioned motivationally for students. Both harness competition as a means of showing students what is possible and what they can achieve because their peers are achieving it.
For Reilly, the most significant change in the rhythm and pacing of the course prompted by the use of this assessment process was the additional time required during each project to incorporate discussions of assessment. Upon reflection, we recognize that this is largely the point; as Huot (2002b) argued, assessment activities, including practice in working with the vocabulary of assessment, need to move both from the end of the project and out of the instructor’s office to become an integral part of teaching digital composing throughout the whole process. Space and time need to be allocated for this. In making these moves, we can go a step further by tasking students with the responsibility for determining how their projects ought to be assessed. Students thereby determine what they would like to achieve and which new proficiencies of digital composing they would like to cultivate. In doing so, students decide where to put their efforts to move beyond their current levels of expertise and, through deliberate practice and motivated by their own aspirations, learn to compose in new ways with digital media.
REFERENCES
Ash, Sarah L., & Clayton, Patti H. (2009). Generating, deepening, and documenting learning: The power of critical reflection in applied learning. Journal of Applied Learning in Higher Education, 1, 25–48. Retrieved from http://www.missouriwestern.edu/appliedlearning/volume1.asp
Blair, Kristine L. (1997). Technology, teacher training, and postmodern literacies. Retrieved from ERIC database. (ED413597)
Broad, Bob. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan: Utah State University Press.
Ericsson, K. Anders. (2003). The search for general abilities and basic capacities: Theoretical implications from the modifiability and complexity of mechanisms mediating expert performance. In R. J. Sternberg & E. L. Grigorenko (Eds.), The psychology of abilities, competencies, and expertise (pp. 93–126). Cambridge, UK: Cambridge University Press.
Ericsson, K. Anders; Krampe, Ralf Th.; & Tesch-Römer, Clemens. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406.
Folk, Moe. (2009). Then a miracle occurs: Digital composition pedagogy, expertise, and style. Unpublished doctoral dissertation, Michigan Technological University, Houghton, MI.
Freire, Paulo. (1970). Pedagogy of the oppressed. New York: Seabury Press.
Gee, James Paul. (2003). What video games have to teach us about learning and literacy. New York: Palgrave Macmillan.
Hess, Mickey. (2007). Composing multimodal assignments. In Cynthia L. Selfe (Ed.), Multimodal composition: Resources for teachers (pp. 29–37). Creskill, NJ: Hampton Press.
Huot, Brian. (2002a). (Re)articulating writing assessment for teaching and learning. Logan: Utah State University Press.
Huot, Brian. (2002b). Toward a new discourse of assessment for the college writing classroom. College English, 65, 163–180.
Latour, Bruno; Jensen, Pablo; Venturini, Tommaso; Grauwin, Sébastian; & Boullier, Dominique. (2012). The whole is always smaller than its parts: A digital test of Gabriel Tarde’s monads. British Journal of Sociology, 63, 590–615.
Murray, Elizabeth A.; Sheets, Hailey A.; & Williams, Nicole A. (2010). The new work of assessment: Evaluating multimodal compositions. Computers and Composition Online. Retrieved from http://www.bgsu.edu/cconline/murray_etal/index.html
National Writing Project with DeVoss, Dànielle Nicole; Eidman-Aadahl, Elyse; & Hicks, Troy. (2010). Because digital writing matters: Improving student writing in online and multimedia environments. San Francisco: Jossey-Bass.
Neal, Michael R. (2011). Writing assessment and the revolution in digital texts and technologies. New York: Teachers College Press.
Odell, Lee, & Cooper, Charles R. (1980). Procedures for evaluating writing: Assumptions and needed research. College English, 42, 35–43.
Odell, Lee, & Katz, Susan M. (2009). “Yes, a t-shirt!”: Assessing visual composition in the “writing” class. College Composition and Communication, 61, 197–216.
Remley, Dirk. (2012). Forming assessment of machinima video. Computers and Composition Online. Retrieved from http://www.bgsu.edu/departments/english/cconline/cconline_Sp_2012/SLassesswebtext/index.html
Saunders, Pearl I. (1999). Three faces of assessment. Primary trait scoring: A direct assessment option. Retrieved from ERIC database. (ED444624)
Schullery, Nancy M. (2003). A holistic approach to grading. Business Communication Quarterly, 66, 86–90.
Shipka, Jody. (2009). Negotiating rhetorical, material, methodological, and technological difference: Evaluating multimodal designs. College Composition and Communication, 61, 343–366.
Sorapure, Madeleine. (2006). Between modes: Assessing student new media compositions. Kairos: A Journal of Rhetoric, Technology, and Pedagogy, 10 (2). Retrieved from http://kairos.technorhetoric.net/10.2/binder2.html?coverweb/sorapure/index.html
Wilson, Maja. (2006). Rethinking rubrics in writing assessment. Portsmouth, NH: Heinemann.
Yancey, Kathleen Blake. (2004). Looking for sources of coherence in a fragmented world: Notes toward a new assessment design. Computers and Composition, 21, 89–102.