Chapter 5
“Something Old, Something New”: Evaluative Criteria in
Teacher Responses to Student Multimodal Texts
Emily Wierszewski
ABSTRACT
A common suggestion to instructors assessing student multimodal work is that they modify their existing print-essay assessment criteria or create entirely new criteria. However, little empirical research has been done that investigates the practice of multimodality in the composition classroom, especially in assessment contexts, resulting in a lack of knowledge about teacher practices of assessing multimodal work, including to what extent print, modified, or “new” kinds of criteria are, in fact, present in those practices. In this chapter, I present the results of a study of eight teachers responding to their students’ multimodal work. The results of this study suggest that these teachers draw upon print criteria to respond to student multimodal work, although a few potentially significant modifications and additions appear to be unique to the assessment of multimodal texts.
INTRODUCTION
In his criticism of the current state of published research in rhetoric and composition, Richard Haswell (2005) admonished the discipline for its “inability, as yet, to convince scholars outside the field that it is serious about facts, perhaps its inability to convince them that it is not afraid of what those facts might uncover about its favorite practices” (p. 219). In recent years, multimodality has continued to gain momentum as a contender for a “favorite practice” in the field of Rhetoric and Composition. In the influential pedagogical treatises of scholars like Gunther Kress (2003), the New London Group (2000), and Anne Frances Wysocki, Johndan Johnson-Eilola, Cynthia L. Selfe, and Geoffrey Sirc (2004), multimodal composition is characterized as a pedagogical approach necessary both to facilitate success for all students in an increasingly technological and global era and to keep composition relevant. However, there has not been enough empirical research1 investigating multimodal assessment practices in the composition classroom.
A common thread in the literature that does exist to date is that teachers assessing digital or multimodal forms of composition ought to modify their existing print-essay assessment criteria or generate entirely new criteria. In some cases, scholars have even provided teachers with revised or new criteria in the form of heuristics or rubrics. Lee Odell and Susan Katz (2009), for instance, supplied teachers with a set of compositional choices borrowed from visual and verbal communication to keep in mind as they assess and, similarly, Meredith Zoetewey and Julie Staggers (2003) concluded with a rubric that focuses on “transforming the rhetorical criteria we already understand” (p. 148). This growing body of scholarship suggests that our existing print values are not sufficient for assessing multimodal texts. Noticeably absent from this discussion has been empirical research on teacher multimodal classroom practices, especially their assessment practices; as Pamela Takayoshi and Brian Huot (2009) confirmed: “There is little data-based scholarship that reports on faculty making the transition to multimodal composition teaching” (p. 98).2
As a result, we know precious little about teacher practices of assessing multimodal work, including to what extent print, modified, or “new” kinds of criteria are, in fact, present in those practices.
In this chapter, I share the results of a study of eight teachers responding to their students’ multimodal work in verbal protocols in an attempt to address the question: What print values do teachers use when they assess multimodal work, and what kinds of criteria seem to be unique to new, multimodal pedagogies? For the purposes of this study, multimodal texts were defined as those that contained more than one modality (visual, linguistic, aural, gestural, spatial) and ranged from slideshow presentations to videos to brochures. Teachers engaged in 60-minute-long protocols as they read a self-selected body of their students’ multimodal texts, speaking their thoughts and evaluative comments out loud as they responded. To ascertain the extent to which print values were present in teacher multimodal assessment practices, the evaluative comments that surfaced in the transcripts of these protocols were compared to the kinds of evaluative comments teachers wrote on print essays in Robert Connors and Andrea Lunsford’s (1993) study. Based on this analysis, I argue that the criteria teachers use to assess multimodal texts are predominately aligned with print criteria, although a few potentially significant modifications and additions appear to be unique to the assessment of multimodal texts and indicate directions for future research.
LITERATURE REVIEW
Takayoshi (1996) argued that pedagogical shifts like the one toward multimodality require attendant modifications to assessment practices and theories because assessment and teaching are so intimately connected. Curiously, although much has been written about the future shape of multimodal pedagogy, comparatively little has been said about how assessment practices might also evolve. In what has been written about multimodal assessment, researchers concur that because multimodality is a new approach to composition, new approaches to assessment are also required; in other words, the ways we understand and talk about print or non-digital texts are insufficient for assessing multimodal texts. The work of multimodal assessment scholars is predominately theoretical, with a focus on why multimodal texts require these new approaches or, more practically, on providing heuristics or rubrics that outline new ways of talking or thinking about text. Interestingly, few of these researchers have engaged in empirical research investigating teacher multimodal assessment practices to ascertain the extent to which “new” assessment approaches or ways of thinking about text are actually being used by teachers, or what those “new” approaches might look like in classroom contexts and what those approaches suggest about directions for future scholarship.
Although she was not writing specifically about multimodal texts, Takayoshi (1996) was among the first to suggest that in digital writing environments, writing teachers must reevaluate their goals for student work. Because digital texts make the connection between form and content (or form and function) visible, Takayoshi argued that it was critical for writing teachers to examine this connection in their assessments. Specifically, teachers need to consider the relationship between rhetorical goals and the new kinds of textual features and choices that students create, or they risk a return to formalism. She predicted that “it may be that we need to step outside what we already know about text” (p. 255) to appreciate and understand digital forms of writing. Similarly, Gail Hawisher and Charles Moran (1997) posited that changes to the nature of writing brought on by new technologies would require that teachers develop new values or criteria for assessing digital texts.
Multimodal assessment scholars have often borrowed from these early arguments for the “newness” or differences of digital text. Zoetewey and Staggers (2003), for example, contended that computers allow for multimodal forms of communication that are “fundamentally different” from print (p. 135). For instance, multimodal compositions are able to resist the “linear constraints” of print, and they also make issues of “design and information” inseparable, as Takayoshi (1996) astutely noted earlier. Zoetewey and Staggers argued that teachers cannot rely on old methods for assessing print texts in digital environments because it will cause them to neglect what is “new” about new media. The authors stressed that the visual elements of these new texts must be viewed rhetorically to avoid what Takayoshi termed a return to “the grammar and punctuation of current–traditional rhetoric” (p. 250). They urged teachers to “focus instead on transforming the rhetorical criteria we already understand, such as coherence, clarity, relevance, so that we can read and evaluate them as they operate in new media” (p. 148). The key is transformation: Teachers must take into account that multimodality is different from print in profound ways and transform what they know about rhetorical effectiveness. Zoetewey and Staggers concluded with a rubric for teachers that transforms existing rhetorical criteria.
Others also advocate building off of our knowledge of rhetoric to assess multimodal texts. Sonya Borton and Huot (2007), for instance, suggested that “all assessment of multimodal compositions should be tailored to teaching students how to use rhetorical principles appropriately and effectively” (p. 99). Like Zoetewey and Staggers (2003), they provided a rubric for teachers to use in the assessment process. Borton and Huot’s rubric contains a sample list of assessment criteria for multimodal texts that addresses rhetorical features like purpose, audience, tone, and transitions. An emphasis on borrowing and repurposing rhetorical elements in multimodal assessment can also be found in the work of Madeline Sorapure (2005), who advocated that:
On the one hand, we need to attend to the differences between digital and print compositions in order to be able to see accurately and respond effectively to the kind of work our students create in new media. . . On the other hand, we need to work from what we know. (p. 1)
Sorapure argued that because there are many possible combinations and projects in multimodal composition, teachers need assessment processes that allow them to address the effectiveness of a variety of texts. Although Sorapure acknowledged the strengths of a broad rhetorical approach in accomplishing this task, she lamented that such an approach “doesn’t in itself offer any specific guidance or criteria for handling the multimodal aspects of the composition” (p. 3). To address the multimodal aspects of a text without being too prescriptive or neglecting context, Sorapure proposed applying rhetorical tropes like metaphor and metonymy during the assessment process. By re-purposing rhetorical tropes, teachers are able to account for new kinds of arrangements and, at the same time, build off what they already know to evaluate those arrangements.
In Writing Assessment and the Revolution in Digital Texts and Technologies, Michael Neal (2011) also highlighted the importance of balancing new and existing knowledge about text during the assessment process, writing that “during this time of growth and change, we need to stay grounded in what we know and what we can contribute to the conversation” (p. 95). At the same time, Neal argued, we cannot merely transfer our existing assessment criteria or course and assignment outcomes to new text types, but instead must consider how they fit or need to be modified in light of how technologies affect the shape of new texts.
Although these scholars have emphasized the importance of modifying existing knowledge about print or rhetorical texts in multimodal assessment, others have advocated adopting entirely new kinds of assessment criteria or practices. Odell and Katz (2009), for example, drew on theory from other fields such as visual and verbal communication to propose the kinds of choices or conceptual processes involved in the composition of multimodal texts. Kathleen Blake Yancey (2004), too, argued that digital texts have diverse “virtues” and therefore require a new set of values or a new language to talk about their effectiveness. Yancey contended that teachers need to reexamine their textual values or risk limiting their understanding of what digital texts are capable of. If teachers stick with a print paradigm, they may “be held hostage to the values informing print, values worth preserving for that medium, to be sure, but values incongruent with those informing the digital” during the assessment process (pp. 89–90). Yancey asserted that digital texts “offer us new opportunities” (p. 100) and if we apply print-based values, we might miss out on what those new opportunities afford or, even worse, we might consider those new opportunities to be errors.
Whether scholars advocate repurposing our existing knowledge about writing and rhetoric to assess multimodal texts or generating entirely new criteria, one thing is clear: There seems to be agreement that multimodal texts cannot be assessed using the same criteria we use to evaluate traditional print texts. At the very least, these criteria must be rethought and, at the extreme, they must be replaced. However, the practicality of these suggestions has not yet been examined through empirical research. Yet as Wysocki (2004) argued, multimodality is a practice that “needs to be informed by what writing teachers know” (p. 5), and data-based research grounded in teacher classroom practices can reveal important information about multimodal assessment. For instance, what do teacher assessment criteria actually look like? When teachers respond to multimodal texts, do they reach for new or modified criteria, or fall back on “what they know”? In this study, I use empirical research to investigate how teachers negotiate the newness of multimodal texts during assessment. Specifically, I use data from classroom contexts to explore whether or not teachers seem to implement print values when they assess, and what kinds of values seem to be unique to new, multimodal pedagogies.
Further data pertaining to teacher multimodal assessment practices comes from the survey research conducted by Anderson et al. (2006) and Murray et al. (2010). Although Anderson et al.’s survey encompassed a wide variety of questions pertaining to multimodal pedagogy, one section of the survey was devoted to assessment. In that section, all teachers participating in the study reported that when they assess multimodal texts, they base their evaluations at least partially on the fit between the content of the work and the rhetorical context. Other assessment criteria teachers claimed to rely on involved student audiences, purpose and content, use of technology, and the thought and effort put into the project. Murray et al.’s research focused more narrowly on the use of rubrics in multimodal assessment, and the authors found that many teachers expressed discomfort with the multimodal assessment process and felt their writing programs had not properly trained them to assess multimodal texts effectively. Murray et al. suggested that teachers believe multimodal assessment is a fundamentally different process, one in which they lack expertise, indicating that they might believe it is not appropriate to carry over their existing knowledge of print or rhetoric to new, multimodal assessment contexts. Although these criteria are interesting and certainly indicate the kinds of things teachers report valuing in multimodal texts, because they come from self-reported survey data they do not necessarily tell us much about the actual assessment process—only about teacher recollection of that process.
METHOD
To address the research question (What print values do teachers use when they assess multimodal work, and what kinds of criteria seem to be unique to new, multimodal pedagogies?), I asked eight instructors to engage in verbal protocols as they responded to a self-selected body of their own students’ multimodal texts for a period of 60 minutes.
Participants
Participants for the study were recruited on a volunteer basis after an email was sent out to prospective participants explaining the project. The only requirement for participation was that participants be currently teaching an English course in higher education and plan to assign at least one multimodal project in that course; I explained to participants in the initial contact email that multimodal texts are those that integrate multiple modalities such as the linguistic, visual, aural, gestural, or spatial, and are typically mediated by computer technology. The study sample consisted of the first eight instructors to respond. The participants were of varying levels of expertise and education, ranging from graduate assistants through tenured professors. All held a minimum of a Master’s degree in a related field and were employed in English departments at one of three different PhD-granting institutions in the Midwest (participant demographics are further outlined in Table 1).
Name* | Title** | Institution | Course Taught | Type of Course | Highest Level of Education | Years of Teaching Experience |
Anna | Professor (TT) | B | Writing, Style, and Technology (upper-division) | online | PhD in Curriculum and Instruction | 36
Susan | Assistant Professor (NTT) | A | Digital Media Studies (upper-division) | face-to-face | MA in Communication | 34
Marie | Assistant Professor (NTT) | A | College Writing II | face-to-face | MA in English | 33
Joe | Assistant Professor (TT) | A | First Year English Composition (stretch) | face-to-face | PhD in American Culture Studies | 11
Aaron | Associate Professor (TT) | C | Expository Writing (upper-division) | online | PhD in Rhetoric and Writing | 11
Leah | Graduate Teaching Associate | B | Basic Writing | face-to-face | PhD (in progress) in English | 7
Sara | Graduate Teaching Fellow | A | College Writing II | face-to-face | PhD (in progress) in Rhetoric and Composition | 4
Robert | Graduate Teaching Fellow | A | Business and Professional Writing (upper-division) | online | PhD (in progress) in Rhetoric and Composition | 3
* Names are pseudonyms.
** TT = tenure-track; NTT = non-tenure-track.
Table 1. Teacher Participant Profiles
Participants in the study represent several kinds of institutions, experiences, and practices, but they were recruited on a volunteer basis. As such, they are not necessarily a representative sample: they were restricted to a particular geographic area and, because they volunteered, may be more interested, invested, or advanced in their thinking about multimodality than other instructors. I also had a professional relationship with most of the participants prior to the project; many were my co-workers. This relationship may have affected some participants’ willingness to share information during the interview and their level of comfort during the protocol.
The majority of participants in this study began assigning and assessing multimodal texts early in their teaching careers. Decades ago, the two most experienced teachers in this study, Anna and Susan, encountered multimodality as teachers of professional and technical writing. In an interview, Anna described how, when she accepted her first college-level teaching job at a Midwestern engineering school in 1981, she “began to see the benefit of multiple modes of expression, multiple modalities of expression, and how they figured into communication.” There, she taught courses in professional and technical writing that integrated visual, audio, and video elements. Although she no longer teaches at this institution, more than 30 years later she still assigns and assesses multimodal texts on a regular basis, reporting in her interview that “I don’t teach any classes that don’t integrate multimodality.” Similarly, Susan noted in an interview that multimodality has been a part of her pedagogy since she began her teaching career in business writing over 30 years ago. She reported that when she first heard the term multimodality just a few years prior, she thought,
I’ve been doing this all the time! I didn’t know that that was anything new! You know in Business Writing for example, there’s a chapter in every single Business Writing textbook on document design. . . it was just the word that was new to me. . . Even in my Expository Prose class, which was a sit around the table and talk about writing kind of thing, I had always encouraged students to begin with photos if they were visually inclined.
More recently, Aaron, Sara, and Robert commenced their teaching careers as graduate students and, at the same time, were exposed to multimodality as a pedagogical tool. Aaron, the most experienced of the three, admitted in an interview that multimodality has “always been a part of my sense of writing” since he was an undergraduate English major in the 1980s. When he started teaching as an MFA student in 1988, his institution pushed graduate assistants to use multiple modes to reach students: “They encouraged people to use visual media and movies and television and that sort of thing.” When he entered his doctoral program a few years later in the early 1990s, the use of technology and multimodality in the classroom became “a bigger deal.” However, Aaron believes multimodality has always been a part of writing and what he asks students to do in the writing classroom, because “In a very fundamental sense it seems to me that there’s always multiple modes in how we write, how we experience text.” He identifies multimodality as a large part of his courses “across the board.”
As students at a large Midwestern research institution, both Sara and Robert were pursuing doctoral degrees in Rhetoric and Composition at the time of this study. Since they both started teaching as graduate assistants there in 2006, they have integrated multimodal texts into their teaching. Robert noted that it was his graduate coursework that pushed him to do so. Speaking about a course that introduced him to the concept of multimodality, he remarked in an interview that the course
challenged me to think about it [multimodality] in a different way even though I was pretty skeptical at first. And you know I wanted to do it not just to do it, but for a purpose. And that was hard for me to do because I kept thinking, you know, how am I gonna do this? I wasn’t sure. And so I think I’m still in that process now of figuring it out. And I think each time I assign it, this visual argument, I find something new that I want to get out of it.
While Robert struggles with multimodality as an intellectual problem, he experiments with how best to integrate multimodal assignments in his college writing courses for undergraduates. As a student in the same graduate program as Robert, Sara was also introduced to multimodality through her graduate coursework. Both times she taught College Writing II, the course she was teaching during this study, Sara included at least one multimodal assignment. For nearly the entire duration of their short teaching careers, both Sara and Robert have had experience assigning and assessing multimodal texts.
For Marie and Leah, however, multimodality factored into their teaching later in their careers. Further, for these two instructors, multimodality came about as a result of professional development opportunities rather than as an inherent part of the subjects they teach or of their graduate coursework. For instance, Marie has been teaching writing for 34 years, but admitted that she first began to see the value of multimodality in 2004. She reported in an interview that she slowly integrated more and more multimodal assignments into her courses:
I remember the first time I asked kids to add a visual to their work I was in Business Writing. . . They have to add graphs and charts to these reports that they’re doing. If they’re doing a training manual they have to add some visual to. And so I became aware that that was that had to be added in Business Writing and I tried it in Argument then. . . And I’ve just sort of gradually added more and more and more.
This change in her pedagogical approach came about as a result of a variety of factors, including reading about the push for multimodality in composition scholarship as well as attending professional development programs like the Computers in Writing Intensive Classrooms (CIWIC) summer institute at Michigan Tech. She explained that at CIWIC, “I went back for the technology part of it—how to develop video essays, aural essays. When I first went I didn’t think it was going to be as intense as it was.” It was just over five years ago at the CIWIC institute that she learned both how and why to integrate multimodal texts in the writing classroom. She also became familiar with strategies for assessing multimodal works: “We evaluated our videos at the end. Everybody’s. And our audio essays. What was strong, what was good.”
Interestingly, it was her attendance at CIWIC that also helped Leah see the value in integrating multimodal texts into her curriculum. Although Leah began teaching a few years prior to attending the institute as a graduate student and had encountered the concept of multimodality in her coursework as early as 2005, she confessed in an interview that “it wasn’t until I went to the [CIWIC] workshop with Cindy Selfe that I really understood why I should incorporate different modalities into my class. After I went to that workshop was the first time I incorporated a non-alphabetic print assignment into my class.” After attending CIWIC, Leah began her doctoral studies at a large Midwestern research school and continued to assign and assess multimodal texts in the courses she taught there as a graduate assistant. Notably, part of her current doctoral assistantship involves working as a support person for faculty struggling with integrating technology and multimodality into their courses.
Like Aaron, Sara, and Robert, Joe encountered multimodality in graduate school. However, Joe’s primary experience with multimodality has been as a teaching tool to help connect students to print material; this is perhaps not surprising, considering that Joe is a literature professor and the only participant in the study with a degree outside of writing and communication. As a doctoral teaching assistant, he was mentored by a faculty member who “really encouraged us to use documentaries and films and PowerPoint presentations and images and anything we had to help students better understand the topic.” When asked in an interview how he integrates multimodality in his classroom, Joe first described how he used multiple modes like video and audio to help students connect in meaningful ways to print material. When teaching A Raisin in the Sun in an African American Literature course, for instance, Joe said that “It is necessary to have the students see how the characters that Lorraine Hansberry describes in the book in the play really are. You know what is the real emotions of Walter Reed on film?” Although Joe noted that occasionally he does ask students to produce multimodal texts, mainly in the form of slideshow presentations, his primary experience with multimodality has been as a teaching tool rather than as a compositional strategy for students.
For most participants in this study, assigning and assessing multimodal texts has always been a natural part of their pedagogy. In Anna and Susan’s case, for instance, multimodality was integral to the professional and technical writing courses they taught decades before “multimodality” became a buzzword in Rhetoric and Composition. Years later, Aaron, Sara, and Robert encountered multimodality in their graduate studies and applied what they learned as they began their teaching careers as graduate teaching assistants. For Leah and Marie, however, multimodality wasn’t always a part of their pedagogy. It was only through professional development after their teaching careers had already begun that they were encouraged to ask students to compose multimodal texts. Finally, for Joe, multimodality has primarily functioned as part of his teaching presentation, rather than as a type of student composition.
Protocols
Protocols focused on participants’ multimodal reading processes rather than on their written comments on student work, because recent research in teacher response has demonstrated that reading response is an interpretive rather than purely textual act (consult, for instance, Edgington, 2005; Fife & O’Neill, 2001; Huot, 2002; Phelps, 2000). In other words, although the focus of response research has traditionally been on teachers’ written commentary (as in Connors and Lunsford’s 1993 study), the reading or “interpretive” process precedes and informs that written assessment; without the interpretation, there would be no written commentary. During the protocol, participants were asked to read a self-selected sample of their students’ multimodal texts at the point when students submitted them. Participants read for one hour; the number of texts participants were able to read during this time period varied. They were asked to speak their thoughts aloud as they read. While the protocol process imposes some artificiality on the situation, asking teachers to read their own students’ work contributes to the authenticity of the reading process: A teacher’s reading of her own students’ work in a protocol more closely approximates her actual response practices than does asking her to read work created by another student in another course. The materials that teachers elected to read were mixed, but were consistently multimodal and mediated by computer technologies (see Table 2).
Name | Assignment Title | Types of student texts read during protocol | Modes read during protocol |
Aaron | “Visual Rhetoric/Comic Assignment” | Comics, reflective essays | Visual, verbal |
Anna | “Concept in 60” | Videos | Visual, aural, verbal, spatial |
Joe | “Collaborative Research Project and Presentation” | Slideshow presentations | Visual, verbal |
Leah | “Past Literacies: Literacy Narrative Assignment” | Oral essays | Aural, verbal |
Marie | “May 4th Visitor’s Center Final Project” | Brochures, slideshow presentations | Visual, verbal, spatial |
Robert | “Visual Argument” | Visual arguments, reflective essays | Visual, verbal, spatial |
Sara | “Multimodal Mini-Ethnography/Portfolio” | Slideshow presentations | Visual, verbal, spatial |
Susan | “Progress Report” | Slideshow presentations | Visual, verbal, aural, spatial |
Table 2. Materials Read by Participants During the Protocol
Coding
After the protocol data was collected and transcribed, transcripts were first divided into T-units—what Cheryl Geisler (2004) referred to as “the smallest group of words that can make a move in language” (p. 31). Because the present study sought to understand the response criteria teachers used during the task of reading student work, dividing data into the smallest unit possible, the T-unit, was the most likely strategy to capture all of the behavioral nuances involved in a complex process like reading. Each T-unit was then coded based upon the nature of the comment the teacher made. Coding began with several passes through the data set, at which point I realized that not all of the T-units in the corpus were evaluative comments. I then made a distinction between evaluative comments, which were the focus of this study, and the extraneous material. Two codes for the extraneous material were generated based on shared features of that material across protocols: reader response and meta.
Reader response comments are those that involve engaging with or making sense of the text as a reader, rather than as an evaluator (note that in Connors and Lunsford’s 1993 study, reader response comments are considered evaluative, as in “I like/dislike”). In these kinds of comments, teachers were figuring out what a text meant, as in this example from Anna’s protocol: “So I get the idea that this is about a band because there’s a guy playing a drum and a guy playing a guitar and music.” The second type of non-evaluative category, meta, accounts for comments made by the teacher that either explain or describe his or her response process. These comments can be considered a consequence of the protocol method. Because teachers knew the researcher would eventually be watching video of the protocols, they often felt compelled to justify or describe what they were doing. An example, also from Anna’s protocol, occurs when she remarks, “I’m going to play it twice” as she re-watches one of her student’s videos. In this comment, she was explaining her activities to the researcher.
After protocol transcripts were sorted into evaluative and non-evaluative comment types, I focused on evaluative comments, because the goal of the research was to unearth the kinds of evaluative criteria teachers used to assess student multimodal work. First, I compared all evaluative comments to Connors and Lunsford’s (1993) taxonomy of teachers’ written comments on print essays. In their study, Connors and Lunsford examined teachers’ “global comments” written on a corpus of 3,000 traditional print essays; they defined “global comments” as the teachers’ “response to the content of the paper, or to the specifically rhetorical aspects of its organization, sentence structure, etc.” (p. 205), but also recognized that addressing formal features may also constitute global commentary when teachers move beyond “grammatical complaints or corrections” toward “comments on the effectiveness” of formal features (p. 213). Connors and Lunsford’s work is unique in rhetoric and composition research, as it is one of the only pieces to actually identify the kinds of textual features teachers respond to when they assess student work. As such, it provides a useful baseline or point of comparison for this study. As I began the coding process, it was helpful for me to first examine the data through the lens of the neat taxonomy of print-based values provided by Connors and Lunsford. While it could be argued that this comparison could potentially bias results toward print, such a comparison was necessary to identify and analyze teachers’ new or repurposed values—values that did not fit anywhere on Connors and Lunsford’s spectrum. After all, the purpose of this study is to determine how much instructors seem to fall back on values that existed before the introduction of multimodal composition as they assess, and Connors and Lunsford’s work provides a helpful list of those existing values. One important distinction between Connors and Lunsford and the present study is that while Connors and Lunsford examined teachers’ written commentary on student work, this study looks at comments teachers made as they read and responded to student work.
Some immediate overlap was found between teachers’ evaluative comments during the protocols in this corpus and several of the items on Connors and Lunsford’s (1993) taxonomy of evaluative criteria for print essays (see Table 3 for a complete list of comment types borrowed from Connors and Lunsford’s study).
Comment types used from Connors and Lunsford | Comment types unique to multimodal corpus |
audience | creativity |
details/examples | grammar |
following the assignment | idea development |
organization | multimodality |
overall | movement |
paragraph structure | technical execution |
purpose | |
sentence structure | |
source material |
Table 3. Evaluative Comment Code List
After identifying this overlap, I engaged in a process similar to the open-coding process described by grounded theory scholar Anselm Strauss (1987). I read through the remaining un-coded data repeatedly, generating informal notes and memos about initial observations of each protocol. Following the guidelines for open coding established by Strauss, I began by analyzing the entire corpus of protocol data on a microscopic level, looking first at individual protocols and then at shared features among the corpus as a whole. Two things resulted from this process: slight modifications were made to Connors and Lunsford’s definitions of print criteria, and additional codes were generated to account for comments teachers made during their protocols that did not fit Connors and Lunsford’s taxonomy.
First, I altered the definitions of Connors and Lunsford’s (1993) categories to account for differences between their data and the present corpus. The categories of documentation and source materials were collapsed into one category for this corpus, source materials, because comments labeled in this way often addressed both issues at once. I also defined overall progress in a slightly different way, as general evaluative comments about a student’s progress either on the assignment at hand or in the course (while Connors and Lunsford restricted this category to progress “beyond commentary on paper”). Paper format was renamed formal arrangement; although formal arrangement comments encompass all of the issues Connors and Lunsford associated with paper format (including “margins, spacing, neatness, cover sheets,” p. 213), the term formal arrangement is more encompassing of the multitude of text types in this multimodal corpus. No teacher was reading just a “paper,” so formatting issues reached beyond margins to include arrangement of elements other than text.
Next, six new categories of evaluative comments emerged: creativity, grammar, idea development, movement, multimodality, and technical execution. Rather than divide these codes into “rhetorical” and “formal” groupings as in Connors and Lunsford’s (1993) study, I labeled the categories “higher-order” and “lower-order,” because even lower-order issues like grammar can be thought about in rhetorical ways and, conversely, higher-order issues like organization can also be thought about in arhetorical ways. Higher-order comments focus on “big picture” issues that run through large portions of a text, while lower-order comments tend to focus on a more microscopic, editing level. Definitions and examples for these new codes can be found in Table 4, where higher-order codes are represented in black and white, while lower-order codes are identified by purple.
Type of response | Definition | Example |
Creativity | comments concern the use of creative or inventive approaches to the assignment, including remarks about choice, originality, and thoughtfulness | “At the same time, I think of all the ones I’ve seen she’s used the most inventive shots.” “I think that’s a great insight.”
Grammar | comments concern grammatical issues not related to sentence or paragraph structure, such as spelling or capitalization | “Anyway, anyways should be anyway.”
Idea development | comments concern the logic or development of ideas, including suggestions for revision or notes about the quality/lack of ideas | “So just it’s continuing the same vein where he’s sort of listing um subsequent events instead of digging into one.” “Oh there’s nothing...I think there may be a little bit more explanation about the gutter would probably be um would be good”
Movement | comments concern the pacing, speed, or progression of ideas/concepts in the text | “Nice building of the motion of the piece, too.” “The story isn’t necessarily moving forward as quickly as maybe it should be at this point.”
Multimodality | comments address the relationship between two or more modalities, or sometimes between multiples of the same mode (e.g., how do two images relate?) | “I also want to say I’m also impressed with the way in which your music enters into a conversation with the video uh video shots.”
Technical Execution | comments concern the quality of the technical execution, including the ability to use software/hardware skillfully, to embed video, and to provide functioning links; also related to the aural/visual/etc. readability of the elements students create or select | “He just he’s holding the audio recorder probably too close to his mouth because there’s a lot of feedback.”
Table 4. Evaluative Comment Types Unique to Multimodal Corpus: Definitions and Examples
This coding scheme was purposefully not tested with outside raters to determine its “reliability.” Morse et al. (2002) proposed that in qualitative research, reliability is an indication of rigor that emerges throughout the qualitative inquiry process, rather than just at its conclusion with the verification of a coding scheme by outside raters. As the authors contended, “we need to refocus our agenda for ensuring rigor and place responsibility within the investigator rather than external judges of the completed product. We need to return to recognizing and trusting the strategies within qualitative inquiry that ensure rigor” (p. 15). In other words, reliability is predominately achieved during—not after—the research process. Morse et al. described several aspects of qualitative research that ensure reliability, including “investigator responsiveness,” “methodological coherence,” an “appropriate sample,” concurrent collection and analysis of data, and a theoretically minded approach to analysis (pp. 11–13). In this study, reliability functioned as an active concern that helped shape the direction of the research.
Although not expressly discussed in Strauss (1987) or Glaser and Strauss’ (1967) conceptions of grounded theory, two components at the beginning of the qualitative research process that ensure reliability are what Morse et al. (2002) identified as “methodological congruence” (p. 12), or a fit between research questions and research methods, and an appropriate sample. The question posed in this study (What print values do teachers use when they assess multimodal work, and what kinds of criteria seem to be unique to new, multimodal pedagogies?) requires an investigation of teachers’ reading practices, which can only be accessed as they occur through think-aloud protocols. The selection of method in this study was guided by an interest in addressing the research questions in the best way possible. Additionally, participants in this study represent teachers with a variety of education and teaching experience who are employed at a number of institutions and work within different writing programs. The differences between participants were purposeful, as this study sought to collect “sufficient data to account for all aspects of the phenomenon” (p. 12).
Both Morse et al. (2002) and Strauss (1987) insisted that qualitative researchers must be flexible and intuitive. Analysis and theory development are grounded in the data, making it “essential that the investigator remain open, use sensitivity, creativity and insight” (Morse et al., p. 11). During the coding processes in this study, I recorded all possible and emerging ideas about the significance of the protocol data in coding memos over time in a process similar to open coding. I relied on protocol data as well as my background knowledge in pedagogy, multimodality, and assessment to identify the central problems in the corpus and to consistently develop categories. As Peter Smagorinsky (2008) wrote, the labels a researcher assigns to the data are unique to the researcher: “Codes are not static or hegemonic but rather serve to explicate the stance and interpretive approach that the researcher brings to the data” (p. 399). Although theoretical sensitivity might be interpreted as subjectivity, Armstrong, Gosling, Weinman, and Marteau (1997) pointed out that “subjectivity does not necessarily mean singularity” because typically researchers’ views are “socially patterned” (p. 605). That is, a researcher’s theories about the data are social in the sense that they stem from public bodies of knowledge that are collectively constructed.
RESULTS
The entire protocol corpus, consisting of 2,020 T-units, was coded according to the scheme explained in the previous section, using existing and modified codes from Connors and Lunsford’s (1993) taxonomy as well as the six new codes unique to this corpus. Coding revealed that nearly half of the corpus consisted of evaluative comments (952 T-units), while the remaining half consisted of meta comments (682 T-units) and teachers’ non-evaluative engagement with the text’s content in reader response comments (386 T-units). This means that as they read, half of the time teachers were not engaged in evaluation but were either explaining what they were doing or making sense of the student’s text as a reader (see Table 5 for details).
Type of response | # of responses | % of responses |
Evaluative (higher- and lower-order) | 952 | 47%
Meta | 682 | 34% |
Reader Response | 386 | 19% |
Total | 2020 | 100% |
Table 5. Comment Totals for the Entire Corpus
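For readers who want to check the arithmetic, the percentages in Table 5 follow directly from the T-unit counts. The short sketch below is a minimal, illustrative reconstruction in Python; the original tallies were produced through hand-coding, not a script.

```python
# Minimal sketch (illustrative only): reproduce the Table 5 percentages
# from the reported T-unit counts. The counts come from the chapter.

comment_counts = {
    "Evaluative (higher- and lower-order)": 952,
    "Meta": 682,
    "Reader Response": 386,
}

total = sum(comment_counts.values())
assert total == 2020  # matches the corpus size reported in the chapter

for label, count in comment_counts.items():
    print(f"{label}: {count} T-units ({count / total:.0%})")

# Output:
# Evaluative (higher- and lower-order): 952 T-units (47%)
# Meta: 682 T-units (34%)
# Reader Response: 386 T-units (19%)
```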
Of the 952 evaluative comments in the corpus, the most frequently occurring type of comment dealt with formal arrangement (180 T-units), as shown in Table 6. The second most frequent evaluative comments were overall comments about the student’s performance or progress (118 T-units), followed by comments about the organization of ideas/content (104 T-units) and audience concerns like tone (82 T-units). These top four most frequently occurring evaluative comment types overlap with the print-based comment types identified by Connors and Lunsford (1993) and collectively account for just over half of the evaluative comments (484 T-units, or 51%).
Type of evaluative comment | # of evaluative comments | % of evaluative comments |
formal arrangement | 180 | 19% |
overall | 118 | 12% |
organization | 104 | 11% |
audience | 82 | 9% |
grammar | 70 | 7% |
idea development | 69 | 7% |
source material | 60 | 6% |
details/examples | 49 | 5% |
purpose | 48 | 5% |
following the assignment | 36 | 4% |
multimodality | 33 | 3% |
technical execution | 29 | 3% |
creativity | 28 | 3% |
movement | 28 | 3% |
sentence structure | 23 | 2% |
paragraph structure | 2 | <1% |
Total | 952 | 100% |
*Purple text indicates lower-order comments; italics indicate codes unique to the multimodal corpus
Table 6. Evaluative Comment Frequencies in the Entire Corpus
Evaluative comment types unique to this corpus, indicated by italics in Table 6, account for 257 T-units collectively, or about 27% of the evaluative comments (257/952). The three most frequently occurring evaluative comment types unique to this corpus were grammar (70 T-units), idea development (69 T-units), and multimodality (33 T-units).
Naturally, given differences in teachers’ purposes for responding, assignment types, teaching experiences, and philosophies, among other contextual factors, the frequency of evaluative comment types was not always consistent across participants. For instance, explicitly rhetorical concerns like audience and purpose were addressed frequently by teachers like Leah and Robert, and less so by Joe and Sara. Additionally, the category of grammar, unique to this corpus, was found in only three protocols. The distribution of these comment types across participants can be seen in Table 7.
Evaluative comment type | Aaron | Anna | Joe | Leah | Marie | Susan | Sara | Robert |
audience | 6 | 6 | 0 | 21 | 7 | 6 | 2 | 34 |
creativity | 2 | 6 | 4 | 5 | 0 | 0 | 11 | 0 |
details/examples | 9 | 7 | 9 | 14 | 3 | 0 | 7 | 0 |
following the assignment | 9 | 4 | 0 | 9 | 1 | 9 | 2 | 2 |
formal arrangement | 3 | 3 | 1 | 0 | 63 | 42 | 51 | 17 |
grammar | 0 | 0 | 9 | 0 | 30 | 31 | 0 | 0 |
idea development | 17 | 1 | 18 | 11 | 5 | 7 | 3 | 7 |
movement | 0 | 8 | 0 | 7 | 2 | 10 | 1 | 0 |
multimodality | 1 | 4 | 4 | 1 | 0 | 0 | 11 | 12 |
organization | 15 | 13 | 11 | 42 | 5 | 6 | 9 | 3 |
overall | 14 | 13 | 15 | 2 | 38 | 15 | 8 | 6 |
paragraph structure | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
purpose | 0 | 11 | 5 | 0 | 15 | 2 | 13 | 2 |
sentence structure | 4 | 0 | 4 | 0 | 2 | 7 | 2 | 4 |
source material | 0 | 0 | 13 | 2 | 22 | 16 | 7 | 0 |
technical execution | 2 | 4 | 0 | 4 | 3 | 3 | 13 | 0 |
*Purple text indicates lower-order comments; italics indicate codes unique to the multimodal corpus.
Table 7. Evaluative Comment Types Across Participants
DISCUSSION
The results of this study suggest a great deal of congruence between the types of comments teachers made on their students’ multimodal texts and the kinds of comments teachers made on students’ print essays decades ago in Connors and Lunsford’s (1993) study. The top four most frequent evaluative comment types in this study (formal arrangement, overall, organization, and audience) all overlapped with categories found in Connors and Lunsford’s data set.
The most frequent type of lower-order comment in this corpus was formal arrangement. Its sister category in the Connors and Lunsford (1993) study, paper format, occurred at a high frequency there, too, as the third most frequent kind of “formal” comment in that study. Formal arrangement comments concern the organization of formal features of a text, including their regularity/consistency, their alignment or physical arrangement, their size, length, color, or font, and/or the balance between or number of them. The frequency of formal arrangement comments in this corpus suggests that how a text is physically organized is an important criterion for teachers as they assess multimodal work. This is not necessarily a new finding, because the layout of a print essay was also important to teachers in Connors and Lunsford’s study years ago. What may be important is how concerns about physical layout seem to have evolved with multimodal texts. While Connors and Lunsford noted that paper format denoted concerns like “margins” and “neatness,” in this corpus formal arrangement comments covered much more, such as the alignment of multiple images on a page or the consistent use of a color scheme across a text. There are many more elements of the text’s physical layout for teachers to pay attention to in their assessment of multimodal texts.
Another category shared with Connors and Lunsford’s (1993) work is organization, a label affixed to comments that concern how the ideas or content in a piece are arranged, including a sense of continuity between ideas and the use of content organizational devices like introductions, transitions, or conclusions. In Aaron’s protocol, he remarks that “She’s got a lot of stuff going on in this second paragraph.” Here, Aaron is concerned with the fact that the student seems to have too many ideas, which are perhaps not clearly related, in one paragraph. It is interesting to note that formal arrangement and organization are among the most frequently occurring kinds of comments teachers make on student multimodal texts in this corpus. Both the observable physical structure and the more abstract content structure of student work appear to be key criteria in these teachers’ assessments of student multimodal work.
While structure seems to be key to instructors’ assessment of multimodal texts, the frequency of overall evaluative comments—another category shared with Connors and Lunsford—may suggest teachers’ uncertainty about how to name the things they find effective or less effective in student multimodal work. Overall comments are general evaluative remarks about students’ progress or their general performance on the assignment. An example occurs in Anna’s protocol when she remarks, “So truly I love this piece.” While Anna is making a judgment about the student’s text, the comment is considered a general overall comment because it is not clear what it is about the text that she loves.
Finally, teachers in this study seem to share a common rhetorical approach to texts with instructors from Connors and Lunsford’s (1993) work, as audience was the fourth most frequent evaluative comment type in this study. Audience comments concern the audience and how they might perceive the student’s text, including how the student has appealed, or needs to appeal, to that audience’s interests or needs. In her protocol, Leah makes several audience-based evaluative comments, frequently noting how well students have communicated an appropriate tone for the audience. In other projects, like Marie’s, a real audience for the student work existed, and as a result she often made evaluative comments about how that audience would respond, such as “I think the visitor’s center will use it and I think it’s good.” Comments concerning audience seem to exemplify what scholars like Zoetewey and Staggers (2003) advocated when they discussed the importance of borrowing from what we already know about rhetoric to talk about the success of a multimodal text. Audience is a core rhetorical concept—indeed, it is at the heart of rhetorical study—and the frequency of audience comments indicates that teachers seem to be applying this concept to their assessment of multimodal documents.
Despite these similarities, during the coding process it became apparent that Connors and Lunsford’s taxonomy of teachers’ evaluative comments, based on teachers’ written commentary on traditional print essays, was not always sufficient to describe teachers’ commentary on student multimodal texts in this study. As a result, six new codes emerged, each discussed below.
One type of code unique to the multimodal corpus was grammar. The prevalence of grammar comments may indicate that even though the verbal mode plays a less central role in multimodal texts (in contrast to a monomodal print essay, which is entirely verbal), proper grammatical construction is still a factor in several teachers’ assessments of student multimodal work. It should be noted, however, that grammatical comments were not present in every protocol but were concentrated in three of the most experienced teachers’ responses: Joe, Marie, and Susan.
A second new category, idea development, was present in every protocol and indicates that in this data set, teachers engaged with student ideas in evaluative ways. It also suggests that teachers remain invested in students’ ideas—not just the form of their work, as the dominance of formal arrangement comments might first suggest.
It is perhaps not surprising that the third new category, multimodality, did not exist in Connors and Lunsford’s study from 1993, because the student texts teachers responded to there were monomodal. When teachers made multimodality comments in this study, they were addressing the relationship between the modalities in a text. In Anna’s protocol, for instance, she responds to her students’ video projects and notes how the aural mode (music) relates to the visual (video shots): “I also want to say I’m also impressed with the way in which your music enters into a conversation with the video shots.” Though every project teachers read in this study contained at least two modalities (consult Table 2), teachers did not often construct evaluative comments based on multimodality in the text. Comments labeled multimodality only accounted for roughly 3% of the evaluative comments teachers made overall. This suggests that more often than not, teachers were evaluating the success of the modes individually, rather than in concert. This finding contrasts with the definition of a multimodal text in scholarship as one that uses multiple modalities in conjunction to make meaning. Of these 33 comments addressing multimodality, 18 focused on the relationship between visual and textual elements in student work, 5 on the relationship between multiple visuals in a text, 3 each on visual/spatial and visual/aural relationships, 2 on visual/gestural relationships, and one each on textual/spatial and aural/spatial relationships. Overwhelmingly, then, multimodality comments appear focused on visual elements, especially the combination of visuals with alphabetic text. This is to be expected given that, as Table 2 illustrates, the majority of the assignments teachers read in this study were dominated by visuals (the exceptions being Leah, who read aural essays, and Anna, who read videos).
The fourth category new to this multimodal corpus was technical execution. These comments, which addressed student ability to use technology skillfully to create a finished product, appeared in nearly every protocol. Leah, for instance, was responding to students’ audio essays and frequently evaluated student ability to use the recording equipment successfully. In one such comment, she remarks that “He’s just, he’s holding the audio recorder probably too close to his mouth because there’s a lot of feedback.” A little later on, she evaluates another project by commenting that, “It seems like she could have gone back over and smoothed that area out a little bit.” The prevalence of technical execution comments indicates the growing importance of composing technologies in the creation of student work in writing classrooms. It also suggests that, for better or worse, teachers are aware of and concerned with how well students are able to use those new technologies to produce multimodal texts.
Appearing nearly as often as comments about technical execution were comments concerning creativity. In this study, creativity comments concerned the use of creative or inventive approaches to the assignment, including remarks about choice, originality, and thoughtfulness. As Anna was watching her students’ video projects, she made comments about the “inventiveness” of the shots produced by one particular student: “At the same time, I think of all the ones I’ve seen she’s used the most inventive shots.” Creativity and inventiveness are skills often praised by proponents of multimodal pedagogy as critical for participation in a global economy. The New London Group (2000), for instance, emphasized that multimodality requires text makers to produce works that “may be variously creative or reproductive in relation to the resources for meaning-making available” (p. 28) and that learning to creatively combine resources can help students find voice or agency. An emphasis on agency through creative design can also be found, for instance, in the work of Gunther Kress (2003) and Anne Frances Wysocki (2004). Creativity comments seemed to be one clear place where teachers’ evaluative practices and the literature on multimodal pedagogy overtly aligned.
Finally, the last type of unique comment dealt with movement in student work. These comments addressed the pacing, speed, or progression of ideas in the text, as in this comment from Anna’s protocol: “Nice building of the motion of the piece.” Here, Anna evaluates how well the student has used the video to create a physical sense of movement through ideas. Movement comments appeared in five of the eight protocols but were most concentrated in Anna’s, Leah’s, and Susan’s responses. Interestingly, these teachers’ assignments were vastly different, and the modalities each of these participants interacted with differed as well (see Table 2). This suggests that movement may be a criterion that matters across modalities, including the visual, verbal, spatial, and aural.
When the frequency of these comment types is examined across participants, it is worth noting that although the variation was modest (the greatest difference between participants was 13 percentage points), some of the higher numbers do belong to the teachers with the most experience assigning and assessing multimodal texts (see the “Methods” section for a detailed discussion of participant experience). Susan and Anna, for instance, are the most experienced teachers in the study and have both been engaged with multimodality in the classroom for over 30 years. Their protocols consisted of 33% and 29% comment types unique to the multimodal corpus, respectively (see Table 8). The protocols of instructors who have come to integrate multimodality more recently tended to contain fewer unique comment types. For instance, Marie, who has been practicing multimodal pedagogy for just over 5 years, relied on “new” values in her protocol just 20% of the time. Similarly, Robert’s protocol contained just 22% “new” comment types. Some exceptions to this trend do exist, however. Although he had the least experience with assigning and assessing multimodal texts, Joe’s protocol contained a rather high proportion (31%) of “new” comment types. Further, while Sara has only just over 3 years’ experience with multimodal texts and with teaching more broadly, her protocol consisted of 28% “new” comments, just one percentage point below Anna, who has over 30 years of experience with multimodality. When frequency trends are examined overall, however, they do seem to suggest that teachers with more experience assigning and assessing multimodal texts rely more often on evaluative criteria based in multimodal (rather than print) paradigms. It seems likely that, as the less experienced instructors in this study assign and assess multimodal documents more often, they will move toward “new” values in ways similar to the more experienced teachers here.
Name | # of unique comment types | # of comment types from Connors and Lunsford | % of unique comment types | % of comment types from Connors and Lunsford
--- | --- | --- | --- | ---
Susan | 51 | 104 | 33% | 67%
Joe | 26 | 58 | 31% | 69%
Anna | 23 | 57 | 29% | 71%
Sara | 39 | 101 | 28% | 72%
Aaron | 22 | 61 | 27% | 73%
Leah | 28 | 90 | 24% | 76%
Robert | 19 | 68 | 22% | 78%
Marie | 40 | 156 | 20% | 80%
Table 8. Frequency of evaluative comment types across participants
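To make the table’s arithmetic explicit (the formula below is a reconstruction from the counts in Table 8, not a calculation stated elsewhere in the chapter), each percentage appears to be one comment-type count divided by that participant’s total. Using Susan’s row as a worked example:

\[
\%\ \text{unique} = \frac{\text{unique}}{\text{unique} + \text{Connors and Lunsford}} = \frac{51}{51 + 104} \approx 33\%
\]

The remaining rows follow the same pattern (e.g., Marie: 40 / (40 + 156) ≈ 20%), with the two percentages in each row summing to 100%.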
CONCLUSION
Overall, the findings of this study suggest that, in practice, teachers tend to borrow extensively from existing print-based evaluative criteria as they assess multimodal texts. This finding runs counter to the cautions of scholars like Yancey (2004), who argue that teachers ought to move away from a focus on print-based criteria. At the same time, the fact that some of the borrowed print criteria are derived from rhetorical theory, including audience and purpose, reflects the concerns of scholars like Zoetewey and Staggers (2003) and Borton and Huot (2007), who argue that rhetorical principles ought to carry over into how we talk about and understand new text types. Further, the fact that Connors and Lunsford’s (1993) print criterion of paper format had to be expanded and modified to formal arrangement to fit this corpus suggests that teachers might be adapting at least one of these print criteria to account for the “new potentials” of new text types, as scholars like Neal (2011) have suggested teachers ought to do.
The dominant presence of these formal arrangement comments in this corpus also raises questions about how we talk about formal features in students’ multimodal work, questions that might be pursued in future research (the aim of this study was purely to describe criteria, not to speculate about how they were being employed). Over 15 years ago, Takayoshi (1996) cautioned that “Without careful consideration of the relationship of visual rhetoric and rhetorical goals, computer-generated textual features could easily become the grammar and punctuation of current-traditional rhetorics, with an emphasis in teaching and writing on the correctness of the surface features” (p. 250). Takayoshi warned teachers of the danger inherent in de-contextualized notions of form, anticipating that as students began to use various kinds of digital technology to create new texts, formal features would grow in prevalence and importance in teachers’ response practices. As Takayoshi cautioned, we should take care to avoid conveying through our responses that there is a “right” and a “wrong” way to arrange a multimodal text. Instead, we should do as Wysocki (2004) has implored us: As we respond, “generosity too must enter, so that we approach different-looking texts with the assumption not that mistakes were made but that choices were made and are being tried out and on” (p. 23). A focus on the rights and wrongs of form cannot account for the kinds of choice, creativity, and experimentation demanded by multimodal pedagogical models.
Finally, while this study demonstrates that teachers often borrow from print paradigms, it also indicates that teachers are actively generating concepts and criteria foreign to print essays as they respond to multimodal texts. Among these new kinds of comments are concepts like creativity and interaction between the modes, which reflect the goals of multimodal pedagogy outlined by theorists and suggest teachers find multimodal relationships and out-of-the-box thinking important to the success of a multimodal text. Though these comments did not occur frequently in the corpus, their presence suggests they may be emerging concepts worth more of our attention in future research and scholarship. Is it the case, for instance, that teachers recognize the importance of multimodal relationships in a text but do not fully understand how to talk about whether those relationships are successful or meaningful? At the same time, among these new comment types are a concern for grammar and for the proper execution of technology, both criteria that emphasize correctness over choice and inventiveness. Like the prevalence of formal arrangement comments, the dominance of these two comment types raises interesting questions about how we might account for choice and inventiveness in multimodal texts, given that theorists identify these as important learning goals in this new pedagogical paradigm.
NOTES
1. In this study, the term “empirical research” is used to indicate “research that carefully describes and/or measures observable phenomena in a systematic way planned in advance of the observation” (MacNealy, 1999, p. 6).
2. The only data pertaining to teacher multimodal assessment practices comes from the survey research conducted by Anderson et al. (2006) and Murray et al. (2010).
REFERENCES
Anderson, Daniel; Atkins, Anthony; Ball, Cheryl; Homicz Millar, Krista; Selfe, Cynthia; & Selfe, Richard. (2006). Integrating multimodality into composition curricula: Survey methodology and results from a CCCC grant. Composition Studies, 34 (2), 59–84.
Armstrong, David; Gosling, Ann; Weinman, John; & Marteau, Theresa. (1997). The place of inter-rater reliability in qualitative research: An empirical study. Sociology, 31 (1), 597–606.
Borton, Sonya, & Huot, Brian. (2007). Responding and assessing. In Cynthia L. Selfe (Ed.), Multimodal composition: Resources for teachers (pp. 99–111). Cresskill, NJ: Hampton Press.
Connors, Robert J., & Lunsford, Andrea Abernathy. (1993). Teachers’ rhetorical comments on student papers. College Composition and Communication, 44 (2), 200–223.
Edgington, Anthony. (2005). What are you thinking? Understanding teacher reading and response through a protocol analysis study. Journal of Writing Assessment, 2 (2), 125–147.
Fife, Jane M., & O’Neill, Peggy. (2001). Moving beyond the written comment: Narrowing the gap between response practice and research. College Composition and Communication, 53 (2), 300–321.
Geisler, Cheryl. (2004). Analyzing streams of language. New York: Pearson/Longman.
Glaser, Barney G., & Strauss, Anselm L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Company.
Haswell, Richard H. (2005). NCTE/CCCC’s recent war on scholarship. Written Communication, 22 (2), 198–223.
Hawisher, Gail, & Moran, Charles. (1997). Responding to writing on-line. New Directions for Teaching and Learning, 69, 115–125.
Huot, Brian. (2002). (Re)articulating writing assessment for teaching and learning. Logan: Utah State University Press.
Kress, Gunther. (2003). Literacy in the new media age. London: Routledge.
MacNealy, Mary Sue. (1999). Strategies for empirical research in writing. New York: Longman.
Morse, Janice M.; Barrett, Michael; Mayan, Maria; Olson, Karin; & Spiers, Jude. (2002). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1 (2), 1–19.
Murray, Elizabeth A.; Sheets, Hailey A.; & Williams, Nicole A. (2010). The new work of assessment: Evaluating multimodal compositions. Computers and Composition Online. Retrieved from http://www.bgsu.edu/cconline/murray_etal/index.html
Neal, Michael. (2011). Writing assessment and the revolution in digital texts and technologies. New York: Teachers College Press.
New London Group. (2000). A pedagogy of multiliteracies: Designing social futures. In Bill Cope & Mary Kalantzis (Eds.), Multiliteracies: Literacy learning and the design of social futures (pp. 9–38). London: Routledge.
O’Dell, Lee, & Katz, Susan. (2009). “Yes, a t-shirt!”: Assessing visual composition in the “writing” class. College Composition and Communication, 61 (1), 197–216.
Phelps, Louise Wetherbee. (2000). Cyrano’s nose: Variations on the theme of response. Assessing Writing, 7 (1), 91–110.
Smagorinsky, Peter. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication, 25 (3), 389–411.
Sorapure, Madeline. (2005). Between modes: Assessing student new media compositions. Kairos, 10 (2). Retrieved from http://english.ttu.edu/KAIROS/10.2/binder2.html?coverweb/sorapure/index.html
Strauss, Anselm L. (1987). Qualitative analysis for social scientists. Cambridge: Cambridge University Press.
Takayoshi, Pamela. (1996). The shape of electronic writing: Evaluating and assessing computer-assisted writing processes and products. Computers and Composition, 13 (2), 245–257.
Takayoshi, Pamela, & Huot, Brian. (2009). Composing in a digital world: The transition of a writing program and its faculty. WPA: Writing Program Administration, 32 (3), 89–119.
Wysocki, Anne Frances; Johnson-Eilola, Johndan; Selfe, Cynthia L.; & Sirc, Geoffrey. (Eds.). (2004). Writing new media: Theory and applications for expanding the teaching of composition. Logan: Utah State University Press.
Wysocki, Anne Frances. (2004). Opening new media to writing: Openings and justifications. In Anne Frances Wysocki, Johndan Johnson-Eilola, Cynthia L. Selfe, & Geoffrey Sirc (Eds.), Writing new media: Theory and applications for expanding the teaching of composition (pp. 1–41). Logan: Utah State University Press.
Yancey, Kathleen Blake. (2004). Looking for sources of coherence in a fragmented world: Notes toward a new assessment design. Computers and Composition, 21 (1), 89–102.
Zoetewey, Meredith W., & Staggers, Julie. (2003). Beyond “current–traditional” design: Assessing rhetoric in new media. Issues in Writing, 13 (2), 133–157.