Program Assessment and Evaluation
This category of writing program administration involves developing "site-specific measures for the assessment and evaluation of the goals, pedagogy, and overall effectiveness of the composition program":
- "Meaningful assessment" refers to the "overall determination of programmatic effectiveness."
- "Meaningful evaluation" refers to the "specific determination of students and instructors" (CWPA).
As the language in "Evaluating the Intellectual Work" highlights, the drive toward "accountability" has made assessment an "over-riding" concern for WPAs. Not surprisingly, student feedback is an oft-cited source of data both for the assessment of writing programs and for those programs' evaluation of students. As I discuss in the next section, evaluations of instructors by students tended to be treated within the frame of faculty development. Here, I emphasize student inclusion in conversations about the performance of particular program elements.
Students assess program effectiveness
This category overlaps significantly with program creation in that student participation may lend validity to program self-assessment (Gallagher) or prompt reform (Jurecic). Likewise, students offered feedback on whether stated outcomes were achieved in particular courses (Adler-Kassner and Estrem; Brady; Chase; Moon) or in program components such as a writing center (Huang).
Students assess evaluation methods
Notably, students are often invited to comment on placement methods, the process by which they are assigned to first-year writing courses from among basic, mainstream, ESL, and/or multicultural options. In other words, students are asked to evaluate the program's evaluation of them.
Because placement can so strongly influence students' success, as well as their initial response to programs and courses, WPAs are clearly concerned that students understand and respect the program's chosen methods. As with curricula, these studies usually coincided with a new approach (Isaacs and Keohane; Robertson), particularly a move to directed self-placement (Bedore and Rossen-Knill; Dryer; Jones). In this area, the results were often clear-cut; as Ed Jones concludes, the most significant finding "related to student attitudes toward [directed self-] placement is that they liked it" (67).
Students respond to assessment results
Related research gathered data on how assessment practices affect students personally and emotionally (Grego and Thompson; Ruecker). For example, Todd Ruecker conducted surveys and interviews about students' attitudes toward placement in order to improve the placement of multilingual students, confirming "the importance of considering the diversity of student perspectives regarding placement in order to design a writing program that effectively accommodates the growing number of L2 writers in higher education" (92). Student responses to such high-stakes practices can lead to refinements of both method and presentation.
Students reflect on their relationship with writing
Beyond determining student satisfaction, certain assessment projects also pursued (and may have prompted) deeper insights into students' relationships with writing, both in general and within a particular institutional context (Blakely and Pragnac; Gradin; Hansen et al.). These projects asked students, through questionnaires and portfolio cover essays, to reflect on their development as writers. Such work suggests that engaging students as participants in assessment processes may challenge the dynamic of one-way evaluation in favor of productive mutual reflection.