This category of WPA work addresses the WPA's responsibility "to maintain a strong staff development program" that ensures instructors are "well trained and generally in accord with the overall programmatic goals and methodologies":
- Faculty development depends upon a coherent "training program for new and experienced staff" that "communicate[s] current pedagogical approaches and current research in rhetoric and composition";
- Faculty development requires consistent "support for staff activities in course design, pedagogical development, and research" within "an atmosphere of openness and support" marked by "open lines of communication" among an often complex network of faculty, lecturers, adjuncts, graduate teaching assistants, and/or undergraduate peer tutors (CWPA, "Evaluating").
Effective faculty development balances rigorous training and community building. As "Evaluating the Intellectual Work" asserts, this responsibility is "one of the most salient examples of intellectual work carried out within an administrative sphere" (CWPA). Within this high-stakes category of WPA work, these findings suggest, student participation may be valuable but must be carefully managed.
I include references to the ubiquitous end-of-term evaluations here, rather than in the previous section, for two reasons. First, when articles on curricular revision referred to student surveys, those surveys were positioned as part of the pilot research rather than as instruments focused on instructors. Second, although course evaluations might also be considered part of program assessment, the articles discussed here highlighted their role in faculty development. It is also worth noting that while these documents are usually referred to as course evaluations, they seem to be understood, at least implicitly, as instructor evaluations.
Students evaluate instructors' performance
References to summative course evaluations often reflect concerns about these tools' negative potential. In "Fostering Cultures of Great Teaching," Diana Ashe argues against the heavy weight often placed on these evaluations, citing studies indicating that students tend to reward instructors who grade leniently and to punish those who challenge them.
As a result, the high stakes of standard university quantitative surveys can be threatening to some faculty; they also tend to yield unproductive criticism in lieu of thoughtful reflection (Emerson).
Such concerns have led WPAs to strategically revise the avenues through which students participate in faculty development. In Lisa Emerson’s 2004 study of team-based WAC initiatives, student feedback—described as "essential to the success of the projects"—was gathered through journals, interviews, surveys, and focus groups (55).
In particular, Emerson reports, student focus groups provided more constructive, specific feedback that was also less threatening because it was not addressed to audiences with professional power over the instructors. A significant conclusion was that "student feedback needs to be of a specific kind—feedback is not a virtue in itself—and it needs to be managed carefully if it is going to have a positive impact on the quality of the program" (55).
In cases like this one, WPAs prompted more productive evaluation by revising or replacing standard university evaluation instruments with customized versions that better reflected their programs' priorities (Brunk-Chavez; Hindman). These alternative materials often prompt students to (re)consider their own contributions to the quality of the course.
Students share views on pedagogical strategies
Shifting the focus from evaluation to conversation opens up opportunities for students to contribute more directly to faculty development. Pamela Bedore and Brian O’Sullivan solicited student input about peer review and self-assessment, using the results to inform program discussions with graduate instructors about those issues.
WPAs also reported inviting undergraduate students into graduate teaching workshops (Latterell; Peters). According to Bradley Peters, this kind of student participation complements a collaborative approach to program administration "because their frank responses at the colloquia enabled all of us to ponder where instruction succeeded, where it could be improved, and where it failed" (127). In such cases, students join in the process of programmatic self-assessment and reflection, including but not limited to particular instructors' performances.
Students evaluate their own performance
Students are also asked to turn that critical perspective on their own contributions to learning environments. This emphasis on students' self-evaluation appeared in Geoffrey Chase's account of curricular revision, in which students were prompted to assess "their own progress and commitment" as well as the course and instruction (53).
Similarly, while promoting student-centered teaching methods, Jane Hindman incorporated questions about students’ own performance within instructor evaluations—an improvement of validity as well as a rhetorical strategy:
By requiring students to consider their own functioning in the classroom as well as their teacher's capacity to facilitate their learning, this evaluative instrument should yield teaching effectiveness scores that prize [the program's focus on] student-centered learning. In addition, its original and provocative questions initiate efforts to reshape students' attitudes about their instructors and their educational experiences. (20)
Hindman's survey also asked students to reflect on the evaluation form itself; responses suggested students' willingness to reconsider their own evaluative criteria and provide feedback on the program's evaluative instruments.
Students collaborate on course improvements
At the course level, students' active engagement was the focus of Darsie Bowden's "Small Group Instructional Diagnosis: A Method for Enhancing Writing Instruction." In the SGID method, "both teachers and students provide input and receive feedback about their course through interviews that take place at midterm—in time to make changes in the quality of classroom interaction" (117). To minimize risk, this process of small-group and whole-class discussions was aided by a facilitator and managed independently of the program director.
As Bowden reports, these collective discussions led to practical benefits, but they also prompted students to debate "the relative merits and drawbacks of their own criticisms and suggestions" (120). She remarks upon students' positive response to the process, which gave them "a strong sense that they do indeed have some significant influence on their intellectual growth and development in the class" (123). At this level of specificity, both instructors and students, as well as the programs that house them, may reap immediate and potentially lasting benefits from students' participation in faculty development.