Retrofitting Materials

What follows is the testimony of a veiled subject, unaware of how my privileged, limited notion of sound and soundwriting impacted my experiences teaching a multimodal composition course, open for enrollment to a diverse range of bodies. — Jennifer

[background music begins, upbeat electric melody of ascending and descending tones, with interludes of faster passages that drive an allegro tempo (Pendulum, 2012)]

Jennifer: Every other spring, I teach a multimodal composition class for our majors. Now, this is still a fairly new class on our campus, and I've only taught it a few times. And each time, I've organized the class around studies of modes in isolation. So, my idea was this very part-to-whole approach: We would think about a mode's affordances—say, what can you do with digital sound?—and then we would experiment with composing in that mode, leading up to a grand gathering of modes in a final project that would put the multi in multimodal.

So, it makes sense, right? I mean. This approach worked—for the most part—the first couple of times I taught the course. But then something terrifying and wonderful happened. And I realized that my plan wasn't going to work this time.

[transition sound: booming reverse, an auditory whiplash of sorts]

Lauren (ASL interpreter): This email is to notify you that a deaf/hard of hearing student is registered for your Multimodal Composition class. Please remember that all videos must be captioned and any audio recordings must have transcripts. Thank you.

[sound of static, tape rewinding]

[distorted repeat of Lauren’s voice from above] … that a deaf/hard of hearing student is registered for your Multimodal Composition class.

[electric flutters sound]

Jennifer: I mean how do you teach sound, without… well, sound?

[sound of static, no voices. Then the static stops. Silence.]

And I couldn't see how to move forward.

[new background music, similar to the opening (Time Lapse, 2012)]

OK, so, I looked back at my schedule. You know: the plan. I had blocked three weeks of the semester to study sound. Three weeks of listening. Three weeks of speaking, recording, layering, remixing sounds. And one student in the class wouldn't be able to participate.

I started looking through the models I'd planned to use in class and example after example was just not ADA [Americans with Disabilities Act] compliant—meaning there was no captioning and no transcripts. I'm talking about award-winning journalism in national publications. And texts commonly referenced in multimodal comp classes. Even our textbook's companion digital resource failed to be compliant. And, by that, I mean none of them had a means of accessing sound for someone who was deaf.

So, I started thinking about student projects. What was going to happen when we composed with sound? When we recorded our voices? Or listened for peer review? And at that point, all I could see were hurdles, and there were just too many to overcome. Honestly, I debated cutting our study of sound altogether. But then I realized that if I did that, it wouldn't be fair to the hearing students. And that would be just as much a disservice. But. Still, I just couldn't resolve this injustice of planning activities or looking at texts that denied access to someone who was deaf.

[shift in background music (Pendulum, 2012). This music mimics an electric guitar sound with a similar upbeat tempo as the earlier music, providing an energetic mood.]

So, after the initial shock, my inner scholar went down this philosophical rabbit hole, trying to figure out what was happening and why this was such a problem for me.

Despite good intentions, I realized that my approach to teaching multimodality was grounded in this framework—this rhetoric—that privileged dominant, normative responses to sound. And my course design, it sent a message loud and clear by failing to recognize individuals who couldn't hear it clearly.

I designed this class that defined sound as a fundamentally aural mode for communication. I realized that my hearing–speaking body limited my understanding of sound. And it wasn't until that moment that I could see that I was perpetuating this closet hierarchy in soundwriting by making hearing and verbalizing with sound a dominant activity in the class.

So, my semester working with Kirsten in the multimodal comp class taught me more about what I didn't know about sound than what I thought I did know about sound. I'm pretty sure I still don't have it figured out in terms of how to teach sound. I'm certain I haven't. Which really is the point. There is no one-size-fits-all approach to soundwriting, a pivotal lesson in my humbling experience. Instead, I offer my narrative as a reflection, both theoretical and practical, of my attempts to hear. see. and feel sound differently.

[background music fades]

My initial struggles were in discovering that my course materials were inaccessible. Of course, as a teacher, I am aware of federal Americans with Disabilities Act [ADA] regulations requiring that I eliminate discriminatory barriers to learning that impact students with disabilities. These practices are integrated into the culture of our campus, supported by the Noel Center for Disability Resources at Gardner-Webb University. Still, I'd only taught Multimodal Composition a few times, and I was naive about the number of non-compliant soundwriting materials I had used in previous versions of the course. I had planned to use DVD documentaries, mass media webtexts, and textbook companion media, all of which failed to meet basic accommodation guidelines for students who are deaf.

In response, I contacted the publishers of these texts to inquire about accessible materials to accompany their media (e.g., synchronized captioning for videos, transcripts for audio). One company emailed me a transcript file for a video I had in DVD format. While better than nothing, I could not incorporate this file into a synchronized interface because I had no way to edit playback of my existing media. And handing Kirsten a transcript and asking her to follow along wasn't going to work. (Imagine trying to read a book and watch a silent video at the same time.) The second case involved an award-winning New York Times webtext filled with sound components but lacking captioning or transcripts. Eager to share such a compelling narrative, I emailed the author, who quickly responded, surprised to learn of its inaccessibility and promising to ask the project's video/web producer about captioning. Perhaps it was my righteous indignation, but I was stunned that such considerations were not part of the project's initial design. I even discovered that the digital companion site for our multimodal textbook featured uncaptioned videos and untranscribed sound clips.

These texts all exhibited a default status that affirmed an elusive normate, providing aftermarket alternatives or failing to offer alternatives at all. In doing so, these composers discriminated against deaf bodies, excluding them from their narratives. One might respond that adopting universal design principles could prevent these kinds of exclusions because, in theory, universal design heightens access to a text through redundant modalities (e.g., alt text for images, captioning in videos, transcripts for audio). Jay Dolmage (2008) argued that universal design "as a praxis is still a matter of social justice… we recognize the priority of negotiation, the importance of including everyone in the discussions that create space" (p. 25). For teachers using open-source media, "universal design" may appear to be a mythological, elusive creature. Of course, no text can be universally accessible; even bodies with physiological similarities interact with and understand texts differently. Though these principles would improve access to a variety of media, their inconsistent presence speaks to our culture's dominant assumptions about whose bodies are part of an intended or imagined audience.

In the case of soundwriting pedagogy, ADA regulations require that teachers provide auxiliary services such as transcripts, closed captioning, or live captionists (see U.S. Department of Justice, 2010). Many models of soundwriting genres exist in open-source, less restrictive spaces, complicating a teacher's job even more because those sites do not require accessible sound. Using such inaccessible texts creates an inhospitable environment in the classroom, including "multimodal texts [that] are not commensurable across modes" (Yergeau et al., 2013, "Modality") and subsequently excluding individuals in our classrooms.

For example, my own practice of incorporating open-source media was complicated by the epic #fail of YouTube captioning. These caption fails created an accessibility barrier that I, as the teacher, was forced to navigate without an institutional system in place for captioning third-party media on YouTube. I turned to Amara.org to retrofit captions for videos hosted on YouTube. Amara's interface, both intuitive and reliable, required only a YouTube URL to launch a caption-as-it-plays editor that enabled me to transcribe, sync, and share corrected captions. While this virtual patch kit met ADA guidelines, it represented what scholars would call a retrofit (Dolmage, 2008; Yergeau et al., 2013).

Using outsourced, inaccessible soundwriting models, such as commercial or open-source media files, forces teachers to produce auxiliary services themselves, resulting in retrofits: components or accessories added to existing products as afterthoughts or corrections (Dolmage, 2008). Stephanie Kerschbaum explained that retrofits are often "reactive, responding to situations or problems that arise, rather than seeking to anticipate potential concerns with the design or production of a multimodal text or environment" (Yergeau et al., 2013, "Retrofitting"). While retrofits might legally provide access to students with disabilities, they perpetuate a hierarchy in soundwriting experiences in which accommodations delivered as afterthoughts communicate privilege for normate bodies and mere problem-solving for non-normate bodies.

At the time, I patted myself on the back for having discovered hacks (like Amara) that were accommodating. Since then, I've come to question the assumptions I made about relationships between language and sound. My journey to learn more has landed me back at a student's desk—quite literally—auditing American Sign Language classes taught by my deaf colleagues in the basement of our library. The more I learn about ASL's grammar and syntax, the more I realize how poor a retrofit even captioning is.

Kirsten explores a range of accessibility with sound.

Really, I have… I would say that I have the same level of hearing ability as a normal hearing person. But I struggle with how to understand sound. Hearing and understanding are different. I can understand and recognize environmental sounds like fire alarms, cars driving, footsteps. I can understand music and some spoken language. But spoken language is my biggest struggle at present because I don't really understand most of it. Just a few words here and there. I might understand a few spoken words a person says, but if they speak really fast, it all just runs together into one big, confusing noise, and I don't understand.

It can become quite frustrating for me as I try to understand what others are saying, especially with those who don't understand that I can't really understand spoken language well and they keep speaking. Communication breaks down, and it often all turns into a disaster at the end. That's really frustrating for me.