Chapter 2
Managing Datacloud Decisions and “Big Data”:
Understanding Privacy Choices in Terms of Surveillant Assemblages
Angela Crow
ABSTRACT
In this chapter, I describe surveillance and privacy concerns at stake when selecting digital platforms designed to facilitate writing assessments. Although writing program administrators have always needed to address privacy concerns and data control when designing assessment, in the era of “big data,” these issues become more complex because data amalgamation is increasingly sophisticated, accessible, and facilitated by recent changes in FERPA guidelines. In this chapter, I address specific examples of online content management systems such as Google’s resource packages for universities and other corporations’ eportfolio platforms in terms of privacy, data sharing/mining, and policies that are subject to change and beyond writing program administrators’ control. Given concerns regarding privacy and the degree to which students and professors can opt out of tracking, I conclude with a recommendation that composition studies create its own online digital platform as a means of controlling data distribution and contributing to national assessment conversations.
Many educators comb conferences, looking for instruments such as the ideal rubric, ePortfolio platform, or placement procedures, as if a one-page handout or Web site were the answer to their assessment needs. While we can and should learn from existing writing assessment systems, we cannot assume that the answers to our problems can be addressed by developing and mass marketing instruments regardless of context. (Neal, 2011, p. 20)
Michael Neal’s (2011) description of educators who “comb conferences, looking for instruments” might be amusing to imagine were it not also an accurate description of my strategies for tackling a recent assessment project. In much the same way that I sometimes stop at the local convenience store and purchase a lottery ticket hoping to win big, I will admit to an equally illogical desire to find a simple handout or web site that would help create a vibrant and effective assessment plan, generate enthusiasm amongst faculty and students, and avoid the creation of a climate of suspicion. Perhaps in response to the number of requests for the one-page handout and the perfect rubric, and in response to rooms crammed with roamers like me at conferences hoping for an easy fix, Peggy O’Neill, Cindy Moore, and Brian Huot (2009) offered A Guide to College Writing Assessment, designed to help department committees decide on the size and scope of an assessment, grounding their suggestions in theoretical contexts. They raise a host of concerns, from funding resources to strategies for “matching methods to guiding questions” (O’Neill et al., 2009, p. 116) to data analysis. O’Neill et al. also provide a range of tangible resources in their appendix, from recommendations for additional readings to specific examples of focus group questions, rubrics, and surveys for students. They emphasize that these samples illustrate context-specific strategies and are not intended for immediate adoption. For those interested in creating viable assessment of digital writing programs, this kind of guide, together with resources in new media and digital literacy (Ball & Kalmbach, 2010; Neal, 2011; Whithaus, 2005), affords the possibility of shaping a better assessment plan.
Increasingly, however, digital data storage concerns are also necessary in discussions of assessment plans. In an article titled “Big Data’s Arrival,” Paul Fain (2012) reported on a study drawn from a “database that measures 33 variables for the online coursework of 640,000 students—a whopping 3 million course-level records.” The study, which explored “student performance and retention across a broad range of demographic factors,” measured “student engagement through their Web interactions, how often they look at textbooks and whether they respond to feedback from instructors, all in addition to their performance on coursework.” The description of these measurements raises multiple questions about data assemblages, including the degree to which faculty and students understood that their individual universities had opted into such a study, what recommendations might be reached based on the research questions asked, and what kinds of experts were involved in shaping and assessing those research questions.
This chapter is based on my experiences at Georgia Southern University and its first-year writing program, which teaches over 3,000 entering students for 6 hours over a two-semester sequence. The sheer number of documents required for an effective assessment raises a set of challenges that have existed as long as programs have assessed student writing (Haswell, 2001). Given contemporary online capabilities that can ease such logistical quagmires, writing programs might choose to contract with an online company for a database-management system that would offer an intuitive interface; support a local, context-specific assessment plan; allow faculty and students to easily enter multiple electronic examples of requested materials; encourage the creation of locally appropriate evaluative frameworks; facilitate document distribution to reviewers; provide the quantitative and qualitative data analysis options necessary for reports; offer long-term storage options; and, finally, limit the potential that the data collected could become a part of a larger data amalgamation—or at least place constraints on data sharing that fit with a collective position on appropriate sharing. The challenge, in the midst of shifting capabilities regarding data analysis, is to somehow know what questions to ask, what options to investigate, and how to make good long-term choices.
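To make that wish list concrete, the sketch below models the core records such a system would have to keep: student submissions, locally authored evaluative criteria, reviewer scores, and an explicit statement of the program’s data-sharing position. It is a minimal, hypothetical illustration in Python; the class and field names are my own assumptions and do not describe any existing platform.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Submission:
    student_id: str            # a local identifier assigned by the program, not a campus-wide ID
    course: str                # e.g., "ENGL 1101"
    term: str                  # e.g., "Fall 2013"
    artifact_paths: List[str]  # electronic copies of the requested materials

@dataclass
class Criterion:
    label: str                 # a locally authored evaluative category
    description: str

@dataclass
class Review:
    submission: Submission
    reviewer_id: str
    scores: Dict[str, int]     # criterion label -> score, for later quantitative reporting
    comments: str = ""         # space for qualitative commentary

@dataclass
class SharingPolicy:
    allow_external_aggregation: bool = False  # the program's collective position on data sharing
    retention_years: int = 5                  # long-term storage window

Even in so small a sketch, the last class matters most for this chapter: whether, for how long, and with whom the records may be shared is a design decision, not an afterthought.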
In this chapter, I explore some contemporary online platforms offered by different companies to suggest potential data concerns. Increasingly, online venues establish partnerships with companies like Turnitin, Pearson, or ETS, and these corporations may have access to student data in ways that contribute to a giant pool of essays—whether for automated response improvement, or for the ostensible prohibition of plagiarism, or for large-scale studies such as the one Fain (2012) described. Given the 2012 changes in FERPA designed to allow greater data sharing between educational institutions and given greater access for corporations outside the traditional academy, we may see striking changes in the “terms of service” agreements for future or existing resources. It remains unclear how much or what type of access is possible now or in the future. What is clear is perhaps disconcerting: Terms of service are subject to change, the government has changed FERPA to encourage larger data-sharing studies and mining, and we cannot predict future possibilities or dangers. Although these concerns are difficult to untangle, they are not new to writing program administrators (WPAs). Contemporary corporations and platforms require us to reconsider familiar questions of privacy, access to data, assessment participants, and how we will create assessments that are adequately complex and tailored to local context. These questions existed for those who established portfolio assessments in the era of ink and paper texts, and they will continue to be relevant in future considerations of possible approaches to assessment design.
ASSESSMENT AND SURVEILLANCE: A BRIEF CONTEXT
Early articles on writing assessment pointed out the challenges of storing documents in hard-copy form; new models of assessment must navigate different hazards, particularly those of data security and privacy/surveillance, which are challenging to anticipate because tools for aggregating data across databases continually improve (Nissenbaum, 2009) and because changes to FERPA have increased the incentives to expand data sharing. Assessment plans that rely on hard-copy documents often have a limited range of analyses available simply because of the sheer volume of texts and the time and cost of creating an electronic copy of each document. However, there may be some comfort in knowing that the writing program has greater control over the documents, if only because of these logistical constraints. If someone really wants to analyze student documents for a program that still collects only hard copies, that person would need to gain access to the actual documents. With online assessment, however, that control is lost.
As we know from a variety of voices in the field, and as reflected in the Conference on College Composition and Communication (2009) position statement on writing assessment,
assessments of written literacy should be designed and evaluated by well-informed current or future teachers of the students being assessed, for purposes clearly understood by all the participants; should elicit from student writers a variety of pieces, preferably over a substantial period of time; should encourage and reinforce good teaching practices; and should be solidly grounded in the latest research on language learning as well as accepted best assessment practices.
This statement attends to large-scale data-mining possibilities by emphasizing the focus on local contexts, or the “well informed current or future teachers of the students being assessed.” In Neal’s (2011) language, local programs should argue against assessment plans “that value data mining, surveillance, and centralized control of decision making function at the expense of teacher expertise and student agency in the writing process” (p. 10). For those in the midst of shaping a writing assessment plan, the CCCC position statement—and Neal’s specific additions—are up against the realities of the kind of data collection that Fain’s (2012) article suggests is much more feasible in newer online courseware integrated, as it often is, with assessment platforms. The current challenge becomes that of shaping a location-specific data collection that attends to local contexts (Broad, 2003; Huot, 2002; Neal, 2011; O’Neill, Moore, & Huot, 2009) while also attending to changes in assessment processes that involve large-scale data analysis, which may or may not reflect disciplinary values (Adler-Kassner & O’Neill, 2010) but may allow access to data should we choose particular online datacloud or data amalgamation options.
Tensions Related to Un/knowing the Scope of Data Amalgamation
While I understand that my desire for a one-page handout on assessment is a desire to know without knowing, to work quickly without having to spend time understanding the range of challenges and complexities in a field of study, I also understand that with diligence, a study of the rich research available on writing program assessment is within reach. If I hope for a one-page handout, it comes from a desire to find an easy-enough solution given competing demands for my time. Ironically, the opposite seems true when assessing various online platform options: When I try to sort out how data might be stored in an online platform, I encounter the promise of an easy fix, of a one-page handout, of an ideal rubric, and immediately become guarded and leery. On web sites marketing ePortfolio company services, the easy fix seems to exist. The temptation is to believe the sales pitch for what Shoshana Felman (1987) would call the “subject presumed to know” or, in this case, the platform presumed to know.
In Felman’s (1987) discussion of Freud, Lacan, psychoanalysis, and education, she suggested that the exchange of knowledge already includes a curious relation to ignorance and resistance. Working with the terms analyst and analysand, Felman argued that the analysand endows the analyst “with the authority of the one who possesses knowledge—knowledge of what is precisely lacking in the analysand’s own knowledge” while the analyst is in the dark, unknowing. The analyst knows a textual knowledge, “but such knowledge cannot be acquired (or possessed) once and for all: each case, each text, has its own specific singular symbolic functioning and requires a different interpretation” (p. 81). For Felman, psychoanalysis “uncovers the mirage inherent in the function of the subject presumed to know,” and this dynamic extends beyond the interplay of psychoanalysis: it is the “emotional dynamic of all discursive human interactions” (p. 84).
Although a turn to psychoanalysis may be a stretch for some readers, the dynamic of the one who knows and the one presumed to know helps to make sense of the inclination to seek a simple answer at a conference presentation for a complicated problem. If Neal (2011) suggests that he can’t play the role of the “subject presumed to know”—evoking a somewhat amused and also shamed response from this reader—the claim to have the easy fix raises to consciousness an understanding that easy fixes are a familiar and projected desire. In a climate of suspicion, when education is seen as needing to prove its worth through a range of increasing assessment practices (Adler-Kassner & O’Neill, 2010), a WPA may worry about what cannot be known at the same time that s/he inevitably encounters the pressure to perform the role of one who knows.
In the best-case scenario, a talented group of writing faculty have the time and resources to work with the data generated by the assessment plan and are able to contribute not only at the local level but also to nation-wide data-sharing research projects. In other words, although large data-mining studies might result in troubling research questions and analyses, the converse is also possible. If writing experts participate in the crafting of the questions that shape large data analysis, and if large-scale data assessment finds a way to adequately address local contexts, the results might be extraordinarily useful. However, when these large-scale data assemblages begin to articulate, as Fain (2012) suggested, “which academic programs are best suited for a 25-year-old male Latino with strengths in mathematics,” we have to situate our approaches through contemporary ways of thinking about assessment, surveillance, and privacy, particularly the concept of surveillant or data assemblages (Haggerty & Ericson, 2000; Lyon, 2003; Nissenbaum, 2009). Although we might not imagine corporations such as Pearson or Google capable of fine-grained distinctions drawn from large-scale data, increasingly, data amalgamations allow for what Google terms “fuller portraits” of individuals. As David Matheson (2009) suggested, tracking will increasingly allow for the “names, behavior patterns, locations, addresses, social connections, political affiliations, medical conditions, and financial statuses of specific persons” (p. 332). And these tracking capabilities enable the kind of social sorting that Shoshana Magnet (2009) described regarding the targeting of “Canadian residents born in Iran, Iraq, Libya, Sudan, and Syria” (p. 30) by U.S. border patrols. The increasing potential for data amalgamation requires that we attend to positions regarding privacy and individual dignity (Matheson, 2009; Millar, 2009). Namely, we need to participate in research that lessens the possibility of damaging discrimination based on social sorting. In what follows, I suggest a framework for assessment in terms of surveillance and privacy concerns as a context for discussing three online platforms.
ASSESSMENT, SURVEILLANCE, AND CONTEMPORARY PRIVACY CHALLENGES
The terms assessment and surveillance are loaded with cultural connotations. In Privacy in Context, Helen Nissenbaum (2009) used monitoring instead, because surveillance carries political associations and is done “by those in authority. . . for purposes of behavior modification or social control” (p. 2). Surveillance is perhaps the most loaded in terms of associations with nation-states or authorities, although each of these terms (monitoring, assessment, surveillance) is overdetermined. In interdisciplinary work in surveillance studies, scholars point out the ambiguity in the term. Surveillance, David Lyon (2003) suggested, “can be located on a continuum from care to control,” and as a result, “some element of care and some element of control are nearly always present, making the process inherently ambiguous” (p. 5). Although Lyon’s 2003 text focused on the aftermath of 9/11, his emphasis later shifted to a larger climate of suspicion (Lyon, 2006), which may also be an accurate description of current academic rhetoric regarding accountability (Adler-Kassner & O’Neill, 2010). As Lyon (2003) suggested: “in some contexts, surveillance may ensure that certain groups or individuals are not discriminated against” but in other contexts, “intensified surveillance may have socially negative effects which mean that proscription takes precedence over protection, social control over mutual care” (p. 17).
The concern in our discipline has been to somehow extend this notion of care and to remind readers of the power at stake in assessment that shifts from care to social control. Allan Hanson (1994), in Testing Testing, argued that tests are “mechanisms for defining or producing the concept of the person in contemporary society and that they maintain the person under surveillance and domination” (p. 3). Brian Huot and Michael Williamson (1997) suggested that “assessment often functions as. . . . a way for administrators or other powerful stakeholders to assume and wield their power and influence” (p. 49). Similarly, O’Neill (1998) reminded readers that “through the portfolio, the writer is positioned in a grid of power relations where decisions about the writer—such as placement or proficiency—are made” (p. 155), a point Huot (2002) repeated when he argued that “assessment has been used as an interested social mechanism for reinscribing current power relations and class systems” (p. 7). Although this emphasis on power relations has been crucial, the risks for social control with current surveillance technologies raise the stakes even further. Indeed, Neal (2011), focusing more on digital platforms, argued for the need to pay attention to “surveillance technologies” (p. 66) and suggested the dangers of “mechanistic instruments” that rely on data mining as a means to “centralized control of decision making” (p. 10), echoing concerns raised by Anne Herrington and Charles Moran (2009). Writing assessment scholars have long attended to the power dynamics at play, attempting to limit a slide towards social control on the surveillance continuum.
Various scholars have suggested approaches to writing assessment that lean more towards mutual care. For example, Huot (2002), describing the differences between writing assessment practices, emphasized the “importance of context and the individual in constructing acceptable written communication” (pp. 104–105), which led to his suggestion that assessment practice be site-based, locally controlled, context-sensitive, rhetorically based, and accessible. While a locally controlled assessment might still fail to attend to the care end of the continuum, this framework—and recent examples of approaches that multiple programs take to attempt a more organic assessment (Broad, 2003; Broad et al., 2009)—suggest the desire to ease assessment away from the social control end.
In addition to holding a mental model of surveillance that maintains an awareness of this balance between social control and care, a contemporary mental model of surveillance has moved away from Michel Foucault’s familiar rendition of the panopticon and relies, instead, on an image of a “surveillant assemblage” (Haggerty & Ericson, 2000), a mental model that challenges us to consider how we contribute to a student assemblage. According to Kevin Haggerty and Richard Ericson, Foucault “proposed that panoptic surveillance targeted the soul, disciplining the masses into a form of self-monitoring that was in harmony with the requirements of the developing factory system” (p. 615). Using Zygmunt Bauman’s (1991) contention that a shift in surveillance has occurred, one that increasingly focuses on consumer framings, Haggerty and Ericson argued that “surveillance is used to construct and monitor consumption patterns,” a tracking that lacks “the normalized soul training which is so characteristic of panopticism” (p. 615). Haggerty and Ericson drew on Gilles Deleuze and Felix Guattari’s (1987) A Thousand Plateaus to suggest that in contemporary society, both “state and non-state institutions are involved in massive efforts to monitor different populations” (pp. 605–606). This kind of shift, Haggerty and Ericson argued, shapes a surveillance that:
is associated with attempts to limit access to places and information, or to allow for the production of consumer profiles through the ex post facto reconstructions of a person’s behaviour, habits and actions. In those situations where individuals monitor their behaviour in light of the thresholds established by such surveillance systems, they are often involved in efforts to maintain or augment various social perks such as preferential credit ratings, computer services, or rapid movement through customs. (p. 615)
These “preferential credit ratings, computer services, or rapid movement through customs” may be shaped in part by student performance in college, and perhaps in the future by an academic rating based on an individual’s academic data—similar, perhaps, to a credit rating. Lyon (2003) and Nissenbaum (2009) both suggested that “increased, automated, algorithmic surveillance” (Lyon, p. 81) will lead to further sorting, segmentation, and slotting of individuals based on mined, gathered, and stored data.
To the degree that a larger portrait of a student becomes increasingly possible, the concept of a surveillant assemblage may be helpful as a way of interrogating data-mining strategies in local and national writing programs. If one brings this concept of the surveillant assemblage to such decisions, with the dangers and hazards of social sorting based on increasing abilities to aggregate data by “omnibus information providers” (p. 184), the challenge becomes creating a framework for “the design of a fair system of decision making” (Nissenbaum, 2009)—one that
balances the interests of commercial actors with the interests of individuals, giving consideration to such features as assuring non-arbitrary grounds for exclusion, transparency of principles determining inclusion and exclusion, and the relevance of decision criteria to particular decisions. (pp. 188–189)
The concern over data assemblages raises familiar questions about contemporary privacy framings. According to Daniel Solove (2010),
Privacy is a sweeping concept, encompassing (among other things) freedom of thought, control over one's body, solitude in one's home, control over personal information, freedom from surveillance, protection of one's reputation, and protection from searches and interrogations. (p. 1)
While the scope is daunting, Nissenbaum (2009) argued that “what people care most about [in terms of privacy] is not simply restricting the flow of information but ensuring that it flows appropriately” (p. xx). Nissenbaum’s text offers readers a range of example responses to privacy concerns to suggest that contextual integrity helps shape decisions about privacy. In other words, in the midst of venues that facilitate social networks, and in the midst of increasing technology capabilities by corporations and nation-states, conceptions of privacy are changing shape rapidly, and individuals draw on a range of sometimes unconscious rubrics to determine whether they will opt in to systems that require a degree of personal data sharing. Nissenbaum argued that privacy is a complicated concept, and we tend to hold a variety of opinions about appropriate relations to privacy, which requires a framework of questions to assess appropriate privacy moves in particular situations. She suggested the following framework when considering a specific practice:
- Describe the new practice in terms of information flows.
- Identify the prevailing context. Establish context at a familiar level of generality (e.g., “health care”) and identify potential impacts from contexts nested within it, such as “teaching hospital.”
- Identify information subjects, senders, and recipients.
- Identify transmission principles.
- Locate applicable entrenched informational norms and identify significant points of departure.
- Prima facie assessment: There may be various ways a system or practice defies entrenched norms. One common source is a discrepancy in one or more of the key parameters. Another is that the existing normative structure for the context in question might be “incomplete” in relation to the activities in question.
- Evaluation I: Consider moral and political factors affected by the practice in question. What might be the harms, the threats to autonomy and freedom? What might be the effects on power structures, implications for justice, fairness, equality, social hierarchy, democracy, and so on? In some instances the results may overwhelmingly favor either accepting or rejecting the system or practice under study; in most of the controversial cases an array of factors emerge requiring further consideration.
- Evaluation II: Ask how the system or practices directly impinge on values, goals, and ends of the context. In addition, consider the meaning or significance of moral and political factors in light of contextual values, ends, purposes, and goals. In other words, what do harms, or threats to autonomy and freedom, or perturbations in power structures and justice mean in relation to this context?
- On the basis of these findings, contextual integrity recommends in favor of or against systems or practices under study. (In rare circumstances, there might be cases that are sustained in spite of these findings, accepting resulting threats to the continuing existence of the context itself as a viable social unit.) (pp. 162–163)
Although Nissenbaum’s framework begins in familiar territory (i.e., evaluating the rhetorical situation), her two evaluative prompts (numbers 7 and 8) suggest challenging questions for which there may not be answers. However, she offered several case studies to suggest a range of evaluative approaches. One such case was that of “a conscientious high school administrator deciding on features for a new computerized student record system” (p. 149). The administrator made a series of decisions—from “what information to store in the system, for how long, to whom access should be granted, and under what terms” (p. 149)—along with decisions related to tools for compiling, aggregating, and mining data, and the ability to cluster students based on data profiles. Nissenbaum argued that privacy decisions of this kind raise concerns about “liberty, autonomy, fairness, and harm” along with considerations of “efficiency and cost effectiveness, potential income for schools willing to sell information about students for marketing and employment recruiting purposes, potential to increase security in high-risk neighborhoods by keeping closer track of social and religious groups within schools, past associations, and so on” (p. 149).
Although Nissenbaum’s (2009) suggestion regarding the sale of information and the tracking of social and religious group affiliations disturbs me enormously, these issues often frame contemporary data assemblages. It would be naïve to participate in conversations about data sharing without understanding that these are issues under consideration. However, we, as a field, and we, as individual participants, need to develop positions on what we determine is ethically appropriate given the norms associated with surveillant assemblages; Nissenbaum’s framings may help us with that task.
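For a writing program weighing a specific platform, it may help to see how the framework’s prompts could be recorded as a working document. The sketch below, written in Python, is my own hypothetical worksheet, not part of Nissenbaum’s text; the class name, fields, and example entries are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ContextualIntegrityWorksheet:
    practice: str                        # the new practice under review
    prevailing_context: str              # the familiar context and any nested contexts
    information_flows: List[str] = field(default_factory=list)
    subjects_senders_recipients: List[str] = field(default_factory=list)
    transmission_principles: List[str] = field(default_factory=list)
    departures_from_entrenched_norms: List[str] = field(default_factory=list)
    evaluation_i_moral_political: List[str] = field(default_factory=list)     # harms, autonomy, power, justice
    evaluation_ii_contextual_values: List[str] = field(default_factory=list)  # local values, goals, and ends
    recommendation: str = "undecided"    # "favor," "against," or "undecided"

# Example use: recording one departure from entrenched norms for a hypothetical vendor contract.
worksheet = ContextualIntegrityWorksheet(
    practice="contracting with a vendor ePortfolio platform",
    prevailing_context="first-year writing program assessment",
)
worksheet.departures_from_entrenched_norms.append(
    "student essays may be shared with the vendor's corporate partners"
)

The point of such a worksheet is not automation but documentation: a committee that fills in every field has, at minimum, named the information flows and departures from norms before signing a contract.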
SPECIFIC OPTIONS
In what follows, I discuss three categories of options for a local program wanting to find an online panacea that would facilitate program assessment. These three categories are based on local and national queries regarding potential online resources, and I draw on Nissenbaum (2009) and field research to suggest the issues at stake with each option. Those three options are: first, Google Application packages offered to universities; second, companies that specialize in ePortfolios for assessment (and, integrated with this, local campus courseware); and third, a do-it-yourself (DIY) option. The first option, Google Apps packaging for universities, namely Google Drive and Google Sites, seems to be a solution for some programs because universities can build on something that already exists at a local campus. Depending on the local context, Google Apps can be used to set up and facilitate the collection and evaluation of a program’s documents and student documents written for course work. The second option requires that a university purchase access to an online platform designed with assessment purposes in mind (e.g., LiveText, Chalk and Wire, and, to a lesser extent, MyCompLab). This option also allows students to submit their writing and faculty to add documents. Evaluation is easier with this framework because much of the logistics of sending documents to reviewers is handled by the platform. In addition, these tools can be integrated with local campus courseware (e.g., Moodle, Sakai, Blackboard, Desire2Learn). Finally, a third option is a do-it-yourself one—for example, building an interface drawing on a resource like Drupal—that would function similarly to the corporate software; however, the local school would maintain more control over the data. Each category brings its own data storage considerations, its own benefits and risks.
Google Applications
At Georgia Southern University, Google Drive in combination with Google Sites was recommended as a viable solution to our assessment challenge. A state-wide change in online courseware reduced the current program’s viability for long-range assessment, and resources were dwindling, so this option was suggested repeatedly. Other departments on campus had used Google Drive for assessment (although they were much smaller programs), and results from other universities were brought up as viable examples of instituting ePortfolio options through Google Sites and Google Drive. Gail Ring (2011), for example, offered advice at an Association of American Colleges and Universities conference on how to institute “Large Scale ePortfolio Implementation on a Slim Budget” at Clemson, relying on Google Sites and Google Drive.
The Google Apps package for universities raises important issues regarding the security-based policy decisions universities are making and the complications of data-storage choices. In May 2010, estimates for the number of people using Google Apps for education were about 8 million, or 60% of schools in the U.S. (Weintraub, 2010); that number had risen to about 15 million by July 2011 (Kovacs, 2011). Advertising its services, Google emphasizes cost, security, and student familiarity with Gmail as determining factors. In an article that explored the decisions behind the trend (Murphy, 2007), Adrian Sannier, who served as the University Technology Officer at Arizona State University in 2007, compared his resources to Google’s by drawing on military metaphors: “I look at my army—I certainly have a formidable force, they’re sharp characters, but. . . compared to Google’s army? I have a police force, and they have the United States Marines.” The trade-off, as Andy Guess (2007) reminded his Inside Higher Ed audience, is between security and control of data: “As they [different universities] weigh the benefits of third-party e-mail services, they must also consider the consequences of moving students' and faculty members' personal data to off-campus servers over which they have no control.” This question of who controls the data is a significant one, as significant as the concerns about data security and budget shortfalls.
In addition, lurking always in the background is the worry over how a company’s policies may change in the future. For example, in January 2012, Google announced changes in its approach to privacy policies, changes that “combin[ed] data across its Web sites to stitch together a fuller portrait of users” (Kang, 2012). According to Cecilia Kang, Google argued that the changes would “simplify the company’s privacy policy—a move that regulators encouraged,” but Kang also noted that the move raised concerns regarding privacy. Google’s changes coincided with policy changes in FERPA regulations that allow for larger data-sharing capabilities for institutions and corporations. For universities that have adopted Google Apps, this news regarding data collection may renew concerns about the exchange of data for collective security.
Currently, student data is collected by Google, and although ads—which draw upon user data—aren’t running on the Gmail interface at our university, a “fuller portrait” of each student, faculty member, and staff member can now be assembled by Google, facilitated even further by faculty, staff, and students who link external Gmail accounts to their university Gmail accounts. In the past 5 years, when universities chose this route for email, Google Drive, Google Groups, Google Sites, and Google+, they might not have anticipated that signing students up for Gmail “could affect the experience on seemingly unrelated Web sites such as YouTube” (Kang, 2012) or that Google would tailor search results to the data about each individual user; at this point, however, a “fuller portrait” is part of the exchange of security for data. Although the previous courseware, WebCT (which was used until January 2013), did not interface effectively with Google Apps, the University System of Georgia switched to Desire2Learn in 2013, which affords better integration of Google Apps with the courseware. This has implications for aggregate data analysis and for “fuller portraits” or surveillant assemblages.
Given this context, should a local program draw on Google Apps? Nissenbaum (2009) asked for two evaluations:
Evaluation I: Consider moral and political factors affected by the practice in question. . . . Evaluation II: Ask how the system or practices directly impinge on values, goals, and ends of the context. In addition, consider the meaning or significance of moral and political factors in light of contextual values, ends, purposes, and goals. (pp. 162–163)
On many campuses, these evaluations have already been addressed by asking: Do we trade data for security? In a climate of tightening budgets and limited resources, our university made that trade without offering lasting alternatives for faculty or students who would prefer not to participate in Google’s “fuller-portrait” agenda. However, if the question had been put to faculty and students in terms of financial incentives—that is, should we stay with our current system and limit data access, but raise the technology fee and risk various kinds of email hacking scenarios, or save money that can be used for other technology needs—faculty and students might have agreed to the trade. What is necessary with the recent fuller-portrait announcement is some way of explaining how Google is currently using the data it accrues and, in an ideal world, a way for students to opt out of surveillance. From a perspective of data assemblages, or surveillant assemblages, any decision to draw on Google Apps participates in the creation of fuller data portraits.
In January 2012, in response to Google’s announcements regarding privacy policy changes that accompanied its changes in the rankings of search results (shaped by the knowledge from the fuller portrait), other social media companies created a “Don't Be Evil” bookmarklet or browser plug-in that “alters Google search results to make them more like they were before” (“Don’t be Evil,” 2012). The campaign suggested the challenges at stake in monitoring and responding to companies of Google’s size and reach. Who amongst us knows enough about existing strategies for algorithm design to create plug-ins that would address the critiques we have of Google’s reach? And for how long would a university need to keep creating responsive plug-ins that uphold privacy norms?
In practical terms, Google Apps at this point is not robust enough (without substantial additional local tweaking) to be a viable option for a large-scale assessment plan, although it does offer resources that might work for smaller programs (Barrett, 2009). It is possible to imagine that Google will make this kind of resource development more accessible, and then a local WPA would face the complicated decision of whether to draw on a viable and feasible resource, thus trading information for ease of use.
E-Portfolio Companies
At Georgia Southern University, some smaller departments use resources like LiveText.com as an ePortfolio option. Because of the size of the first-year writing program and the accompanying costs, it quickly became apparent that we would not be able to afford one of these programs unless students were charged a fee. MyCompLab was the only resource that explicitly set up its marketing with writing programs, individual writing teachers, and students in mind. Some first-year writing programs at other universities and community colleges require that all their students purchase the MyCompLab option, and it is possible for the administrators of the writing program to collaborate with Pearson representatives to gather data for large-scale assessment purposes. By contrast, LiveText and Chalk and Wire are clearly set up to establish large contracts with universities that extend beyond first-year writing, and university administrators negotiate terms of data storage and usage. These programs not only provide the ePortfolio platform to students but also provide administrators in charge of assessment the resources they need to evaluate multiple courses across the curriculum for multiple years. With their services, not only can programs and departments be tracked, but a student’s progress can be traced in fuller detail as data are aggregated with the university’s student information system, evoking images of another type of “fuller portrait”—potentially helpful and also potentially problematic.
A look at LiveText may highlight some of the difficulties of deciding on the value of certain online options in terms of surveillant assemblages. LiveText will, with the approval of administrators at individual campuses, share information with its partners, which include Turnitin and ETS, as well as Blackboard, Moodle, and various companies that provide student information systems. However, the shifting legal landscape affords the possibility of changes—an important point for writing program administrators, who are charged with creating assessments and using resources whose evolving privacy policies are difficult to assess. At an institution like ours, a WPA would not have the option of selecting from a range of ePortfolio services. Instead, that decision would be made at a state-wide level, and any negotiations vis-à-vis student data would be made there.
Our university transitioned from WebCT to Desire2Learn in summer 2012. Desire2Learn, similar to LiveText and Chalk and Wire, partners with other companies, including Elsevier, Pearson, and McGraw Hill, each of which is competing for lucrative testing contracts. Our option, now, would be to select Desire2Learn because the company offers a similar ePortfolio resource for assessment purposes while also allowing much easier migration between the courseware and Google Apps; the prospect of an even “fuller portrait” suggests the need for a clearer understanding of the current and future implications of such partnerships. Questions we must address include: How might the student file—this surveillant assemblage—be deployed in the era of social media venues? How might these various partners shape our agendas based on students’ fuller portraits? What policies need to be developed, given these rapidly shifting terrains?
At Chalk and Wire, the partnerships seem more limited but still include courseware interfaces like Moodle and Blackboard and a school’s student information system. Although Chalk and Wire is currently an independent company, there is no guarantee that it will not be bought by a corporation like Pearson at some point in the future. A diligent administrator cannot control those realities, and the federal government does not regulate these companies. At this point, a decision to rely on Chalk and Wire is also a decision to allow data trading. Both Chalk and Wire and LiveText indicate that their services can be integrated with the local student information system, but neither explains how data trade happens with companies like Turnitin. Where does the data travel? Who has access to that data outside of the university? Can an individual student opt out when a university has agreed to the Turnitin partnership, for example (Cochrane, 2006; Glod, 2007; Zimmerman, 2008)? How is ETS utilizing data? Can faculty opt out of ETS sharing? Although each of these organizations has policies on data sharing and can boast a certain level of data security, it remains unclear how data will be shared and how policies will shift with the recent changes in FERPA regulations. Who has access to the data within the university is also a key issue, as local program administrators may not be able to control the kinds of analyses conducted on or with the student data.
To return to Nissenbaum’s (2009) evaluative criteria: first, “consider moral and political factors affected by the practice in question”; second, “ask how the system or practices directly impinge on values, goals, and ends of the context.” If there are adequate resources available at a local site to work with the data collected in innovative ways, and if there are adequate systemic framings in place at a university to assure that interpretations of student writing are the result of well-reasoned assessment frameworks—ones that allow for organic, localized, context-specific, and faculty-driven approaches (Adler-Kassner & O’Neill, 2010; Broad et al., 2009; Harrington & Weeden, 2009)—then a program like Chalk and Wire, which seems to have more capacity to shape a localized framing, may be feasible; however, a range of questions about data management remain.
Companies compete with algorithms and profit from their ability to acquire and manipulate data, and policies at local and state levels regarding contracts with vendors are not always easily available. Even when they are, finding alternatives to university-system decisions is difficult at best. In addition, resources change rapidly, and new kinds of data aggregation become feasible, shifting companies’ policies and terms of service. Similar to the Google Apps option, administrators, at this point, may be faced with the dilemma of managing data storage while also attending to a myriad of other concerns. In terms of the trade—data sharing for security, for access to analysis, for increased facilitation of data entry—institutions may welcome these options, and I would argue that we, as a community, should provide more guidance on how to facilitate options for individual faculty and individual students who may want to opt out of various present and future data-sharing decisions.
Do-it-yourself Option
A do-it-yourself (DIY) option for assessment seems increasingly seductive, given corporate spaces in which a shrinking number of companies collect an expanding range of data and can not only amalgamate that data but also create algorithms to search it and build fuller portraits of individuals. Many of our conferences rely on open-source, home-grown online submission systems, where the interface facilitates the process from submissions to evaluations, scheduling, and reports regarding numbers of submissions, acceptance rates, and so on. These kinds of open-source programs could be adapted to assessment situations, but the challenge comes in the number of students and documents involved in the submission process, the type of documents, and the secure storage of the data once collected.
Resources such as the Public Knowledge Project share open-conference software, open-manuscript software, and open-journal software. Assessment of writing programs might easily utilize a similar resource, but this approach would require that programmers participate, and the number of students involved in the process along with the number of texts may be too significant for a writing program to facilitate. Add in the continuing concern about security at the local institution, and these kinds of options are discouraged early in the process. As an article on EMMA and data collection at the University of Georgia suggested, “any program embarking on such an endeavor, however, needs to secure up front not only adequate server space and a person with the necessary expertise to manage the hardware but also a plan for maintaining and adding to the database” (Desmet, Griffin, Miller, Balthazor, & Cummings, 2009, p. 161).
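To make this DIY direction more concrete, the sketch below uses only Python’s standard library to stand up a locally hosted submission store—one in which the data stays on a program-controlled server. It is an illustrative assumption of what a minimal schema might look like, not a description of EMMA, the Public Knowledge Project tools, or any existing system; every table and column name is a placeholder of my own.

import sqlite3

# Connect to a database file stored on a local, program-controlled server.
conn = sqlite3.connect("writing_program_assessment.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS students (
    id INTEGER PRIMARY KEY,
    local_code TEXT UNIQUE          -- a program-assigned code, not a campus-wide identifier
);
CREATE TABLE IF NOT EXISTS artifacts (
    id INTEGER PRIMARY KEY,
    student_id INTEGER REFERENCES students(id),
    course TEXT,
    term TEXT,
    body TEXT                       -- the submitted document itself
);
CREATE TABLE IF NOT EXISTS reviews (
    id INTEGER PRIMARY KEY,
    artifact_id INTEGER REFERENCES artifacts(id),
    reviewer TEXT,
    criterion TEXT,                 -- a locally authored evaluative category
    score INTEGER,
    comments TEXT
);
""")
conn.commit()
conn.close()

The sketch is trivial by design; the real costs named above—server space, programming expertise, and a plan for maintaining the database at the scale of thousands of students—are exactly what such a skeleton leaves out.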
However, if we, as a discipline, want to advocate for an ethics of care on the surveillance continuum, and if we want to protect student rights to their documents, there is much to be gained by finding a way to a national resource that pledges to provide this option as part of a nationally led research project. In such a project, the exchange of data for the use of the resource would assure the possibility of assessment grounded in discipline-specific values and principles; it would also allow a state university such as ours to employ a viable and useful program and to contribute data to a research venue that would afford the possibility of shaping the scope of writing assessment at the national level. In Nissenbaum’s (2009) evaluative framings, this trade of data has the most potential to open space for the discipline to speak in political venues, drawing on a rich data set. It allows for values, goals, and contexts to be articulated by people trained in the field. However, it also requires the most concerted, collective effort, and for this reason seems unlikely—at least for the near future.
CONCLUSIONS
It may seem that I am arguing that we abandon corporate-based datacloud options that offer valuable resources to programs faced with the difficult challenges of assessment—and I am, at least to some degree. However, I also recognize that the DIY option is not yet viable, and in the meantime, assessment efforts continue to grow. In addition, all of us negotiate a range of complicated privacy decisions on a regular basis: Will we search for information using a Google search? Or sign up for a Facebook account and “friend” people in the field? Or create an account on Amazon, and allow the tracking there to recommend texts we might not otherwise find? Or create an account on Netflix, and draw on the wealth of their calculations to discover new movies that we enjoy? We each decide how much to participate as consumers in this culture, and our decisions allow us to be sorted based on income, education, gender, age, and a host of other characteristics.
Without viable DIY options, we need better strategies for addressing and communicating the potential implications of various platforms, information about how corporations are shifting policies, and ways to insist on options that allow students and teachers to limit the range of data amalgamation and thereby limit surveillance (Howe & Nissenbaum, 2009). We need a way to approach these collective decisions aware of, and willing to discuss, the benefits and tradeoffs.
If you asked me to articulate my positions on privacy, I would agree with Nissenbaum—privacy depends on context; if pressed, I would admit that a savvy administrator lives with the reality that decisions about students inevitably involve this uncanny and uneasy relationship with individual files. We participate in systems of surveillance, and we must find our way to strategically press the surveillance system toward the care end of the continuum. This means that we may choose to work within something like Desire2Learn even when the decision has already been made by others, and we may find viable strategies to advocate for our own and our students’ privacy in those spaces. We should find viable ways to advocate for a nuanced and complicated relation to a set of privacy decisions that inevitably involve compromises. We must find ways as a community to know more about the scope of those compromises, so that we can more effectively create viable, contemporary, technologically aware understandings of privacy. Reluctant as I am to jump into big data conversations, I would be more likely to make that leap if we, as a discipline, were able to offer an inexpensive resource for writing programs—a central platform that institutions could use instead of these for-profit corporate spaces, a platform and data-collection resource that relied on a board of advisors in our discipline who would guide data collection, analysis, policies, and a program for sustainability. If we were to take this route, we could begin to gather and assess data that would allow us to more effectively shape national conversations.
REFERENCES
Ball, Cheryl E., & Kalmbach, Jim. (Eds.). (2010). RAW: (Reading and writing) new media. Cresskill, NJ: Hampton Press.
Barrett, Helen C. (2009). ePortfolio mash up with GoogleApps. Retrieved from http://electronicportfolios.com/google/
Bauman, Zygmunt. (1991). Intimations of postmodernity. New York: Routledge.
Broad, Bob (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan: Utah State University Press.
Broad, Bob; Adler-Kassner, Linda; Alford, Barry; Detweiler, Jane; Estrem, Heidi; Harrington, Susanmarie; McBride, Maureen; Stalions, Eric; & Weeden, Scott. (2009). Organic writing assessment: Dynamic criteria mapping in action. Logan: Utah State University Press.
Cochrane, Mary. (2006, October 16). Does plagiarism-detection service violate student privacy? UB Reporter. Retrieved from http://www.buffalo.edu/ubreporter/archives/vol38/vol38n7/articles/FSEC.html
Conference on College Composition and Communication (2009). Writing assessment: A position statement. Retrieved from http://www.ncte.org/cccc/resources/positions/writingassessment
Deleuze, Gilles, & Guattari, Felix. (1987). A thousand plateaus: Capitalism and schizophrenia. (Brian Massumi, Trans.). Minneapolis: University of Minnesota Press.
Desmet, Christy; Griffin, June; Miller, Deborah C.; Balthazor, Ron; & Cummings, Robert. (2009). Re-visioning revision with eportfolios in the University of Georgia first-year composition program. In Kathleen B. Yancey (Ed.), Electronic portfolios 2.0: Emergent research on implementation and impact (pp. 155–163). Sterling, VA: Stylus.
“Don’t Be Evil” tool alters new Google search results. (2012, January 23). Los Angeles Times. Retrieved from http://latimesblogs.latimes.com/technology/2012/01/dont-be-evil-google-search.html
Fain, Paul. (2012, February 1). Using big data to predict online student success. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2012/02/01/using-big-data-predict-online-student-success
Felman, Shoshana. (1987). Jacques Lacan and the adventure of insight: Psychoanalysis in contemporary culture. Cambridge, MA: Harvard University Press.
Glod, Maria. (2007, March 29). McLean students sue anti-cheating service. The Washington Post. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2007/03/28/AR2007032802038.html
Guess, Andy. (2007, November 27). When e-mail is outsourced. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2007/11/27/email
Haggerty, Kevin D., & Ericson, Richard V. (2000). The surveillant assemblage. British Journal of Sociology, 51 (4), 605–622.
Hanson, Allan F. (1994). Testing testing: Social consequences of the examined life. Berkeley: University of California Press.
Harrington, Susanmarie, & Weeden, Scott. (2009). Assessment changes for the long haul: Dynamic criteria mapping at Indiana University Purdue University Indianapolis. In Bob Broad et al., Organic writing assessment: Dynamic criteria mapping in action (pp. 75–118). Logan: Utah State University Press.
Haswell, Richard. (2001). Beyond outcomes: Assessment and instruction within a university writing program. Santa Barbara, CA: Praeger.
Herrington, Anne, & Moran, Charles. (2009). Writing, assessment, and new technologies. In Marie C. Paretti & Katrina M. Powell (Eds.), Assessment of writing (Assessment in the disciplines, Vol. 4, pp. 159–177). Tallahassee, FL: Association for Institutional Research.
Howe, Daniel C., & Nissenbaum, Helen. (2009). TrackMeNot: Resisting surveillance in web search. In Ian Kerr, Carole Lucock, & Valerie Steeves (Eds.), Lessons from the identity trail: Anonymity, privacy, and identity in a networked society (pp. 417–436). New York: Oxford University Press.
Huot, Brian. (2002). (Re)Articulating writing assessment for teaching and learning. Logan: Utah State University Press.
Huot, Brian, & O’Neill, Peggy. (Eds.) (2009). Assessing writing: A critical sourcebook. Boston: Bedford/St. Martin’s.
Huot, Brian, & Williamson, Michael M. (1997). Rethinking portfolios for evaluating writing: Issues of assessment and power. In Kathleen Blake Yancey (Ed.), Situating portfolios: Four perspectives (pp. 43–56). Logan: Utah State University Press.
Kang, Cecilia. (2012, January 26). Google announces privacy changes across products; users can’t opt out. The Washington Post. Retrieved from http://www.washingtonpost.com/business/economy/google-tracks-consumers-across-products-users-cant-opt-out/2012/01/24/gIQArgJHOQ_story.html
Kovacs, Andrew. (2011, July 8). Which educational organizations are using Google Apps for education? Quora. Retrieved from http://www.quora.com/Which-educational-organizations-are-using-Google-Apps-for-Education
Lyon, David. (2003). Surveillance after September 11. Malden, MA: Blackwell Press.
Lyon, David. (Ed.). (2006). Theorizing surveillance: The panopticon and beyond. Devon, UK: Willan Press.
Magnet, Shoshana. (2009). Using biometrics to re-visualize the Canada–US border. In Ian Kerr, Carole Lucock, & Valerie Steeves (Eds.), Lessons from the identity trail: Anonymity, privacy, and identity in a networked society (pp. 359–376). New York: Oxford University Press.
Matheson, David. (2009). Dignity and selective self-presentation. In Ian Kerr, Carole Lucock, & Valerie Steeves (Eds.), Lessons from the identity trail: Anonymity, privacy, and identity in a networked society (pp. 319–334). New York: Oxford University Press.
Millar, Jason. (2009). Core privacy: A problem for predictive data mining. In Ian Kerr, Carole Lucock, & Valerie Steeves (Eds.), Lessons from the identity trail: Anonymity, privacy, and identity in a networked society (pp. 103–119). New York: Oxford University Press.
Murphy, Chris. (2007, September 27). Using Google's fix-it-as-we-go beta approach—for ERP. Information Week. Retrieved from http://www.informationweek.com/global-cio/interviews/using-googles-fix-it-as-we-go-beta-appro/229215109
Neal, Michael R. (2011). Writing assessment and the revolution in digital texts and technologies. New York: Teachers College Press.
Nissenbaum, Helen. (2009). Privacy in context. Palo Alto, CA: Stanford Law Books.
O’Neill, Peggy. (1998). Writing assessment and the disciplinarity of composition. (Unpublished doctoral dissertation). University of Louisville, Louisville, KY.
O’Neill, Peggy; Moore, Cindy; & Huot, Brian. (2009). A guide to college writing assessment. Logan: Utah State University Press.
Ring, Gail. (2011, January 29). Large-scale ePortfolio implementation on a slim budget. Presentation at the EPortfolio Forum at the Association of American Colleges and Universities, San Francisco, CA. Retrieved from http://www.aacu.org/meetings/annualmeeting/AM11/eportfolioforum.cfm
Solove, Daniel J. (2010). Understanding privacy. Cambridge, MA: Harvard University Press.
Weintraub, Seth. (2010, May 7). 8 million students on Google Apps. CNN Money. Retrieved from http://tech.fortune.cnn.com/2010/05/07/half-of-us-college-students-now-use-google-apps/
Whithaus, Carl. (2005). Teaching and evaluating writing in the age of computers and high-stakes testing. Mahwah, NJ: Lawrence Erlbaum.
Zimmerman, Traci A. (2008, September 17). McLean students file suit against Turnitin.com: Useful tool or instrument of tyranny? Presentation at the Conference on College Composition and Communication Developments. Retrieved from http://www.ncte.org/cccc/committees/ip/2007developments/mclean