How Faculty Attitudes and Expectations toward Student Nationality Affect Writing Assessment

Earlier research on assessment suggests that even when Native English Speaker (NES) and Non-Native English Speaker (NNES) writers make similar errors, faculty tend to assess the NNES writers more harshly. Studies indicate that evaluators may be particularly severe when grading NNES writers holistically. To provide more recent data on how faculty perceive student writers based on their nationalities, researchers at two medium-sized Midwestern universities surveyed and interviewed faculty to determine whether such discrepancies continue to exist between assessments of international and American writers, to identify what preconceptions faculty may hold regarding international writers, and to explore how these notions may affect their assessment of such writers. Results indicate that while faculty continue to rate international writers lower when scoring analytically, they consistently evaluate those same writers higher when scoring holistically.

Crusan (2010) has argued that “All teachers, consciously or unconsciously, hold biases about virtually every aspect of the workings of their classroom” (p. 89). In this context, bias is not a pejorative term, but rather an instructor’s “individual-agenda, or discipline-based preference” (p. 88). One instructor may favor group work while another prefers lecture; one discipline may tend to value individual creativity while another emphasizes collaboration. These biases extend to how different faculty members define good writing: a study by Crusan (2001), for example, revealed that medical faculty rank grammatical correctness as one of the most important features of good writing, whereas conciseness and clarity were paramount for business faculty (p. 92). (For other studies of discipline-based bias, see Brown, 1991; Mendelsohn & Cumming, 1987; Roberts & Cimasko, 2008; Santos, 1988; Song & Caruso, 1996; Weigle et al., 2003.)

While all student writers may find it challenging to adapt to the expectations of a particular discipline or instructor, Non-Native English Speaker (NNES) writers (that is, writers whose first language is not English) often struggle more than their native-speaker peers to understand and meet faculty expectations for good writing (Crusan, 2010, p. 91; see also Leki, 2006). NNES writers who struggle with grammatical correctness will not fare well if the faculty member tends toward what Crusan (2010) refers to as a left-brain assessor, one who “focus[es] more on mechanical aspects of testing and rating and emphasize[s] the logical continuity of thought and mechanical aspects of writing” (p. 91). NNES students with a good mastery of grammar may still struggle to meet teacher expectations if their notion of how to organize an argument varies substantially from their instructor’s. Connor (2002) notes that “the linear argument preferred by native English speakers may well represent what such speakers view as coherent, though speakers of other languages may disagree” (p. 497). A study by Kobayashi & Rinnert (1996) evaluated 465 readers with different cultural backgrounds and concluded that “culturally influenced rhetorical patterns affected assessment of EFL [English as a Foreign Language] students” (p. 397). An NNES writer from a culture that values an indirect approach, with the point coming at the end of his paper, may find his work downgraded by a professor who expects the thesis to appear early in the writing.
Faculty may reduce the negative impact their biases have on student grades and comprehension simply by making their preferences about what constitutes good writing explicit to students. Providing assessment criteria with an assignment, for example, offers a useful means for faculty to clarify any biases they have regarding mechanics and/or content (Crusan, 2010; Ferris & Hedgcock, 2005; Valdez-Pierce, 2000). Such a step benefits all students, not only NNES writers, by foregrounding the factors a faculty member will weight most heavily in assessment. Faculty can also take steps to consider the presence of cultural bias in their materials, that is, the extent to which a faculty member or a discipline privileges a particular way of learning or knowing as superior (Tyler, Stevens, & Uqdah, 2009).

But what about any hidden or unconscious biases a faculty member may hold? How might these affect assessment? As Clark (2010) notes, “Teachers view themselves as ‘good’ and ‘fair’ people, and they are skeptical about being biased,” but the millions of Implicit Association Tests, or IATs, administered since 2002 by psychologists for Project Implicit reveal not only that implicit biases are pervasive but also that people are unaware of them: “Ordinary people, including the researchers who direct this project, are found to harbor negative associations in relation to various social groups (i.e., implicit biases) even while honestly (the researchers believe) reporting that they regard themselves as lacking these biases” (“General Information,” 2008). It stands to reason that at least some faculty might possess what we, for lack of a better term, refer to as ethnolinguistic bias: the tendency of faculty to evaluate students’ ability with written language more or less favorably due to positive or negative bias triggered by markers such as nationality.

Does Ethnolinguistic Bias Exist?

Studies suggest that even when educators are cognizant of student diversity, they have often been slow to translate that knowledge into new teaching and assessment practices, if they do so at all. Clair (1995) identified three studies that revealed that awareness does not necessarily change practice or attitude:

Sleeter (1992) found that although many of the participating teachers [in a multicultural education program] perceived that they had learned much, there was little change in their attitudes and practice. Ahlquist (1992) noted that teacher attitudes and beliefs remained unchanged for the most part during a multicultural foundations course. . . . McDiarmid (1990) studied teachers’ attitudes toward ESL students both before and after a 3-day workshop designed to influence these attitudes and found that the multicultural presentations had little influence on the teachers’ beliefs about ESL students. (p. 193)

Such findings concur with a review of more recent studies by Tyler, Stevens, & Uqdah (2009) that suggests evidence of cultural bias in teaching throughout the academy: “In addition to cultural bias found throughout public school curricula and standardized testing, cultural bias is believed to be salient throughout the institutional practices promoted and executed by school teachers and administrators” (p. 293). Ndura’s (2004) analysis of ESL instructional materials, for example, revealed that ESL textbooks “fail to reflect the growing diversity of students’ life experiences and perspectives” (p. 150).
This finding parallels studies of textbooks used in Native English Speaker (NES) classrooms, where “most contributions to academic subject matter . . . are made by members of the majority race or culture . . . and much of the text throughout this subject matter is used to reinforce the superiority of this group” (Tyler, Stevens, & Uqdah, 2009, p. 292). Studies by Boykin et al. (2006) and Tyler, Boykin, & Walton (2006) both concluded that many classroom teachers are biased in favor of students whose behavior reflects mainstream cultural values, such as competition and individualism, rather than those who demonstrate alternative ethnocultural values such as communalism. Teachers tended to see students who followed the mainstream values as more motivated and higher-achieving.

In terms of bias specifically in writing assessment—whether in first year composition or in courses across the curriculum—such perceptions can run deep despite efforts to train faculty across the curriculum to be more aware of how a student’s linguistic and cultural background affects how they complete assignments. Scholarship in second language acquisition has repeatedly indicated that “second language acquisition is a slow and gradual process and that expecting ESL students’ writing to be indistinguishable in terms of grammar from that of their NES [Native English Speaker] counterparts is naïve and unrealistic” (Silva, 1997, p. 362). Across the curriculum, many faculty may be unaware of this scholarship, or simply be uninterested because they do not perceive writing instruction as their responsibility (Salem & Jones, 2010). Rather than break down assignments into structured steps that guide students in developing the strategies required for success, many faculty across the curriculum continue to simply assign a paper and collect the final product with no discussion of how one moves from assignment to successful finished essay in a particular discipline. Even those explicitly trained to assess writing may continue to fall back on traditional methods that emphasize product above all else: “composition specialists have long suspected that many teachers, although they publicly eschew a focus on the final product of writing and celebrate process-centered writing instruction as excellent pedagogy, still practice current-traditional rhetoric in the classroom” (Crusan, 2010, p. 93).

Writing assessment studies since the 1980s have also repeatedly found that teachers modify their assessment strategies when grading NNES work. A 1982 review of 12 studies about Native English Speaker reactions to NNES writing concluded that, while faculty were generally able to comprehend the message being conveyed, readers were significantly irritated by NNES errors. Vocabulary errors that occurred alongside grammatical mistakes were considered the most annoying (Ludwig, 1982). How such irritation affected final assessment, h