Caroline M. Cunningham
Carolyn M. Callahan
S. Christopher Roberson
The University of Virginia
The research staff at the University of Virginia has just completed an investigation of the reliability and validity of a peer nomination form developed by Dr. Anne Udall. Dr. Carolyn Callahan and research staff, Caroline Cunningham, Chris Roberson, and Ari Rapkin, selected the peer nomination form for investigation based on the commitment of The National Research Center on the Gifted and Talented (NRC/GT) to seek out and investigate the soundness of alternative assessment tools for identifying gifted and talented students.
In searching for solutions to the problem of minority underrepresentation in programs for the gifted, researchers have begun to turn their attention to identification strategies that extend beyond the traditional focus on standardized measures. Frasier (1991) stresses the need to look beyond “paper” information, such as that found in standardized tests, to “people” information, such as that found in nominations. Such nominations can come from a variety of sources—teachers, parents, peers, and persons in the students’ communities (Frasier, 1989, 1992). Acting on the assumptions (a) that peer nominations may be less susceptible to cultural bias than other forms of identification (Adams, 1990), (b) that they may allow for the recognition of otherwise untapped information concerning gifted minority students (Rhodes, 1992), and (c) that they could be a valuable means for identifying creativity in gifted students (Hadaway & Marek-Schroer, 1992), the NRC/GT selected an instrument that had preliminary evidence of face validity and content validity.
Despite the increased support for and use of peer nomination, Gagné, Begin, and Talbot (1993) report that most of the peer nomination instruments currently in use “lack the barest information on their reliability and validity as screening instruments” (p. 39). Accepting the challenge to rectify this problem, we examined the reliability and validity of Udall’s peer nomination instrument. First, we revised the instrument based on Udall’s earlier study of it. The final form we investigated consists of 10 questions that address the following specific categories of gifted behaviors: speed of learning, task commitment/motivation, general intelligence, and creativity in the areas of play, music, art, and language. Examples of these questions are: “What boy OR girl learns quickly but doesn’t speak up in class very often?” and “What girl OR boy is really good at making up dances?” Students are asked to evaluate their classmates’ behaviors and then name those most fitting the listed categories.
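To make the scoring concrete, nomination data of this kind can be tabulated so that each student's score on an item is simply the number of nominations received. The sketch below is purely illustrative—the names, voters, and item numbers are hypothetical, not the study's actual data or scoring code:

```python
from collections import Counter

# Hypothetical ballots: each record is one student's nomination on one of
# the form's 10 items. Names and item numbers are illustrative only.
ballots = [
    {"voter": "A", "item": 1, "nominee": "Maria"},
    {"voter": "B", "item": 1, "nominee": "Maria"},
    {"voter": "C", "item": 1, "nominee": "Jose"},
    {"voter": "A", "item": 7, "nominee": "Jose"},
]

# Tally: scores[item][nominee] = number of nominations received on that item.
scores = {}
for b in ballots:
    scores.setdefault(b["item"], Counter())[b["nominee"]] += 1

print(scores[1]["Maria"])  # Maria received 2 nominations on item 1
```

Per-item counts of this form are what the reliability and validity analyses below would operate on.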
The sample for this study consisted of 555 fourth, fifth, and sixth grade students from three Collaborative School Districts—Tucson Unified School District and Amphitheater Schools in Tucson, Arizona, and Donna Independent School District in Donna, Texas—which have large Hispanic populations (>90%). Each participating teacher provided a list of the students who participated in the study and demographic information on each student: name, grade, gender, ethnicity, and whether or not the student had been identified as gifted by the school district. To measure the consistency of this instrument, we administered the peer nomination form twice, with an interval of 6 weeks between the two administrations. To ensure that the items on the instrument measure the categories of gifted behaviors we intend them to measure, we examined the relationships between individual items and clusters of items that address similar behaviors.
We found the overall consistency of the peer nomination instrument to be high, as demonstrated by the test-retest reliability correlation obtained by administering the instrument twice. Individual items addressing specific areas of giftedness, such as art and music, also showed high degrees of consistency. In addition, questions or clusters of questions addressing the same category of gifted behaviors correlated more strongly with each other than with questions or clusters addressing different categories. This pattern serves as initial evidence of the instrument’s construct validity, that is, its ability to measure what it is supposed to measure.
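A test-retest correlation of this kind can be sketched as follows. The nomination counts here are hypothetical, and the Pearson coefficient is used for illustration; it stands in for whatever correlation statistic the study actually computed:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical nomination counts for six students on one item,
# Round 1 vs. Round 2 (administered 6 weeks apart).
round1 = [5, 0, 2, 8, 1, 3]
round2 = [4, 1, 2, 7, 0, 3]
r = pearson_r(round1, round2)  # close to 1.0 -> nominations are stable
```

A coefficient near 1.0 indicates that students who were heavily nominated in the first round were heavily nominated again 6 weeks later, which is the consistency claim reported above.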
In both rounds of testing, females were nominated significantly more often than males on questions addressing general intellectual ability and dance ability. Males were nominated significantly more often than females in the area of drawing ability in both rounds and in the area of making up games in Round 1. These differences suggest that scores on these particular questions be interpreted differently for males and females. For example, in assessing general intellectual ability with this instrument, schools should closely examine nominations in their own setting and adjust their interpretation accordingly.
While ANOVA results showed differences by race for African-Americans and Asian-Americans in the second round, these results may be spurious given the extremely small numbers of African-American and Asian-American students included in the study. Further study using these populations is necessary before any conclusions can be drawn about the use of this instrument with African-American or Asian-American students. It is important to note that no significant differences were found between the nominations of Hispanics and Caucasians. Thus, this instrument reflected cultural neutrality toward Hispanics, the target population for this study. In addition, we found no significant differences across the grade levels.
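The group comparison described above follows the logic of a one-way ANOVA, which can be sketched as follows. The group labels and nomination counts are hypothetical, and this hand-computed F statistic is illustrative rather than the study's actual analysis:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k groups of scores."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Variance between group means vs. variance within groups.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical per-student nomination counts for two groups with
# similar distributions (labels are illustrative only).
hispanic = [3, 1, 4, 2, 5, 0, 2]
caucasian = [2, 4, 1, 3, 0, 5, 3]
f_stat = one_way_anova_f([hispanic, caucasian])  # small F -> no group effect
```

An F near zero, as here, corresponds to the "no significant differences" finding for Hispanics and Caucasians; with very small groups, however, even a large F is unstable, which is the caution noted above for the African-American and Asian-American samples.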
While we suggest further study of this instrument using samples that reflect cultures other than Hispanic, our analyses of its reliability and validity, as well as of the gender and race issues, suggest that it holds promise.