Assumptions Underlying the Identification of Gifted and Talented Students

Fall 1993 Masthead


E. Jean Gubbins
Del Siegle
Joseph S. Renzulli
Scott W. Brown

The University of Connecticut
Storrs, CT

For decades the “metric of giftedness” has been test scores, more specifically IQ scores. The tradition of relying on IQ scores to define one’s ability curried favor with psychologists and educators at the turn of the century as the technology of measurement took hold. Numbers became the determinants of what we thought students could accomplish in school. We took comfort in a “solid, objective” approach to assessing abilities. That comfort, however, was often challenged when there were dramatic differences between the academic accomplishments of our students and what the numbers predicted. We soon realized that the prophecy of the numbers was really just for future numbers on the same or similar tests. Given this insight, along with new theories of intelligence by Gardner (1983) and Sternberg (1985), we wanted to ask practitioners and policy makers about their assumptions underlying the identification process.

We recalled that several years ago Dr. Marshall Sanborn of the University of Wisconsin recommended the following guidelines for a comprehensive identification system in an unpublished paper (cited in Renzulli, Reis, & Smith, 1981):

  • Apply multiple techniques over a long period of time.
  • Understand the individual, the cultural-experiential context, and the fields of activity in which he/she performs.
  • Employ self-chosen and required performances.
  • Allow considerable freedom of expression.
  • Reassess the adequacy of the identification program on a continuous basis.
  • Use the identification data as the primary basis for programming experiences.

Development of the Assumptions Survey

Sanborn’s guidelines were studied, along with a review of the literature, to create an item pool that would become the basis for a national survey on the Assumptions Underlying the Identification of Gifted and Talented Students. Items were generated, field tested, revised, and field tested again with content area experts, graduate students majoring in gifted and talented education, and participants in the 1991 National Association for Gifted Children (NAGC) Conference. Twenty revised items were retained and the survey was disseminated to 6,300 potential respondents. The main source of respondents was the Collaborative School Districts associated with The National Research Center on the Gifted and Talented. Other sources included our Consultant Bank members and participants in a session at the 1992 NAGC Convention. Completed surveys were returned by 3,144 people from 47 states, one territory, and Canada, resulting in a 50% return rate. All types of communities were represented, including those with diverse demographic, ethnic, and socioeconomic characteristics. Teachers at all grade levels and administrators with various building and district level responsibilities were included in the sample.

Respondents were asked to indicate the degree to which they agreed or disagreed with items reflecting Sanborn’s guidelines. A five-point Likert scale was used, ranging from strongly disagree to strongly agree. Sample items included statements such as the following:

  • Identification should be based primarily on an intelligence or achievement test.
  • Teacher judgment and other subjective criteria should not be used in identification.
  • Identification should take into consideration the cultural and experiential background of the student.
  • Giftedness in some students may develop at certain ages and in specific areas of interest.
  • Regular, periodic reviews should be carried out on both identified and non-identified students.

Given the large number of respondents and the number of items, the best way to interpret the results was to distill the data using a factor analytic approach, principal component analysis. This type of analysis searches the data set for correlations among items and determines the number of underlying factors in the instrument. Six factors were originally generated. Two factors had only two items each; because these factors were connected conceptually, they were collapsed into a single factor, resulting in a five-factor instrument. The twenty-item instrument could then be interpreted using the factor names and descriptors in Figure 1: Restricted Identification Practices, Individual Expression, On-going Assessment, Multiple Criteria, and Context-Bound Identification Techniques.
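The factor extraction described above can be sketched in a few lines of code. The sketch below is illustrative only: the response matrix is simulated (the actual survey data are not reproduced here), and the eigenvalue-greater-than-one retention rule is a common convention, not necessarily the rule the research team used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the survey data: 3,144 respondents rating
# 20 items on a 5-point Likert scale (1 = strongly disagree ... 5 = strongly agree).
responses = rng.integers(1, 6, size=(3144, 20)).astype(float)

# Principal component analysis searches the inter-item correlations
# for a smaller set of underlying components (candidate factors).
corr = np.corrcoef(responses, rowvar=False)   # 20 x 20 correlation matrix
eigenvalues, _ = np.linalg.eigh(corr)
eigenvalues = eigenvalues[::-1]               # sort from largest to smallest

# One common retention rule (the Kaiser criterion) keeps components
# whose eigenvalues exceed 1; conceptually related components can then
# be collapsed, as the two two-item factors were in this study.
n_factors = int(np.sum(eigenvalues > 1.0))
print(f"components retained: {n_factors}")
```

With real survey responses, the retained components would then be examined and named from the items that load on them, as in Figure 1.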

Factor 1: Restricted Identification Practices
  4.  Achievement/IQ test
  8.  Precise cut-off score
  11. No teacher judgment/subjective criteria
  14. Restricted percentage
  15. Services for identified students only

Factor 2: Individual Expression
  6.  Case study data
  7.  Assess student-selected tasks
  10. Multiple formats for expressing talent
  19. Non-intellectual factors

Factor 3: On-going Assessment
  9.  Identification information leads to programming
  13. Judgment by persons best qualified
  17. Alternative identification criteria
  18. Regular, periodic reviews

Factor 4: Multiple Criteria
  1.  Multiple expression of abilities
  2.  Developmental perspective and interest
  3.  Multiple types of information

Factor 5: Context-Bound Identification Techniques
  5.  Cultural/experiential background
  12. Locally developed methods and criteria
  16. Knowledge of students’ cultural/environmental background
  20. Reflect types of services and activities
Figure 1. Factor names and descriptors.

Data Analyses and Interpretation

Analyses of the data by educator role (regular classroom teachers, teachers of the gifted and talented, administrators, and consultants) revealed significant differences in the extent of agreement or disagreement among these groups. For example, multivariate analysis of variance (MANOVA) procedures, with the five factors of the instrument as the dependent variables and the four levels of educator role as the independent variable, revealed several significant differences. Following the multivariate analyses, univariate analyses of variance (ANOVAs) were computed separately for each dependent measure (Factors 1-5). Scheffé tests were used as the multiple comparison procedure to follow up significant ANOVAs. The statistical data on each factor will be presented in a journal article that is in preparation; the major trends in the data are highlighted here.
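The follow-up step described above, a univariate ANOVA with Scheffé pairwise comparisons, can be sketched as follows. Everything in the sketch is invented for illustration: the group sizes, means, and standard deviations are hypothetical stand-ins, not the study’s data, and only one of the ten possible pairwise contrasts is shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical Factor 1 scores (mean agreement on a 1-5 scale) for the
# four educator groups; sizes and means are invented for illustration.
groups = {
    "classroom": rng.normal(2.4, 0.6, 300),
    "gifted":    rng.normal(1.9, 0.6, 300),
    "admin":     rng.normal(2.0, 0.6, 300),
    "consult":   rng.normal(2.0, 0.6, 300),
}
samples = list(groups.values())

# Univariate one-way ANOVA on the factor scores.
f_stat, p_value = stats.f_oneway(*samples)

# Scheffe follow-up: a pairwise contrast is significant when its F
# exceeds (k - 1) * F_crit, where k is the number of groups.
k = len(samples)
n_total = sum(len(s) for s in samples)
ss_within = sum(((s - s.mean()) ** 2).sum() for s in samples)
ms_within = ss_within / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)

def scheffe_significant(a, b):
    contrast = (a.mean() - b.mean()) ** 2 / (ms_within * (1 / len(a) + 1 / len(b)))
    return contrast > (k - 1) * f_crit

sig = scheffe_significant(groups["classroom"], groups["gifted"])
print(f"ANOVA F = {f_stat:.2f}, p = {p_value:.4f}; classroom vs. gifted significant: {sig}")
```

The Scheffé procedure is conservative: it controls the error rate over all possible contrasts, which is why it is a common choice for exploratory follow-ups to a significant ANOVA.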

It is interesting to note that the means for all educator groups indicated disagreement with Restricted Identification Practices (Factor 1): reliance on intelligence or achievement tests, precise cut-off scores, exclusion of teacher judgment or subjective criteria, a fixed percentage of students, and services for identified students only. There were statistically significant differences in the level of disagreement between regular classroom teachers and teachers of the gifted, with the teachers of the gifted having greater disagreement. Regular classroom teachers and administrators also had statistically significant differences on Factor 1, with administrators having greater disagreement (see Figure 2).

Figure 2. Mean response by school role.

Significant differences among the educators’ level of agreement were not found for Factor 2 – Individual Expression, emphasizing the use of case study data, student-selected tasks, multiple formats for expressing talents, and non-intellectual factors (e.g., creativity and leadership). Educators agreed that identification techniques should be responsive and sensitive to the individual’s ability to express talents and gifts through various measures or observation tools.

On all remaining factors, however, there were significant differences among the educators’ responses. Regular classroom teachers agreed, but not as strongly as teachers of the gifted, administrators, and consultants, that On-going Assessment (Factor 3) was important. Educators believed that regular, periodic reviews involving judgments of persons best qualified to assess the student’s performance were important considerations in designing and implementing a flexible identification system. They were also in agreement about using alternative identification criteria for specific performance areas. All of these data from alternative criteria, periodic reviews, or expert judgments provide direction and guidance for future programming experiences and opportunities.

A similar response pattern emerged for Multiple Criteria (Factor 4) with regular classroom teachers having significantly different responses from teachers of the gifted, administrators, and consultants. Regular classroom teachers agreed, but not as strongly, with statements emphasizing that gifted and talented students may express their abilities in many ways or that giftedness in some students may develop at certain ages and in specific areas of interest. Their level of agreement was also not as strong concerning the use of several types of information about a student as a basis for an effective identification plan.

The differences for Factor 5 (Context-bound Identification) were between teachers of the gifted and the other three groups: regular classroom teachers, administrators, and consultants. Teachers of the gifted had a stronger level of agreement than the other groups of educators about their beliefs in the importance of students’ cultural, experiential, and environmental backgrounds, the need to consider locally developed methods and criteria for specific populations, and the efficacy of matching the identification process with the services and activities available in the district. It appears that across all factors, the teachers of the gifted, who work most closely with programming issues and practices, have the strongest opinions about the most appropriate identification practices.

Congruence of Research Findings and Practices

The survey results present an interesting picture of the assumptions underlying identification practices. Educators disagreed with a restricted approach and agreed with individual expression, on-going assessment, and context-bound procedures. Furthermore, they strongly agreed with the importance of using multiple criteria. This does not sound too unusual; these assumptions are part of the litany of responses to the question: How do you identify gifted and talented students? What is unusual and somewhat perplexing is the discrepancy between these assumptions or beliefs expressed by educators and subsequent practices documented by other researchers in recent years.

In the NRC/GT study on Classroom Practices of over 3,000 third or fourth grade teachers, Archambault, Westberg, Brown, Hallmark, Emmons, and Zhang (1993) found that most of the public schools surveyed used achievement tests (79%), followed by IQ tests (72%), and teacher nomination (70%) as their main sources of data collection. The data sources were similar, but the order was different in the findings by Cox, Daniel, and Boston (1985): teacher nomination (91%), achievement tests (90%), and IQ tests (82%). Alvino, McDonnel, and Richert (1981) confirmed these procedures in an earlier study when they found that most identification procedures included intelligence tests, nominations, and achievement tests. These procedures of using tests or teacher recommendations are limited, and they do not reflect the findings of the study on the Assumptions Underlying the Identification of Gifted and Talented Students.

Understanding that our assumptions or beliefs and practices may not be in full agreement is a first step in reviewing the appropriateness of existing or future identification policies and the specific identification practices that should be guided by state and local policy. We need to promote discussions centering around two simple, but recurring questions: Who are the gifted and talented? How do we find them? Responses to these questions will hopefully influence future beliefs and research-based practices that are more congruent than those revealed in the present study. The challenge then is to bring beliefs and practices together and to include other techniques, such as biographical and autobiographical data; product or portfolio review; performance assessment; developmental identification; and self, peer, or parent nomination in the development of a flexible and defensible identification system that is responsive to the educational needs of our students.

References
Alvino, J., McDonnel, R. C., & Richert, S. (1981). National survey of identification practices in gifted and talented education. Exceptional Children, 48(2), 124-132.
Archambault, F. X., Westberg, K. L., Brown, S. W., Hallmark, B. W., Emmons, C. L., & Zhang, W. (1993). Regular classroom practices with gifted students: Results of a national survey of classroom teachers. Storrs, CT: The National Research Center on the Gifted and Talented, University of Connecticut.
Cox, J., Daniel, N., & Boston, B. A. (1985). Educating able learners: Programs and promising practices. Austin, TX: University of Texas Press.
Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York, NY: Basic Books.
Renzulli, J. S., Reis, S. M., & Smith, L. H. (1981). The revolving door identification model. Mansfield Center, CT: Creative Learning Press.
Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. New York, NY: Cambridge University Press.

Authors’ Note: We would like to acknowledge our research assistants, Florence Caillard, Wanli Zhang, and Ching-Hui Chen, for their valuable work on the large-scale data analysis procedures. The data from the Assumptions Study will be analyzed further for future publications by our research team.

 
