ERIC
ERIC Identifier: ED458216
Publication Date: 2001-06-00
Author: Cassady, Jerrell C.
Source: ERIC Clearinghouse on Assessment and Evaluation, College Park, MD.

Self-Reported GPA and SAT Scores. ERIC Digest.


THIS DIGEST WAS CREATED BY ERIC, THE EDUCATIONAL RESOURCES INFORMATION CENTER. FOR MORE INFORMATION ABOUT ERIC, CONTACT ACCESS ERIC 1-800-LET-ERIC

College students who participate in educational or psychological research projects are sometimes asked to report their Scholastic Assessment Test (SAT) scores or their grade point average (GPA) as part of the research. The use of self-reports from students is a common, yet risky, methodological venture because it relies upon individuals to provide accurate and unbiased ratings without external verification of the data. This Digest investigates the methodological practice of relying on self-reported SAT and GPA scores, explores the differential reliability of self-reported SAT and GPA values, and examines trends of deviation in a sample of Midwestern teacher education students.

Research on SAT score accuracy has generally indicated that students' reports correlate with actual scores in the range of .60 to .80 (Goldman, Flake, and Matheson, 1990; Frucot and Cook, 1994; Trice, 1990). Furthermore, there is evidence that individuals who do not provide their scores are more likely to have low SAT scores, suggesting a potential skew in the self-report performance literature (Flake and Goldman, 1991; Trice, 1990). In a rigorous analysis of the relationship between actual and reported scores on the SAT, Shepperd (1993) reported that students with low SAT scores not only inflated their self-reported scores, but also rated the score they received on the SAT as inaccurate or flawed. Furthermore, when students reported SAT scores with no explicit instructions, the tendency to inflate the score was evident. However, when the students were asked to report their SAT scores a second time (two months after the initial report), with an incentive for accuracy and the assurance that any inflation would be detected, the average deviation from the true score was 9 points on the total SAT scale, a mere one tenth of a standard deviation (Shepperd, 1993). Shepperd hypothesized that this pattern supported the theory that the inflation was an attempt to portray a positive image, rather than a misrepresentation arising from a memory deficit.

With GPA ratings, there is also evidence for skewed self-reports; specifically, there is greater inflation by students with lower GPAs than by students with higher GPAs (Dobbins, Farh, and Werbel, 1993; Frucot and Cook, 1994). This inflation of GPA has been found to be free from a ceiling effect, and it has been proposed to be a consequence of social desirability (Dobbins et al., 1993). The following study was conducted to test the accuracy and trends of deviation noted in undergraduates' self-reported SAT and GPA values. The results were expected to support previous reports that self-reported values for GPA and SAT are relatively reliable (in the range of .70 to .90). Furthermore, the results were expected to show that, for both GPA and SAT, low scorers' ratings would deviate from actual scores more than high scorers', with the self-reported values being inflated. Finally, it was predicted that individuals who overestimated their performance levels would do so to a greater magnitude than those who underestimated their performance.

METHOD

Participants. Eighty-nine undergraduate students at a mid-sized Midwestern university reported their current cumulative GPA and the scores they had received on the SAT. Respondents were predominantly Caucasian females ranging in age from 19 to 28 (M = 19.99, SD = 1.06); all were in the second year of an undergraduate pre-service teacher education program.

Procedure. The participants were asked to provide their undergraduate cumulative GPAs and their official SAT scores as part of another research project. If they were unsure of their scores, they were instructed to provide their "best guess" regarding the SAT verbal, math, and total scores, as well as GPA. Students were not told at the time what their scores would be used for, nor that the scores would be checked against their official records. The participants were debriefed in a subsequent experimental session, at which time they provided consent to access the necessary university records.

Students who did not take the SAT (typically taking the ACT for entrance) were excluded from the analyses of SAT score accuracy. Similarly, students without official university grade records (i.e., transfer students from community colleges) were excluded from the analyses on the accuracy of GPA.

Analyses. To investigate the effect of the direction of deviation from the actual scores, each participant's reported value was categorized as an overestimation, an underestimation, or accurate in relation to the official records. These reports were examined to identify whether the magnitudes of deviation for students who overestimated and underestimated their scores differed significantly from each other. To examine whether low-scoring individuals inflated their scores more than high-scoring individuals on both SAT and GPA self-report values, four groups were established for each measure using the quartile split method. To investigate differential magnitudes of deviation based on both direction of deviation (overestimation versus underestimation) and actual performance level, univariate analyses of variance were conducted on the absolute value of the deviation of the reported score from the actual score.
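As an illustration of this coding scheme, the following Python sketch codes direction of deviation, computes absolute deviation, and performs a quartile split; the column names and synthetic values are hypothetical stand-ins for the official records used in the study.

import numpy as np
import pandas as pd

# Hypothetical stand-ins for the official records used in the study.
rng = np.random.default_rng(0)
actual = np.clip(rng.normal(3.0, 0.5, 89), 0.0, 4.0)
reported = np.clip(actual + rng.normal(0.0, 0.08, 89), 0.0, 4.0)
df = pd.DataFrame({"actual_gpa": actual, "reported_gpa": reported})

# Direction of deviation: overestimation, underestimation, or accurate.
dev = df["reported_gpa"] - df["actual_gpa"]
df["direction"] = np.select([dev > 0, dev < 0], ["over", "under"], default="accurate")

# Magnitude of deviation is the absolute difference from the official record.
df["abs_dev"] = dev.abs()

# Quartile split on actual performance, as in the digest's analyses.
df["quartile"] = pd.qcut(df["actual_gpa"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(df.groupby("quartile", observed=True)["abs_dev"].mean())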

RESULTS

Students' self-reported GPA scores were found to be remarkably similar to official records. The Pearson product moment correlation revealed a significant correlation between self-reported and actual cumulative GPA, r = .97, p < .0001, n = 75. Similarly, correlational analyses of the accuracy of the students' self-reported SAT scores revealed significant relationships between self-reports and actual performance levels for the total score (r = .88, p = .0001, n = 72), verbal subscale (r = .73, p = .0001, n = 64), and math subscale (r = .89, p = .0001, n = 64).
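For readers replicating this kind of accuracy check, a minimal Python sketch of the correlational analysis follows; the reported and actual values here are synthetic, standing in for the study data.

import numpy as np
from scipy.stats import pearsonr

# Synthetic reported/actual pairs in place of the study data.
rng = np.random.default_rng(0)
actual = np.clip(rng.normal(3.0, 0.5, 75), 0.0, 4.0)
reported = np.clip(actual + rng.normal(0.0, 0.08, 75), 0.0, 4.0)

# Pearson product moment correlation between self-report and record.
r, p = pearsonr(reported, actual)
print(f"r = {r:.2f}, p = {p:.4g}, n = {len(actual)}")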

To examine deviation of GPA scores, a two by four univariate analysis of variance was used, with two levels of direction of deviation (overestimation and underestimation) and four levels of actual GPA (as established by quartile placement in the sample). The ANOVA revealed a significant main effect for level of GPA on deviation from reality. Neither the main effect for direction of deviation nor the interaction produced a significant effect. The data indicated progressively more accurate ratings of GPA as the level of GPA increased. Post-hoc analyses of group differences revealed differences between the quartiles, with the first-quartile deviations being significantly higher than the third (p < .005) and fourth (p < .001), and the participants in the second quartile producing significantly higher deviations than the fourth (p < .05).
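A Python sketch of this two-by-four analysis appears below, again on synthetic data. Because the digest does not name its post-hoc procedure, Tukey's HSD is assumed here as one common choice.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Rebuild the hypothetical data frame from the Method-section sketch.
rng = np.random.default_rng(0)
actual = np.clip(rng.normal(3.0, 0.5, 89), 0.0, 4.0)
reported = np.clip(actual + rng.normal(0.0, 0.08, 89), 0.0, 4.0)
df = pd.DataFrame({"actual_gpa": actual, "reported_gpa": reported})
dev = df["reported_gpa"] - df["actual_gpa"]
df["abs_dev"] = dev.abs()
df["direction"] = np.where(dev >= 0, "over", "under")
df["quartile"] = pd.qcut(df["actual_gpa"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# 2 (direction) x 4 (quartile) ANOVA on absolute deviation.
model = smf.ols("abs_dev ~ C(direction) * C(quartile)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc comparisons between quartiles (Tukey HSD assumed).
print(pairwise_tukeyhsd(df["abs_dev"], df["quartile"].astype(str)))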

Similar analyses were conducted on the verbal and math subscales of the SAT. Because the total SAT score is the sum of these two subscales, no additional analysis of the total score was conducted. To examine deviation of the SAT subscale scores, two separate two by four univariate analyses of variance were conducted, with two levels of direction and four levels of SAT performance. The ANOVA revealed no significant effects for the verbal subscale. The results for the math subscale revealed a trend similar to GPA, with a significant main effect for level of SAT performance (as determined by quartile placement), while the main effect for direction of deviation and the interaction were not significant. Post-hoc analyses revealed that members of the first quartile produced significantly higher deviations than members of the second (p < .03) and third (p < .004) quartiles.

DISCUSSION

The results of this study of GPA and SAT self-reports allow for a general statement regarding the role of self-reported performance indicators. The initial hypothesis regarding accuracy of ratings was supported: the participants provided highly reliable ratings of cumulative GPA (r = .97). Such high correlations suggest that, overall, self-reported GPA levels are sufficiently accurate. The overall accuracy of the students' self-reported SAT scores was considerably lower than the accuracy of GPA; however, the average accuracy was still within reasonable guidelines (Nunnally and Bernstein, 1994). The results supported the expectation that the accuracy of self-reported SAT scores would be lower than that of self-reported GPAs.

This difference in accuracy may be related to the factors of repetition and recency. Cumulative GPA is reported to undergraduate students on a consistent and frequent basis, typically at least two to three times per year. SAT scores, however, are not typically reported to students once they have been admitted to the university; consequently, the majority of these participants would not have seen their official SAT scores for two or more years.

Further investigation revealed that the accuracy of self-reported scores depended on performance level. The analyses of accuracy in self-reported GPA revealed that the bottom 25% of students provided estimates that were significantly less accurate than each of the remaining quartile groups. These data support a trend reported by Dobbins et al. (1993), who found that students with lower GPAs tended to inflate their scores more than students with higher averages. In a similar vein, self-reports of SAT performance generally became more accurate as actual performance increased. Overall, it appears that students at the lowest end of performance are more likely than the high-achieving groups to misrepresent their scores. This is consistent with the proposal that students at the low-performing levels may provide inflated scores as a function of social desirability (Dobbins et al., 1993).

Contrary to the initial hypothesis, there were no differences in deviation from actual scores by those participants who overestimated and underestimated their performance levels. The expectation was that the deviations would be higher for overestimators, consistent with the social desirability hypothesis. However, no such trend was revealed, suggesting that the deviations from actual scores are due in part to errors in memory, and not all deviations are driven by a desire to misrepresent ability levels.

Under ideal conditions, there would be no need to rely on students to report their GPA and SAT scores from memory. However, several conditions may limit a researcher's access to official records, including administrative rules and privacy concerns. When these conditions arise, forcing a researcher into a compromised methodology, these data suggest that researchers can rely upon self-reported GPA estimates. The data suggest that self-reported SAT scores are less reliable than GPA estimations, but their accuracy can be improved by indicating to the students that accuracy is of primary interest, perhaps by assuring participants of anonymity (see Shepperd, 1993). The use of self-reported GPA and SAT scores increases the efficiency of data collection, particularly when these scores are simply additional variables of interest, as when attempting to account for variance in designs examining course performance, test anxiety, or career orientations. The ease of acquiring these values through self-report, combined with the high levels of accuracy observed under the current methodology, makes this practice an enticing alternative to the more laborious process of accessing official student records.
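As a hypothetical illustration of this covariate use, the Python sketch below enters self-reported GPA into a regression predicting an invented exam-score outcome alongside an invented test-anxiety measure; all variable names and values are assumptions for the example, not data from the study.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Invented variables: self-reported GPA, a test-anxiety score, and an exam outcome.
rng = np.random.default_rng(1)
n = 89
gpa_self = np.clip(rng.normal(3.0, 0.5, n), 0.0, 4.0)
anxiety = rng.normal(50.0, 10.0, n)
exam = 40.0 + 12.0 * gpa_self - 0.2 * anxiety + rng.normal(0.0, 5.0, n)
data = pd.DataFrame({"gpa_self": gpa_self, "anxiety": anxiety, "exam": exam})

# Self-reported GPA serves as a control, so the anxiety coefficient is
# estimated over and above prior achievement.
print(smf.ols("exam ~ anxiety + gpa_self", data=data).fit().summary())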

However, these results do not support the use of self-reported GPA and SAT scores for policy decisions, particularly if the students are able to determine the intent of the score collection. In situations where the students' GPA and SAT scores will be used to differentiate among candidates for selection into special programs or positions, students may be more likely to provide false estimates to improve their standing. Furthermore, this practice should not be generalized to participants at different developmental levels without assessing a pilot sample to ensure the reliability is still adequate.

This Digest is based on an article originally appearing in "Practical Assessment, Research and Evaluation."

REFERENCES

Dobbins, G. H., Farh, J. L., and Werbel, J. D. (1993). The influence of self-monitoring and inflation of grade-point averages for research and selection purposes. Journal of Applied Social Psychology, 23, 321-334.

Flake, W. L., and Goldman, B. A. (1991). Comparison of grade point averages and SAT scores between reporting and nonreporting men and women and freshmen and sophomores. Perceptual and Motor Skills, 72, 177-178.

Frucot, V. G., and Cook, G. L. (1994). Further research on the accuracy of students' self-reported grade point averages, SAT scores, and course grades. Perceptual and Motor Skills, 79, 743-746.

Goldman, B. A., Flake, W. L., and Matheson, M. B. (1990). Accuracy of college students' perceptions of their SAT scores and high school and college grade point averages relative to their ability. Perceptual and Motor Skills, 70, 514.

Nunnally, J. C., and Bernstein, I. H. (1994). Psychometric Theory (3rd Ed.). New York: McGraw-Hill, Inc.

Shepperd, J. A. (1993). Student derogation of the Scholastic Aptitude Test: Biases in perceptions and presentations of College Board scores. Basic and Applied Social Psychology, 14, 455-473.

Trice, A. D. (1990). Reliability of students' self-reports of scholastic aptitude scores: Data from juniors and seniors. Perceptual and Motor Skills, 71, 290.

-----

This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract ED99CO0032. The opinions expressed in this report do not necessarily reflect the positions or policies of OERI or the U.S. Department of Education. Permission is granted to copy and distribute this ERIC/AE Digest.



Title: Self-Reported GPA and SAT Scores. ERIC Digest.
Note: Based on an article originally appearing in "Practical Assessment, Research and Evaluation."
Document Type: Information Analyses---ERIC Information Analysis Products (IAPs) (071); Information Analyses---ERIC Digests (Selected) in Full Text (073);
Available From: ERIC Clearinghouse on Assessment and Evaluation, 1129 Shriver Laboratory, University of Maryland, College Park, MD 20742. Tel: 800-464-3742 (Toll Free).
Descriptors: College Entrance Examinations, College Students, Education Majors, Grade Point Average, Higher Education, Psychological Studies, Research Methodology, Teacher Education
Identifiers: ERIC Digests, Scholastic Assessment Tests, Self Report Measures

###

