ERIC Identifier: ED458214
Publication Date: 2001-04-00
Author: Boston, Carol
Source: ERIC Clearinghouse on Assessment and Evaluation Washington DC.

The Debate over National Testing. ERIC Digest.


THIS DIGEST WAS CREATED BY ERIC, THE EDUCATIONAL RESOURCES INFORMATION CENTER. FOR MORE INFORMATION ABOUT ERIC, CONTACT ACCESS ERIC 1-800-LET-ERIC

Most teachers are comfortable with developing and using tests for classroom purposes, whether to see how much students have learned, to provide a basis for grades, or to gain an understanding of individual students' strengths and weaknesses. As state departments of education move forward with their testing programs, teachers are becoming increasingly familiar with tests used as measures of accountability. A third layer of testing exists at the national level and includes the National Assessment of Educational Progress (NAEP) and President Bush's plan to require states to test third- through eighth-grade students in Title I schools annually in reading and mathematics, with state results verified against NAEP or a commercial test such as the Iowa Test of Basic Skills.

This Digest presents various views of the federal role in testing and offers a brief examination of NAEP, "the nation's report card," in both its national sample format and its state administration, which critics fear has the potential to become a de facto national test if it is selected as the basis for comparing state tests. It also suggests action steps and resources to enable teachers to take part in the ongoing debate about testing.

The United States, unlike many other countries, has no national test that every student in every state takes to demonstrate mastery of some agreed-upon body of knowledge and skills. Commercial test publishers have long offered achievement tests (e.g., the Iowa Test of Basic Skills, the California Achievement Test, the Terra Nova) that are administered in schools across the country and normed on national samples, but these are not in themselves national tests because individual schools, districts, or states decide for themselves whether to use them and which to select. The SAT is probably the most widely administered test in the country, but it is intended to measure college-bound students' aptitude for college work, not academic achievement across a wide range of subjects for all students. And it competes with the ACT.

The question of the appropriate role for the federal government to play in testing is a complicated one, and like many policy matters, it has strong political overtones. Over the past two decades, many policymakers have moved from an initial position of strong support for some sort of national test used as an accountability tool to opposition on the grounds that a national test would usher in a national curriculum and lead to further federal involvement in what has historically been a state and local matter. These policymakers want states to establish and administer their own standards and assessments without interference from Washington; they see federal involvement in testing as a sometimes unwelcome effort to dictate what is important for their students to learn.

On the other hand, some policymakers seem less troubled by an expanded federal role in testing but more skeptical about whether nationwide testing would lead to genuine school improvement and higher student achievement or just sort out and penalize low-performing schools and the students in them, who are disproportionately low income and minority. They argue that until there is truly equal opportunity to learn for all students (with equal access to technology, highly qualified teachers, good facilities, and other learning inputs), testing is an empty exercise. Some policymakers also fear that poor test scores might fuel discontent with the public school system and lead to more support for controversial initiatives such as vouchers for private school aid.

Federally mandated testing also raises a variety of practical and technical questions, including the following:

* Who will pay the considerable cost of developing and administering additional tests?

* Do states have the technical expertise and personnel to conduct another large-scale assessment and analyze and report results?

* Will the tests be valid and will scores be reliable for high-stakes purposes such as making decisions about which schools receive financial incentives and which are sanctioned for low performance?

* How will existing state tests be linked to each other or to "yardsticks" such as NAEP or commercial tests so that student and school progress can be measured fairly and accurately, particularly if rewards and sanctions are tied to results?

NATIONAL ASSESSMENT OF EDUCATIONAL PROGRESS

The National Assessment of Educational Progress, nicknamed "the nation's report card," is a 32-year-old congressionally mandated project of the National Center for Education Statistics (NCES) within the U.S. Department of Education. The NAEP assessment is administered annually by NCES to a nationally representative sample of public and private school students in grades 4, 8, and 12 to get a picture of what American children know and can do. NAEP results are watched closely because the assessment is considered a highly respected, technically sound longitudinal measure of U.S. student achievement.

Two subject areas are typically assessed each year. Reading, mathematics, writing, and science are assessed most frequently, usually at 4-year intervals so that trends can be monitored. Civics, U.S. history, geography, and the arts have also been assessed in recent years, and foreign language will be assessed for the first time in 2003. Once exclusively multiple choice, NAEP now includes performance-based items that call for students to work with science kits, use calculators, prepare writing samples, and create art projects.

Students in participating schools are randomly selected to take one portion of the assessment being administered in a given year (usually during a 1-1/2 to 2-hour testing period). Achievement is reported at one of three levels: Basic, for partial mastery; Proficient, for solid academic performance; and Advanced, for superior work. A fourth category, Below Basic, indicates less-than-acceptable performance. Individual student, school, and district data are not reported.

To help states measure students' academic performance over time and to allow for cross-state comparisons, a voluntary state component was added to NAEP in 1990. As of this writing, legislators are considering expanding the role of state NAEP to serve as a check on results from states' annual testing of third- through eighth-graders called for under the Bush education plan. This could mean annual state NAEP testing in reading and mathematics (as opposed to once every four years) for a sample of students in grades four and eight in each state.

A 26-member independent board called the National Assessment Governing Board (NAGB) is responsible for setting NAEP policy, selecting which subject areas will be assessed, and overseeing the content and design of each NAEP assessment. NAGB does not attempt to specify a national curriculum but rather outlines what a national assessment should test, based on a national consensus process that involves gathering input from teachers, curriculum experts, policymakers, the business community, and the public.

TESTS, TESTS EVERYWHERE

While almost every state has implemented some sort of state testing program, the differences in what they measure, how they measure it, and how they set achievement levels make it virtually impossible to conduct meaningful state-by-state comparisons of individual student performance. Some people believe state-to-state comparisons are irrelevant because education is a state and local function. Others believe cross-state comparisons are important to spur reform and ensure uniformly high-quality education across the country.

Legislation being debated now calls for the use of NAEP or another nationally administered test as a check on the results of annual state tests. Theoretically, a state-level NAEP would yield useful data. In reality, however, NAEP state-level results have sometimes been confusing because achievement levels of students generally appear to be much lower on NAEP than on the state tests. This discrepancy may be attributed to a number of factors, including the following:

* State tests are more likely to be aligned with state curricula than NAEP is.

* State tests and NAEP use different definitions of proficiency.

* State tests and NAEP may use different formats.

* State tests and NAEP differ in terms of who takes them (e.g., whether students in special education or with limited English proficiency are included).

In general, fewer students are judged to reach the Proficient standard on the NAEP reading and math tests than on state tests (GAO, 1998). This discrepancy can lead people who are not aware of the differences between the two types of tests to question the validity of their own state testing programs or the desirability of participating in a federal one. Using the results of any other nationally normed standardized test poses the same difficulty.

It is difficult to predict how the national testing issue will ultimately be resolved. President Bush's plan calls for expanding testing in most states and gives NAEP and commercial tests a more prominent role than they currently have. Teachers might be torn between continuing to teach the curriculum aligned with their state assessment and switching gears to focus on whatever other test is being used to determine rewards and sanctions. Given the classroom implications of expanded testing, it makes sense for teachers to stay active in the discussion.

RESOURCES

Barton, P. E. (1999). Too Much Testing of the Wrong Kind; Too Little of the Right Kind in K-12 Education. A Policy Information Perspective. Princeton, NJ: Educational Testing Service. ED 430 052.

Davey, L. (1992). The case for a national test. Practical Assessment, Research & Evaluation, 3(1). [Available online: http://ericae.net/pare/getvn.asp?v=3&n=1].

Davey, L., & Neill, M. (1991). The case against a national test. Practical Assessment, Research & Evaluation, 2(10). [Available online: http://ericae.net/pare/getvn.asp?v=2&n=10].

General Accounting Office (1998). Student Testing: Issues Related to Voluntary National Mathematics and Reading Tests. Report to the Honorable William F. Goodling, Chairman, Committee on Education and the Workforce, House of Representatives, and the Honorable John Ashcroft, U.S. Senate. Washington, DC: Author. ED 423 244.

National Center for Education Statistics (1999, November). The NAEP Guide: A Description of the Content and Methods of the 1999 and 2000 Assessments. Washington, DC: U.S. Department of Education.

-----

This publication was prepared with funding from the Office of Educational Research and Improvement, U.S. Department of Education, under contract ED99CO0032. The opinions expressed in this report do not necessarily reflect the positions or policies of OERI or the U.S. Department of Education. Permission is granted to copy and distribute this ERIC/AE Digest.



Title: The Debate over National Testing. ERIC Digest.
Document Type: Information Analyses---ERIC Information Analysis Products (IAPs) (071); Information Analyses---ERIC Digests (Selected) in Full Text (073);
Available From: ERIC Clearinghouse on Assessment and Evaluation, 1129 Shriver Laboratory, University of Maryland, College Park, MD 20742-3742. Tel: 800-464-3742 (Toll Free).
Descriptors: Academic Achievement, Achievement Tests, Elementary Secondary Education, Federal Government, Government Role, National Competency Tests, Performance Based Assessment, Politics, Test Construction, Test Use
Identifiers: ERIC Digests, National Assessment of Educational Progress

###

