Indicator I-1 Reading Competency among School-Age Children
NOTE TO READERS: Please include the following reference when citing data from this page: "American Academy of Arts and Sciences, Humanities Indicators, http://HumanitiesIndicators.org".

Updated with 2011 NAEP main assessment data for reading, math, and science (3/8/2013).

See the Note on the Difference between NAEP "Achievement" and "Performance" Levels.

The NAEP includes two assessments in reading. The first, currently administered every two years and usually referred to as the “main” NAEP reading assessment, changes in response to the current state of curricula and educational practices. The second test is specifically designed to generate long-term trend data. Administered every two to five years, this examination has remained essentially unchanged since it was first given to students in 1971; it features shorter reading passages than the main NAEP assessment and gauges students’ ability to locate specific information, make inferences, and identify the main idea of a passage. (For a detailed comparison of the two assessments, see http://nces.ed.gov/nationsreportcard/about/ltt_main_diff.asp.)

The NAEP long-term trend exam (LTT) is taken by a nationally representative sample of students in each of three different age groups: 9-year-olds, 13-year-olds, and 17-year-olds. The percentages indicated on the graphs displaying LTT data (Figures I-1a, I-1b, and I-1c) are cumulative totals; they indicate the percentage of students in each grade level scoring at or above each performance level. (The LTT performance thresholds are constructed at 50-point intervals and range from 150 to 350; see the Note on the Difference between NAEP "Achievement" and "Performance" Levels.) The performance levels are also cumulative in the sense that students performing at each level also display all the competencies associated with the level(s) below it. (See NAEP’s descriptions of the skills demonstrated by students scoring at each performance level.) Also, although the performance levels at which the majority of students score are different for each of the age groups (a result of their cumulative nature), the color-coding of the levels is consistent across the LTT graphs. Blue represents the percentage of students scoring at or above the most basic performance level for that age group. Red represents the percentage scoring at or above the intermediate performance level. Gold represents the percentage scoring at or above the advanced performance level.
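
Because these cumulative "at or above" percentages underlie all three LTT figures, a minimal Python sketch may help make the construction concrete. The student counts below are hypothetical, chosen for illustration only; they are not NAEP data.

    # Hypothetical counts of students whose highest LTT level attained is
    # the key; the thresholds run from 150 to 350 in 50-point steps.
    students_at_level = {150: 230, 200: 520, 250: 180, 300: 60, 350: 10}
    below_150 = 40  # students who did not reach the lowest threshold

    total = below_150 + sum(students_at_level.values())

    # A student who reaches Level 250 also counts toward Levels 150 and
    # 200, so the "at or above" percentage accumulates across levels.
    for level in sorted(students_at_level):
        at_or_above = sum(n for lvl, n in students_at_level.items() if lvl >= level)
        print(f"Level {level}: {100 * at_or_above / total:.0f}% at or above")

Because every student counted at a given level is also counted at each lower level, the percentages necessarily decline as the threshold rises, which is why the blue, red, and gold series in the figures are nested.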

In 2004, the LTT was updated in several ways. Content and administration procedures were revised, and, for the first time, accommodations were made for English language learners and students with disabilities that would allow these students to be included in the assessment (they have been included in the main NAEP reading assessment since 1996). Both the original and revised formats were administered in 2004 so NCES could investigate the effects of the new format on scores. This "bridge" study indicated that differences in student scores between the two formats were solely attributable to the inclusion of students with disabilities and English language learners in the testing population. On the basis of these findings, NCES concluded, “bearing in mind the differences in the populations of students assessed (accommodated vs. not accommodated), future assessment results could be compared to those from earlier assessments based on the original version.”[1]

Among 9-year-olds, reading performance increased steadily from the early 1970s to 1980 (Figure I-1a). Reading performance then began declining and by 1990 had largely returned to its original level (though somewhat more students were assessed at the highest performance level for this grade than in 1971). The 1990s were another period of improvement, with incrementally greater percentages of students attaining Levels 150 and 200, which represent the basic and intermediate performance levels. (Nonetheless, the percentage of students scoring at the highest performance level did drop slightly.) The period from 1999 to 2008 was one of more marked progress. In 2008, a higher percentage of students than in any previous assessment year—96%, up five percentage points from 1971—demonstrated the ability to perform simple, discrete reading tasks associated with the most basic performance level (Level 150). At the high end of the performance spectrum (Level 250), students demonstrated gains of a similar magnitude, with an increase from 16% to 21% in the proportion of students exhibiting the ability to interrelate ideas and make generalizations. The greatest gains among 9-year-olds from 1999 to 2008 were realized in the middle of the performance spectrum (Level 200). In 2008, 73% of students—up nine percentage points from 1999 (and up 14 points from 1971)—demonstrated the ability to:

  • locate and identify facts from simple informational paragraphs, stories, and news articles;
  • combine ideas and make inferences based on short, uncomplicated passages; and
  • understand specific details or sequentially related information.

This increase is the most striking development observed in any age group to date. At the same time, however, the increase since 1980, the peak of the early upswing in scores, is a more modest five percentage points. Thus, the story of 9-year-olds’ reading performance since 1971 is one of recovery of lost gains rather than continuous progress.

[Figure I-1a]

The story for early adolescents, on the other hand, is one of stasis. Although the early 1990s saw an increase in the percentage of 13-year-old students scoring at the middle and high performance levels (Levels 250 and 300), little movement in scores was observed until 2008 (Figure I-1b), when the percentage of students scoring at or above the intermediate performance level (Level 250) increased by four percentage points, to 63%. In 2008, nearly all students (94%) displayed at least partially developed skills and understanding (i.e., scored at least 200), and 13% demonstrated the ability to understand complicated information (i.e., scored 300 or better).

[Figure I-1b]

For 17-year-olds, the percentage of students achieving at least basic competency (Level 250) rose from 79% in 1971 to a high of 86% in 1988 (Figure I-1c). Subsequently, however, this trend reversed, and by 2008 the percentage had returned to that recorded in 1971. The trend in midlevel achievement was similar: an increase followed by reversion to the original level. Thus in 2008, as in 1971, 39% of students left high school (most of the 17-year-olds tested were secondary-school seniors) able to understand complicated literary and informational passages (Level 300). The share of students exiting with the ability to extend and restructure ideas drawn from specialized or complex texts (Level 350) was 6% in 2008, having changed little over the previous 37 years.

[Figure I-1c]

The LTT is an assessment of basic skills. In contrast, the main NAEP assessment evaluates fourth-, eighth-, and 12th-grade students’ ability to tackle more-demanding reading tasks, including the reading of longer passages or pairs of passages. According to NCES, the main assessment “measures a range of reading skills, from identifying explicitly stated information, to making complex inferences about themes, to comparing multiple texts on a variety of dimensions.” As Figure I-1d indicates, on the 2011 main NAEP assessment, 34% of fourth graders demonstrated reading skills at or above the “proficient” level, while a similar proportion displayed “below basic” skills. (The main NAEP includes four achievement levels: “below basic,” “basic,” “proficient,” and “advanced.” See http://nces.ed.gov/nationsreportcard/reading/achieveall.asp for a description of the skills associated with each achievement level.) The proportion of eighth and 12th graders demonstrating at least proficiency on the reading assessment was similar to that of their younger counterparts, but in the upper grades a smaller share, approximately one-quarter, demonstrated below basic skills (Figures I-1e and I-1f).

[Figure I-1d]
[Figure I-1e]
[Figure I-1f]

For purposes of comparison, Figures I-1d through I-1f also depict student performance on the NAEP math and science assessments. At the fourth-grade level, students’ performance on the 2009 science assessment (the most current science assessment at this grade level as of February 2013) was somewhat better than their demonstrated abilities in reading. Students also performed better in math than in reading: the share of fourth graders demonstrating below basic math skills was 15 percentage points smaller than the corresponding share for reading. Among eighth graders, reading performance in 2011 was similar to that in math. Students had more difficulty in science, with 35% demonstrating below basic skills, compared with 24% on the reading assessment. At the 12th-grade level, students did better in reading than in either science or math. The commonality among the three assessments, at all three grade levels, is that on none of them did a majority of students demonstrate proficiency.

Another notable difference between the subject areas is that at the two lower grade levels the improvement in student achievement from the early 1990s to 2011 was considerably greater in math than in reading (the results of the most recent science assessments cannot be compared with those from the 1990s because of major changes in the assessment framework). The increase in the percentage of fourth graders scoring at the proficient level or higher on the math assessment was 28 percentage points, and the corresponding increase for eighth graders was 19 percentage points. In contrast, the improvement from 1992 to 2011 in reading was six percentage points for fourth graders and four percentage points for eighth graders. Additionally, the decline in the percentage of students performing at below basic levels was more pronounced in math than in reading. Among 12th graders, reading achievement declined from 1992 to 2009 (these developments cannot be compared with those in math, because the earliest year for which appropriate math assessment data are available is 2005).

(The NAEP Data Explorer permits analysis of both the long-term trend and main NAEP data sets by gender, ethnicity, and other key variables. With Explorer one can also obtain results of recent reading assessments for individual states and compare these with student outcomes in other parts of the country. For both an overview of Explorer and tips for its effective use, see http://nces.ed.gov/nationsreportcard/pdf/naep_nde_final_web.pdf. The Explorer itself is located at http://nces.ed.gov/nationsreportcard/naepdata/.)

Data from the Organisation for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment (PISA) reveal that while American 15-year-olds demonstrated levels of reading literacy similar to those of students in several other Western industrialized nations—such as France, Germany, the United Kingdom, and Sweden[2]—they scored measurably lower, on average, than their counterparts in nine jurisdictions (14% of those that participated in PISA; Figure I-1g). In 2009, the United States’ average score on the PISA combined literacy scale[3] was statistically indistinguishable from the OECD average but was lower than that of several Asian jurisdictions, as well as Australia, Canada, Finland, and New Zealand. American adolescents did best on the reading literacy test items meant to gauge students’ ability to reflect on and evaluate what they had read (Figure I-1h). They did less well on tasks that involved access and retrieval of information. But even on the higher-order reading tasks on which they tended to do better, American students were outperformed by students in China (Shanghai and Hong Kong), Korea, and Canada, among other jurisdictions.

[Figure I-1g]
[Figure I-1h]

With respect to the distribution of reading literacy proficiency among the mid-adolescent population, 30% of U.S. 15-year-olds were capable of difficult reading tasks (i.e., scored at Level 4 or higher). Ten jurisdictions (15% of those participating) had measurably greater shares of students with such capability (Figure I-1i; see http://nces.ed.gov/pubs2011/2011004.pdf, page 10, for a detailed description of the types of tasks associated with each PISA proficiency level). In top-ranked Shanghai and Korea, 54% and 46% of students, respectively, were able to complete such tasks. Eighteen percent of American 15-year-olds demonstrated reading literacy at sub-basic levels (i.e., scored at Level 1 or below). Six jurisdictions (9% of those participating) had measurably lower shares of students demonstrating such minimal reading literacy.

[Figure I-1i]

Figure I-1j compares the United States’ international standing in reading literacy to its performance in math and science literacy. The data suggest that the United States’ relative performance was stronger in reading than in math or science. The United States was outperformed by fewer nations on the reading assessment than on the other two examinations. Moreover, while the average differential between the United States’ average score and those of higher-scoring jurisdictions was comparable on the reading and science assessments (approximately 30 points), the differential for math was closer to 40 points. Additionally (not pictured on the figure), the students of Shanghai, the top-ranked jurisdiction on all three literacy assessments, outscored U.S. students in science and math by larger margins, on average, than they did in reading. This pattern holds when the U.S. is compared with the top-ranked national jurisdictions (Korea, in reading; Singapore, in math; and Finland, in science).[4]
[Figure I-1j]


Notes
[1] U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, n.d. [article revised 30 March 2009], “2004 Bridge Study,” http://nces.ed.gov/nationsreportcard/ltt/bridge_study.asp.

[2] The scores of the 16 jurisdictions listed below (25% of all participating jurisdictions) were not measurably different from that of the United States.
  • Belgium
  • Chinese Taipei
  • Denmark
  • Estonia
  • France
  • Germany
  • Hungary
  • Iceland
  • Ireland
  • Liechtenstein
  • Netherlands
  • Norway
  • Poland
  • Sweden
  • Switzerland
  • United Kingdom
[3] The combined reading literacy scale reflects students’ scores on the access and retrieve, integrate and interpret, and reflect and evaluate subscales. “However, the combined reading scale and the three subscales are each computed separately through Item Response Theory (IRT) models. Therefore, the combined reading scale score is not the average of the three subscale scores.” (H. L. Fleischman, P. J. Hopstock, M. P. Pelczar, and B. E. Shelley, Highlights from PISA 2009: Performance of U.S. 15-Year-Old Students in Reading, Mathematics, and Science Literacy in an International Context, NCES 2011-004 [Washington, DC: U.S. Government Printing Office, 2010], 7n5.)

The combined scale and subscales for reading literacy, as well as the scales for science and math literacy, range from 0 to 1,000.

[4] The Humanities Indicators includes the top-scoring nation as a reference point because the performance of a subnational jurisdiction such as Shanghai or Hong Kong is not strictly comparable with that of the United States.


Note on the Difference between NAEP "Achievement" and "Performance" Levels

Figures I-1a, I-1b, and I-1c display the percentages of students scoring at certain levels on the National Assessment of Educational Progress’s (NAEP) long-term trend reading assessment. This NAEP examination is scored differently from the other NAEP tests, such as those in writing, history, and civics, and the “main” NAEP reading assessment (for an explanation of the differences between the two NAEP reading assessments, see http://nces.ed.gov/nationsreportcard/about/ltt_main_diff.asp).

On the latter exams, students are assessed according to grade-specific achievement scales. A student’s level of achievement is judged to be “below basic,” “basic,” “proficient,” or “advanced” depending on his or her score on the appropriate scale. A score at the “basic” level indicates that a student has demonstrated partial mastery of prerequisite knowledge and skills that are fundamental for proficient work at each grade. A score of “proficient” indicates solid academic performance—students reaching this level have demonstrated competency over challenging subject matter. An “advanced” score represents superior performance. A child scoring at the “advanced” achievement level on the 12th-grade exam in a given subject area is demonstrating different skills than a fourth grader scoring at the “advanced” level.

In contrast, the NAEP long-term trend reading assessment uses a single scale, referred to as a performance scale, for 9-, 13-, and 17-year-olds. What constitutes “basic,” “proficient,” and “advanced” performance depends on the age of the examinee. Both a 9-year-old and a 17-year-old may score at Level 250 (able to interrelate ideas and make generalizations). Such a score would constitute an advanced level of performance on the part of the 9-year-old and a basic level of performance on the part of the 17-year-old.
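
As a minimal illustration of this single-scale design, the short Python sketch below encodes the age-dependent bands using the thresholds discussed in this indicator (Levels 150/200/250 for 9-year-olds, 200/250/300 for 13-year-olds, and 250/300/350 for 17-year-olds) and shows how the same Level 250 score reads differently by age:

    # The LTT uses one scale for all ages; which levels count as "basic,"
    # "intermediate," and "advanced" depends on the examinee's age. The
    # bands below follow the thresholds discussed in this indicator.
    PERFORMANCE_BANDS = {
        9:  {"basic": 150, "intermediate": 200, "advanced": 250},
        13: {"basic": 200, "intermediate": 250, "advanced": 300},
        17: {"basic": 250, "intermediate": 300, "advanced": 350},
    }

    def describe(age: int, score: int) -> str:
        """Return the highest band whose threshold this score meets for the age."""
        label = "below basic"
        for band, threshold in PERFORMANCE_BANDS[age].items():
            if score >= threshold:
                label = band
        return label

    print(describe(9, 250))   # advanced: a strong result for a 9-year-old
    print(describe(17, 250))  # basic: the entry-level result for a 17-year-old

The same mapping logic explains why the color-coded series in Figures I-1a through I-1c refer to different score thresholds for each age group even though the color coding is consistent across the graphs.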
