The NCES Private-Public School Study: Findings are other than they seem

Authors: 
Elena Llaudet and Paul E. Peterson
Year of publication: 
2007
Publication: 
Education Next
Volume/Issue: 
7(1)
Pages: 
75-79

On July 14, 2006, the U.S. Department of Education’s National Center for Education Statistics (NCES) released a study that compared the performance in reading and math of 4th and 8th graders attending private and public schools. The study had been undertaken at the request of the NCES by the Educational Testing Service (ETS). Using information from a national sample of public and private school students collected in 2003 as part of the National Assessment of Educational Progress (NAEP), ETS compared the test scores of public school students with those of students in all private schools, taken together. Separately, it compared student performance in public schools with that in Catholic, Lutheran, and evangelical Protestant schools.

According to the NCES study, students attending private schools performed better than students attending public schools. But after statistical adjustments were made for student characteristics, the private school advantage among 4th graders disappeared, giving way to a 4.5-point public school advantage in math and parity between the sectors in reading. After the same adjustments were made for 8th graders, private schools retained a 7-point advantage in reading but achieved only parity in math.

But, in fact, the NCES study’s measures of student characteristics are flawed. Using the same data but substituting better measures of student characteristics, we estimated three alternative models that identify a private school advantage in nearly all comparisons. Similar results are found for Catholic and Lutheran schools taken separately, while evangelical Protestant schools achieve parity with public schools in math and have an advantage in reading (see Figure 1). The results from our alternative models should not be understood as evidence that private schools outperform public schools. Without information on prior student achievement, one cannot make judgments about schools’ efficacy in raising student test scores. Thus, NAEP data cannot be used to compare the performance of private and public schools. However, our results clearly reveal the shortcomings of the NCES study—shortcomings so deep-seated that the study’s purported findings lack credibility. In fact, in view of the criticisms it has received, NCES is reconsidering the propriety of its involvement in studies of this sort. “This is not what we should be doing.… Our job is to collect the data and get it out the door,” said Mark Schneider, the commissioner of NCES, in a recent interview with Education Week.

Problems with the NCES Model

The NCES analysis is at serious risk of having produced biased estimates of the performance of public and private schools. The study’s adjustment for student characteristics suffered from two sorts of problems: a) inconsistent classification of student characteristics across sectors, and b) inclusion of student characteristics open to school influence.
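To fix ideas, an adjustment of this kind can be written schematically as a regression of each student’s test score on a private-school indicator and a set of measured background characteristics. The single-level equation below is only an expository sketch with symbols of our own choosing; the actual NCES analysis used hierarchical linear models fit to NAEP’s complex sample design.

```latex
% Expository sketch of a covariate-adjusted sector comparison (not the NCES specification).
% y_i : test score of student i
% P_i : 1 if student i attends a private school, 0 otherwise
% X_i : vector of measured student characteristics
\[
  y_i = \alpha + \beta\, P_i + \gamma' X_i + \varepsilon_i
\]
% Here \beta is the "adjusted" private-school gap. If a component of X_i (say, Title I or
% free-lunch participation) is recorded more readily in one sector than in the other, or is
% itself influenced by the school, then \beta no longer isolates a true sector difference.
```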

Classification Bias

To avoid bias, classification must be consistent for both groups under study. The NCES study repeatedly violates this rule when it infers a student’s background from his or her participation in federal programs intended to serve disadvantaged students. Public and private school officials have quite different obligations and incentives to classify students as participants in these federal programs: a) the Title I program for disadvantaged students; b) the free and reduced-price lunch programs; c) programs for those classified as Limited English Proficient (LEP); and d) special education, as indicated by having an Individualized Education Program (IEP). As a result, NCES undercounted the incidence of disadvantage in the private sector and overcounted its incidence in the public sector.

Title I. If a public school has a schoolwide Title I program, which is permitted if 40 percent of its students are eligible for free or reduced-price lunch, then every student at the school—regardless of poverty level—is said to be a recipient of Title I services. By contrast, private schools cannot directly receive Title I funds nor can they operate Title I programs. Instead, private schools must negotiate arrangements with local public school districts, which then provide Title I services to eligible students. Many private schools lack the administrative capacity to handle these complex negotiations or do not wish to make available services that they will not administer, making private school participation haphazard. In the 2003–04 school year, only 19 percent of private schools were reported by the U.S. Department of Education (DOE) to participate in Title I, compared to 54 percent of public schools.

Free Lunch. Access to free or reduced-price lunch is also an imperfect indicator of a student’s family income. According to official DOE statistics, nearly 96 percent of public schools participated in the National School Lunch Program in the 2003–04 school year, while only 24 percent of private schools did so. The disparities are explained in part by the greater administrative challenges the private sector faces, not just by differences in the neediness of the children it serves. The administration of the school lunch program is generally organized within the central office of each school district so that local schools are buffered from the responsibility of dealing with state officials. Private schools that seek to participate in the program usually must work directly with the state department of education, and many appear to have concluded that the burden of compliance with federal regulations governing the program outweighs any benefits low-income children might receive. Furthermore, as many as one-fifth of the public school students participating in the free lunch program may not in fact be eligible, a Department of Agriculture study has shown.

In short, using these two variables as indicators of family background undercounts the incidence of poverty among students in private schools and overcounts its incidence in public schools. In the alternative models discussed below, we employ two other indicators of family background that are less at risk of classification bias. The first, parental education, is well known to be a particularly appropriate control variable, as other studies have shown that it is the background variable most highly correlated with student achievement. Based on this indicator, 69 percent of 4th graders in public schools had parents with a college education, compared to 85 percent of those in the private school sector. The second indicator, the school’s location (the region of the country and whether the school is in a rural, urban, or suburban area), is also appropriate inasmuch as student performance is known to vary significantly by locality. Private schools are located disproportionately in central cities and in the Northeast.

Limited English Proficient (LEP). Eleven percent of the 4th graders in public schools were classified as Limited English Proficient “according to school records,” while only 1 percent of private school 4th graders were so classified. Among 8th graders, the percentages were 6 and 0 percent, respectively. While LEP was used by NCES as the indicator of students’ language skills, other information in the NAEP data suggests that sector differences in language background are not that extreme. When 4th graders themselves were asked how often a language other than English was spoken at home, 18 percent in the public sector replied “all or most of the time” as did 12 percent in the private sector. Also, the percentage of students in the public sector who were Hispanic was 19 percent, while it was 9 percent in the private sector. The percentage of students who were Asian was approximately the same in the two sectors.

To avoid undercounting those students in the private sector with language difficulties, we substitute for the LEP indicator the students’ own reports of how often a language other than English was spoken in their home. While students may not always accurately report this information, there is no reason to expect errors to vary systematically by school sector.

Special Education. Fourteen percent of the public school 4th graders were reported to have an Individualized Education Program (IEP), while only 4 percent of 4th-grade students in private schools had an IEP. Among 8th graders, the percentages were 14 and 3, respectively. The NCES study assumes that these differences accurately describe the incidence of disability in the public and private sector. However, public schools must, by law, provide students with an IEP if it is determined that the student has a disability, while private schools have no such legal obligation. In addition, public schools receive extra state and federal funding for students so identified. Although some private schools also receive financial support for IEP students, the administrative costs of classifying students may dissuade private officials from seeking that aid unless disabilities are severe.

IEP participation may thus undercount the incidence of disability within the private sector. As a substitute for IEP, we use an indicator of whether the student received an IEP because of a severe or moderate disability. Six percent of the 4th graders in public schools were identified as having a severe or moderate disability while only 1 percent of those in the private sector were so identified.

Student Characteristics Open to School Influence

Characteristics influenced by the school the students are attending will bias estimates if they are included in statistical adjustments for student background. Three variables open to school influence were included in the NCES analysis: a) the student’s absenteeism rate; b) number of books in the student’s home; and c) availability of a computer in the student’s home. NCES assumed absenteeism to be solely a function of a student’s background; yet, it is not unreasonable to believe that schools have an effect on students’ attendance records. In the same way, school policies—school requirements, homework, and conferences with parents, for example—can affect what is available in students’ homes. In the third alternative model, we eliminate these variables.

Results from the Alternative Models

In order to check the sensitivity of NCES results to the particular methodology that was employed, we first replicated the results from the NCES study’s primary model. With that accomplished, it was possible to identify the consequences of relaxing the questionable assumptions that underpinned the NCES model.

Figure 1 reports the original NCES results for public and private schools (both sectors taken as a whole), and then those from the three alternative models. These models gradually exclude the NCES variables that suffered from the biases discussed above, replacing them with better measures of student characteristics. Alternative Model I substitutes parents’ education and the location of the school for the Title I and Free Lunch variables in the NCES study. In addition, Model II replaces the LEP indicator with student reports of the frequency with which a language other than English is spoken at home and replaces the IEP indicator with teacher reports of whether the child was given an IEP because of a severe or moderate disability. Finally, Model III, while keeping the other improvements, eliminates the absenteeism, computer, and books-in-the-home variables, thereby avoiding the inclusion of student characteristics that can be influenced by the school. Some may think that Model III does not include sufficient indicators of the student’s family background. Those for whom this is a concern should place greater weight on Model II.
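As a purely illustrative sketch, the covariate substitutions that distinguish the three alternative models from the NCES specification could be organized as in the code below. The variable names, input file, and weighted least squares setup are our own assumptions for exposition; the actual analyses accounted for NAEP’s complex sample design and included additional controls common to all models.

```python
# Illustrative sketch only (not the actual NCES/ETS or Llaudet-Peterson code).
# Column names and the WLS specification are assumptions made for exposition;
# controls shared by every model (e.g., demographics) are omitted for brevity.
import pandas as pd
import statsmodels.formula.api as smf

covariate_sets = {
    "NCES":      ["title1", "free_lunch", "lep", "iep",
                  "absenteeism", "books_home", "computer_home"],
    "Model I":   ["parent_educ", "region", "urbanicity", "lep", "iep",
                  "absenteeism", "books_home", "computer_home"],
    "Model II":  ["parent_educ", "region", "urbanicity",
                  "non_english_home", "severe_moderate_iep",
                  "absenteeism", "books_home", "computer_home"],
    "Model III": ["parent_educ", "region", "urbanicity",
                  "non_english_home", "severe_moderate_iep"],
}

def sector_gap(df: pd.DataFrame, covariates: list[str]) -> float:
    """Return the adjusted private-school coefficient for one covariate set."""
    formula = "score ~ private + " + " + ".join(covariates)
    fit = smf.wls(formula, data=df, weights=df["sampling_weight"]).fit()
    return fit.params["private"]

# Hypothetical usage, assuming a student-level file with the columns above:
# df = pd.read_csv("naep_2003_students.csv")
# for name, covs in covariate_sets.items():
#     print(name, round(sector_gap(df.dropna(subset=covs + ["score"]), covs), 1))
```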

The number of observations under study drops significantly when moving from the NCES model to Model I, in part because many students did not report the level of education their parents had attained. To ascertain whether results were influenced by the change in the size of the sample under analysis, we ran the NCES model on the same sample of observations as used in Model I. The results were reassuring, as the estimated coefficients of the effect of the private sector as a whole were never more than half a point away from those obtained from the whole sample.

According to the alternative models, in 8th-grade math, the private school advantage varies between 3 and 6.5 test points; in reading, it varies between 9 and 12.5 points. Among 4th graders in math, parity is observed in one model, but private schools outperform public schools by 2 and 3 points in the other two models; in 4th-grade reading, private schools have an advantage that ranges from 7 to 10 points.

The results for Catholic schools using the alternative models are very similar to those of the private sector as a whole. Lutheran schools are estimated to have a larger advantage in math and a similar one in reading when compared to the results of the private sector taken together. And evangelical Protestant schools are found to perform at a similar level to public schools in math but at a higher level in reading. Detailed results for these separate categories of private schools are available at www.educationnext.org.

Summing Up

Let us be clear. We do not offer our results as evidence that private schools outperform public schools but rather as a demonstration of the dependence of the NCES results on questionable analytic decisions. Although the alternative models are an improvement on the NCES analysis, no conclusions should be drawn about causal relationships from these or any other results based on snapshot NAEP test scores.

Asked by Education Week to comment on our findings, the lead author of the NCES report freely acknowledged the problems with some of the variables used in the NCES analysis, but asserted that our alternative models may be “underadjusting for the disadvantage in the public sector” because we do not control separately for mothers’ and fathers’ education. While this is desirable in principle, in practice it would have significantly reduced the number of observations available to us, as fewer than half of the 4th graders, for example, reported the educational attainment of both parents. Despite this limitation, our main conclusion still stands: NAEP data are too fragile to be used to measure the relative effectiveness of public and private schools. Making judgments about causality based on observations at one point in time is highly problematic, so much so that it is surprising that NCES commissioned a study to analyze the NAEP data set for this purpose.

Fortunately, the practice seems to have come to an end. Commissioner Schneider has stated that his agency should not have initiated the study and NCES will in the future refrain from analyses of the raw data that it collects. Let’s hope that private researchers also exercise responsibility by not using NAEP data for purposes for which they are clearly not suited.
