Program to Be Evaluated
There are many reasons for concern about current systems of student evaluation and school performance. In particular, there are strong reasons to be skeptical of claims that measuring school and student performance through standardized Academic Test Scores (SATS) produces the desired outcomes. As a result, the No Child Left Behind (NCLB) program requires thorough evaluation to safeguard both teachers and students with respect to school performance requirements. In this regard, the program has provided a “safe harbor” for most schools by allowing institutions to demonstrate adequate yearly progress despite not meeting proficiency targets (Markowitz, 2018). Despite this safe harbor, however, there is no clear consensus that SATS provide a valid measure of teachers’ effectiveness or overall school performance.
Purpose of Evaluation
Over the years, NCLB has used student test scores to assess school performance in both math and reading. Although the program has produced positive benchmarks, as indicated in Table 1 (see Appendix A), there is also evidence of negative consequences for schools and teachers whose students fail to meet the performance targets (Darling-Hammond, 2007). For instance, the limited evidence emerging from states’ experience with NCLB does not support a promising future in which test-based accountability empowers student learning. Moreover, NCLB focuses on the percentage of students scoring at the proficient level, which provides only a partial picture of student achievement: it conveys no information about the progress of students who are well above or well below the target. As a benchmark strategy, NCLB later incorporated value-added models (VAM), a change that supported success because schools and districts that had been performing below average were required to devise effective plans of action to advance student performance.
The main reason for using VAM was that the model accounts for test-score trajectories at all performance levels. Without such a model, percentage proficient proved a poor way to measure achievement gaps among students (Dougherty & Weiner, 2019). Recent developments have made it possible to analyze student achievement gains after adjusting for specific features of a school.
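To make this adjustment concrete, the following sketch compares two simulated classrooms using a simple ordinary-least-squares value-added regression in Python. The classroom data, the poverty covariate, and the regression specification are illustrative assumptions rather than the model actually used under NCLB; the point is only that adjusting for prior scores and school characteristics can reverse a raw score comparison.

import numpy as np

rng = np.random.default_rng(0)
n = 200  # students per classroom

# Classroom B serves lower-scoring, higher-poverty students but has a
# (hypothetical) more effective teacher (+5 points of "true" value added).
prior_a = rng.normal(260, 30, n); pov_a = rng.uniform(0.0, 0.4, n)
prior_b = rng.normal(235, 30, n); pov_b = rng.uniform(0.4, 1.0, n)
cur_a = 0.8 * prior_a - 15 * pov_a + 0.0 + rng.normal(0, 10, n)
cur_b = 0.8 * prior_b - 15 * pov_b + 5.0 + rng.normal(0, 10, n)

prior = np.concatenate([prior_a, prior_b])
poverty = np.concatenate([pov_a, pov_b])
current = np.concatenate([cur_a, cur_b])
is_b = np.concatenate([np.zeros(n), np.ones(n)])

# OLS of current score on intercept, prior score, poverty, and a teacher
# dummy: the dummy's coefficient is classroom B's estimated value added.
X = np.column_stack([np.ones(2 * n), prior, poverty, is_b])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

print(f"Raw mean gap (B - A):         {cur_b.mean() - cur_a.mean():.1f}")
print(f"Adjusted value added (B - A): {coef[3]:.1f}")

Under these assumptions, classroom B looks roughly twenty points worse on raw averages, yet its adjusted value-added estimate is positive and close to the simulated five-point effect.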
This approach is much fairer for comparing teachers’ effectiveness as a measure of school performance than judging performance on SATS alone. According to Ewing (2011), VAM has enabled NCLB to produce more robust analyses of school progress and more valid evaluations than the previous model. Nevertheless, researchers broadly agree that SATS alone are not sufficiently reliable or valid indicators of school performance to be used in high-stakes personnel decisions, even when the most advanced models, such as VAM, are applied. Therefore, the success or failure of the program can be assessed by examining and analyzing results from the National Assessment of Educational Progress (NAEP).
Evaluation Plan
To understand performance based on standardized Academic Test Scores under the NCLB approach, the data in Table 1 (see Appendix A) were used for quantitative analysis. The data contain average Math and Reading scores for the fourth and eighth grades in the state of Texas for both the pre-NCLB and post-NCLB periods, the latter of which included VAM. Even though the NCLB program is premised on test-based accountability to close performance gaps, and improvements in NAEP scores were predicted, performance has not improved much in the post-NCLB period. As Table 1 indicates, negative performance trends are present, suggesting that a school’s performance should not depend entirely on student test scores.
Moreover, the analysis shows that average scores rose much more in fourth-grade math (from 229.6 pre-NCLB to 242.1 post-NCLB) than in fourth-grade reading (from 214.6 to 217.7). There was also a clear increase in eighth-grade math (from 272.1 to 284.7). However, eighth-grade reading declined, with its average score falling from 261.1 pre-NCLB to 258.9 post-NCLB. In both the fourth and eighth grades, therefore, math posted larger post-NCLB gains than reading, with eighth-grade reading considerably lower (a net decline). An analysis of variance in Table 2 (see Appendix B) also showed that the effect of NCLB on student performance, used to gauge teachers’ effectiveness and school performance, was not significant, F(1, 6) = 0.11169 < 5.987378, p = .750.
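The ANOVA in Table 2 can be checked with a short Python sketch. It assumes the eight period averages quoted above are the values being compared (one group per period) and uses scipy.stats.f_oneway; this is a reconstruction of the reported test, not the original analysis file.

from scipy import stats

# Average scores quoted above: 4th-grade math, 4th-grade reading,
# 8th-grade math, 8th-grade reading.
pre_nclb = [229.6, 214.6, 272.1, 261.1]   # 1992-2003 averages
post_nclb = [242.1, 217.7, 284.7, 258.9]  # 2003-2019 averages

f_stat, p_value = stats.f_oneway(pre_nclb, post_nclb)
f_crit = stats.f.ppf(0.95, dfn=1, dfd=6)  # critical value at alpha = .05

print(f"F(1, 6) = {f_stat:.5f}, p = {p_value:.3f}, F crit = {f_crit:.6f}")

Running this reproduces the reported result, F(1, 6) ≈ 0.112 < 5.987, p ≈ .750, confirming that the pre/post difference is not statistically significant.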
Recommendations
From the data analysis above, it can be concluded that NCLB does not support the perception that test-based accountability improves student learning or school performance. In support of these findings, research by Dee and Jacob (2011) concluded that, despite gains for students in fourth-grade math and smaller gains in eighth-grade math, there was no gain in either fourth- or eighth-grade reading. Although their study did not compare the NCLB enactment periods in the same way, the researchers noted that the impact of NCLB has fallen short of its mission and goals.
These findings call for prompt recommendations to reverse the negative pattern of academic achievement. First, the relevant academic stakeholders should deliberate on the purpose of student and school assessment before determining a suitable measure to employ. Although VAM in the post-NCLB period may deliver more information about teachers’ contributions to student performance, it is less effective at giving teachers guidance on how to improve their practice and bridge achievement gaps (Koedel & Betts, 2011). Moreover, NCLB should resist the pressure to reduce school performance or student learning gains to a single standardized Academic Test Score, since such a score captures only one measure of overall academic achievement.
To communicate the results of the analysis easily and effectively, workshops and seminars are the best strategies. According to Eales et al. (2017), dedicated workshop sessions and training provide an opportunity to resolve major differences in mandates and requirements. Because conflicting requirements can cause project delays, the seminars serve as opportunities for stakeholders to discuss and reach a compromise on how to move student learning gains forward. To ease the synthesis of results, creating a comprehensive student and school evaluation system is essential (Brandon et al., 2018). Specifically, the system should contain multiple measures that capture information not included in the current NCLB and student evaluation systems. During the seminars, therefore, stakeholders should be encouraged to consider the priorities of schools and students and the intended purpose of evaluation so that the system accomplishes its various goals, rather than relying on a single measure (SATS) to gauge performance.
References
Brandon, J., Hollweck, T., Donlevy, J. K., & Whalen, C. (2018). Teacher supervision and evaluation challenges: Canadian perspectives on overall instructional leadership. Teachers and Teaching, 24(3), 263-280. Web.
Darling‐Hammond, L. (2007). Race, inequality and educational accountability: The irony of ‘No Child Left Behind’. Race Ethnicity and Education, 10(3), 245-260. Web.
Dee, T. S., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), 418-446. Web.
Dougherty, S. M., & Weiner, J. M. (2019). The Rhode to turnaround: The impact of waivers to No Child Left Behind on school performance. Educational Policy, 33(4), 555-586. Web.
Eales, J., Haddaway, N. R., & Webb, J. A. (2017). Much at stake: The importance of training and capacity building for stakeholder engagement in evidence synthesis. Environmental Evidence, 6(1), 1-8. Web.
Ewing, J. (2011). Mathematical intimidation: Driven by the data. Notices of the AMS, 58(5), 667-673. Web.
Koedel, C., & Betts, J. R. (2011). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Education Finance and Policy, 6(1), 18-42. Web.
Markowitz, A. J. (2018). Changes in school engagement as a function of No Child Left Behind: A comparative interrupted time series analysis. American Educational Research Journal, 55(4), 721-760. Web.
The Nation’s Report Card. (n.d.). Texas review. Web.
Appendices
Appendix A
Table 1: Average standardized Academic Test Scores per year for Pre-NCLB (1992-2003) and Post-NCLB (2003-2019).
Source: The Nation’s Report Card. (n.d.).
Appendix B
Table 2: ANOVA Results.