Lessons Learned from Program Evaluation and Classroom Formative Assessment

‘Tis the season to talk about testing. Or is it? Must our focus on assessment be only at the end of the year?

As educators and researchers at WestEd, we recognize the value of continuous evaluation throughout the lifecycle of projects, as well as the importance of formative assessment throughout instruction for helping students learn. Although there are obvious differences between classroom assessment and programmatic evaluation, we can draw parallels in four key areas to highlight valuable insights.

Never stop evaluating.

Ongoing evaluation throughout a project’s lifespan is crucial. In our experience at WestEd, projects that engage in continuous evaluation are more likely to stay on track through course correction, be iteratively refined along the way, and be more efficient with time and other resources. On the other hand, postponing evaluation until the end of a project can lead to significant challenges and missed opportunities. Without ongoing evaluation, it becomes difficult to make timely and informed decisions, implement necessary changes, and maximize a project’s impact.

Similarly, embedded classroom formative assessment is most effective when it’s ongoing. Formative assessment conducted throughout the learning process helps ensure students are progressing toward learning goals. These checkpoints along the way help teachers diagnose students’ progress, identify challenges, develop solutions, and adjust their instruction accordingly.

[Image: Three teachers talking in a school. Caption: Discussing assessment data and deciding how to use it is an important part of the process.]

Use the data.

We’ve learned that some of our programmatic evaluation clients don’t always share the data we generate with the very people it was collected from and about, the people who could use it as formative feedback. For example, when we evaluate a district’s professional learning (PL) program, district leaders might not share the feedback with the PL providers and participants. Closing the loop between evaluation findings and program implementation is critical for meaningful and sustainable change. This principle applies to evaluating student learning as well. Teachers and students must have access to assessment results early enough to make adjustments and improve.

Ensure cultural and cognitive relevance.

Designing clear and relevant assessment items is essential in both programmatic evaluation and classroom assessment. The language and concepts in the items need to make sense to the intended audience and be vetted by cultural experts who understand the context. This is especially true when administering surveys or tests to minors and across diverse cultural and instructional contexts. For example, if students are asked to predict people’s reactions to rain, their predictions will depend on their context. Students living in temperate climates might predict that rain would make people sad because it could cancel their outdoor plans. However, students living in arid, hot climates might predict that rain would make people happy.

To be useful for both learning and teaching, assessments must be culturally relevant to students and presented in a modality that they comprehend and can respond to proficiently. Ensuring cultural relevance can enhance the effectiveness of both program evaluation and classroom formative assessment. One way to do this is to provide training on culturally responsive assessment practices for assessment developers and administrators; another is to involve experts, such as community elders, who can help clarify which contexts will make sense to students.

Align to goals.

Last but certainly not least, alignment between interventions, evaluation instruments, and client goals is key in programmatic evaluation. When these elements are out of sync, the result can be confusion, inefficiency, and data that don’t address the client’s goals. Alignment is similarly central to classroom assessment. The instruction, the assessment, and the vision for teaching and learning all need to be in sync to ensure that we’re measuring what is intended. For example, we learn nothing about students’ problem-solving skills if they are only asked to repeat memorized solutions.

Since we know the importance of high-quality formative assessment for students, perhaps there are ways we can all apply the four principles above to our own work, whether in program evaluation, school administration, or curriculum development. How can we evaluate our own projects more frequently and accurately? How can we make sure all of our collaborators are getting the feedback they need and that the feedback is based on equitable measures?

What about you? Are there aspects of good assessment practices that could enhance your own work?
