Since the release of A Framework for K–12 Science Education a decade ago, Crosscutting Concepts (CCCs) have become part of the national conversation about science education. Almost all states now include CCCs in their state science standards, and more instructional materials include CCCs every day. There is a difference, though, between including the CCC words (e.g., patterns, cause and effect) somewhere in a document and fully supporting students to learn CCCs. Ten years down the road, it still doesn’t seem like we as a field are supporting student learning in all three dimensions equally.
As I mentioned in one of the earliest blogs in this series, CCCs have always been a part of science education. Before they were included in learning goals, though, very few students noticed them, and therefore very few could use them as tools to make sense of something (e.g., how can understanding that “a system can be described in terms of its components and their interactions” help me figure out how a solar still works to produce fresh water?). Now we’ve worked together to clear one hurdle (inclusion in learning goals), but we still have other hurdles in the way, such as inclusion in assessment.
What should we measure in classrooms?
A well-known principle in organizational development is “you manage what you measure.” Education systems tend to focus on what is measured, and behavior often follows metrics (even if those metrics aren’t the ones that matter!). So, what happens if we try to teach CCCs but don’t measure students’ progress in their CCC learning? Will all students have the same access to rich experiences with CCCs as sense-making tools?
If CCCs are important learning goals, then monitoring student progress is an important part of reaching those goals. Formative assessments can help teachers and students see progress. They can also provide information that helps teachers figure out how to modify instruction to meet student needs.
Let’s look at a specific example. A state learning goal might say that by the end of grade 8, students need to be able to use the CCC element:
“Relationships can be classified as causal or correlational, and correlation does not necessarily imply causation.”
If this really is a concept all students need to have in their mental toolbox, then teachers and students need some accurate way to determine to what degree students understand and can use this concept. Classroom formative assessments can help in this process.
Let’s compare two assessment targets to see which one might result in more useful information about student progress toward this learning goal:
Both Targets A and B relate somewhat to the CCC category of Cause and Effect, but they might not relate to the same grade-level learning goals. If students respond to assessments related to Target A, they probably won’t show understanding of the difference between causation and correlation (the middle school learning goal quoted above). In that case, teachers wouldn’t have much information to use to provide helpful feedback to students. The teacher might assume that if a student uses words like “cause,” that student has mastered the CCC and doesn’t need any additional instruction. Target B, though, might result in more information about student understanding of the middle school-level CCC. Related student responses might help the teacher figure out how to support the next steps of student learning.
Shouldn’t all NGSS assessment targets be 3D?
Although there is consensus in the field that assessments should be multi-dimensional, there is some debate about including CCCs in assessments. As with any potential learning, if CCCs are not important learning goals in a lesson, then they are not important to include in that lesson’s formative assessment. However, in any scenario, assessment prompts should match assessment claims, and assessment claims should ideally match the learning goals. To support student learning, teachers need accurate information about what assessments are measuring.
If we’re not trying to teach a CCC, and our assessments don’t intend to measure a CCC, students and teachers would be better supported by assessment claims that don’t list a CCC as an assessment target. For example, since Target A would probably only result in student use of an elementary-level CCC element (like “Events have causes that generate observable patterns”), it would be helpful if the Target A claim didn’t include a middle school-level CCC element. And that’s fine! Students have three to four years to build toward full understanding of each CCC element, so there’s room to help student understanding develop slowly over time. This includes room to let students apply their prior CCC knowledge or to start building a foundation of CCC knowledge if they’re new to Framework-based teaching and learning.
Bottom line: It’s ok if not every assessment prompt measures progress toward a CCC learning goal. But it’s important to be clear about what we really want students to learn. Once we figure that out, let’s monitor and support all students’ progress toward that learning.
What do you think? How often should CCCs be assessed?