Do SAT Scores Reveal a Common Core Effect?

How valid is the claim that Common Core State Standards make students more college ready?

A common-sense claim made by supporters of the Common Core State Standards (CCSS) is that universities will know that high school graduates from CCSS states have met consistently high standards of academic achievement and are ready for college.
I decided to spend a couple of hours looking for data that support or refute the claim. First, I looked for comparisons of state standards from before the CCSS and found a 2008 EdNext article titled “Few States Set World-Class Standards – In Fact, Most States Render the Notion of Proficiency Meaningless,” by Paul E. Peterson and Frederick Hess. They ranked states against the National Assessment of Educational Progress (NAEP) and reported that South Carolina, Massachusetts, and Missouri had the highest standards and Georgia, Oklahoma, and Tennessee the lowest.
Next, I looked at how students from those six states did on the SAT, a barometer of college readiness. I looked at SAT data from 2007, the time of the Peterson and Hess report and before the CCSS, and from 2014, after all six states had adopted the CCSS. (Note that since 2014, South Carolina, Oklahoma, and Tennessee have repealed the CCSS.)
Here are the data. As in the discussion below, high-standards states are shown in blue, low-standards states in red.
Notice that across the board, students’ SAT scores in a given state barely changed between 2007 and 2014; only in Tennessee did scores improve, and by less than 0.2%. Conceivably the test has gotten harder, so that maintaining a score represents improvement, but absent evidence of that, there doesn’t seem to be any CCSS effect on SAT scores within a state before and after adoption. (A 2014 article in the Atlantic by Lindsey Tepe reports that a new Common Core–aligned SAT will not be released until 2016.)
Comparing SAT scores across states doesn’t reveal much of a CCSS effect either. Among the three high-participation states, Massachusetts students scored around 12% better than those in South Carolina and Georgia in 2007. But Massachusetts scores dropped 5% after the state adopted the CCSS, so in 2014 Massachusetts students scored only about 8% better than those in South Carolina and Georgia. All of this tightening among the three states resulted from Massachusetts’ lower scores.
Among states with low SAT participation, Missouri students scored only about 4% higher than their peers in Oklahoma and Tennessee in both 2007 and 2014.
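For anyone who wants to reproduce the arithmetic, here is a minimal sketch. The score values below are placeholders I chose only to be consistent with the percentages quoted above (on the 2400-point scale of those years), not the College Board’s actual published averages; substitute the real state means to run the comparison.

```python
# Percent-change comparison of state mean SAT scores, 2007 vs. 2014.
# NOTE: these values are illustrative placeholders consistent with the
# percentages quoted in this post, NOT the actual published averages.
SAT_AVG = {
    # state: (2007 mean, 2014 mean), 2400-point scale
    "Massachusetts":  (1550, 1472),
    "South Carolina": (1380, 1360),
    "Georgia":        (1380, 1360),
    "Missouri":       (1770, 1768),
    "Oklahoma":       (1700, 1697),
    "Tennessee":      (1700, 1703),
}

def pct_change(before, after):
    """Percent change from `before` to `after`."""
    return 100.0 * (after - before) / before

# Within-state change, 2007 -> 2014 (the "barely changed" observation)
for state, (y07, y14) in SAT_AVG.items():
    print(f"{state}: {pct_change(y07, y14):+.2f}%")

# Cross-state gap in a given year, e.g. Massachusetts vs. South Carolina
for year, i in (("2007", 0), ("2014", 1)):
    gap = pct_change(SAT_AVG["South Carolina"][i], SAT_AVG["Massachusetts"][i])
    print(f"MA vs. SC, {year}: {gap:+.1f}%")
```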
In a bit of a digression, PowerScore lists average SAT scores by college and university. Their list shows that the average scores of entering freshmen are nearly always higher than the state averages shown above. I suppose one could try to cross-check those scores against the states the students come from, in search of a CCSS effect.
This post is by no means a scholarly study. Rather, it’s what I found while spending a couple of hours on the internet searching for evidence to support or refute the claim that the CCSS prepare students for college. I chose the SAT because it’s an outside, impersonal metric, as opposed to things like GPAs, letters of recommendation, and involvement in extracurricular activities, which could well be better indicators of a student’s college readiness.
I hope I have been fair and will absolutely reconsider my conclusion in the face of superior arguments and data. But for now I can’t find anything to support the claim, which amounts to a refutation.
Comments:
  • ReneeMoore

    You’re My New Mentor Text

    I’m using this blog with my freshman comp students as an example of argumentative writing.

    One caveat to your research, though. It might fairly be argued that even though the states you list have switched to CCSS, students who have taken the SATs recently have not been exposed to those standards long enough for there to be a measurable effect. What would be your response to that?

    • SandyMerz

      Great point

      I’m flattered that you’ll be using the blog – thank you. You make an excellent point. I know some elementary teachers who are extremely pleased with the depth at which their students are learning math and, equally importantly, learning how to explain what they’re doing. It’s a great question how those students will do on the SATs in a few years. For that matter, it’s worth repeating this exercise every year to see if any CCSS impact is revealed. Thanks for this thought, too.

      • Amethyst

        Agree – perhaps more long-term exposure?


        My reaction was similar to Renee's. I wondered whether an entire K-12 exposure would make a difference. You know, though, I would not be at all surprised if it doesn't make much difference. I'm thinking that predictors such as childhood poverty play at least as big a role as standards do in predicting college readiness. I wonder what those numbers would look like alongside yours. Maybe that will be my next blog!


  • yoteshowl

    Thoughtful

    Sandy, if your goal was to be fair and thoughtful, you clearly have done it with a pretty good look at what’s available, and although you suggest it’s not a scholarly study, I’ve seen less careful “real studies”! That said, the two points that first popped into my head, valid or not, are…

    First, I’m skeptical (personally) that the type of thinking advocated in the CC will be reflected in current assessments as designed. I don’t see those assessments measuring what we’re hoping to see. That said, I’m not a high school/college “guy,” so my exposure is limited. Take that with a grain of salt.

    Second, and this one I do feel comfortable putting forward: by definition, we should not see the improvement in skills and learning for quite some time, particularly in math. That, to me, is what makes testing kids already (and evaluating teachers!!) on content that – by design – sits upon mountains of scaffolded learning beginning in kindergarten a ridiculous notion. Theoretically, for a student to get the benefit of CC, they should have had the entire thing, starting with that extensive conceptual learning in the early grades that is supposedly going to support higher performance down the road.

    I’ve seen too much already. Ironically, teachers are emphatically told, “You cannot skip content or sequence because it will undermine (even decimate) students’ ability to work at higher levels later.” Then we promptly test students as if they’ve got a decade under their belts with appropriately trained and supported teachers. This is where our test obsession really impedes our own internal logic, and it’s unfortunate.

    I’m reasonably sure that we will set expectations for a 2-3 year implementation dip when it will really be much longer. But we don’t do implementation dips very well, do we? I’m just hoping teachers don’t absorb yet another inundation of “change” and then have the rug pulled – yet again. Ideally, if there are issues, we’d fix them rather than ditch the entire thing. And you can bet there are some issues. Aren’t there always? Thanks for a great post. Got me thinking, obviously!

    • SandyMerz

      More to do

      Thanks, Mike, for your comment. Others, like Renee above and on Twitter, have pointed out that it’s too soon to tell, which is a fair comment. But I don’t think it’s too soon to start looking. Very soon, like tonight or tomorrow, I’ll spend another couple of hours looking at year-by-year SAT data for the states that have been using the CC the longest.

      Others have also pointed out that there hasn’t been much scaffolding of the CC. I live that with my 8th graders who take algebra for high school credit. The CC algebra standards are easily a year or two ahead of what they’ve always been. In another six years that may be fine, but right now it’s hard to convince my students not to worry about whether the AZMerit is a fair measure of their achievement. To make matters worse, they have to take both the 8th grade assessment and the algebra end-of-course assessment. I told them once, right as my principal walked into the room, that there would be schools in which not a single student met the standards, whenever the cut scores are figured out. My principal didn’t bat an eye and agreed completely.

      Thanks again for your thoughtful comment – hope to see you soon.


  • Ed Fuller

    SAT, participation rates, and Simpson’s Paradox

    I like your approach to this problem – if only the media would take the time to do this!!

    To rule out other factors that might explain the scores, you would also need to examine the percentage of students taking the SAT (a primary driver of scores) as well as changes in student demographics that can result in Simpson's Paradox. Simpson's Paradox simply says that the overall scores could go up (or down) even though scores for subpopulations of students go down (or up). What happens in some states is that the percentage of minority students increases and the overall SAT scores remain flat even though the scores for all subgroups have increased over the same time period. I'd encourage you to explore this and integrate it into your next post about the issue so we can see what you find.
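    To make that concrete, here is a minimal sketch of the paradox with entirely made-up subgroup numbers (they are illustrative only, not real state data): every subgroup's mean rises, yet the overall mean falls because the mix of test takers shifts.

    ```python
    # Toy illustration of Simpson's Paradox -- all numbers are made up.
    def overall_mean(subgroup_means, subgroup_shares):
        """State mean = subgroup means weighted by each group's share of test takers."""
        return sum(m * s for m, s in zip(subgroup_means, subgroup_shares))

    # 2007: two hypothetical subgroups of test takers (2400-point scale)
    mean_2007 = overall_mean([1550, 1350], [0.70, 0.30])  # -> 1490.0

    # 2014: BOTH subgroups score 20 points higher, but the lower-scoring
    # group's share of test takers grows from 30% to 50%.
    mean_2014 = overall_mean([1570, 1370], [0.50, 0.50])  # -> 1470.0

    # The overall mean FALLS even though every subgroup improved.
    print(mean_2007, mean_2014)  # 1490.0 1470.0
    ```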