The New SAT Scores: How to Compare Apples and Oranges Using Veera
2016 brought changes to how SAT scores are used in enrollment and retention models. In this article, I'll show you how to compare the new scoring method with the old ones so that your predictive models remain accurate.
If you have already encountered this update, you may have noticed an issue when comparing historical records to incoming Fall 2016 freshmen with respect to students' SAT scores. If you shifted with the College Board in 2005 to a 2400-point scale, you may notice that the scale has now reverted to the original 1600. Since SAT scores are something we rely on often for enrollment and retention models, this is worth being aware of as we make the transition yet again to the newer format.
Even if you were set in your stubborn ways, resisted moving to the 2400 scale, and were still using a 1600 scale, the format of the test has changed extensively: an old SAT Composite score of 920 on the 1600 scale now translates to a 1000 on the newer test. This is based on the College Board's Equipercentile Concordance method (which you can learn more about HERE), which relates scores on each test that have the same percentile rank. In other words, a 1000 on the newer test falls at the same percentile rank as a 920 on the old one. This complicates things when we want to include historical records in identifying enrollment likelihood using these variables.
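To make the equipercentile idea concrete, here is a minimal Python sketch of the method's core logic: match each score on the old scale to the new-scale score with the nearest percentile rank. The cohorts below are tiny synthetic lists for illustration only, not real College Board data, and the function names are my own.

```python
import bisect

def percentile_ranks(scores):
    """Map each distinct score to its percentile rank
    (fraction of scores at or below it) within the list."""
    ordered = sorted(scores)
    n = len(ordered)
    return {s: bisect.bisect_right(ordered, s) / n for s in set(ordered)}

def equipercentile_concordance(old_scores, new_scores):
    """For each distinct old-scale score, find the new-scale score
    whose percentile rank is closest -- the core idea behind
    equipercentile concordance."""
    old_ranks = percentile_ranks(old_scores)
    new_ranks = percentile_ranks(new_scores)
    return {
        old: min(new_ranks, key=lambda s: abs(new_ranks[s] - rank))
        for old, rank in old_ranks.items()
    }

# Tiny synthetic cohorts (NOT real College Board data):
old = [800, 920, 920, 1040, 1200, 1350]
new = [880, 1000, 1000, 1120, 1280, 1420]
print(equipercentile_concordance(old, new)[920])  # 1000 in this toy example
```

In these toy cohorts, a 920 on the old scale and a 1000 on the new scale both sit at the 50th percentile, so the method pairs them, mirroring the concordance described above.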
To explain this a little further, let's look at the relationship between enrollment and SAT Composite for Fall 2015 and Fall 2016 admitted applicants using the old and new SAT scores, both on a 1600-point scale. At first glance, there appears to be a strong relationship between the two:
However, upon further inspection we can see that there is in fact a distinct difference in the relationship when viewing this broken out by term:
As we noted before, a score of 1000 on the new test equates to a 920 on the old, and we can see that shift here in the data. The good news is that these scores are very easy to compare year over year using Rapid Insight software. The comparison is done with a simple function in our data blending and preparation tool, Veera, which can convert old scores to the new scale or vice versa.
Here are a couple of screenshots from Veera that show you how easy it is to handle the SAT calculation. All that is really needed is a fairly simple transform node with a calculation, but below is how one might incorporate that using the Banner test score table:
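The Veera transform node itself isn't reproduced here, but the calculation it performs can be sketched in Python: look up an old-scale score in a concordance table and linearly interpolate between the bracketing points. Only the 920 → 1000 pair comes from this article; the other table entries and the function name are placeholders, so substitute the College Board's published concordance values in practice.

```python
def old_to_new(old_score, table):
    """Convert an old-scale SAT score to the new scale by linear
    interpolation between the bracketing concordance points.
    `table` maps old-scale scores to new-scale scores."""
    pts = sorted(table.items())
    if old_score <= pts[0][0]:   # clamp below the table's range
        return pts[0][1]
    if old_score >= pts[-1][0]:  # clamp above the table's range
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= old_score <= x1:
            return round(y0 + (y1 - y0) * (old_score - x0) / (x1 - x0))

# Illustrative table -- only 920 -> 1000 comes from the article;
# the other entries are placeholders for the published concordance.
concordance = {400: 400, 920: 1000, 1600: 1600}
print(old_to_new(920, concordance))  # 1000
```

Applied as a derived column over the historical test-score table (the role the transform node plays in Veera), this puts every record on the new scale before modeling.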
The results show what we were hoping for: an equal value across all historic records:
With these functions added to our jobs, we can continue using SAT scores for years to come, or at least until they change them again. But when they do, we know we’ll still be able to handle it in Veera. If you are interested in having a copy of the functions, contact us at firstname.lastname@example.org.
If you are already a Rapid Insight user, you can also download the functions from the Rapid Insight Collaborative Cloud, a repository of functions and jobs built by Rapid Insight staff and customers, available for free to any user of the software.
How are you managing the SAT changes in your predictive models? We’d love to hear your insights.
Senior Data Analyst