Selected SLC Research
Policy Analysis | October 29, 2013
Passing (on) the Test: The Future of Common Core Assessments
During the past year, states' continued pursuit of the Common Core State Standards (CCSS) has garnered considerable attention, and not a little criticism. Among the most frequently raised concerns are the assessments considered part and parcel of the new standards. Specifically, critics are worried about the cost, practicality, frequency, and number of the new consortium-crafted assessments aligned to the standards.
States are required by federal law to assess students annually in reading and mathematics in grades 3-8, and once in high school, and to publicly report the results of these assessments. School-, district- and state-level reporting must be done on an aggregate and subgroup level, providing the public and policymakers with a snapshot of how schools are performing.
Under current federal law there has been considerable pushback over assessments, including criticism of their validity, their impact on instruction, and the amount of time students spend on testing. There also are concerns that tests designed to measure student progress or diagnose children's learning needs are being applied in ways for which they were not intended, most notably to measure teacher performance.
As states implement the CCSS in English language arts and mathematics, new assessments based on these learning frameworks are necessary for states to accurately gauge student learning. A key selling point of the new standards was that states could share costs for a variety of educational services, including assessments, and thus achieve savings over the current model. Equally, it was envisioned that new assessments would deliver better data on student learning to help teachers adjust instruction, offer targeted supports, and plan early interventions before significant learning deficits occur, as well as provide a better picture of how prepared students are for college or careers.
In March 2010, less than a year after the standards initiative was launched by the National Governors Association and the Council of Chief State School Officers, the U.S. Department of Education announced a competitive grant for consortia of states to create new assessment systems. The grant required a testing system that would return timely results with information teachers could use to adjust instruction for specific students. The competition also mandated the capacity to measure individual student performance and progress, and to include accommodations for English language learners and students with disabilities.
From this competition, two consortia were eventually funded. The Partnership for Assessment of Readiness for College and Careers (PARCC) consists of 18 states and the District of Columbia; the Smarter Balanced Assessment Consortium consists of 24 states and the District of Columbia. Both consortia are developing computer-based assessments, which require each test-taker to have a computer station connected to the Internet. (An overview of the two major consortia, along with consortia on assessments for students with cognitive disabilities and for English language learners, is available here.)
While the technical specifications for both consortia are not especially demanding, the expectation that every student will have an Internet-connected computer on which to take the assessment poses a challenge for some schools, especially those in rural communities.
In recent months a handful of states have dropped out of, or reduced their participation in, the PARCC consortium, including Alabama, Florida, Georgia, North Dakota, Oklahoma, and Pennsylvania. A major factor in states pulling back from the new assessments is the computer and bandwidth requirements, which would be costly (if not logistically impossible) for some states to meet in time to administer the tests for the first time in 2015.
The tests also pose a very different type of question for students, one that is open-ended and more complex, which will almost certainly lead to more students being identified as not on track to be college-ready. Kentucky, which in 2012 became the first state in the country to use Common Core-aligned tests, witnessed a one-third drop in the number of students scoring at the proficient level in reading and math at both the elementary and middle school levels. Kentucky, a partner in the PARCC consortium, relied upon a test developed for it by NCS Pearson that was fully aligned with the CCSS; the state is seen as a bellwether for how others will fare as they begin to roll out the new assessments. (In the second year of the new assessments, scores in reading and math rose, albeit slightly. Kentucky Commissioner of Education Terry Holliday observed in September 2013 that it could take students as much as five years to catch up to the adjustments in the standards.)
Alabama withdrew from both consortia in February 2013 and became the first (and thus far, only) state to adopt the ACT Aspire assessment, a Common Core-aligned assessment produced by ACT Inc. that will serve as the state's accountability assessment in grades 3-8. The ACT entry into the Common Core assessment market provides states a non-consortium option for aligned assessments, one anchored by the ACT, the most commonly taken college entrance examination in the country. ACT has promised that the ACT Aspire summative assessments will be available for administration in spring 2014, with formative classroom and interim assessments available in fall 2014.
Georgia has indicated that it will develop its own assessment and work with states in the region on a regional assessment. In dropping the new assessment, Georgia Governor Nathan Deal cited the high cost of implementing the PARCC assessments, reported to be as high as $27 million, as well as technical concerns. The per-pupil cost of the PARCC assessment is nearly triple what Georgia currently pays for student assessments.
Doing the numbers
A major selling point for the assessment consortia was that they would save states money through efficiencies of scale. In reality, however, most states have historically paid little for assessments. On average, states spend about $27 per pupil on assessments, according to a Brookings Institution report, or less than one-quarter of one percent of K-12 spending. For many states in the region (including Alabama, Georgia, Kentucky, Louisiana, Missouri, Mississippi, North Carolina, Tennessee, and Virginia), the cost of assessment is already below the national average. The PARCC assessments are estimated to cost $30 per pupil, with the Smarter Balanced assessments costing slightly less, although these estimates do not include costs associated with administration and scoring, which states either will have to provide directly or secure from vendors.
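As a rough back-of-the-envelope check of the "one-quarter of one percent" figure, the arithmetic can be sketched as follows; the per-pupil K-12 spending amount below is an assumed round number for illustration, not a figure from the report:

```python
# Back-of-the-envelope check: assessment cost as a share of K-12 spending.
# The $27 assessment figure is the national average cited above; the
# per-pupil K-12 spending value is an assumption for illustration only.
assessment_cost_per_pupil = 27.0      # national average (Brookings report)
k12_spending_per_pupil = 11_000.0     # assumed round figure; varies by state

share = assessment_cost_per_pupil / k12_spending_per_pupil
print(f"Assessment share of K-12 spending: {share:.2%}")
# → Assessment share of K-12 spending: 0.25%
```

Under that assumption, the share comes out to roughly a quarter of one percent, consistent with the report's characterization.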
States going it alone on new aligned assessments face potentially higher costs than they currently pay, a factor in Georgia's announcement that it would seek other partners for its new assessments. According to the Brookings report, states collaborating on shared assessments would realize significant savings, particularly smaller-population states, which have fewer students across whom to spread the fixed costs of test development and implementation. Sharing assessments to realize efficiencies is not a new concept: prior to the Common Core assessment consortia, nine states in the American Diploma Project Network created a multi-state common assessment for Algebra II, six additional states joined the assessment in 2005, and a smaller group created a common Algebra I assessment in 2007.
Cost is not the only consideration for states in selecting assessments, however. The two consortia-based assessments have levels of complexity and detail that offer considerably richer data than current assessments provide. Beyond the anticipated benefits to teachers and students as a guide to instruction, this more fine-grained data should be better suited to serve as a component of a system of teacher evaluation. This is particularly important as states shift to using the results of student assessments to determine teacher tenure and performance bonuses. A number of questions have been raised about how well-suited current state assessments are for this purpose, underscoring the need for new tests that can deliver sufficient detail on student learning and performance.
Creating a high-quality, highly reliable assessment system at low cost is perhaps not realistic. Both consortia have scaled back the scope of their assessment models, in large part due to concerns over costs and time spent on testing. This second matter, time spent on testing, has proven incendiary in state capitols, as parents, teachers and policymakers grapple with the total amount of time committed to testing and its impact on instructional time. Both Common Core assessment consortia likely would increase time spent on testing slightly, although this would vary from state to state and by district within states. The PARCC assessment is likely to take 4-6 hours over nine sessions, with the Smarter Balanced tests taking roughly half that time over several sessions. Because many districts administer formative assessments throughout the year to determine student progress, total time spent on testing could decrease over the year as the new assessments replace existing secondary tests.
In order for the consortia assessments to be ready for a national rollout in 2015, a sufficient sample of students must take them on a pilot basis to determine whether they are aligned and valid. Select schools have already begun piloting the assessments, providing the consortia with valuable information about how the tests function and offering a glimpse into the technological demands of a computer-based test. Among the lessons learned are the need for upgraded WiFi, expanded bandwidth, and more and newer devices, as well as increased staff training on some basics of the system, such as logging in for the assessment. These "bumps on the highway" provide a valuable preview of how a broad rollout will proceed, and they have heightened concerns about glitches along the way to a new, more complex assessment model.
A more comprehensive pilot test for the consortia comes in spring 2014, with more than 3 million students included in this larger trial. The results from this administration cannot be used for accountability or student measurement purposes, placing states in a "double testing" pinch in which they would have to assess students once for the pilot as well as on a legacy test for accountability purposes, greatly expanding the costs, disruption and time spent on standardized tests in 2014.
In response to this situation, California has moved to suspend accountability testing for one year, a move that drew criticism, and a potential penalty, from the federal government. The California Assembly approved legislation in September to replace the current state tests with the new Common Core-aligned assessments from the Smarter Balanced Consortium which, because they would be administered as a pilot, do not provide adequate measures for accountability purposes. The U.S. Department of Education has threatened to withhold funds from the state as a consequence.
Other states have received waivers from the double testing requirement, allowing students to be assessed on either a legacy or pilot test, with accountability interventions placed in a holding pattern for one year. This flexibility is contingent on the state having an approved waiver from certain requirements of the No Child Left Behind Act, which California and six other states do not have. As waiver states have exited the testing consortia, they are being asked to demonstrate to the Department of Education how they will meet the requirements of their waiver to have annual, statewide, high-quality assessments in place by the 2014-2015 school year.
The Road Ahead
In most states, the 2013-2014 academic year will be the final one for assessments based on pre-Common Core State Standards in English language arts and mathematics (a few states already have made the leap; a few more will do so this year). The transition to new assessments on new standards is expected to result in a drop in overall scores, due largely to the greater rigor of the new standards as well as the disruption any new assessment system can cause. As has been noted, the initial drop in Kentucky moderated somewhat in the second year of the new assessments.
Most states nationally will implement one of the two consortia assessments next year. Among those states considering a third path, as of fall 2013 only Alabama had selected a test. In the region, Florida, Georgia and Oklahoma, which have withdrawn from the consortia, and Texas and Virginia, which are not Common Core states, will not be participating in either. Together these six states represent more than one in five of the nation's assessed students.
With the vast majority of states using one of two aligned assessments, cross-state comparisons of progress toward college and career readiness will be possible for the first time (although comparisons between the two consortia assessments may not be entirely reliable). For the South, however, as states investigate alternative assessments, measuring progress as a region may become more difficult. The opportunity exists for Florida, Georgia and Oklahoma to work together on a regional assessment, or to join Alabama in adopting the ACT Aspire exam, creating a larger pool for comparison.
Additionally, as other states grow uneasy with the CCSS and move away from the consortia assessments, these actions will add urgency to the adoption of new assessments aligned with state standards. As time grows short for such shifts, the disruption and implementation difficulties increase, and the need for states to budget sufficient resources for unforeseen contingencies is paramount. As state legislatures head into their 2014 sessions, policymakers will seek to resolve the outstanding questions on school testing and provide teachers, students and parents with certainty about what the assessment regimen will be going forward.