Reviewing VDOE's school performance framework (pt 1)

The Virginia Department of Education (VDOE) recently – within the past year or so – completely revamped their school performance and accreditation system. I’m not going to describe all of the changes in detail, but the gist is that we now have 2 different systems – an accreditation system and a “school performance and support framework” (school performance system). The accreditation system is basically just a measure of inputs – are schools and school divisions providing the inputs they’re required to provide, as described by the VDOE Standards of Quality?

The performance system is a framework for measuring different kinds of outcomes. The 3 main types of outcomes the system takes into account are:

  • Mastery,
  • Growth, and
  • Readiness

There’s also a graduation component at the high school level.

In my next few posts, I want to give my impressions of the new system and discuss what I think it gets right, what it gets wrong, where it’s transparent, where it’s sketchy, and everything in between. 

I have this disclaimer elsewhere, but it probably doesn’t hurt to add it here as well. These are my opinions, not the opinions of my employer, my colleagues, my family, my dog, the people I politely nod at in the grocery store, or anyone else.

Mastery

In this post, I’ll start with the mastery component of the framework. Mastery refers to students mastering/demonstrating proficiency in specific content as prescribed by Virginia’s Standards of Learning (SOLs). Basically, it attempts to measure the extent to which students are learning the content we expect them to learn in Virginia schools. If you check out the overview of the framework on this page, you can see that, at all school levels, mastery accounts for at least half of the system’s available points. It’s the most heavily weighted component of the system, usually by quite a bit.

The measure itself uses data from the spring SOL tests – Virginia’s annual state standardized assessments. Schools are awarded points based on how students perform on the tests. Generally, these point values are:

  • 1.25 points for “Pass Advanced”
  • 1 point for “Pass”
  • 0.75 points for “Fail”
  • 0.25 points for “Fail/Below Basic”

And then the final score for the mastery component is calculated by dividing the points earned by the number of tests taken (per content area). There is also a sub-component in the Mastery component related to English Learner (EL) progress, but I’m going to skip over that for now.
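In case it helps to see the arithmetic in one place, here’s a quick sketch of that calculation in Python. (The level labels and the function name are mine, not VDOE’s – this is just the points-earned-over-tests-taken logic described above, for a single content area.)

```python
# Points awarded per test, by SOL performance level (per the framework).
POINTS = {
    "pass_advanced": 1.25,
    "pass": 1.00,
    "fail": 0.75,
    "fail_below_basic": 0.25,
}

def mastery_score(counts):
    """Mastery score for one content area: points earned / tests taken.

    `counts` maps a performance level to the number of tests at that
    level, e.g. {"pass": 150, "fail": 30}.
    """
    tests_taken = sum(counts.values())
    points_earned = sum(POINTS[level] * n for level, n in counts.items())
    return points_earned / tests_taken
```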

I have a lot of gripes with the current VDOE, and with this performance system specifically. But this component, the mastery component, feels…surprisingly reasonable to me.

There are lots of purposes for school. We want students to learn morality. We want them to learn how to be good citizens. We want them to develop social skills and study habits and a strong work ethic. We want them to learn how to cope with both failure and success. But I’d imagine, for most people, the primary purpose of school is for students to learn academic skills. Maybe this means content-specific skills like factoring polynomials or summarizing texts or titrating solutions. Or maybe this means higher-order skills like critical thinking or creativity, which we’d probably end up measuring through content-area standards and assessments anyway. Regardless, it feels logical to me for the performance system to prioritize students’ mastering academic skills and standards.

There are certainly arguments we could make about the extent to which the current Virginia SOL tests support valid inferences about content mastery. My general take is that no test is perfect, but Pearson – the company that creates the SOL assessments and the administration platform – employs lots of people with PhDs in psychometrics whose entire job is to create good tests. Pearson and its psychometricians also have a financial interest in making good tests, so the SOL assessments are probably pretty good. I'm not a professional psychometrician, but I've taken PhD-level measurement courses, and I vaguely remember reading an SOL technical report years ago and feeling like everything checked out. Of course, the VDOE has since overhauled its website (read: stripped it of information that should be publicly available and easily accessible), and now it seems you have to email the Office of Assessment to get a copy of the technical report.

Anyway. The real appeal of the mastery component to me is its simplicity. If we assume that the SOL assessments are reasonable measures of content mastery (which, again, is something we could debate), then the rest of the mastery component is remarkably straightforward and elegant, particularly compared to the previous system. The school earns “full credit” – 1 point – for each test a student passes; it gets a small bonus for each “Pass Advanced” test, and it gets less (or considerably less) than full credit for each failing (or below-basic failing) test. There’s no situation, though, in which a school would get 0 points for a test that a student took, which aligns with the contemporary view (and my own view) of zeros as overly punitive and not actually representative of the skills they purport to represent.

As a hypothetical, assume a school had 75% of its students pass their math tests, an additional 10% pass advanced, and 15% fail. If we do the multiplication (75 × 1 + 10 × 1.25 + 15 × 0.75), this school would earn 98.75 points out of a “possible” 100 – although, since Pass Advanced tests earn 1.25 points each, a school could technically exceed 100. This feels about right to me – 85% of students passed, and 10% of all test-takers passed advanced, warranting a bit of a point bump. You could maybe even argue that the point allocation here is too generous. And what’s more, it’s something we can easily calculate with some back-of-the-napkin math that only requires us to use students’ current test scores.
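Running the hypothetical through the sketch from earlier gives the same answer:

```python
# Per 100 tests: 75 pass, 10 pass advanced, 15 fail.
score = mastery_score({"pass": 75, "pass_advanced": 10, "fail": 15})
print(round(score * 100, 2))  # 98.75
```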

There are a few edge cases and unique scenarios that we’d need to account for in the wild, but very few. This component is actually remarkably straightforward and, dare I say, transparent.

This feels particularly refreshing when compared to the previous system. The foundation of this previous system was a simple pass rate – where students earned a point for passing and nothing for failing. And, notably, nothing extra for passing advanced. Then there were all sorts of arcane rules where students who failed could earn their point for growing “enough” (as prescribed via a seemingly arbitrary growth table comparing last spring’s test score to this spring’s test score), or they could get bonus points if they were designated a “recovery” test, or if they were an EL who made sufficient progress on a different test, or if they spoke exactly the correct incantation under a full moon…

Not to mention that these calculations then had to be repeated for every student demographic subset.

I’m going to be much more critical of both the Growth and Readiness components of the new performance system in later posts, but I have to give credit where credit is due. The mastery component – and the way it's operationalized – is straightforward, intuitive, and an improvement over the previous approach.
