Monday, April 23, 2012

Evaluation Rubrics for Measuring Staff Skills and Behaviors

I've had to create several rubrics for managers to use to measure a set of staff skills and behaviors.

Here's the setup. A library I am working with has defined several outcomes they want to achieve for staff: staff will do such and such behavior related to excellent customer service, or staff will know how to do such and such a task related to technology skills. We've decided to measure these outcomes using rubrics that managers will fill out for each staff member (think of how the SAT grades writing on a 1-to-6 scale). Because the library is trying to develop new skills and behaviors for staff, the purpose of the rubrics is not to be a performance review that punishes or rewards staff, but to identify areas where further training or focus is needed.

Everyone at the library is busy and has too much on their plate as it is. So we wanted a measurement that triangulates three qualities: quick, painless, and accurate. Rubrics seemed like an interesting way to go.

I did some investigating and started following the standard format that rubrics take. (btw, I found this website to be a great resource for baseline info on rubrics: http://www.carla.umn.edu/assessment/vac/evaluation/p_7.html.) Essentially, you end up with a scoring grid with two axes. On the vertical axis you have the different categories that behavior is measured against. If the overall outcome has to do with customer service, then the categories might be "attitude, accessibility, accuracy" (brief nod to the alliteration). On the horizontal axis, you have the scoring levels. There are often four or six levels (even numbers, to avoid the tendency to put everyone in the center), for example "exemplary, superior, very good, fair, needs work".

Looks something like this:

[Figure: an empty rubric grid, with category rows down the left and scoring-level columns across the top]

The final step is to fill in each cell of the table with a description of what the outcome would look like for each category at each level. The problem is, this leaves you with a pretty dense table of text. If you have three categories and four levels, that's 12 paragraphs of text that someone has to read through in order to take a measurement.

Now looks something like this:

[Figure: the same grid with all twelve cells filled in with paragraph-length descriptions]

Not the quick and painless solution we were looking for.
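
To make the density concrete, here's a minimal sketch in Python of what the classic full-grid format amounts to as a data structure. The category and level names just follow the examples above, and the cell text is hypothetical filler, not an actual rubric:

categories = ["Attitude", "Accessibility", "Accuracy"]
levels = ["Exemplary", "Superior", "Very good", "Fair"]

# One prose description per (category, level) pair: 3 x 4 = 12 paragraphs,
# every one of which a manager has to read before assigning a score.
full_grid = {
    (cat, lvl): f"What {lvl.lower()} performance looks like for {cat.lower()}..."
    for cat in categories
    for lvl in levels
}

print(len(full_grid))  # 12

Every cell is its own paragraph, so the reading load grows multiplicatively with the number of categories and levels.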

After wrestling with this for a while, here's the solution I came up with. Instead of describing each level for each category, I preface the table with a general description of each performance level. Then the table itself describes only the ideal performance for each category.

So managers start off reading something like this:

4 – Exemplary. Matches the Ideal perfectly. You would describe every characteristic with words like “always, all, no errors, comprehensive”.
3 – Excellent. A pretty close match to the Ideal, but you can think of a few exceptions. You would describe some characteristics with words like “usually, almost all, very few errors, broad”, even if other characteristics are at a 4 level.
2 – Acceptable. Matches the Ideal in many respects, but there are definitely areas for improvement. You would describe some characteristics with words like “often, many, few errors, somewhat limited”, even if other characteristics are at a 4 or 3 level.
1 – Not there yet. Some matches with the Ideal, but many areas where improvement is needed. You would describe some characteristics with words like “sometimes, some, some errors, limited”.

Then they look at the table of idealized characteristics and jot down their rating, which looks something like this:

[Figure: a simplified rubric, with one Ideal description per category and a blank next to each for the 1-to-4 rating]


The nice thing about this is that once they read through that initial description of performance levels, they can fill out any number of rubrics for various outcomes and know exactly what the scoring criteria are, without having to read something new each time. Triangulation of quick, painless, and accurate. Check!
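
As a rough sketch of why this scales better (again in Python, and again with hypothetical names and descriptions rather than the library's actual rubric): the level scale is written once and shared, and each rubric carries only one Ideal description per category.

# The shared 1-4 scale, defined once and reused for every rubric.
level_scale = {
    4: "Exemplary: matches the Ideal perfectly (always, all, no errors).",
    3: "Excellent: a close match, with a few exceptions (usually, almost all).",
    2: "Acceptable: matches in many respects (often, many, few errors).",
    1: "Not there yet: some matches, many areas to improve (sometimes, some).",
}

# Each rubric now needs only one description per category: the Ideal.
customer_service_ideals = {
    "Attitude": "Greets every patron warmly and stays patient under pressure.",
    "Accessibility": "Easy to find and approach at every service point.",
    "Accuracy": "Answers are complete and error-free.",
}

def describe(ratings):
    """Map a manager's 1-4 ratings back to the shared scale descriptions."""
    return {category: level_scale[score] for category, score in ratings.items()}

print(describe({"Attitude": 4, "Accessibility": 3, "Accuracy": 2}))

With three categories, the manager reads three Ideal paragraphs instead of twelve cells, and the cost of learning the four scale descriptions is amortized across every rubric they ever fill out.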

Note: drawings done using http://dabbleboard.com
