This is a true story: the assessment coordinator at a small college wanted to find an excellent sample — the best example of student work in an ePortfolio that met the rubric criteria for the institutional student learning outcome of Written Communication.
By the time she reached the bottom of her pile of faculty-selected artifacts, she came to a rather alarming conclusion: either faculty regarded the institutional rubric for Written Communication as merely a suggestion and were improvising their assessments (instead of using the rubric to assess student work), or, worse, students were entirely missing the mark in their writing and no one was paying attention!
With student artifacts lacking, the assessment coordinator did the next best thing: she referred to the institutional rubric and created her own student work sample!
You are probably thinking, not necessarily incorrectly, that this assessment coordinator may be a bit of a control freak; nevertheless, this story is relevant for four reasons:
First, the assessment coordinator did her best to identify an excellent sample (student- or self-made) by looking at work samples receiving a high score on the institutional rubric. Not finding a suitable candidate, she created one.
Second, the newly created work sample magnified the criteria specified in the rubric at the “target” performance level.
Third, the new work sample exposed flaws that were inherent in the institutional rubric.
Fourth, using the rubric to assess the new work sample demonstrated to faculty and students how the rubric is, and should be, used for assessment.
Let’s break this down into steps.
Step 1: Identify the best student work sample.
When in doubt, create your own! In our story, the assessment coordinator created a sample by starting with the institutional rubric (hopefully a good one, and the subject of another article!). Important: a good rubric will define the criteria by which one can either assess a student’s work or create the best work sample.
Step 2: Magnify rubric criterion at Target level.
Metaphorically speaking, the criteria as defined at the Target level of a rubric magnify or “zoom in” on the expectations we should have for student work. What is the Target level? Let’s assume a five-level rubric is used to create the student work sample. Level five, the highest performance level of the rubric, is reserved for student work that “exceeds” the Target-level expectations; Target, in this case, is level four.
Put simply, very few students should receive a level five score; rather, we would expect a majority of students to ultimately attain level four or Target.
Referencing only the “target” or “competent” level of each criterion, the assessment coordinator created the student work sample, using the indicator language to refine her product. Similarly, she might also have assessed an existing student work sample with the understanding that her rating for each criterion should never dip below level four, or Target.
Step 3: Expose possible flaws in the rubric.
Thanks to the rigorous nature of this exercise, the assessment coordinator exposed some flaws inherent in the institutional rubric, which could have been the cause of the discrepancies found in the faculty assessments.
She observed that the indicator language of “constructs coherent written and oral narratives for general and specific audiences” was too vague. What do “coherent written and oral narratives” look like? And should “oral narratives” be omitted, since the focus is on student writing?
Adding the following revision provided more clarity: “Specific introduction, sequenced material within the body, transitions, or conclusion is intermittently observable within the written work.” Talking with faculty also confirmed that there was, not surprisingly, some uncertainty around the ambiguous language.
Step 4: Demonstrate how the rubric is used for assessment.
The final and, perhaps, most productive step is to demonstrate how the rubric was used to create an excellent sample or to assess one.
This final step is especially beneficial to students, who need to understand the reasoning of faculty when their work is assessed. Faculty benefit, too, because they need to see assessment in practice. In the margins of the rubric, the assessment coordinator jotted notes and proactive comments citing how the student’s work meets each criterion at Target.
For the revision statement noted in step #3, she offered the following proactive comment to the student: “Use of transitional devices was evident throughout the writing, such as use of the following: therefore, similarly, in addition, and in conclusion.”
So, as we “land the proverbial plane” in this article, consider these three quick points in parting:
- Unless it’s summertime and/or you have way too much time on your hands, conduct a truly thorough search for a good student-generated work sample before you labor to make your own. The really good example is likely out there … somewhere.
- If frustration ensues because the work sample comes so close, yet ultimately misses the mark, then use it as a “teachable moment” in tomorrow’s class. Give your students the institutional rubric and then have them labor to turn that student’s work into an excellent sample. (You could also do the same exercise with faculty!)
- If point #2 utterly fails, then ask yourself, “Gee whiz! Are our expectations too high?” If the answer is, “Yes,” then it may be the outcome and the rubric, not student work, that are getting in the way of a truly positive assessment experience at your institution.