As assessment professionals, we are challenged to be critical, analytical, creative, and realistic about what we do or do not know, and to mine data in ways that acknowledge our epistemic limitations. When assessment is done well, it transforms data into information, and information into knowledge that aids good judgment and decision-making. These judgments are used on smaller scales to solve everyday operational tasks, or on larger scales to determine the fate of academic programs. Both uses provide accountability to our stakeholders. But too often assessment gets stuck in a rut and becomes mechanical to the point where the value of assessment itself as a key endeavor is lost.
In these instances, it feels no different from staring at a difficult jigsaw puzzle with many missing pieces and no picture on the box to show what the finished product should look like. How should we approach these crucial moments with such important stakes? I believe two of the most important things to do at such a time are: 1) come to terms with the fact that, when truly done well, assessment has no end goal and is more realistically thought of as a complex loop rather than a linear process; and 2) use available but appropriate tools to gather data and revolutionize our processes if and when necessary.
First, reality. I have yet to meet an assessment colleague who has developed a complete picture of what their students or professors know. To use the jigsaw puzzle analogy, some of us are missing only a few pieces, perhaps because we have adequate technical, financial, and personnel resources at our disposal. Some of us clearly lack such luxuries and are missing quite a few more pieces. Some are in between. Whichever camp your institution falls into, it ultimately doesn't matter; the jigsaw puzzle is a somewhat faulty analogy. The perfect puzzle-box picture is arguably a Platonic ideal, because most assessment-based decisions rest on incomplete, imperfect information.
Assessment aims to develop valid arguments based on what we know from the available data (Kane, 2013). This kind of validity is not an all-or-nothing matter, but rather one of degrees of truth given our capabilities. I recently experienced this when reviewing data from a program that for years had a single assessment coordinator responsible for all assessment-related duties. The person was apparently very trustworthy, because the other program members rarely, if ever, reviewed the actual assessment data. The person was also very competent, because the reports were exceptionally crafted. Then one day something happened: that person left the university. This highlights that realistic assessment is not merely about what knowledge is available, but also how much of it there is, who has it, where it is located, how it was curated and processed, whether anyone is willing to share it, and why we ought to believe what we see.
The diligence and hard work of assessment pay off in our constant conversations with the data and information that come across our desks, conversations that keep changing because our students, faculty, and, yes, assessment coordinators are always changing. Thus, one of the biggest challenges is not just to find the missing pieces but also to reach a point where we know we did everything in our power to make the best of our limitations. Doing so builds practiced metacognition, improving our ability to plan, monitor, evaluate, and revise our strategies. This leads to the second point: process.
A large part of doing assessment well involves how we frame and confront problems. Some problems are simple, others are complex, and a growing number are wicked, requiring a taming rather than a solution approach (Lillejord, Elstad, & Kavli, 2018). Because assessment work is time-consuming, costly, and often frustrating, failure is often not an option. Yet assessment professionals must be encouraged to question whether our methods are effective and fair, and, if necessary, we must be allowed to experiment with different techniques and approaches. While some of our tasks should remain automatic and managed by well-checked data analytic tools, some questions are far too complex and nuanced and require a different paradigmatic attitude.
For example, in my experience, more and more siloed academic programs are being encouraged to form coalitions, both to serve students in a more multidisciplinary, career-oriented fashion and to show better revenue streams. Assessing these joint programs might benefit from a soft operations research technique such as the strategic choice approach (SCA), or a cognitive mapping technique such as strategic options development and analysis (SODA), to ascertain more about their individual and overlapping program missions, visions, goals for student learning outcomes, and the delivery tactics used to achieve them. Soft operations techniques can help tease out hidden or complex knowledge held by different people or groups. Readers can learn more about these techniques in Rosenhead and Mingers (2001) or Checkland and Scholes (1990).
From what I can tell, these techniques sit on the fringes of typical assessment practice, but they are not unimportant. This is not to imply that traditional assessment methods like surveys are obsolete, but rather to stress the need for us to evolve with the complexity of our tasks. Merely choosing what is comfortable, or what might have worked before, is conforming to an availability heuristic, which could be detrimental. Well-mined qualitative data can be just as insightful as questionnaires and data reports built from sophisticated SQL queries. In fact, more recent approaches such as quantitative ethnography are finding ways of applying qualitative approaches to large-scale datasets with greater fidelity and reliability (Shaffer, 2017).
This all sounds heavenly. Unfortunately, it is no guarantee of assessment success. The higher-education boogeyman is real and comes in many forms: changes to accreditation standards, lightning-paced technological advances, turnover in top-level administrative and executive personnel, and micro- and macroeconomic fluctuations that directly affect the supply of and demand for educational experiences and degrees. As many of us have recently experienced, even sociopolitical and cultural unrest can influence how we do educational assessment. However we spin it, change is always at the heart of the assessment nightmare. But we need not fear it. Life is not perfect, and the problems faced by assessment professionals do not come wrapped in perfect little packages. We must be realistic and ready to revolutionize if and when necessary.
Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. Wiley.
Kane, M. (2013). Validation as a pragmatic, scientific activity. Journal of Educational Measurement, 50(1), 115–122.
Lillejord, S., Elstad, E., & Kavli, H. (2018). Teacher evaluation as a wicked policy problem. Assessment in Education: Principles, Policy & Practice, 25(3), 291–309.
Rosenhead, J., & Mingers, J. (2001). Rational analysis for a problematic world, revisited: Problem structuring methods for complexity, uncertainty, and conflict. Wiley.
Shaffer, D. (2017). Quantitative ethnography. Cathcart Press.