Assessment as Research: Getting faculty and administration involved in the science of learning

In my short career in higher education, I have witnessed several reactions to the word “assessment”. To some, assessment is compliance; another box that must be checked to appease the encroaching bureaucracy of administration, accreditors, and governments. To others, it’s a bogeyman, signifying the end of academic culture and the corporatization of higher education. Those of us in the assessment community see assessment simply as providing evidence of student learning, allowing faculty to make informed decisions about curriculum and instruction.

In conversations with my peers, we often find ourselves flustered that faculty would be against the idea of measuring student learning. How could educators not find the value in this? Surely, there must be miscommunication! We go to workshops and conferences and discuss how we can get faculty buy-in, but I always find myself wondering what exactly we want faculty to buy in to. The photo is a slide from a keynote address at Watermark Engage (a conference sponsored by an assessment software company). The quote is from a respected academic and Professor of History at my institution. He is one of the most vocal critics of assessment and has published several anti-assessment articles in The Chronicle of Higher Education. I have read a few of his articles over the years, and even as an assessment professional, I can’t help but sympathize with him. While we conceptually disagree on the value of assessment, he and others make valid points that the assessment community wholeheartedly needs to address.

Slide from Keynote Speaker Dr. Catherine M. Wehlburg at Watermark Engage 2018, Austin, TX.

In my experience, assessment as a continuous improvement practice has become bureaucratized and, quite frankly, tremendously dull. I’m sure this isn’t true everywhere, but how many assessment offices have combined accreditation duties? How many coordinators of assessment compile data into a spreadsheet, hand it off to faculty, and say, “Here you go! Have fun! Find something meaningful!” (I would also have a little bourbon handy, Dr. Gilbert). Worse yet, how many faculty are not informed of the assessment methodology or data in their own courses and programs?

To counter this, I offer a few suggestions. These suggestions presuppose learning as a science, and assessment as applied research.

Assessment is meaningless without methodology

An assessment cycle is typically summed up as:

Set learning outcomes → measure learning outcomes → examine data → make decisions

While this cycle is straightforward and approachable, it does not always yield valuable information. In fact, it can be rote, boring, and uninspiring. In the assessment community, learning outcomes are often thought of as hypothesis statements, something along the lines of, “Students will [insert verb derived from Bloom’s taxonomy] XYZ.” In courses and programs, there may be 2 or 200 learning outcomes! In programs with specialized accrediting bodies, these outcomes are often dictated by accrediting agency standards. Again, this encourages a culture of compliance rather than meaningful assessment.

For those who do research, there is one glaring omission in the typical assessment cycle: What questions are we asking? What do we really want to know? Asking questions about student learning at the beginning of an assessment cycle is paramount to making assessment interesting. For example, “Does X instructional strategy increase student learning of Y as measured by Z?” Faculty and assessment professionals can work in tandem to conduct literature reviews of prior research. A methodology can then be determined, one that will likely go beyond the timeline and scope of producing semesterly data dumps.
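
To make that kind of question concrete, here is a minimal sketch, in Python, of what the analysis stage might look like for “Does strategy X increase student learning of Y as measured by exam Z?” The file name, column names, and two-section comparison design are all hypothetical assumptions for illustration, not a prescription.

```python
# Hypothetical sketch: compare a section taught with strategy X to a
# comparison section on the same exam measuring outcome Y.
# Assumes a CSV export with made-up columns 'section' and 'exam_score'.
import pandas as pd
from scipy import stats

scores = pd.read_csv("outcome_y_exam_scores.csv")  # hypothetical export

strategy = scores.loc[scores["section"] == "strategy", "exam_score"]
comparison = scores.loc[scores["section"] == "comparison", "exam_score"]

# Welch's t-test: does not assume equal variances between the sections
t_stat, p_value = stats.ttest_ind(strategy, comparison, equal_var=False)

# Cohen's d as a rough effect size (simple pooled SD, assumes similar group
# sizes); a p-value alone says nothing about the magnitude of the difference
pooled_sd = ((strategy.std(ddof=1) ** 2 + comparison.std(ddof=1) ** 2) / 2) ** 0.5
cohens_d = (strategy.mean() - comparison.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

Even a simple design like this forces the conversation faculty and assessment professionals need to have up front: what counts as the comparison group, what instrument actually measures Y, and how large a difference would be meaningful.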

Currently, assessment uses principles of action research. However, borrowing from the learning sciences, a design-based research methodology might be a more useful approach. While similar to action research, design-based research incorporates collaboration between a researcher (such as an assessment professional) and a practitioner (such as teaching faculty), and uses an iterative approach to assess the effectiveness of learning strategies. For more information on design-based research, see Brown (1992), Cobb et al. (2003), and Sandoval & Bell (2004).

Assessment is meaningless without proper data analysis

Analyzing a semester or year’s worth of exam or rubric scores, or differences between pre- and post-test scores, without a clear understanding of quantitative analysis encourages shot-in-the-dark decision making. I would argue that it would be better not to make decisions at all than to base them on incorrectly analyzed data.

Yet, assessment professionals may lack the skills or resources to effectively analyze data using statistical methods and tools. In addition, faculty may not have the skills or comfort level to analyze their own data, particularly at teaching colleges, or if the faculty focus on qualitative research. Therefore, I once again propose a collaboration between qualified assessment professionals skilled in quantitative analysis and faculty.
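
As one illustration of what “statistical methods and tools” could look like in practice, here is a hedged sketch that treats pre-/post-test scores as a paired design rather than simply reporting the change in averages. The file and column names are assumptions made for the example.

```python
# Hypothetical sketch: the same students take a pre-test and a post-test,
# so the scores are paired, not independent.
# Assumes a CSV with made-up columns 'pre_score' and 'post_score'.
import pandas as pd
from scipy import stats

results = pd.read_csv("pre_post_scores.csv")  # hypothetical export
paired = results.dropna(subset=["pre_score", "post_score"])

# Paired t-test on the matched pre/post scores
t_stat, p_value = stats.ttest_rel(paired["post_score"], paired["pre_score"])

# Effect size for paired designs: mean gain divided by the SD of the gains
gain = paired["post_score"] - paired["pre_score"]
d_z = gain.mean() / gain.std(ddof=1)

print(f"mean gain = {gain.mean():.2f}, t = {t_stat:.2f}, "
      f"p = {p_value:.3f}, d_z = {d_z:.2f}")
```

The point is not this particular test; it is that the analysis should match how the data were actually collected, which is exactly where collaboration between faculty and a quantitatively trained assessment professional pays off.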

Making assessment meaningful is difficult and messy

The simplicity of the typical assessment cycle is what makes it attractive to many assessment professionals. However, assessment is real-world research, and real-world research is messy. We try to standardize it, put the data in software, tick the boxes indicating that students have learned what they are supposed to learn (whether we really know or not), and declare that what we are doing is working and the curriculum is sound, all based on poorly analyzed data and no methodology.

To confidently make decisions from research conducted in the field (and assessment is very much research in the field), assessment professionals and faculty need to take time to develop good methodology, which may differ depending on the discipline and the questions asked. In addition, it may require replication. Likely, it will also take more than a semester’s worth of data. The difficulty is that meaningful assessment is not very practical. It takes more time, more energy, and more resources—but isn’t quality, actionable insight worth the trade-off?

Meaningful assessment is transparent

It is natural to want to hide anything that may show us in a negative light. However, findings from assessment should be made available and shared. Not only does it give a program or institution integrity, it also allows others to learn from instructional interventions that did or did not work. Therefore, regardless of the results, cross-departmental and even public sharing of assessment data displays a commitment to learning.

Conclusion

In summary, higher education institutions, assessment professionals, and faculty can do better in ensuring assessment is meaningful and relevant.

Steps administration can take:

  • Invest in quantitatively adept assessment professionals and professional development in research methods
  • Value assessment professionals as faculty are valued, as is the case with librarians and instructional designers at many institutions
  • Change the narrative on assessment from accreditation compliance to research on student learning

Steps assessment professionals can take:

  • Collaborate—and build trust—with faculty to create research designs
  • Focus on methodology
  • Work with faculty to use empirical evidence and theories of learning to build the foundation of assessment research
  • Work with faculty on data analysis; raw data and descriptive statistics are not always helpful
  • Discuss the results with faculty

Steps faculty can take:

  • Push for a research-based assessment agenda within your program and the institution
  • Trust the expertise of assessment professionals (we also have credentials)
  • Work with assessment professionals to accommodate goals for student learning
  • Share your bourbon

Instruction and learning are the sine qua non of higher education, and research is a prominent activity in many higher education institutions. Let’s use the tools of our trade.
