THE FOLLOWING IS AN EXCERPT FROM “Standards, Standards, Standards: Mapping Professional Standards for Outcomes Assessment to Assessment Practice,” ORIGINALLY PUBLISHED IN VOLUME 56, ISSUE 3 OF THE JOURNAL OF STUDENT AFFAIRS RESEARCH AND PRACTICE.
To facilitate familiarity and comfort with outcomes assessment, we map three sets of student affairs professional standards to the outcomes assessment cycle. Although the standards differ in focus (i.e., individual competencies versus program characteristics), the mapping showcases the immense similarity in expectations regarding the gathering of empirical evidence of program effectiveness. We hope this mapping bolsters informal professional development, in addition to being a resource for higher education and student affairs (HESA) graduate programs.
As student affairs professionals are keenly aware, some of college students’ most transformational learning experiences occur outside the classroom. Influential works, such as Learning Reconsidered (Keeling, 2004) and Learning Reconsidered 2 (Keeling, 2006), among others (e.g., Upcraft & Schuh, 1996), aided in emphasizing the importance of student affairs programming to the enrichment of the student experience. However, how do we know if our programs are of the quality that enriches the lives of our students? And, how do we know if we have the skills necessary to create, evaluate, and modify those high quality programs?
Professional standards provide one means for evaluating whether our programs and personnel are functioning according to best-practice guidelines for the profession (Arminio, 2009; L. A. Dean, 2013; Henning, Mitchell, & Maki, 2008). In addition to providing benchmarks for quality, standards provide a common language for communicating best practices within and across disciplines and universities. Because these standards are designed by student affairs professionals themselves, rather than externally imposed as regulations, they promote self-determination in meeting them.
Purpose of Mapping Standards to the Assessment Process
Given the call for student learning and development outcomes assessment, our goal is to spotlight the profession’s assessment-related standards by demonstrating their alignment with the assessment process. Specifically, we compare and contrast two personal competency standards (ASK Standards, ACPA-NASPA Competencies) and one program-related set of standards (CAS) that focus on outcomes assessment. We do this by mapping the standards to the commonly presented outcomes assessment cycle. This mapping highlights two related, yet distinct, concepts central to engaging in outcomes assessment. First, the mapping demonstrates that individual-level standards (i.e., skills required of all student affairs practitioners, regardless of functional area) are necessary to engage in the steps of the assessment cycle. Second, the mapping demonstrates that program-level standards (i.e., characteristics of student affairs programs, regardless of functional area) can be met if one engages in the steps of the assessment cycle.
Moreover, this mapping of the standards to the assessment cycle is an explicit response to the call for professionals to “buy in” to evaluating program effectiveness and to making empirically based decisions regarding programming (Barham, Tschepikow, & Seagraves, 2013; Henning & Roberts, 2016; Hersh & Keeling, 2013). In short, this mapping debunks the misconception held by some that the assessment process is novel, foreign, or just “an added-on assignment” (Schuh, Biddix, Dean, & Kinzie, 2016, p. 329) for student affairs professionals; the assessment cycle is simply a visual representation of the standards student affairs professionals promote. If the standards are supported and embraced by professionals, then assessment practice should be supported and embraced as well. In fact, in addition to engaging in outcomes assessment, the standards call for student affairs professionals to be leaders in assessment who promote a culture of, and education in, assessment (American College Personnel Association [ACPA], 2006; ACPA & National Association of Student Personnel Administrators [NASPA], 2015). Fortunately, there is a strong, yet small, community of student affairs assessment leaders (SAAL) who regularly engage in discourse related to assessment. Although this mapping may not be necessary for those leaders, we believe it will facilitate the development of the next generation of leaders.
Additionally, this mapping of the standards to the outcomes assessment cycle has the potential to spur cross-division partnerships. Often professionals on the academic “side of the house” are unaware of the assessment-related competencies of professionals on the student affairs “side of the house.” Many faculty outside of the higher education and student affairs fields do not know that student affairs professionals are guided by several sets of professional standards regarding the development and assessment of evidence-based programs to improve student learning and development. There is a striking similarity between the student affairs standards and the evidence-based decisions and practices expected of faculty. Given the mutual goal of learning improvement across divisions, this mapping should be shared with faculty, deans, and provosts for three main reasons. First, many faculty are unaware of outcomes assessment, much less trained to conduct it. This mapping could provide a tremendous resource for universities to support the training of faculty. That is, what faculty (or anyone) needs to know, think, or do regarding outcomes assessment can be guided by the student affairs professional standards. Second, by overtly showcasing how faculty and student affairs professionals are engaging in identical processes, emphasis can be placed on sharing assessment-related education and training opportunities across divisions. Third, these shared resources across divisions are not only efficient but also foster collaboration (e.g., design and assessment of cocurricular programming) with respect to promoting students’ learning and development.