On December 19th, the framework for President Obama’s College Ratings System was released to the public. Aimed at strengthening college and university performance and building on the consumer information available to students as they make college-choice decisions, the ratings system organizes its measures into three broad areas: access, affordability, and outcomes.
Although the framework notes that the inaugural ratings will be released prior to the 2015-16 academic year, a key theme is that many of the metrics are still under development. Another key theme is that attention is still being given to balancing the comparability of performance measures with the uniqueness of institutional missions. Drawing on data from existing federal databases (IPEDS and NSLDS), the outcomes being considered for inclusion are listed below. Please read the metric descriptions, found on pages 7-13 of the framework, for more detail and noted limitations.
The U.S. Department of Education (DOE) also noted other technical issues under consideration: how institutional ratings on each measure should be presented visually, and whether the system should incorporate institutional performance over time. In particular, the DOE plans to create a ratings system that recognizes both low- and high-performing institutions, with most institutions falling in the middle. Incorporating performance over time, with the timeframe yet to be determined for each measure, would offer insight into the extent to which an institution has improved (or declined) from one point in time to the next. Of course, at many institutions some measures (such as Completion Rates) will likely take longer to improve upon than others (such as Percent Pell).
The institutional groupings used for comparison will be an important decision point for the ratings system. Thus far, the U.S. DOE is looking to group institutions into 2- and 4-year categories. Though established and widely used, these groupings provide only limited points of comparison and do little to account for the contextual challenges that may affect institutions differently across states and institutional types. In addition, consideration is being given to how institutions might be compared on other mission-specific characteristics such as academic program mix and selectivity. An alternative would be to compare performance by Carnegie Classification and sector (public or private), which would enable consumers and policymakers to examine institutions according to a closer reflection of mission than the 2- or 4-year grouping alone.
What is unclear, at least for the moment, is the true purpose of the ratings system, given the heavy objection by Congressional opponents to tying Title IV federal aid to institutional performance. And we’ll have to wait until the ratings system’s rollout later this year to know the extent to which students use this tool, one of an already long list of related information resources, to aid in college-choice decision making.
The ratings system’s promise as an accountability and transparency mechanism, at least in the short term, might be to serve as an additional resource in response to longstanding calls by institutional, state, and national governing authorities for readily available and nationally comparable measures of access, affordability, and outcomes. And despite the longstanding push for evidence of student learning, the U.S. DOE acknowledges that such evidence is too complex to capture and report in a comparable and meaningful manner.
Of importance, too, is the need for the higher education community and our governance stakeholders to ask whether performance measures alone offer a sufficiently comprehensive approach to affirming or building upon the attainment of performance expectations. Two questions help us reflect on whether dialogue and policy actions have been sufficient thus far. First, have higher education leaders and governance stakeholders had sufficient conversation and identified effective interventions to address public disinvestment in higher education? Such conversations are necessary to weigh what institutions can do to meet access and affordability expectations when the prevailing trend across states has been to provide fewer resources for doing so. Second, is the persistent push for efficiency met with a commensurate focus on the duty of care we owe to our diverse student bodies, regardless of the time it takes to fulfill it? The environment in which our administrators and educators carry out their shared commitments to student success matters as much as how well institutions meet performance expectations, and without attending to it we cannot attain and sustain mission excellence.
An important way that we as student affairs professionals can help inform the conversation is to take the U.S. DOE up on its offer to consider feedback from the higher education community. To do so, please send your suggestions on the ratings system to [email protected], post under the Comments section of the U.S. DOE’s blog, or attend one of the structured discussion sessions to be announced early this month on the U.S. DOE webpage.