A better way to measure higher education quality and aid international comparisons is needed. The Teaching Excellence Framework isn’t it.
Quality is measured in higher education for three reasons. First, to show that qualifications are of a high standard and are internationally comparable and transferable – especially important in a globalised world in which students and graduates are mobile and employers recruit internationally. Second, because government, students and other stakeholders are increasingly conscious of value for money: government tries to get more for less by pursuing what it regards as efficiencies, while students associate high-quality higher education with salary, career and lifestyle. Finally, the spectacular growth in the number and range of educational programmes and alternative providers, including for-profit and transnational or cross-border providers, requires greater clarity internationally about standards, accountability and regulation.
While rankings have been widely used to establish quality, their methodologies are unsuitable, their indicators insufficiently meaningful and their data unreliable. They primarily measure research, which promotes prestige and reputation, rather than the teaching and learning of undergraduates, who make up more than 75% of UK students. This points to the need for more appropriate international approaches.
Traditionally, monitoring and assuring academic quality has been overseen through academic peer or expert review. But the public has become concerned that students are graduating with insufficient or inappropriate skills, and quality assurance reports are difficult to use for international comparison. Over the years, different governments and organisations have sought better ways to define and measure quality, but their answers often depend on who is asking the question, and why.
The diversity of students, educational programmes and providers has also prompted questions about whether there is – or should be – a single standard of academic achievement. The Organisation for Economic Co-operation and Development developed AHELO (Assessment of Higher Education Learning Outcomes), but the project ran into methodological problems. The EU developed U-Multirank and is currently sponsoring CALOHEE, which focuses on learning outcomes. Australia created QILT (Quality Indicators for Learning and Teaching), drawing on data from its student experience, graduate outcomes and employer satisfaction surveys. And the United States created the College Scorecard, linking information about educational programmes, price and employment.
What is clear from all these approaches is that there is no single internationally agreed definition of quality. There are some commonly used indicators and formats, but each has in-built and hidden biases and perverse incentives. Each country also has different objectives and underpinning values, which are reflected in its choice of measures.
Studies have tried since the 1930s to define the key characteristics that aid student learning. Initially, they looked at the amount of time spent on task and the quality of that effort. More recently, they have turned to the impact of the college experience on student engagement, including how far students work collaboratively with peers to solve problems, undertake research with faculty, participate in a learning community, and take part in service learning or study abroad programmes. What matters most are a student-centred learning environment in which students are actively engaged and motivated, an emphasis on higher-level critical skills, and the expertise of those who teach. This approach has been aided by the Bologna Process, which shifted attention away from measuring inputs – such as staff/student ratios, credit hours, class teaching, entry grades and investment – towards measuring outcomes.
But assessing the quality of learning environments takes time. Best practice combines qualitative and quantitative methodologies, and this can be expensive. Governments are therefore tempted to rely instead on indicators with hidden and perverse biases.
The Teaching Excellence Framework is part of this pattern. There is considerable discussion about the meaningfulness of the various indicators it uses, and about its heavy reliance on satisfaction surveys, whose reliability has been repeatedly questioned. Worse, the link between the TEF and permission to increase fees will ensure the process becomes a driver of behaviour, and not necessarily in a good way. Its Olympic medal-style structure is unlikely to reduce the problems associated with ordinal or hierarchical rankings.
The TEF also coincides with other significant changes affecting UK higher education, such as Brexit and the new Higher Education and Research Bill. It could be a helpful guide for international students, but when linked to proposed guidelines on international recruitment, the effect is unlikely to be positive. In fact, these developments are likely to result in long-term restructuring of the system and to have a negative impact on those universities and communities that serve first-in-family learners and other under-represented groups, with corresponding effects on their local and regional economies. Finally, having two separate regulatory systems, one for teaching and one for research, will undermine the overall coherence of the higher education system and drive social stratification. The Matthew effect, by which the rich get richer and the poor get poorer, will apply.
Inevitably, the TEF will become integrated into other UK-based rankings; rankings everywhere are desperate for data and will use whatever they can get. If it replaces other, cruder indicators of student learning, such as staff/student ratios and entry scores, this will be a good thing.
But it is likely to have very little impact or influence on rankings globally; the data is too localised, and therefore not internationally comparable. Research universities will continue to gravitate towards global rankings – not least because international students and research funding organisations use them – and, while prospective students may use the TEF as a cross-reference to gauge overall quality, it is still early days.
We need better means to assess the quality of teaching and learning – and we are still a long way away from that.
Published in Research Fortnight, 16 February 2017