Blog: Subject-level TEF consultation

21 May 2018

Sarah Stevens, Head of Policy, writes that subject-level TEF aims at clarity but risks confusing prospective students.

The Government is taking the next step in developing the Teaching Excellence and Student Outcomes Framework (TEF). Since last year, participating higher education institutions have been awarded a gold, silver or bronze rating, intended to provide applicants with information about teaching provision and student outcomes on undergraduate courses. Now pilots are underway to see how a similar rating system might be applied to individual subjects.

The Department for Education’s consultation on “subject-level TEF” closed today and the Russell Group has submitted a detailed response.


The aims of subject-level TEF are laudable. Providing prospective students with helpful information about the courses and institutions they are considering is extremely important, and we welcome any moves which support our focus on teaching and maximising the student experience. There are, however, real challenges in evaluating subject areas in a way which provides future applicants with clear, accurate and usable assessments. We want to help the government get this right and, as it stands, we have serious concerns.

The first relates to granularity. In an ideal world, prospective applicants would be provided with ratings of the individual courses they are considering, at the specific universities they are looking at. But providing assessments at this level would be near impossible. The number of students on a single course, at a particular institution, in a given year would be too small to provide meaningful results and the scale of the exercise would become unmanageable. 

The Government recognises this and so, in its pilots, has aggregated courses into larger subject groups for the purpose of assessment. The problem is that this takes distinct and potentially disparate courses which are different in their structure, design and approach to teaching and learning – and so in the experience students can expect to receive – and treats them as though they are the same. It may seem natural, for example, to group together creative arts and design subjects. But in reality, a student studying a music degree may well have a different experience from another student on a drama or an art course at the same institution. These subjects may even be taught in different departments, or on different campuses.

This aggregation could therefore produce misleading results. For prospective students, the TEF rating for the most relevant subject grouping could be unrepresentative of what they can expect on the specific course they wish to undertake. This is a fundamental flaw which it will be difficult to overcome.

The second major challenge is benchmarking. TEF judges performance against the outcomes you would expect for an institution, based on its student intake, rather than on performance in absolute terms. This means that the better institutions do against the metrics, the higher their benchmark and the harder it is to achieve a good rating.

The ratings which are produced by this method can be misleading for applicants. Take drop-out rates. In the TEF trial year, one institution where around one in 15 students drop out after the first year was awarded a positive flag on this particular metric (and so had a better chance of a positive overall TEF outcome). At the same time, an institution with a different student profile, where only around one in 50 students drop out, was not awarded a positive flag. To prospective students, it can therefore seem as if the institution with the higher drop-out rate is actually better than the one more able to retain its students.
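The logic behind this counter-intuitive result can be sketched in a few lines of code. This is a deliberately simplified illustration, not the actual TEF methodology (which also weighs statistical significance and intake characteristics); the benchmark figures and the two-percentage-point threshold below are hypothetical, chosen only to mirror the example above:

```python
def flag(actual_continuation, benchmark, threshold=2.0):
    """Simplified benchmarked flagging (all values in percent).

    Returns '+' if a provider beats its own benchmark by more than
    `threshold` percentage points, '-' if it falls short by more than
    that, and '' (no flag) otherwise.
    """
    diff = actual_continuation - benchmark
    if diff > threshold:
        return '+'
    if diff < -threshold:
        return '-'
    return ''

# Institution A: around one in 15 students drops out, so continuation
# is ~93.3% - but its intake-based benchmark (hypothetical: 90%) is
# lower still, so it earns a positive flag.
print(flag(93.3, 90.0))  # -> '+'

# Institution B: only one in 50 drops out (continuation 98%), but its
# benchmark (hypothetical: 98.5%) is already near the ceiling, so the
# stronger absolute performance earns no flag.
print(flag(98.0, 98.5))  # -> ''
```

The point of the sketch is that the flag depends entirely on the gap between actual and expected performance, so an institution with a markedly higher drop-out rate can still come out looking better.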

At subject-level, this confusion is likely to be amplified. In medicine and dentistry for example, the benchmark for employment after graduation is close to 100% for many institutions, meaning it will be near impossible to achieve a positive flag – and so have a better chance of receiving a positive TEF outcome.

Research undertaken by a consortium of students’ unions in 2017 found no evidence that students understand that provider-level TEF ratings are based on benchmarking, not absolute performance. The extremely complex methodology for subject-level TEF is likely to be equally difficult to explain. The research also found that students may not interpret TEF ratings as intended, with potentially negative consequences for social mobility: 6% of students say they would reconsider applying to, or not have applied to, their current institution if it had been rated gold, and this proportion rises to one in 10 for BME students. It would therefore be helpful to assess the extent to which applicants understand what TEF measures and to test whether the information provided is meeting their needs.

Our response to the consultation seeks to help Government tackle these problems. We welcome Ministers’ commitment to a second year of pilots, but we recommend extending the pilot period further to ensure that any new approach is sufficiently tested. Indeed, experience with the pilots so far suggests that alternative models should also be considered.

The independent review of TEF planned for 2019 will be an important moment to judge whether or not the exercise, at both institution and subject levels, is working. Throughout this process we should keep the key objective clear in our minds: helping prospective students make better-informed decisions.
