By Pfungwa M. Nyamukachi
University rankings always generate huge interest globally. Turned into marketing fodder, they give ‘bragging rights’ to winners, leaving others with a tinge of envy, perhaps. What is undeniable is that they always cause a stir; it is a competitive world after all.
Africa was the only continent with no dedicated rankings, a square peg in a round hole, but in July, Times Higher Education (THE) announced the first African universities ranking, pitting 30 of Africa’s ‘flagship’ universities against one another.
Measuring research performance, productivity and impact is not an easy task. Most challenging is defining what constitutes great performance, and, beyond that, what great performance means across a myriad of scientific disciplines, each with its own intricate workings and social environment.
Nevertheless, universities are under increasing pressure to account for how research income is used and to demonstrate the societal impact and contributions of the research undertaken. For many African universities the pressure is to secure sufficient funding and bigger budget allocations from their national governments.
Bibliometric studies, on which rankings are based, are quantitative analyses of academic literature: they count the journal articles, books, reviews and other written forms of scientific output produced by the researchers of a faculty, university or research institute in a given time period.
From these counts, qualitative indicators such as quality, influence and impact are inferred – based, for example, on the number of citations. These measure the productivity of individual researchers first; collectively, the accumulated research contributes to faculty performance, feeds into university research performance assessments and ultimately into national performance.
As such, rankings require quality data – as comprehensive and accurate as possible. Even where data exist, the politics of academic publishing – access, ghost journals, how editorial decisions are made, how articles are selected, co-publishing, how to count and credit contributions, the many factors that affect citation behaviour, an author’s ‘social status’ and institutional affiliation – all affect the numbers. Moreover, capturing bibliometric data is itself a process fraught with data quality and technical challenges: misspelt author names, common names and inaccurate spellings of institutional names can easily swing the numbers.
Granted, African higher education has broadly lagged behind its global counterparts in record keeping, but the datasets and sources on which rankings are based cover less than 10% of journals. Such analyses can only paint a partial, fuzzy picture of research activity.
Africa’s research output in these international journals is, in any case, dismally low, accounting for less than 2 per cent of global research output.
With such a small percentage, how much weight should we put on these rankings? Are these rankings of any real significance for the continent?
Dr. Rob Drennan, director of research at the University of the Witwatersrand, is of the opinion that: “Rankings do not impact on the academy in any way. The measure of excellence of scholars is the content of their scholarship. Number of publications and citations only scratch the surface of their contribution. The National Research Foundation (NRF) rating system comes a little closer to measuring their excellence as it is a peer review process considering the impact of their research on the academy or on society”.
This speaks to the important issue of quality vs. quantity. An even more significant element is that of relevance.
Prof. Gerald Ouma, head of academic planning at the University of Pretoria, argues that “by classifying and positioning universities using particular ‘gold’ standards, rankings essentially seek to homogenise universities. Rankings fail to understand the different roles that individual institutions play within their societies. Why should universities pursue a common ‘gold’ standard, when they have different missions, different capacities, and are confronted with varying developmental imperatives?” Ouma rightfully asks.
I concur with Ouma’s proposition for subject area rankings instead. Subject area rankings can help African universities identify areas of disciplinary strength. These insights are most useful at the level of institutional planning. At national level, they can help motivate budget allocations based on national research priorities, insofar as they impact communities and society at large and address challenges – poverty alleviation, economic growth, job creation, health and so on.
At regional or continental level, better insights into areas of excellence can stimulate more intra-Africa research partnerships. Partnerships are important and can certainly help build research capacity. However, research partnerships remain biased towards European and American partners.
The rankings, however, point to an even greater challenge that many African universities face: funding. The universities that rank highly are the better endowed, well-resourced ones, as they are able to attract the cream of the crop of researchers and, in the case of South Africa, to incentivise publications with financial rewards. Exceptions, such as Makerere University, have had to become creative and strategically seek partnerships in their research pursuits.
Despite the rhetoric about apportioning at the very least 1 per cent of each country’s GDP to higher education and research, this has not materialised, and much of Africa’s research remains foreign funded – true to the maxim ‘he who pays the piper calls the tune’. Yet research has to address the challenges of the continent and contribute to the development agenda.
For as long as investment in research remains marginal, it will continue to stifle research performance and drive brain drain, as many scientists go in search of greener pastures and well-equipped labs. African governments must put their money where their mouths are and bank on Africa’s knowledge economy.
Rankings as currently designed are not contextualised for Africa. In their present format and methodology they are limited: they do not give a nuanced understanding of the context within which universities operate, nor of the various systemic challenges – limited access to higher education for many, the poor state of secondary education in some countries, the quality of undergraduate scholarship, graduation rates, funding, and research relevance and priorities (basic vs. applied), among others.
So then, what other indicators are important at this stage of Africa’s development? Recognising research, science and technology as key drivers of innovation and economic growth, what other indicators would give us a more robust picture of what is working and what is required for a relevant, vibrant and strengthened higher education and research sector in Africa?
This is the discourse we need to be having.
NB: https://www.timeshighereducation.co.uk/news/top-30-african-universities-times-higher-education-reveals-snapshot-univeristy-ranking
Ms. Pfungwa M. Nyamukachi, a scholar in science policy, is the Regional Subscriptions Executive for Africa Regions at Research Africa, based in Cape Town, South Africa.