Quality or deception? Gaming the university ranking system

Flawed methodology and metrics demote India's top institutes, rendering the rankings arbitrary. A case study of the THE World University Rankings 2025 for Indian institutions.

Indian academic institutions are vying for higher global rankings, fueling a competitive race to secure top spots. This ambition has heightened the popularity of global university ranking systems like QS (Quacquarelli Symonds, UK), ARWU (Academic Ranking of World Universities, Shanghai), and THE (Times Higher Education, UK) among academic leaders, executives, and policymakers.

Trustworthiness and credibility in question

The recently released Times Higher Education (THE) World University Rankings (WUR) 2025 reveal several anomalies that contradict ground realities and public perception regarding Indian institutions.

The Indian Institute of Science (IISc), Bangalore, renowned in India and globally for decades of research excellence, has been ranked 50th in ‘research quality’ and 30th in ‘international outlook’ in the latest edition. Both placements raise serious questions about the credibility and objectivity of the rankings and further undermine their legitimacy.

Such discrepancies risk reducing the rankings to a subject of public ridicule. This article examines the methodology and scoring processes underlying these anomalies to evaluate their credibility and reliability. It raises critical questions: How can THE rankings deviate significantly from public perception and the core principles of academia—teaching and research? Are these ranking scores genuinely meaningful, or are they simply arbitrary figures?

Strikingly, as claimed on its website, THE's rankings are trusted by governments and universities and serve as a critical resource for students choosing where to study. THE also positions itself as the world’s most influential university ranking body, boasting over five decades of expertise in higher-education analysis and insights and a deep understanding of global university performance trends.

Performance indicators from THE source

Since its inception in 2004, the THE World University Ranking has periodically updated its methodology to reflect the changing dynamics of global higher education. According to THE, the 2025 rankings, currently under scrutiny, are based on a revised methodology introduced in 2023. This approach evaluates universities across 18 calibrated performance indicators grouped into five key areas:

- Teaching (the learning environment): 30%
- Research environment (volume, income, and reputation) and research quality (citation impact, research strength, excellence, and influence): 59% combined
- International outlook (staff, students, and research): 7%
- Industry engagement (income and patents): 4%

Teaching and research dominate the scoring, accounting for 89%, while international outlook and industry engagement make up the remaining 11%.
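To make the aggregation concrete, below is a minimal sketch, assuming the overall score is a simple weighted sum of already-normalized pillar scores; THE’s actual pipeline normalizes each of the 18 indicators before combining them, and the pillar scores used here are invented for illustration.

```python
# Minimal sketch of THE-style pillar aggregation (not THE's actual
# pipeline, which normalizes all 18 indicators before combining).
# Weights are the rounded pillar weights quoted above; the example
# pillar scores are invented for illustration.
WEIGHTS = {
    "teaching": 0.30,               # learning environment
    "research": 0.59,               # environment + quality combined
    "international_outlook": 0.07,
    "industry": 0.04,
}

def overall(scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 pillar scores."""
    return sum(WEIGHTS[pillar] * scores[pillar] for pillar in WEIGHTS)

# Hypothetical institution: strong research pillar, weaker elsewhere.
example = {
    "teaching": 40.0,
    "research": 80.0,
    "international_outlook": 30.0,
    "industry": 50.0,
}
print(f"overall = {overall(example):.1f}")  # overall = 63.3
```

Because research alone carries 59%, a high research score can offset weak scores everywhere else, a property worth keeping in mind for the anomalies discussed below.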

The falsity of the overall ranking

While IISc Bangalore is rightly recognized as the top Indian institution, aligning with public perception and ground realities, the placement of institutions like Anna University (Chennai), Mahatma Gandhi University (Kottayam), Saveetha University (Chennai), and Shoolini University (Solan) in the top five raises serious concerns about the objectivity and credibility of the rankings. A credible ranking should resonate with the informed perceptions of academic stakeholders, which are often validated over time.

Notably absent from the rankings are India’s five older IITs—Bombay, Delhi, Kanpur, Kharagpur, and Madras—along with other premier institutions. This omission suggests that several of India’s top institutions opted out of the THE 2025 rankings, likely due to concerns about the fairness and transparency of the evaluation process. To maintain credibility, the ranking agency should prominently disclose the non-participating institutions. Users relying on these rankings for decisions must be made aware of the absence of key institutions, accompanied by a disclaimer clarifying the context of the rankings.

Furthermore, the listed institutions display stark disparities in size and scope. Some serve only a few hundred students, focusing on niche areas like Information Technology, while others cater to tens of thousands across diverse disciplines. Applying a uniform ranking framework to such heterogeneous institutions undermines the validity of the assessment.

Ranking for teaching

In the teaching area, IISc Bangalore secured the top position, followed by BHU (Varanasi), Acharya Nagarjuna University (Guntur), Mahatma Gandhi University (Kottayam), Siksha ‘O’ Anusandhan (Bhubaneswar), and IIT Indore. Apart from IISc, the inclusion of these institutions does not align with public perception. Teaching is assessed using five parameters: teaching reputation (15%), student-faculty ratio (4.5%), doctorate-to-bachelor ratio (2%), PhD faculty (5.5%), and institutional income (2.5%).

The dominant factor, teaching reputation, is highly subjective, relying on perceptions gathered through surveys. Additionally, IISc, primarily a graduate- and research-focused institution, scores exceptionally well on the doctorate-to-bachelor ratio, though this metric carries the least weight.

Moreover, most prestigious Indian institutions are public-funded or not-for-profit, rendering the institutional income parameter largely irrelevant. The methodology’s lack of consideration for these contextual nuances further undermines the credibility of the teaching rankings.

Arbitrary research rankings with 59% weight

Research rankings, which constitute 59% of the overall score, are determined by two major components: the research environment and research quality. The research environment is assessed based on reputation, income, and productivity, while research quality is evaluated through citation impact, research strength, excellence, and influence. Research reputation holds the highest weight at 18%, citation impact accounts for 15%, and the remaining factors—research strength, excellence, and influence—contribute approximately 5% each.
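Tallying the sub-weights clarifies how the 59% splits between the two components. The figures for research income and productivity are not quoted above; the 5.5% values in this sketch are assumptions drawn from THE’s published WUR 3.0 documentation.

```python
# Decomposition of the 59% research weight into its sub-indicators.
# Reputation, citation impact, and strength/excellence/influence are
# the figures quoted above; research income and productivity (5.5%
# each) are assumptions from THE's published WUR 3.0 documentation.
research_environment = {
    "research_reputation": 18.0,
    "research_income": 5.5,        # assumed, not quoted above
    "research_productivity": 5.5,  # assumed, not quoted above
}
research_quality = {
    "citation_impact": 15.0,
    "research_strength": 5.0,
    "research_excellence": 5.0,
    "research_influence": 5.0,
}

env_total = sum(research_environment.values())
quality_total = sum(research_quality.values())
print(env_total, quality_total, env_total + quality_total)  # 29.0 30.0 59.0
```

Under this split, research quality (30%) slightly outweighs the research environment (29%), and its indicators are citation-driven, so an institution can climb steeply on bibliometrics alone.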

Shockingly, IISc Bangalore, the nation’s undisputed leader in research for decades, is ranked 50th in research quality, while institutions with negligible research footprints—Chitkara (Chandigarh), Saveetha (Chennai), Shoolini (Solan), Lovely (Phagwara), and Thapar (Patiala)—occupy the top five spots. IISc scored 51.5, compared to these institutions, which achieved scores of 88.9, 88.6, 87.2, 84.7, and 83.3, respectively.

Ironically, while IISc is ranked first in the research environment, the top-ranked institutions for research quality scored dismally in this category, with scores of 11.4, 16.0, 21.3, 14.6, and 13.9, respectively. This glaring inconsistency highlights a fundamental contradiction: a strong research environment is universally acknowledged as a prerequisite for high research quality. The lack of alignment between these metrics undermines the foundational principle that robust environments foster quality research.
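The mismatch can be quantified from the scores quoted above. The sketch below computes Spearman’s rank correlation between the research-quality and research-environment scores of the five institutions named; IISc is excluded because its research-environment score is not quoted here, and with only five data points the exercise is purely illustrative.

```python
# Spearman rank correlation between the research-quality and
# research-environment scores quoted above for Chitkara, Saveetha,
# Shoolini, Lovely, and Thapar (in that order). IISc is omitted:
# its research-environment score is not quoted in this article.
quality = [88.9, 88.6, 87.2, 84.7, 83.3]
environment = [11.4, 16.0, 21.3, 14.6, 13.9]

def ranks(values: list[float]) -> list[int]:
    """Rank from 1 (highest score) to n (lowest); no ties occur here."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    result = [0] * len(values)
    for position, index in enumerate(order, start=1):
        result[index] = position
    return result

n = len(quality)
d_squared = sum((a - b) ** 2 for a, b in zip(ranks(quality), ranks(environment)))
rho = 1 - 6 * d_squared / (n * (n * n - 1))
print(f"Spearman rho = {rho:.2f}")  # Spearman rho = -0.10
```

A coefficient near zero, where a strongly positive association would be expected if robust environments fed quality research, makes the contradiction explicit.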

Such anomalies suggest that the rankings are arbitrary, resting on unsound methodology, opaque processes, and no credible explanation to justify their methods or objectivity.

International outlook: A questionable metric

The globally renowned IISc Bangalore, widely respected for its contributions to research and academia, is ranked 30th in the ‘international outlook’ category with a modest score of 31.6. In contrast, Saveetha University (Chennai) secures the top spot with an impressive 72.5. This stark disparity raises serious questions about the parameters used to evaluate international outlook. That an institution like IISc, known for attracting top-tier global researchers and collaborations, ranks so low suggests a disconnect between the ranking criteria and ground realities.

Interestingly, IISc excels in the ‘industry’ category, retaining its rank as the top institution and underscoring its strong ties to innovation and applied research. This further highlights the inconsistency of methodologies that underrate its research quality and international presence while recognizing its industrial impact.

The rankings, scores, and observations above undermine the credibility of the Times Higher Education (THE) system, turning it into a subject of ridicule among the public and stakeholders. These discrepancies highlight the need for greater transparency and a redesign of the methodology, normalization procedures, and weightages applied. The flawed metrics raise concerns about the system’s objectivity, fairness, and susceptibility to manipulation, which could mislead students and young people seeking educational and career guidance. The ranking agency should strengthen its methodology and standard operating procedures (SOPs), establish robust data-verification processes, and enhance transparency.





Disclaimer

Views expressed above are the author’s own.


