When the National Science Library (NSL), Chinese Academy of Sciences (CAS) failed to release its annual journal ranking table this March, the significance was not immediately apparent outside China.
Inside the country’s academic system, however, the moment carried weight. The disappearance of the rankings was, in practical terms, akin to Clarivate discontinuing the Journal Citation Reports (JCR)—a benchmark widely used across global academia.
Yet the comparison only goes so far. While JCR serves as a reference tool, China’s domestic ranking system has often functioned as something more powerful: a quasi-official standard embedded in hiring, promotion, and funding decisions.
Now, that system has been formally withdrawn—only to reappear, in altered form, under a new and independent banner.
From official benchmark to independent platform
For years, the CAS journal ranking table occupied a unique position in China’s research ecosystem. Although technically defined as a research output, it became deeply embedded in institutional practice. Universities, funding bodies, and hiring committees routinely treated it as an authoritative guide to journal quality.
Its influence extended far beyond bibliometrics. In many cases, the tier of a journal could shape the perceived value of an individual paper, reinforcing a system in which publication venue acted as a proxy for research quality.
That authority was closely tied to institutional identity. Backed by the Chinese Academy of Sciences, the rankings carried an implicit official status—even if they were never formally mandated as policy.
With the withdrawal of the National Science Library from the process, that institutional anchor has now been removed.
Same team, new name
Almost immediately following the suspension, a new platform called Xinrui Scholar released its own ranking system.
Despite being positioned as an independent, third-party platform, Xinrui Scholar represents significant continuity. Its core team includes several individuals who were deeply involved in developing the original CAS rankings, in some cases for more than a decade.
The shift, then, is less about intellectual reinvention than institutional repositioning.
What has changed more substantially is the data infrastructure. Earlier versions of the ranking system relied heavily on proprietary commercial databases. The new platform is moving away from those sources, exploring open citation data and alternative datasets as part of a broader shift toward more accessible research infrastructure.
This transition reflects both practical constraints and global trends—but it also introduces new questions about consistency, comparability, and long-term reliability.
Public good or commercial service?
Xinrui Scholar has indicated that access to the ranking itself will remain free, echoing the public-facing nature of its predecessor. However, its operational model is more explicitly hybrid.
Revenue is expected to come from value-added services, including data access, APIs, and research analytics tools targeted at universities and research institutions. At the same time, the organization has stated it will avoid financial relationships with journals or publishers, in an effort to maintain independence.
Even so, concerns persist.
In a system where rankings can directly influence academic careers and institutional outcomes, the boundary between public service and commercial activity is not easily managed. Credibility will depend not only on methodology, but on perceived neutrality.
Is this the end of journal-based evaluation?
The withdrawal of the CAS rankings has prompted some observers to suggest that China may be moving away from evaluating research through journal-based metrics.
In reality, the picture is more complex.
As Xinrui Scholar’s leadership acknowledges, the reliance on journal rankings reflects structural challenges: the scale of research output, the uneven development of peer review systems, and the administrative need for standardized evaluation tools.
In this context, rankings are not the root of the problem but a response to it.
Even without the CAS system, alternative classification frameworks—domestic and international—are likely to continue playing a central role. The key issue is not whether such systems exist, but how they are used.
Ongoing debates over metrics and methodology
The transition has also revived longstanding debates about how journals should be evaluated.
Critics point to controversial reclassifications, including the downgrading of well-established titles and the perceived inflation of others. Supporters argue that newer indicators better capture overall citation distribution, rather than relying solely on averages such as impact factor.
Methodological transparency remains a particular concern. While core principles may be publicly described, replicating results requires access to large-scale data and computational resources that are not widely available.
At the same time, policies aimed at supporting domestic journals continue to divide opinion. Advocates see them as essential for strengthening China’s publishing ecosystem and global visibility. Detractors argue they risk distorting competition and reinforcing systemic bias.
Beyond the rankings
More fundamentally, the episode highlights a deeper transition within China’s research evaluation system.
With the removal of an “official” benchmark, institutions face increasing pressure to diversify how research quality is assessed. There is growing recognition of the need to move beyond single metrics toward more holistic approaches, including peer review and qualitative evaluation.
Yet progress remains uneven. Institutional capacity, incentive structures, and administrative demands continue to favor simplified, quantitative tools.
In that sense, the re-emergence of the ranking system—under a different name but familiar leadership—suggests that continuity may prevail, at least in the near term.
A system under pressure
The end of the CAS journal ranking table does not mark a clean break. Instead, it exposes the tensions within a system that has long relied on a single, influential metric to manage complexity at scale.
For international observers, the comparison with JCR offers a useful reference point—but also highlights a key distinction. Where global rankings tend to inform, China’s system has often been used to decide.
That difference is now under scrutiny.
As Xinrui Scholar takes on a more prominent role, the question is not simply whether it can replicate the influence of its predecessor. It is whether China’s research evaluation system is ready to evolve beyond it.
For now, the answer remains uncertain.