I’ve just published a #PreprintReview of “Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings” on @prereview https://prereview.org/reviews/15772738
#UniversityRankings #ResearchIntegrity
#bibliometrics
On my way home after two intense days at CWTS | Leiden University discussing data sources and indicators for bibliometric analysis. A huge thanks to Clara Calero Medina, Ludo Waltman and Vincent Traag for the inspiring conversations and useful tips that will soon be integrated into the Université de Lorraine #bibliometrics support services!
Open scholarly metadata is actually such a huge mess, especially for monographs, edited volumes, and grey literature, i.e. anything that isn't an English-language STEM journal article.
Which actually explains so much about why the "metascience" scene is how it is....
The #INNBI project keeps gaining momentum — our third meeting just wrapped up with energy and shared purpose!
Partners from
are working together to move open science, high-quality metadata, and responsible bibliometrics forward — grounded in national contexts, but globally connected.
Together, we’re building a future where research infrastructures are open, interoperable, and inclusive.
Project info: https://www.dzhw.eu/en/forschung/projekt?pr_id=756
Unlock the Power of Open Research Data!
Join our hands-on OpenAIRE Graph API Workshop on June 20, 14:00 – 16:00 CEST.
Limited to 30 participants for an interactive experience.
Dive into real-world applications on:
- Bibliometric analysis
- Scholarly discovery
- Open Science monitoring
Register now: shorturl.at/QZVD2
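If you want to poke at the Graph API before the workshop, here is a minimal Python sketch, not taken from the workshop materials. The endpoint and parameter names (researchProducts, search, pageSize) and the shape of the JSON response are assumptions based on the public OpenAIRE Graph API documentation, so check the current docs before relying on them.

```python
# Minimal sketch (not from the workshop): query the OpenAIRE Graph API for
# research products matching a keyword and tally them by publication year.
# Endpoint, parameter names, and response fields below are assumptions --
# verify against the current OpenAIRE Graph API documentation.
import requests
from collections import Counter

BASE = "https://api.openaire.eu/graph/v1/researchProducts"  # assumed endpoint

def fetch_products(keyword: str, page_size: int = 50) -> list[dict]:
    """Fetch one page of research products matching a free-text keyword."""
    params = {"search": keyword, "pageSize": page_size}  # assumed parameters
    response = requests.get(BASE, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])  # assumed response structure

if __name__ == "__main__":
    products = fetch_products("bibliometrics")
    years = Counter(str(p.get("publicationDate", ""))[:4] for p in products)
    for year, count in sorted(years.items()):
        print(year, count)
```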
This Wednesday, I'll be presenting part of my PhD research on the history of neuroscience in Argentina. The talk brings together scientometric analysis and qualitative research to explore how the field has taken shape, and the factors that influence it. Let me know if you're interested; I'd be happy to share the link.
#neuroscience #STS #historyofscience #scientometrics #bibliometrics #Argentina
Release of selected and curated #OpenAlex data on German research institutions https://www.open-bibliometrics.de/posts/20250507-OpenDataRelease/ #bibliometrics
Journal Article: “Data Sources Used in #Bibliometrics 1978–2022: From Proprietary Databases to the Great Wide Open”
Stable pattern with #webofscience and #Scopus
Current emphasis on #opensource
Are we entering the great wide open, or will established proprietary databases remain a dominating source?
https://www.infodocket.com/2025/05/16/journal-article-data-sources-used-in-bibliometrics-1978-2022-from-proprietary-databases-to-the-great-wide-open/
[Monitoring] The ANR's adaptation of the national BSO: "ANR Open Science Monitor (Baromètre science ouverte): 88.6% of publications in open access in 2024" => https://anr.fr/fr/actualites-de-lanr/details/news/barometre-science-ouverte-de-lanr-886-des-publications-en-acces-ouvert-en-2024/
#BSO #openscience #monitoring #assessment #openaccess #dashboarding #bibliometrics
"Clarivate has announced that, from 2025 (2024 data), citations to and from retracted articles will no longer contribute towards the Journal Impact Factor." - Research Information
https://www.researchinformation.info/news/clarivate-removes-citations-to-and-from-retracted-articles-from-jif/
Being a "Highly Cited Researcher" has gone from a sign of having impact as a researcher to a potential indicator of misconduct.
"Manipulations have been so obvious and large that, in 2024, over 2,000 researchers were removed from a HCR list containing some 6,600 names." - Lauranne Chaignon
Everyone talks about nanotech, but who digs into how nanostructures are made?
We do — with 35,000+ papers analyzed!
Our new paper compares electrochemical etching vs. deposition through a bibliometric lens.
Our #INNBI project just had its first official meeting — and it’s already taking off on the wings of scientific progress!
Bringing together partners from
, we're building a network of national bibliometric infrastructures to share harmonized, curated data and elevate the visibility of diverse research systems.
Project info:
https://www.dzhw.eu/en/forschung/projekt?pr_id=756
I just reviewed a paper. I sent a detailed 3-page critique, mostly of the Bayesian analysis used in the paper. Reviewer 1 wrote two sentences: "The current manuscript is a very well written". With reviewers like that, it's no wonder that science has problems. Too many papers are being submitted, so it's impossible to find competent reviewers.
I blame #bibliometrics.
MDPI as a corruption indicator? A new preprint shows a striking trend across Europe: more MDPI papers → higher perceived corruption → lower innovation.
https://arxiv.org/abs/2411.06282v1
It’s not that MDPI = bad. But when it dominates, it signals a broken system chasing quantity over quality.
Ukraine? Not in the study, but we see the same rise of #MDPI. We could build better. Instead, we copy the worst.
I’m excited to be part of #INNBI - a new project connecting national bibliometric infrastructures across
. Together, we aim to boost the visibility of national research systems and improve bibliometric data quality through local expertise.
https://www.dzhw.eu/en/forschung/projekt?pr_id=756
A small step toward a fairer, smarter research ecosystem. Let’s see if we can turn good intentions into real impact.
ResearchFish Again
One of the things I definitely don’t miss about working in the UK university system is the dreaded Researchfish. If you’ve never heard of this bit of software, it’s intended to collect data relating to the outputs of research grants funded by the various Research Councils. That’s not an unreasonable thing to want to do, of course, but the interface is – or at least was when I last used it several years ago – extremely clunky and user-unfriendly. That meant that, once a year, along with other academics with research grants (in my case from STFC), I had to waste hours uploading bibliometric and other data by hand. A sensible system would have harvested this automatically, as most of it is available online in various locations, or allowed users simply to upload their own publication list as a file; most of us keep an up-to-date list of publications for various reasons (including vanity!) anyway. Institutions also keep track of all this stuff independently. All this duplication seemed utterly pointless.
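For illustration only, here is roughly what “harvested automatically” could look like: a short Python sketch that pulls a publication list from the public Crossref REST API by ORCID iD. The ORCID shown is the standard example identifier, the field handling follows the Crossref works schema as I understand it, and nothing here reflects how Researchfish or the Research Councils actually work.

```python
# Minimal sketch of automatic harvesting: list a researcher's publications
# via the public Crossref REST API, filtered by ORCID iD.
import requests

def publications_by_orcid(orcid: str, rows: int = 100) -> list[dict]:
    """Return up to `rows` Crossref work records associated with an ORCID iD."""
    url = "https://api.crossref.org/works"
    params = {"filter": f"orcid:{orcid}", "rows": rows}
    r = requests.get(url, params=params, timeout=30)
    r.raise_for_status()
    return r.json()["message"]["items"]

# Placeholder ORCID (the well-known example identifier), not a real grant holder.
for work in publications_by_orcid("0000-0002-1825-0097"):
    title = (work.get("title") or ["(untitled)"])[0]
    year = work.get("issued", {}).get("date-parts", [[None]])[0][0]
    print(year, work.get("DOI"), title)
```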
I always wondered what happened to the information I uploaded every year, which seemed to disappear without trace into the bowels of RCUK. I assume it was used for something, but mere researchers were never told to what purpose. I guess it was used to assess the performance of researchers in some way.
When I left the UK in 2018 to work full-time in Ireland, I took great pleasure in ignoring the multiple emails demanding that I do yet another Researchfish upload. The automated reminders turned into individual emails threatening that I would never again be eligible for funding if I didn’t do it, to which I eventually replied that I wouldn’t be applying for UK research grants anymore anyway. So there. Eventually the emails stopped.
Then, about three years ago, ResearchFish went from being merely pointless to downright sinister when a scandal erupted about the company that operates it (called Infotech), involving the abuse of data and the bullying of academics. I wrote about this here. It then transpired that UKRI, the umbrella organization governing the UK’s research councils, had been actively conniving with Infotech to target critics. An inquiry was promised, but I don’t know what became of it.
Anyway, all that was a while ago and I no longer live or work in the UK, so why mention ResearchFish again now?
The reason is something that shocked me when I found out about it a few days ago. Researchfish is now operated by commercial publishing house Elsevier.
Words fail. I can’t be the only person to see a gigantic conflict of interest. How can a government agency allow the assessment of its research outputs to be outsourced to a company that profits hugely from the publication of those outputs? There’s a phrase in British English which I think is in fairly common usage: marking your own homework. It refers to individuals or organizations who have been given responsibility for regulating their own products. It is very apt here.
The acquisition of Researchfish isn’t the only example of Elsevier getting its talons into academic life. Elsevier also “runs” the bibliometric service Scopus, which it markets as a sort of quality indicator for academic articles. I put “runs” in inverted commas because Scopus is hopelessly inaccurate and unreliable; I can certainly speak from experience on that. Nevertheless, Elsevier has managed to dupe research managers – clearly not the brightest people in the world – into thinking that Scopus is a quality product. I suppose the more you pay for something, the less inclined you are to doubt its worth, because if you do find you have paid for worthless junk you look like an idiot.
A few days ago I posted a piece that included this excerpt from an article in Wired:
Every industry has certain problems universally acknowledged as broken: insurance in health care, licensing in music, standardized testing in education, tipping in the restaurant business. In academia, it’s publishing. Academic publishing is dominated by for-profit giants like Elsevier and Springer. Calling their practice a form of thuggery isn’t so much an insult as an economic observation.
With the steady encroachment of the likes of Elsevier into research assessment, it is clear that as well as raking in huge profits, the thugs are now also assuming the role of the police. The academic publishing industry is a monstrous juggernaut that is doing untold damage to research and is set to do more. It has to stop.
An entertaining, informative and overall really well done video about the h-index and why you shouldn't use it, by @stefhaustein, @carey_mlchen et al.
I really will be sharing this video a lot: "What is the h-index and what are its limitations? Or: Stop using the h-index"
https://www.youtube.com/watch?v=HSf79S3XkJw
#hIndex #bibliometrics #researchEvaluation #researchAssessment #publishOrPerish
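For anyone who wants the definition next to the video: a researcher's h-index is the largest h such that they have at least h papers with at least h citations each. A minimal sketch with made-up citation counts:

```python
# h-index: the largest h such that at least h papers have >= h citations each.
def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3 -- one highly cited paper barely moves it
```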
Since the 1980s, scholarly journals have increasingly denationalized & anglicized their titles to boost international visibility. This trend aligns with indexing policies of Web of Science & Scopus, favouring English-language journals.
https://doi.org/10.1002/asi.24989
Is global reach worth the loss of local identity?
Please show me your favorite network graphs about scholarship! Topic modeling, bibliometric, scientometric, whatever.
You get bonus points if the figure comes with a caption that explains what the node size/edge thickness/colors/clusters/? mean.
Bonus points = pictures of my cute dog