AIRON Event: AI, Research Evaluation, and REF 2029 — What We Need to Understand Now
🗓 22 January 2026 | 🕓 4pm GMT
🔗 Event link: AI, Research Evaluation, and REF 2029: What We Need to Understand Now | LinkedIn
In this session, Mike Thelwall will present recent evidence on how both public models (e.g., ChatGPT) and smaller private models perform when assessing research quality. The discussion draws on REF-evaluated journal articles and also considers where AI may be applied across outputs, impact case studies, environment statements, and pre-publication peer review.
We will also explore a practical, responsible stance: LLMs can be persuasive and still wrong, but when used carefully, for example by averaging multiple outputs and using rankings rather than absolute scores, they may offer useful supplementary evidence that informs, rather than replaces, expert judgement.
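As a loose illustration of that stance (a minimal sketch only, not anything presented at the event; the function, article IDs, and scores below are all hypothetical), averaging repeated model scores per output and then comparing outputs by rank within the batch might look like this:

```python
from statistics import mean

def rank_outputs(scores_per_output: dict[str, list[float]]) -> list[tuple[str, float, int]]:
    """Average repeated LLM scores per output, then rank outputs by that average.

    scores_per_output maps an output ID to several scores from repeated runs
    (e.g., REF-style 1-4 quality ratings). The rank, not the raw score, is
    what feeds into human review.
    """
    averaged = {out_id: mean(scores) for out_id, scores in scores_per_output.items()}
    # Sort by averaged score, highest first, and attach a 1-based rank.
    ordered = sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)
    return [(out_id, avg, rank) for rank, (out_id, avg) in enumerate(ordered, start=1)]

# Hypothetical example: three repeated scores per article from the same prompt.
scores = {
    "article_A": [3.0, 3.5, 3.0],
    "article_B": [2.5, 3.0, 2.5],
    "article_C": [3.5, 4.0, 3.5],
}
for out_id, avg, rank in rank_outputs(scores):
    print(f"{out_id}: mean={avg:.2f}, rank={rank}")
```

The design point is that averaging dampens run-to-run noise, and ranking sidesteps the question of whether an absolute model score is calibrated to the REF scale.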
This event is part one of a two-part series. We will continue the conversation in February with a follow-up panel featuring Mike Thelwall, Elizabeth Gadd, and Helen Young.
If you can’t attend live, you can still stay connected:
📩 AIRON newsletter (summary + Q&A): Newsletter sign-up page
🔗 AIRON LinkedIn community: AI in Research Operations Network (AIRON) | Groups | LinkedIn
![AIRON Event: AI and Research Evaluation [Jan 22, online, 16:00 GMT]](https://dariah.ie/wordpress/wp-content/uploads/2025/08/cropped-cropped-Picture-1-scaled-1.png)
