Funders’ use of AI must start with people and problems

International project shows tech’s potential—if the structures and values are right, says Denis Newman-Griffis

Artificial intelligence is already affecting how researchers conduct projects and write publications and grant applications. But where do AI and machine learning technologies fit into the funding and assessment of research? And how do funders make sure AI is used responsibly in their work, to best support the systems they help shape?

Over the past two years, the Research on Research Institute (RoRI) in London has been working with an international consortium of funders to answer these questions.

The Grail project (short for getting responsible about AI and machine learning in research funding and evaluation) has brought together 13 public and private funders from 11 countries across Europe, North America and Australia to better understand how funders are exploring and applying AI and machine learning, and to build a community of practice to achieve responsible futures.

On 20 June 2025, we launched Funding by Algorithm: A handbook for responsible uses of AI and machine learning by research funders. This sets out the essential information funders need for their work, along with the lessons learnt from in-depth discussions of AI in funding and assessment.

Need to know

The handbook outlines core concepts that anyone working with AI needs to understand, including the long history and wide variety of technologies, beyond ChatGPT and other large language models (LLMs), that fall under the AI umbrella. It highlights the policy contexts motivating funders to explore AI, and the organisational challenges involved in bringing it into practice.

The handbook is grounded in real-world practice, describing the steps involved in AI applications and showing case studies from participating funders, including the Swiss National Science Foundation (SNSF), the Novo Nordisk Foundation in Denmark, Spain’s La Caixa Foundation, the Research Council of Norway and UK Research and Innovation.

Exploring AI and bringing it into responsible practice is, we’ve found, not really a question of technology. The key lessons are about the people, processes and practices involved in AI use: the structures built around it, the teams that implement and manage it, and the policies, values and culture that shape how it is used.

Rather than seeking nails to hit with the AI hammer, the key to using the technology effectively is to start with your problem and goals, and work from there to understand if and where AI technologies might help.

With technologies changing so quickly, it can feel like any kind of guidance or best practice will be obsolete within six months. But we’ve found that exploring or applying AI draws on skills and questions that don’t change with the technologies, and which actually build resilience to the shifting winds of AI.

AI thinking

Our model of AI Thinking, drawing on funders’ experiences, describes the key competencies that teams need to use these technologies responsibly and with practical impact. Beginning by matching real problems to appropriate technologies in a specific context helps teams understand how to work in the current landscape and how to respond as new technologies emerge.

In general, funders are cautiously optimistic about AI. They see its potential to help them work more efficiently and effectively, putting the right information in front of the right people at the right time, and even to bring new insights about their work.

For example, one funder in Grail highlighted how exploring AI in peer review helped to identify unexpected patterns in how reviewers were matched to proposals and what they might pick up on in reviews. But funders are also well aware of the organisational, reputational and regulatory risks around issues like data security, as well as the effects AI use is already having on research cultures.

There is a spectrum of attitudes and approaches to AI. Some funders are early adopters, such as the SNSF and La Caixa, which have used AI for matching proposals to reviewers. The Research Council of Norway has used AI to analyse societal impacts of projects in their funding portfolio.
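
Neither funder has published its method here, so purely as an illustrative sketch of what proposal-reviewer matching can involve, the snippet below ranks reviewers against proposals by TF-IDF cosine similarity using scikit-learn. All reviewer names and texts are invented, and real systems are considerably more sophisticated.

```python
# Hypothetical sketch: match proposals to reviewers by text similarity.
# Not any funder's actual system; all data below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

proposals = [
    "Machine learning methods for early cancer detection in imaging data",
    "Community energy cooperatives and the politics of the green transition",
]
reviewer_profiles = {
    "Reviewer A": "deep learning, medical imaging, computer-aided diagnosis",
    "Reviewer B": "energy policy, social science of sustainability transitions",
    "Reviewer C": "statistical genetics, biomarker discovery, oncology",
}

# Fit one shared vocabulary over profiles and proposals so vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(reviewer_profiles.values()) + proposals)
profile_vecs = matrix[: len(reviewer_profiles)]
proposal_vecs = matrix[len(reviewer_profiles):]

# Score every reviewer against every proposal and print ranked matches.
scores = cosine_similarity(proposal_vecs, profile_vecs)
for proposal, row in zip(proposals, scores):
    ranked = sorted(zip(reviewer_profiles, row), key=lambda x: -x[1])
    print(proposal[:50], "->", [(name, round(s, 2)) for name, s in ranked])
```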

Other funders are just beginning their journeys, exploring how to get started and what applications AI might be useful for. Even those not currently exploring AI in their own processes—about a third in our recent Global Research Council survey—need to understand how their applicants are using it and set standards and guidance.

The AI landscape in funding and assessment is evolving rapidly. The near future will probably see chatbots that help applicants navigate funding resources, AI summaries of reviewer feedback, and even automatically generated non-academic summaries of funded research.

These will draw on many technologies, including general-purpose LLMs as well as bespoke machine learning. RoRI’s handbook offers a starting point to explore future applications, and a strong grounding in how to ensure AI is used ethically and responsibly to benefit applicants, funders and research systems.

Denis Newman-Griffis is a senior lecturer and theme lead for AI-enabled research at the Centre for Machine Intelligence, University of Sheffield, and a co-leader of the Grail project. They are speaking at the Metascience 2025 conference on 30 June. 

Research Professional News is media partner for Metascience 2025, taking place 30 June-2 July in London.
