ChatGPT is continuing to show the cracks in established documentation processes and “gate-keeping” systems…

Historically, the written word has been used in application processes both to judge the quality of the applicant (and the application) and to dissuade casual applicants by requiring a certain level of effort. Alongside the enormous problems around GenAI-authored student essays (which most academics would consider cheating), it appears that academics are getting in on the act by experimenting with GenAI to write proposals.

Is this cheating, or simply using the most modern tool for the job?

In a recent article in Nature, the author (J.M. Parrilla) expresses a dislike for writing grant applications due to the extensive amount of work involved. Grant applications, he explains, often require numerous documents: a case for support, a lay summary, an abstract, multiple CVs, impact statements, public engagement plans, project management details, letters of support, data handling plans, risk analyses, and more. Despite this extensive (and expensive) effort, there is a very high chance of rejection (90–95%).

The author suggests that the system is flawed, time-consuming, and cumbersome. The focus during the review process, he argues, is often on whether the proposal ticks a number of boxes: whether it aligns with the call brief (including the format), whether the science is good and novel, and whether the candidates are experts in the field.

The author decided to use ChatGPT as a tool to assist in writing grant proposals, which, he claims, reduced the workload significantly. He therefore questions the value of asking scientists to write documents that AI can easily create, suggesting it might be time for funding bodies to reconsider their application processes.

He notes that a recent Nature survey indicates a significant number of researchers (>25%) are already using AI to aid in writing manuscripts, and more than 15% admit to using it for grant proposals. While the article acknowledges that some may view using ChatGPT as "cheating", it argues that this underscores a larger issue in the current grant application system.

It concludes that the fact that artificial intelligence can do much of the work makes a mockery of the process, and argues that it is time to make it easier for scientists to apply for research funding.