GenAI tools highlight potential flaws in Grant Applications

ChatGPT is continuing to show the cracks in established documentation processes and “gate-keeping” systems…

Historically, the written word has been used as part of the application process to judge the quality of the applicant (and the application) and to dissuade casual applicants by requiring a certain level of effort. Alongside the enormous problems around GenAI-authored student essays (which most academics would consider cheating), it appears that academics are getting in on the act by experimenting with GenAI to write proposals.

Is this cheating, or simply using the most modern tool for the job?

In a recent article in Nature, the author (J.M. Parilla) expresses a dislike for writing grant applications due to the extensive amount of work involved. Grant applications, he explains, often require various documents: a case for support, a lay summary, an abstract, multiple CVs, impact statements, public engagement plans, project management details, letters of support, data handling plans, risk analyses, and more. Despite this extensive (and expensive) effort, the chance of rejection is very high (90–95%).

The author suggests that the system is flawed, time-consuming, and cumbersome. The focus during the review process, he argues, is often on whether the proposal ticks a number of boxes: whether it aligns with the call brief (including the format), whether the science is good and novel, and whether the candidates are experts in the field.

The author decided to use ChatGPT as a tool to assist in writing grant proposals, which, he claims, reduced the workload significantly. He therefore questions the value of asking scientists to write documents that AI can easily create, suggesting it might be time for funding bodies to reconsider their application processes.

He notes that a recent Nature survey indicates that a significant number of researchers (>25%) are already using AI to aid in writing manuscripts, and that >15% admit to using it for grant proposals. While the article acknowledges that some may view using ChatGPT as “cheating”, it argues that this underscores a larger issue in the current grant application system.

It concludes that the fact that artificial intelligence can do much of the work makes a mockery of the process, and argues that it is time to make it easier for scientists to ask for research funding.

Legal challenge over £330m NHS Palantir Deal

Campaigners are taking legal action to stop a data-sharing deal between the NHS and Palantir, a US tech company, due to worries about patient privacy. Palantir secured a £330 million contract to create a Federated Data Platform (FDP) for the NHS, aiming to improve information sharing and address a patient backlog.

Concerns include granting such a critical project to a company with ties to US intelligence. Palantir will manage specific hospital data, but won’t own it, requiring NHS permission for access. Legal groups allege there is no legal foundation for the FDP and criticize its emergency procurement without competitive tender.

Campaigners emphasize the need for parliamentary approval and proper rules for the lawful handling of NHS data. If the NHS fails to prove the FDP’s legality, campaigners plan a judicial review. The NHS disputes the concerns, stating that the FDP will use legally collected existing data for direct patient care, complying with data protection regulations.