There’s been a bit of a protest over the current “pilot” project scheme¹ grant competition at CIHR. In fact, it’s so bad that the federal Minister has told them to meet with the scientists and sort it out.
The CBC does a surprisingly good job of explaining this to people who may not be scientists, but you might want a bit more context and background on the kerfuffle, which is where this post² comes in.
Grants Background: In brief, the Canadian Institutes of Health Research (CIHR) is one of Canada’s main sources of research grants, funding basic, translational, and clinical research related to human health. A grant is money given to a research team to pay for the costs of a research project: reagents and software, grad student and post-doctoral fellow stipends, etc., etc. There is never enough money from funding agencies to go around: human curiosity knows no bounds (and the research enterprise is big). To get a grant, a scientist will typically write³ a proposal describing what they intend to study, explaining why it’s important, demonstrating the need for the research (what questions it will answer, what problems it will solve, what impact it will have on health care), describing the methods they will use in the study, and justifying why they are the ones to get the money and do the research (track record, expertise).
This proposal, along with dozens or hundreds or thousands of others, gets reviewed by the funding agency and expert external peer reviewers, who point out potential flaws in the methodology or other issues with the proposed work. It gets scored and ranked by a review panel, based on the input from the external reviews and discussions at the panel⁴. In this way, the best, most promising research usually gets funded. It’s not a perfect system, but (without bothering to look up the research) the top ~10% of proposals are generally consistently funded and the bottom portion generally rejected, with an element of random chance in the middle as to what makes it over the funding cut-off and what doesn’t.
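To make the mechanics concrete, here’s a minimal sketch of a score-and-rank competition in Python. The proposal names and numbers are invented for illustration; CIHR’s actual scoring scales and cut-offs differ (and the real process has far more nuance):

```python
# Minimal sketch of a score-and-rank grant competition.
# All names and numbers are invented; CIHR's actual scales,
# weights, and cut-offs are different.

budget_allows = 3  # how many grants the envelope can pay for

# Each proposal's panel score (higher = better), e.g. averaged reviewer ratings
scores = {
    "proposal_A": 4.6,
    "proposal_B": 3.9,
    "proposal_C": 4.4,
    "proposal_D": 3.1,
    "proposal_E": 4.3,
}

# Rank all proposals by score, best first
ranked = sorted(scores, key=scores.get, reverse=True)

# Everything above the cut-off is funded; everything below is not,
# no matter how little separates the scores at the boundary.
funded, rejected = ranked[:budget_allows], ranked[budget_allows:]
print("funded: ", funded)     # ['proposal_A', 'proposal_C', 'proposal_E']
print("rejected:", rejected)  # ['proposal_B', 'proposal_D']
```

Note that proposal_E (4.3) gets funded and proposal_B (3.9) doesn’t, on a gap of 0.4 points: when scores are noisy, the ordering near the cut-off is exactly where the randomness lives.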
Funding Shortfall: Funds are tight in research; there isn’t nearly enough money to go around. This has been a long-standing problem that has been getting worse and worse over the years, particularly under the Harper Conservatives in Canada and in the post-GFC/sequestration era in the US. There are many ways to deal with a shortfall of funds, and none of them are perfect: NSERC, for example, maintains a high success rate in its core Discovery grants, but cuts the requested budget of all but the top few score ratings, so that most awardees don’t receive enough money to pay the costs of even a single full-time trainee. For other grant competitions, where budgets are not cut, the success rate becomes abysmal.
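The trade-off is simple arithmetic: with a fixed envelope, you can fund more grants at smaller amounts or fewer grants at full request, but not both. A sketch with made-up numbers (these are not NSERC’s or CIHR’s actual figures):

```python
# The envelope trade-off, with invented numbers (not real NSERC/CIHR figures):
# a fixed pot buys a high success rate or full budgets, but not both.
envelope = 100_000_000   # $100M to give out
n_applications = 1_000
requested = 500_000      # each team asks for $500K

for cut in (0.0, 0.5):   # fund at full request, or cut every budget in half
    award = requested * (1 - cut)
    n_funded = int(envelope / award)
    print(f"{cut:.0%} budget cut -> {n_funded} grants funded "
          f"({n_funded / n_applications:.0%} success rate) at ${award:,.0f} each")
# 0% cut  -> 200 grants (20% success) at $500,000 each
# 50% cut -> 400 grants (40% success) at $250,000 each
```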
That funding crunch is part of the background to the changes that CIHR made to its funding programs.
Early Career Researchers: On top of the general funding tightness and the peer review issues, another problem with the reforms is their effect on early career researchers. Cancelling a year’s worth of applications can lead to a gap in funding for many, which can be deadly for a career. Plus, the change in the funding model on the Foundation grant side reduced the amount of money going to early career researchers; will there be an increase on the project scheme side to offset that? (Unlikely.)
The Changes: CIHR made a bunch of sweeping changes at once: combining several separate programs into a single competition, changing the application format, and changing the way peer review was handled. All of these are drawing fire in one way or another, but it’s the changes to peer review in particular that are at the centre of the current unrest and the open letter (and the Ministerial response).
Oh, and all these changes were implemented at once, after a few competitions had been cancelled, so there was added pressure to apply now. CIHR called this a “pilot”, but that word suggests a partial, limited-scale test; this post covers that aspect of the affair.
Peer Review: Here’s a great idea: rather than spending money in an already stretched environment to fly peer reviewers from all over the world to Ottawa for face-to-face meetings, let’s use the technology of the internet to hold virtual conferences. Sounds brilliant, like one of those obvious cost-saving measures that you can’t believe they’re not doing already. But what’s hilarious/tragic in reading the background to this story is that it’s been tried before (in actual pilots) and has been a massive failure. Indeed, back when the changes were first proposed, an open letter from a group of Université de Montréal scientists in 2012 predicted exactly this outcome of virtual peer review.
The core problem is that scientists doing peer review are humans. Incredibly busy humans. So yes, reading other people’s grants and scoring them is important for science and the integrity of the peer review system… but lots of things are important today. What is it that ultimately makes a scientist sit down and start poring over research proposals? Usually, it’s the knowledge that they’re going to have to sit at a table with their colleagues to discuss these grants and the pressure to not be the only one who didn’t do their homework. Plus, that puts a hard deadline on the process to activate the panic monster, and gives them a nice plane ride to sit down and actually do the reviews in a panicked sweat. When things are virtual, they don’t have to look their peers in the eye (they may never even know who the slacker is), especially when the instructions acknowledge that they’re busy people and suggest they can do it, like, whenever. So the reviews aren’t as good, there’s less peer pressure to be timely, and that’s what’s driving a lot of the uproar here: a large number of grants still did not have all their reviews in as the virtual conferences were nearing their end, and many of the reviews were not of high quality.
On top of that, the circumstances of this particular competition have exacerbated the problem: to avoid conflicts of interest, people who have a grant in the current competition aren’t invited to serve as peer reviewers. However, everyone and their dog is applying to this competition, sometimes with more than one proposal, because of the pent-up demand caused by the poor funding environment and two cancelled rounds of the previous open operating grant program. So there are more applications, and very few people left as eligible reviewers. Plus, the way reviewers are matched to applications has changed, and many are finding that the system (which I believe is now automatic and keyword-based) is matching people to grants they are not fully qualified to review. It’s not clear to me whether that’s because the system is inherently broken, or a unique feature of this round, with so many applications and a relative paucity of reviewers (though I’m sure it’s on the agenda for the meetings with CIHR).
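I don’t know how CIHR’s matching system actually works, so the following is a caricature of generic keyword matching (hypothetical names and keywords throughout), just to show the failure mode: overlap on generic terms can score as a “match” without real expertise.

```python
# Hypothetical keyword-overlap reviewer matching (Jaccard similarity).
# A caricature to illustrate the failure mode, not CIHR's actual system.

def jaccard(a: set, b: set) -> float:
    """Fraction of keywords shared between two keyword sets."""
    return len(a & b) / len(a | b)

application = {"mouse", "imaging", "cancer", "immunotherapy"}

reviewers = {
    "expert_in_field": {"cancer", "immunotherapy", "t-cells", "mouse"},
    "generic_overlap": {"mouse", "imaging", "alzheimers", "behaviour"},
}

for name, keywords in reviewers.items():
    print(name, round(jaccard(application, keywords), 2))
# expert_in_field 0.6   <- genuinely qualified
# generic_overlap 0.33  <- matched mostly on generic terms ("mouse", "imaging")
# With few eligible reviewers left, even the weaker match may get assigned.
```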
The virtual review format has also created a point of contention around how the individual reviews are ultimately combined and averaged: the formula hasn’t actually been released.
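CIHR hasn’t released the formula, so I won’t guess at what it is, but a toy example (entirely made-up numbers) shows why it matters: when some reviews are missing or wildly divergent, the choice of combining rule can flip a ranking.

```python
# Purely hypothetical: CIHR has not released how reviews are combined.
# This toy case shows why the choice of formula matters when review
# sets are incomplete or uneven.
from statistics import mean, median

grant_X = [4.5, 4.4, 4.6]  # three consistent reviews
grant_Y = [4.8, 4.9]       # only two reviews ever came in
grant_Z = [4.7, 4.6, 2.0]  # two strong reviews plus one harsh outlier

for name, reviews in [("X", grant_X), ("Y", grant_Y), ("Z", grant_Z)]:
    print(name, "mean:", round(mean(reviews), 2),
          "median:", round(median(reviews), 2))
# X mean: 4.5   median: 4.5
# Y mean: 4.85  median: 4.85
# Z mean: 3.77  median: 4.6
# By mean, Z ranks last; by median, Z beats X. With thousands of
# applications packed near the cut-off, that choice decides who gets funded.
```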
The Future: The next round of the competition was supposed to have been announced last week, with applications to be due in the fall. For the moment that’s on hold while CIHR meets with some scientists to sort this out and possibly re-jig how peer review is handled. Given the uproar, it’s likely something will be tweaked.
However, funding success rates continue to be poor. That’s part of the background to the story, but the reforms (and possibly reverting them) will not change that: there still isn’t enough money to go around. Though success rates were over 30% in the not-too-distant past, they have plunged below 20% in recent years, and the estimates are that this competition will see a ~13% success rate (likely about 500 grants from about 3800 applications). If there can’t be more funding, people want to be sure that the awards that do get made go fairly (and clearly, observably fairly) to the best grant proposals. With the current system there appears to be more noise and randomness, so it’s not clear that the best-ranked grants are truly the best applications, because of the problems with the quality of peer review. In other words, low success rate + random scores = lottery. Indeed, I have in the past made the quip that when grant competitions have heartbreakingly low success rates, you may be better off spending the time you would have spent writing an application on a second job flipping burgers, and using that money to buy scratchers at the convenience store to fund your research program.
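That quip is easy to back up with a back-of-the-envelope simulation. To be clear, this is my own toy model, not anyone’s account of CIHR’s process: it assumes ~3800 applications, ~500 awards (the ~13% above), and Gaussian review noise, and asks how many awards go to the applications that are “truly” the best.

```python
# Back-of-the-envelope: "low success rate + random scores = lottery".
# A toy model only. Assumes ~3800 applications, ~500 awards (~13%),
# and Gaussian noise added to each application's true quality.
import random

random.seed(1)
N_APPS, N_FUNDED = 3800, 500

def funded_overlap(noise_sd: float) -> float:
    """Fraction of funded grants that are also in the 'truly best' top 500."""
    true_quality = [random.gauss(0, 1) for _ in range(N_APPS)]
    observed = [q + random.gauss(0, noise_sd) for q in true_quality]
    best = set(sorted(range(N_APPS), key=lambda i: true_quality[i])[-N_FUNDED:])
    picked = set(sorted(range(N_APPS), key=lambda i: observed[i])[-N_FUNDED:])
    return len(best & picked) / N_FUNDED

for sd in (0.1, 0.5, 1.0):
    print(f"review noise {sd}: {funded_overlap(sd):.0%} of awards hit the true top ~13%")
# As the noise grows relative to real differences in quality, the funded
# list drifts from "the best grants" toward a random draw: a lottery.
```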
Of course, more money for research would really help here, so take a moment to write your MP and ask them to increase tricouncil⁵ funding.
1. Wayfare says: “Calling it a scheme was their first mistake. Nothing good is called a scheme.”
2. A quick note: I work in developing grants for CIHR and other agencies as part of my day job. It is not done to openly criticize the people who give you money, unless it’s very constructive. The criticisms I’m posting are those of others, for context on the controversy.
3. If they’re very lucky, they’ll have someone like me help to write/edit it. {/self-promote}
4. And at this point I’ll have to say that much of the mechanics of this is a bit of a black box to me: I likely know more than most of my readers, but I have never seen a review panel in action.
5. Tricouncil refers to Canada’s three core federal research funding agencies: NSERC, CIHR, and SSHRC. There are other funding bodies, some of which have received increases in targeted funding even in the dark Harper years, but these three are the ones that could really use a letter of support from the electorate. A letter I suppose I should draft and post here… stay tuned.