On writing a strong NSF pre-proposal

Since 2013, NSF DEB has used a two-step merit review system–a 5-page pre-proposal, followed by a 15-page full proposal–to inform the process of awarding grants. The full proposal had long been standard; the pre-proposal, just four years old, is still relatively new. I have served on three DEB pre-proposal panels. Over the course of that service I have made some observations and formed some opinions as to what makes for a successful pre-proposal. I'd like to share some of those observations here.

As panel work is anonymous, I scrub any reference to specific panels, panel members, or program officers. My intent is to walk you through the process of being picked for a panel, writing reviews, and the panel work itself, adding some observations and suggestions where appropriate. I then step back with some conclusions about why some pre-proposals are invited for a 15-page full proposal, and others aren’t. I finish with some suggestions as to how best to craft a strong pre-proposal.

Caveat emptor: these are my opinions. To paraphrase Anne Elk: these impressions that I have, that is to say, which are mine, are mine.

First some data

DEBrief's recent post included a graph that summarized success rates versus average scores for both pre-proposals and full proposals. The x-axis shows the average score of the reviews (n=3 for pre-proposals, n>3 for full proposals), where a reviewer score of Poor=1, Fair=2, Good=3, Very Good=4, and Excellent=5 (more on that scoring system below). One clear lesson is that the key challenge for pre-proposals is to achieve an average score of ca. 4 or above. That is, scores dominated by VG's and E's.
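As an illustrative aside, here is the arithmetic behind those averages. The letter-to-number mapping is the one described above; the review scores in the example are hypothetical:

```python
# Map NSF review scores to numbers (Poor=1 ... Excellent=5). Split scores
# like "VG/E" (allowed on panels) average their two component values.
SCORE = {"P": 1, "F": 2, "G": 3, "VG": 4, "E": 5}

def score_value(s):
    """Numeric value of one review score, e.g. 'VG' -> 4, 'VG/E' -> 4.5."""
    parts = s.split("/")
    return sum(SCORE[p] for p in parts) / len(parts)

def average_score(reviews):
    """Average the reviewers' scores for one pre-proposal (n=3 here)."""
    return sum(score_value(r) for r in reviews) / len(reviews)

# A hypothetical pre-proposal scored VG, VG/E, E averages 4.5 --
# comfortably in the "ca. 4 or above" zone where invites concentrate.
print(average_score(["VG", "VG/E", "E"]))
```

Note how unforgiving the average is: a single "G" among two "E"s already pulls a pre-proposal down to 4.33, and one "F" drops it below the line.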


To see how to get there, let's first examine how pre-proposals are reviewed.

Getting selected for a pre-proposal panel

Around the beginning of the year NSF emails a bunch of folks asking if they would be available to serve on a pre-proposal panel, noting that at this point they are just establishing a pool of potential panelists. You are asked if you are interested. After replying in the affirmative, a subsequent email asks which of three panel dates fit your schedule. The next email, about a week later, says "Thank you for agreeing to serve on a panel; please fill out these forms, and arrange hotel and airfare through our designated travel agency."

Observation: Note NSF pays a stipend for this work, out of which you pay for your hotel in Arlington (not cheap) plus airfare and expenses. Note also that the following January you will receive an IRS 1099 for this stipend, which means it's miscellaneous income, which means that if you are reasonably prudent, after paying some of this stipend back to Uncle Sam as income tax, you will about break even financially when serving on an NSF panel.

Identifying the pre-proposals you would like to see

Soon thereafter you are emailed a list of ca. 150 pre-proposals listed by title, Principal Investigator, and the PI's affiliation. You are asked to speedily 1) identify any conflicts of interest that weren't already apparent from your work address; and, 2) assign each pre-proposal into one of three categories: effectively "1. I would like to review", "2. I'm OK with reviewing", and "3. I don't want to review". This process is a little mind-bending, and takes about an hour and a half.

Observation: The title of your pre-proposal is important. If you want to give the Program Officers at NSF their first, positive clue as to how the scientific community views your project, craft a title that interests non-specialists in your field. The flipside is just as true: if you want to send the opposite signal, load your title up with jargon and make it opaque enough that you earn loads of "3"s. The POs will then have to work to scrape together three panelists. Some practical advice (which also applies to manuscripts): craft 5 titles for your pre-proposal and market test them–in your lab or among your colleagues–to find which one sparks the most interest. If you are like me, you will be surprised at how often the consensus is not your favorite.

Reviewing the Pre-Proposals

About one month before your panel meets, you are sent a list of 20 or so pre-proposals to review. They consist of a summary page, a personnel page, and a 4-page project description.

Criteria

NSF instructs reviewers to consider the following criteria sequentially when judging the quality of a pre-proposal.

A. Is there a clear and compelling question(s) of general interest to the field?

B. Is the question(s) well motivated and justified within a broader conceptual framework?

C. Does the experimental design/approach logically and feasibly address the question(s) posed?

D. Are the senior personnel qualified to conduct this research?

E. Is there a credible plan for broader impacts?

Scores

NSF instructs reviewers to integrate these criteria and generate a score from the following menu. It is OK to include two scores (e.g., VG/E, F/G). The scores and their official NSF meaning:

Excellent–Outstanding proposal in all respects; deserves highest priority for support.

Very Good–High quality proposal in nearly all respects; should be supported if at all possible.

Good–A quality proposal, worthy of support.

Fair–Proposal lacking in one or more critical aspects; key issues need to be addressed.

Poor–Proposal has serious deficiencies.

The challenge for reviewers

Three reviewers, all members of the panel that meets in Arlington, are assigned to each pre-proposal. There are no outside, ad hoc reviewers on pre-proposal panels. It is my experience that the NSF program officers pretty much unfailingly assemble a panel who take this job very seriously, and who, collectively, represent the field the NSF subprogram is tasked to serve.

Each panelist has one month to review and write substantive comments on ca. 20 proposals; on about a third of those, you are expected to lead the discussion and write up the final panel summary. This service is squeezed into the usual academic spring schedule: the student committee meetings, job interviews, field season planning, and general end of academic year chaos. Luckily (or unluckily, as the case may be) you can always dip into Spring Break to finish your reviewing obligations.

As a panelist, you know that at least one Program Officer will read your review, as will the two other panelists assigned that pre-proposal. Unlike with ad hoc, anonymous reviews, I have rarely encountered a panelist review that was not thorough and helpful. (Think of the difference between anonymous comments on social media and face-to-face discussions.)

Observations: Most reviewers read the Project Summary and Project Description together in one sitting and then power through their review. In my experience, that typically means 2 hours of concentrated work. In that time you must grok the project, often on a topic well outside your area of expertise, judge its importance, and write a constructive review. The chief task of a pre-proposal review is to detail the strengths and weaknesses of the proposed work so as to improve subsequent efforts by the PI, whether they involve an invited full proposal or not.

Just to emphasize, you as a PI will be lucky to have one person from your specialty among the three scientists tasked with reviewing your proposal. I will return to this later, but suffice to say that the first task of a PI is to craft the pre-proposal's entry point into the problem. This entry point must entice three people and a program officer from an audience as broad as the field served by your panel. If you have some chops as a teacher, this will serve you well in writing a pre-proposal.

The Project Summary of your pre-proposal is key. It has to sing. By its conclusion, the reviewer, with a folder full of pre-proposals still to review, will already be taking notes and beginning the intellectual process of figuring out what score to assign. After reading the Project Summary, you want that reviewer on your side. This leads to another small but useful bit of advice.

DON'T copy and paste paragraphs from your Project Description into your Project Summary (or vice versa). At best, it looks a little lazy, and at worst it denies you precious space to explain in greater detail or in a complementary way what you propose to do and why. Again, the reviewer is reading this in one sitting, and, unlike a 15-page full proposal, can comfortably keep the whole thing in her head. The Project Summary and Project Description should complement and reinforce each other, not parrot each other.

Panel service

One month later you fly to DC on a Tuesday, take the metro to Arlington, check into your hotel, and go out for a beer and dinner with a colleague or two. There are a number of benefits to panel work. The biggies: you are helping in a critical communal enterprise, and you learn an enormous amount about the nuts and bolts of NSF. There are also two more fringe benefits. First, for a foodie, Arlington ain’t half bad, especially now that food trucks park near the NSF building (though I suggest you avoid the fusion truck that serves the Borscht BBQ Burrito). Second, you see old friends, make new friends, and invariably are introduced to folks–serving on the same panel–whose work you’ve long admired.

The panel process goes something like this. A long rectangle constructed from 4 tables dominates a conference room. One side of the room has a steady supply of not entirely unhealthy food and coffee; opposite is a plate glass window. Twenty-five or so panelists sit on three sides of the rectangle; at the front of the room sit five program officers. Three administrative assistants work the back of the room. Our collective job for the next 2.5 days is to caucus, decide, present, and write up summaries of ca. 150 proposals, with a broad guideline of recommending 25% for Invites for full proposal. As panelists, our primary job is to write constructive reviews that will help anybody who submitted a proposal to get a sense of the strengths, weaknesses, and how to improve their work.

The three panelists assigned to each proposal have already uploaded their reviews to Fastlane. One panelist, the "scribe", typically gathers the other two outside the conference room to discuss their scores and decide on a ranking of "Do Not Invite" or "Invite" for full proposal. Those discussions can be relatively brief if everyone ranked a proposal "F". They can go on for half an hour if there is strong disagreement or if everybody liked the proposal. In the former case, the trio needs to come to a decision by talking it out (in all but a handful of proposals, the three reach a consensus). In the latter case, all three are interested in seeing the proposal succeed and spend that extra time making as strong a case as possible for presentation to the full panel.

Back at the table, the Program Officers hold court, bringing up proposal after proposal for discussion (after first temporarily exiling panelists with conflicts of interest). Multiple program officers are typically taking notes during the discussions. For each proposal, the scribe presents a short summary of the group's conclusions, with interested parties around the table listening in. The other two members of the trio add their two cents. Sometimes another panel member may do so as well. The Program Officers ask a question or two, and the "Invite" or "Do Not Invite" recommendation is added to the big screen.

That done, it is up to the scribe to summarize the panel's recommendation, using NSF's Fastlane Panel System. The goal is to constructively identify the strengths and the weaknesses of the proposal, highlighting the consensus points of the three reviewers, as well as any new insights that arose as a result of panel deliberations. NSF is freakishly obsessed with quality control at this point–Fastlane allows the two other reviewers to comment, and the panel summary is subsequently tweaked; an amazing set of administrative assistants wordsmith further; the summary is further tweaked; a Program Officer then makes queries and edits. By the time the scribe "Formally Submits" the panel summary to Fastlane, it has seen at least four editors.

Then it is up to the Program Officers, who use the reviews, panel summary, their notes, the history of the proposal, and other criteria (Hello, first time investigators!) to decide which PIs get invited to submit full proposals. The panel, whose work is now done, doesn't see this part.

Observation: One of the strengths of NSF’s approach is that everybody who formats their proposal correctly and submits it on time earns the time and efforts of the panel. In that sense, NSF’s process is scrupulously egalitarian. However, one consequence is that, while the panel may end up classifying 25% of the pre-proposals as “Invite”, many, many of those in the “Do Not Invite” category sample the “P, PF, F, FG, G” end of the gradient (see the DEBrief post above). Keep that in mind when funding percentages are discussed. Sure, U.S. science is underfunded. Those numbers are disturbing enough. But although only 1 in 4 or so pre-proposals may advance to “Invite”, at least another 1 in 4 were not that competitive in the first place.

How pre-proposals make the cut

Let’s review the criteria set out by NSF for evaluating pre-proposals:

A. Is there a clear and compelling question(s) of general interest to the field?

B. Is the question(s) well motivated and justified within a broader conceptual framework?

C. Does the experimental design/approach logically and feasibly address the question(s) posed?

D. Are the senior personnel qualified to conduct this research?

E. Is there a credible plan for broader impacts?

Based on my observations of fellow panelists, this is how I translate the scoring system, incorporating the above criteria.

Excellent (Rare): Wow, I really want to hear what the PIs find out, and they clearly have the chops to do it. This is as close to a transformational proposal as I’ve seen.

Very Good (Uncommon): Solid all around, OR an Excellent for which B and C need strengthening.

Good (Common): Nothing really wrong with it, but it suffers in comparison with those receiving an E or VG. Often a weak A and B, or a great A or B (but not both).

Fair (Uncommon): A and/or B missing, OR A or B present but C is an absolute disaster.

Poor (Rare): Aggressively bad: A and B missing.

Summing it all up:

The probability of an "Invite" is proportional to
(A² + B²) × (C + D + E).

That is, a strong A and B followed by a competent C, D, and E are the ingredients of a successful pre-proposal. Given that, let's deconstruct criteria A through E.
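To make the heuristic concrete, here is a minimal sketch. The function is my translation of the rule of thumb above; the 0-to-1 criterion scores and the two example proposals are purely hypothetical:

```python
def invite_score(A, B, C, D, E):
    """Heuristic only: the question (A) and conceptual framework (B)
    dominate because they are squared; feasibility (C), personnel (D),
    and broader impacts (E) contribute linearly."""
    return (A**2 + B**2) * (C + D + E)

# Two hypothetical pre-proposals, each criterion scored on 0..1:
big_idea      = invite_score(A=0.9, B=0.9, C=0.6, D=0.7, E=0.6)  # compelling question, competent plan
polished_plan = invite_score(A=0.5, B=0.5, C=0.9, D=0.9, E=0.9)  # flawless methods, modest question

print(big_idea > polished_plan)  # the compelling question wins
```

The multiplicative form also captures the failure mode in the scoring table: if A and B are both near zero, no amount of methodological polish in C, D, and E rescues the product.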

A. Big question? Is there a clear and compelling question(s) of general interest to the field?

There are three key words here.

Clear–The question, pitched thoughtfully, needs to show up in the first three lines of the Project Summary, and the first half page of the Project Description. You are entering P/F/G territory if the three panelists can’t agree on your question.

Compelling–This is where the magic lies, the difference between an ecology course taught by a competent teacher and one taught by a master teacher. When your panelists caucus, they agree: "we need to know the answer to this".

General Interest–The three panelists assigned your proposal will include, if you are lucky, one person who would be giving a talk in the same Oral Presentation session as you at an international meeting. That is to say, the panelists represent a diversity of expertise, and likely two of them don't eat/breathe/sleep your interest. Your project must be pitched to appeal to, and be understood by, any good Ecologist/Evolutionary Biologist. Imagine you study the community ecology of microbial biofilms: how would you explain what motivates your work to a behavioral ecologist who works on sexual selection in birds, a paleo-ecologist examining climatic regulation of forests, and an ecosystem ecologist who studies how a dead leaf is transformed into CO2 and minerals? A really cool question that is pitched primarily to mammalogists, mycologists, or myrmecologists will likely not score particularly high.

B. Broad Concept? Is the question(s) well motivated and justified within a broader conceptual framework?

You may notice that Criteria A and B share the notion of General and Broad. That's not a coincidence: what you do must appeal to a wide audience, in part by using the concepts (theory and hypotheses) that structure the field. Once you have posed your question, lay out a suite of hypotheses–complementary and/or contrasting–that outline your grand plan to deconstruct the question, structure the data you collect, and provide a way of interpreting your results.

Hypotheses are scenarios: statements of a possible reality. The guts of an hypothesis are its assumptions, joined by logic, that yield predictions. You can test hypotheses by testing their assumptions and predictions.

Hypotheses are not a bothersome technicality. They show the PI’s ability to break down the problem into chunks. They have the handy feature of motivating and structuring your data collection.

It is common for a great “A” to ignore or skim through “B”. One result is that the reviewer gets lost in the “Proposed Work” section.

C. Feasible? Does the experimental design/approach logically and feasibly address the question(s) posed?

This is a difficult section to write well, and one of the main reasons why we still have 15-page full proposals. Luckily, the pre-proposal needs just a good first draft of the proposed work, leaving out details of experimental design, time lines, sample sizes, power analyses, etc. That said, the Proposed Work section has to be clear. A common problem in grant proposals also clouds many Materials and Methods sections–details of the work are laid out but without reference to how they answer components of the question and the hypotheses to be tested. The Proposed Work is a good place to brush up on your topic sentences (i.e., "To test Hypothesis 2, we will….").

One way to show that something is feasible is to show that you've done something like it before. Another is to have pilot data. Pilot data suggest you didn't just whip this proposal out over Christmas break. The more ambitious the project (and "ambitious project" is a phrase often associated with scores of "VG" and "E"), the greater the need for pilot data.

D. Competent? Are the senior personnel qualified to conduct this research?

Make sure your NSF Bio highlights your expertise and that of your collaborators. Refer a couple of times to this work in the Project Description.

E. Broader Impacts: Benefit society or advance desired societal outcomes

This is still a hard one for me to get a handle on. It is rare for a weak Broader Impacts to sink a great Intellectual Merit. It is not uncommon for a fantastic Broader Impacts to nudge a borderline Do Not Invite into Invite.

A few specifics go a long way. This is a great place to use the Project Summary to outline what you will do, and the Project Description to give some specifics.

Avoid being vague. It takes as much space to say “I have mentored undergraduates (including 5 females and 5 from underrepresented groups)” as it does to say, “My lab has a long tradition of mentoring females and underrepresented groups.”

If a broader impact can help resource managers or policy makers, contrast briefly how differing results would lead to different advice (in other words, say why this will be helpful). If you are developing K-12 curricula, having a working relationship with your local school district is a big plus.

A few closing thoughts

Program Officers are your allies

Program officers are like Statistics Experts–they’d much rather talk to you as you craft your pre-proposal than help you perform the post-mortem. If you are in the DC area, make an appointment to visit. Volunteer for panel service.

Chasing deadlines

A good pre-proposal needs time to gestate. It will likely benefit from your carving out some writing and thinking time on a monthly basis, rather than investing that same total amount of time to the two or three weeks of Xmas break.

The importance of getting some eyes on your pre-proposal

As the mid-January DEB deadlines approach, you and a lot of your colleagues are in the same boat. You’ve worked hard, crafting each section of your proposal. But with only a week to go, you and only you (and your co-PIs) have likely seen the product. You need to fix that. Here’s one way that works. It is especially germane for senior colleagues in your department who are on the record that “they don’t want pre-proposals to penalize junior colleagues” and are looking for some tangible way to help.

Here's what we do. A week before the NSF deadline, we assemble a list of department colleagues who are submitting a pre-proposal. We set one day as Review Day. The bargain is that anyone agreeing to review 2-3 proposals–one read-through, about an hour each–gets 2-3 other sets of eyes on their proposal in exchange. The goal here is to identify the big stuff–jargon, poor word choice, "red flags"–that can easily turn a "VG" into a "G" (while you're at it, market test your title!). The beauty of this system is that the diversity of colleagues in your department is a feature, not a bug, as it comes very close to the diversity of expertise you'd find on a typical NSF panel. I guarantee our "24-hour buddy" system will give you lots of ways, small and large, to craft a better proposal.

Your mileage may vary…

I have been fortunate to serve on a number of NSF panels in my career, and have always found them intensely rewarding experiences. As I said at the beginning, my observations and advice make sense to me, but should be taken with a grain of salt. I hope this essay catalyzes a conversation. We need that conversation. For, in the end, what makes a quality proposal, like the apocryphal elephant, must to some extent be in the eye of the beholder.


Three reviewers trying to understand a very Broader Impact indeed. By Debby Kaspari.

Good luck, and happy grant writing.

Mike