Doing experiments
Evolving decision-making in an era of uncertainty
Any discussion of what it takes to produce a scientific breakthrough requires a few words about the current system for supporting scientific discovery: specifically, how that system developed and why it’s no longer performing as intended. Some very thoughtful people have made good suggestions about how to fix it; I’ll finish by sharing mine, which is a little different.
Let me start with a brief summary of how the current system evolved. Federal dollars are the largest source of research and development funding for academic institutions in the United States, and have been since 1953 [1]. There are many agencies that distribute and administer federal research grants, but the two largest, the National Institutes of Health (NIH) and the National Science Foundation (NSF), share a common ancestor: the Office of Scientific Research and Development (OSRD). OSRD was established in 1941 by executive order, and Vannevar Bush, at that time the President of the Carnegie Institution of Washington, was named its Director, reporting directly to President Franklin D. Roosevelt.
In November 1944, Roosevelt wrote a letter to Bush, asking what could be done to maintain the wartime pace of scientific advancements after peace was secured. In response, Bush wrote the now-famous report, Science, the Endless Frontier, persuasively arguing that advances in science lead to new industries, more jobs, and a higher quality of life. Any money the government spent advancing science would be worth it, he argued, because the return on investment would be so high – and subsequent analysis has consistently shown this to be true [2].
To help make his case, Bush also pointed to two discoveries – penicillin and radar – that helped save Allied lives in the war against fascism. No one set out to discover either; both were byproducts of scientists following their curiosity about the world. Lucky accidents. Serendipity. We couldn’t rely on luck, Bush said. We needed to invest, and because we couldn’t know where the next penicillin would come from, we needed to invest broadly.
Thoroughly convinced by The Endless Frontier, Congress voted to establish the NSF in 1950, largely following Bush’s design. A notable exception was the exclusion of biomedical research, which Bush had intended to fall within the remit of NSF but which Congress instead assigned to the nascent National Institutes of Health. By any measure, the enterprise that Bush dreamed of has grown to be wildly successful. To date, NIH has supported 174 Nobel laureates, while NSF has supported 268 [3,4]. The benefits haven’t only been academic. NIH funding has been essential in the development of nearly every new drug that goes to market [5], while NSF research undergirds the lasers in LASIK eye surgery, the touchscreens and batteries in smartphones, the computer-aided design that 3D printing relies on, and so much more.
But there was a flaw in Vannevar’s plan – a problem he didn’t foresee that has grown over time. The scientific frontier is endless; our budget is not. It’s easy to say Congress should provide more funds, but just as widening highways actually worsens congestion, pumping in more money expands the frontier and recreates the original problem. Nor does more money address the downstream effects of a hyper-competitive research environment, many of which were succinctly described by Mike Lauer, the former Deputy Director for Extramural Research at NIH, in a recent interview with Santi Ruiz of Statecraft. As Dr. Lauer acknowledges, these include compromised rigor, reduced innovation, and a raft of perverse incentives related to training young scientists. Two things can be true at once: the current system has enabled great achievements, and we’re leaving advances on the table. We need to do better.
Midway through the interview, Ruiz suggests experimenting with the way we fund science, and Lauer agrees. Their discussion focuses on a single proposal: shifting away from smaller project grants awarded to individual investigators and toward large block grants. I have an alternative proposal: use data [6], rather than expert opinion alone, to identify potential breakthroughs and other emerging areas that would benefit from early investment.
![Trajectory of fluorescent protein and super-resolution microscopy research, 1995-2017](https://substackcdn.com/image/fetch/$s_!H0gz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc1c64ae2-e6ba-4188-8082-bd43f4acb001_1578x367.heic)

Figure. The development of research on fluorescent proteins and super-resolution fluorescence microscopy in the form of a trajectory, from 1995 (the first year the topic existed as a discrete cluster) through 2017. The area of each cluster (circle) is proportional to the number of publications it contains, benchmarked to the size of the largest (2016) cluster of 4,279 publications. The blue asterisk indicates the cluster in which the breakthrough papers, which were published in 2006, first appear. From Davis et al., 2025 [6].
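To make the construction of a trajectory plot like this concrete, here is a minimal sketch of the visual encoding only – circle area scaled to the benchmark cluster, with the breakthrough year marked. This is not the pipeline from Davis et al. [6], and the publication counts below are invented for illustration.

```python
# Minimal sketch of a topic-trajectory plot: one circle per year-cluster,
# with circle AREA proportional to that cluster's publication count,
# benchmarked against the largest cluster. Counts are illustrative only.
import matplotlib.pyplot as plt

years = list(range(1995, 2018))  # 1995 through 2017
# Hypothetical publication counts per cluster (NOT data from Davis et al. [6])
counts = [40, 60, 90, 130, 180, 240, 310, 400, 500, 620, 760, 920,
          1100, 1350, 1650, 2000, 2400, 2850, 3300, 3700, 4050, 4279, 4100]

benchmark = max(counts)   # largest cluster (4,279 publications, in 2016)
max_area = 2000           # marker area in points^2 assigned to the benchmark cluster
areas = [max_area * c / benchmark for c in counts]

fig, ax = plt.subplots(figsize=(12, 3))
ax.scatter(years, [0] * len(years), s=areas, alpha=0.6)
# Mark the cluster in which the 2006 breakthrough papers first appear
ax.annotate("*", (2006, 0), color="blue", fontsize=20, ha="center", va="center")
ax.set_yticks([])
ax.set_xlabel("Year")
ax.set_title("Topic trajectory (illustrative data)")
plt.show()
```

Scaling marker area, rather than radius, to the publication count is what keeps the size comparison between clusters honest.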
Arguably, this is where the existing system is most broken: it’s well established that unfamiliar and anti-paradigmatic ideas are viewed with skepticism. Studies have found that as grant proposals become more novel, reviewers uniformly and systematically assign them worse scores [7]; recognition of novel research papers is delayed, and is more likely to come from investigators working outside the papers’ field of origin [8]. Most seriously, seeing their peers provide negative evaluations drives reviewers to become more critical themselves, so that the actual process of peer review is the source of its conservatism [9].
To truly address our current problems and accelerate scientific progress, we need to do more than change who gets funded; we need to change how funding decisions are made.
References
[1] https://web.archive.org/web/20260110144801/https://ncses.nsf.gov/
[2] T. Gullo, B. Page, D. Weiner, and H. L. Williams. Estimating the economic and budgetary effects of research investments. NBER Working Paper 33402, DOI: 10.3386/w33402 (2025)
[3] https://web.archive.org/web/20260111024113/https://www.nih.gov/about-nih/nih-almanac/nobel-laureates
[4] https://web.archive.org/web/20260111024929/https://www.nsf.gov/about/history/nobel-prizes
[5] E. G. Cleary, J. M. Beierlein, N. S. Khanuja, L. M. McNamee, and F. D. Ledley. Contribution of NIH funding to new drug approvals 2010-2016. PNAS 115: 2329-2334. (2018)
[6] M. T. Davis, B. L. Busse, S. Arabi, P. Meyer, T. A. Hoppe, R. A. Meseroll, B. I. Hutchins, K. A. Willis, and G. M. Santangelo. Prediction of transformative breakthroughs in biomedical research. bioRxiv, doi:https://doi.org/10.64898/2025.12.16.694385 (2025)
[7] K. J. Boudreau, E. C. Guinan, K. R. Lakhani, and C. Riedel. The Novelty Paradox & Bias for Normal Science: Evidence from Randomized Medical Grant Proposal Evaluations. HBS Scholarly Articles, http://nrs.harvard.edu/urn-3:HUL.InstRepos:10001229 (2012)
[8] J. Wang, R. Veugelers, and P. Stephan. Bias against novelty in science: A cautionary tale for users of bibliometric indicators. Research Policy 46:1416-1436. (2017)
[9] J. N. Lane, M. Teplitskiy, G. Gray, H. Ranu, M. Menietti, E. C. Guinan, and K. R. Lakhani. Conservatism Gets Funded? A Field Experiment on the Role of Negative Information in Novel Project Evaluation. Management Science, 68:4478-4495. (2022)


