About
Background
I graduated from Tufts University in 2016 with a bachelor’s in cognitive science and philosophy. In 2017-2018, I earned a graduate diploma and master’s in economics from the University of Cambridge. After the fourth year of my PhD at the Wharton School of Business, I’m taking a leave of absence to work at Ravio, a compensation benchmarking startup.
Research
I research statistics and experimental methods. Specifically, I study multiple inference - the problem of comparing many “things” at once. The statistical and experimental tools I’ve developed apply across scientific fields, but I’m most interested in applying them to forecasting. For example, how might we compare the predictive accuracy of many forecasters to assemble a team of superforecasters? Or how should we run forecasting tournaments that test many interventions to improve accuracy and persuasion?
Research highlights:
- Simple models forecast behavior at least as well as professional behavioral scientists.
- Megastudies - in which researchers test many treatments in a single, large-scale study - haven’t been very effective; they only appear effective because of statistical errors. To run megastudies more effectively - for example, a forecasting tournament testing many interventions to improve accuracy - researchers should use adaptive assignment (sketched below, after this list). If that sounds difficult, here is Qualtrics-like software I designed to help.
- A new statistical estimator for inference after ranking. For example, how accurate are the top-performing forecasters in a forecasting tournament? (The second sketch below shows why the naive answer is optimistically biased.)
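Adaptive assignment can take several forms; as a minimal illustration - my own sketch, not the design from the paper, with made-up arm counts and effect sizes - a Beta-Bernoulli Thompson sampling version looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 candidate interventions with unknown success rates.
true_rates = rng.uniform(0.40, 0.55, size=10)
successes = np.zeros(10)
failures = np.zeros(10)

for _ in range(5_000):  # participants arrive one at a time
    # Thompson sampling: draw a plausible success rate for each arm from its
    # Beta posterior and assign the participant to the arm with the largest draw.
    draws = rng.beta(successes + 1, failures + 1)
    arm = int(np.argmax(draws))

    # Observe a binary outcome and update that arm's posterior.
    outcome = rng.random() < true_rates[arm]
    successes[arm] += outcome
    failures[arm] += 1 - outcome

assignments = successes + failures
print("participants per arm:", assignments.astype(int))
print("best arm by truth:", int(np.argmax(true_rates)),
      "| most-assigned arm:", int(np.argmax(assignments)))
```

The point of the design is that assignment probabilities update as outcomes arrive, so later participants concentrate on the treatments that appear to be working rather than being spread evenly across all of them.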
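And to see why inference after ranking needs its own estimator, here is a small simulation (illustrative numbers, not results from the paper): each forecaster’s observed accuracy is a noisy version of their true accuracy, and the naive accuracy estimate for whoever ranks first is too optimistic on average.

```python
import numpy as np

rng = np.random.default_rng(1)

n_forecasters, n_questions, n_tournaments = 50, 100, 2_000
true_acc = rng.uniform(0.55, 0.75, size=n_forecasters)  # hypothetical true accuracies

gaps = []
for _ in range(n_tournaments):
    # Observed accuracy = share of questions each forecaster answers correctly.
    correct = rng.binomial(n_questions, true_acc)
    observed = correct / n_questions

    # Rank forecasters by observed accuracy and look at the apparent winner.
    winner = int(np.argmax(observed))
    gaps.append(observed[winner] - true_acc[winner])

print(f"average bias of the naive estimate for the top forecaster: {np.mean(gaps):.3f}")
```

The positive gap printed at the end is the selection bias that an estimator for inference after ranking has to correct.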