This is a collection of my most recent publications.

This study explores how researchers’ analytical choices affect the reliability of scientific findings. Most discussions of reliability problems in science focus on systematic biases. We broaden the lens to emphasize the idiosyncrasy of conscious and unconscious decisions that researchers make during data analysis. We coordinated 161 researchers in 73 research teams and observed their research decisions as they used the same data to independently test the same prominent social science hypothesis: that greater immigration reduces support for social policies among the public. In this typical case of social science research, research teams reported both widely diverging numerical findings and substantive conclusions despite identical start conditions. Researchers’ expertise, prior beliefs, and expectations barely predict the wide variation in research outcomes. More than 95% of the total variance in numerical results remains unexplained even after qualitative coding of all identifiable decisions in each team’s workflow. This reveals a universe of uncertainty that remains hidden when considering a single study in isolation. The idiosyncratic nature of how researchers’ results and conclusions varied is a previously underappreciated explanation for why many scientific hypotheses remain contested. These results call for greater epistemic humility and clarity in reporting scientific findings.

Towards Automated Computational Auditing of mHealth Security and Privacy Regulations

2021 ACM SIGSAC Conference on Computer and Communications Security (CCS ’21)

Nov. 13, 2021

The growing complexity of our regulatory environment presents us with a hard problem: how can we determine if we are compliant with an ever-growing body of regulations? Computational legal auditing may help, as computational tools are exceptionally good at making sense of large amounts of data. In this research, we explore the possibility of creating a computational auditor that checks if mobile health (mHealth) apps satisfy federal security and privacy regulations. In doing so, we find that while it is challenging to convert open-ended, generally applicable, complicated laws into computational principles, the use of non-legal, authoritative, explanatory documents allows for computational operationalization while preserving the open-ended nature of the law. We test our auditor on 182 FDA/CE-approved mHealth apps. Our research suggests that the use of non-legal, authoritative, guidance documents may help with the creation of computational auditors, a promising tool to help us manage our ever-growing regulatory responsibilities.