Publications
This is a collection of my most recent publications.
Towards Automated Computational Auditing of mHealth Security and Privacy Regulations
2021 ACM SIGSAC Conference on Computer and Communications Security (CCS ’21)
Nov. 13, 2021
The growing complexity of our regulatory environment presents us with a hard problem: how can we determine if we are compliant with an ever-growing body of regulations? Computational legal auditing may help, as computational tools are exceptionally good at making sense of large amounts of data. In this research, we explore the possibility of creating a computational auditor that checks if mobile health (mHealth) apps satisfy federal security and privacy regulations. In doing so, we find that while it is challenging to convert open-ended, generally applicable, complicated laws into computational principles, the use of non-legal, authoritative, explanatory documents allows for computational operationalization while preserving the open-ended nature of the law. We test our auditor on 182 FDA/CE-approved mHealth apps. Our research suggests that the use of non-legal, authoritative, guidance documents may help with the creation of computational auditors, a promising tool to help us manage our ever-growing regulatory responsibilities.
Observing Many Researchers Using the Same Data and Hypothesis Reveals a Hidden Universe of Uncertainty
Working Paper
How does noise generated by researcher decisions undermine the credibility of science? We test this by observing all decisions made among 73 research teams as they independently conduct studies on the same hypothesis with identical starting data. We find excessive variation in outcomes. When combined, the 107 observed research decisions taken across teams explained at most 2.6% of the total variance in effect sizes and 10% of the deviance in subjective conclusions. Expertise, prior beliefs and attitudes of the researchers explained even less. Each model deployed to test the hypothesis was unique, which highlights a vast universe of research design variability that is normally hidden from view and suggests humility when presenting and interpreting scientific findings.
How Many Replicators Does It Take to Achieve Reliability? Investigating Researcher Variability in a Crowdsourced Replication
Working Paper
The paper reports findings from a crowdsourced replication. Eighty-four replicator teams attempted to verify results reported in an original study by running the same models with the same data. The replication included an experimental manipulation: a “transparent” group received the original study and code, while an “opaque” group received the same underlying study but with only a methods section and a description of the regression coefficients (without their size or significance), and no code. The transparent group mostly verified the original study (95.5%), while the opaque group had less success (89.4%). Qualitative investigation of the replicators’ workflows reveals many causes of non-verification. Two categories of these causes are hypothesized: routine and non-routine. After correcting non-routine errors in the research process, to ensure that the results reflect a level of quality that should be present in ‘real-world’ research, the rate of verification was 96.1% in the transparent group and 92.4% in the opaque group. Two conclusions follow: (1) although high, the verification rate suggests that it would take a minimum of three replicators per study to achieve replication reliability of at least 95% confidence, assuming ecological validity of this controlled setting; and (2) like any type of scientific research, replication is prone to errors that derive from routine and undeliberate actions in the research process. The latter suggests that idiosyncratic researcher variability might provide a key to understanding part of the “reliability crisis” in social and behavioral science and is a reminder of the importance of transparent and well-documented workflows.
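The abstract states the three-replicator figure without showing the underlying arithmetic. As a rough, non-authoritative illustration (not necessarily the authors' calculation), one can model reliability as the probability that a majority of n independent replicators reach the correct verification outcome when each is correct with probability q, and plug in the verification rates reported above; the function name majority_correct and the majority-vote model itself are assumptions of this sketch.

```python
from math import comb

def majority_correct(q: float, n: int) -> float:
    """Probability that a majority of n independent replicators reach the
    correct verification outcome, if each is correct with probability q."""
    return sum(comb(n, k) * q ** k * (1 - q) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Verification rates reported in the abstract (opaque and transparent groups).
for q in (0.894, 0.955):
    for n in (1, 3, 5):
        print(f"q = {q:.3f}, n = {n}: reliability ≈ {majority_correct(q, n):.3f}")
```

Under this assumed model, a single replicator at the opaque group’s 89.4% verification rate falls short of 95% reliability, while three replicators exceed it, which is one way to read the “minimum of three replicators” claim above.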
The Crowdsourced Replication Initiative: Investigating Immigration and Social Policy Preferences
Working Paper
In an era of mass migration, social scientists, populist parties and social movements raise concerns over the future of immigration-destination societies. What impact does this have on policy and social solidarity? Comparative cross-national research, relying mostly on secondary data, has produced findings that point in different directions. This raises the threat of selective model reporting and a lack of replicability. The heterogeneity of countries obscures attempts to clearly define data-generating models. P-hacking and HARKing lurk among standard research practices in this area.
This project employs crowdsourcing to address these issues, drawing on replication, deliberation and meta-analysis to harness the power of many minds at once. The Crowdsourced Replication Initiative has two main goals: (a) to better investigate the link between immigration and social policy preferences across countries, and (b) to develop crowdsourcing as a social science method. The Executive Report provides short reviews of the literature on social policy preferences and immigration, describes the methods and impetus behind crowdsourcing, and outlines the entire project. Three main areas of findings will appear in three papers, which are registered as PAPs or in progress.