My research attempts to measure the transparency and integrity of researchers’ decisions, and to identify the factors that shape these decisions (e.g. norms, commitments, information, incentives). I am committed to open access and to fully disclosing datasets, codebooks, syntax, etc. I am also training myself in reproducible research practices. Substantively, my interest is in the following disciplines: labour economics, criminology, and education research.
So far I have worked on the following topics:
(1) (Cluster) sampling bias;
(2) Outcome reporting bias;
(3) Time discounting and time preference for evidence;
(4) Sponsorship bias.
However, I am interested in all areas of meta-research. Please get in touch if you would like to co-author papers with me.
1. (Cluster) sampling bias
For pilot or experimental employment programme results to apply beyond their test bed, researchers must select ‘clusters’ (i.e. the job centres delivering the new intervention) that are reasonably representative of the whole territory. More specifically, this requirement must account for conditions that could artificially inflate the effect of a programme, such as the fluidity of the local labour market or the performance of the local job centre. Failure to achieve representativeness results in Cluster Sampling Bias (CSB).
This paper makes three contributions to the literature. Theoretically, it approaches CSB as a human behaviour. It offers a comprehensive theory whereby researchers with limited resources and conflicting priorities tend to oversample ‘effect-enhancing’ clusters when piloting a new intervention. Methodologically, it advocates a ‘narrow and deep’ scope, as opposed to the ‘wide and shallow’ scope that has prevailed so far. The PILOT-2 dataset was developed to test this idea. Empirically, it provides evidence on the prevalence of CSB. In conditions similar to the PILOT-2 case study, investigators (1) do not sample clusters with a view to maximising generalisability; (2) do not oversample ‘effect-enhancing’ clusters; (3) consistently oversample some clusters, including those with higher-than-average client caseloads; and (4) report their sampling decisions in an inconsistent and generally poor manner.
In conclusion, although CSB is prevalent, it remains unclear whether it is intentional, meant to mislead stakeholders about the expected effect of the intervention, or whether it stems from higher-level constraints or other considerations.
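The mechanism described above can be illustrated with a toy simulation. The sketch below is purely hypothetical and is not drawn from the PILOT-2 data: it assumes a fictional population of 100 job centres whose true programme effect rises with local labour-market fluidity (an ‘effect-enhancing’ condition), then compares a random pilot sample against one that oversamples the most fluid markets.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100 job centres ("clusters").
# Assumption for illustration only: each cluster's true programme
# effect increases with local labour-market fluidity.
clusters = []
for _ in range(100):
    fluidity = random.random()      # 0 = rigid market, 1 = fluid
    effect = 1.0 + 2.0 * fluidity   # fluid markets inflate the effect
    clusters.append((fluidity, effect))

true_mean = statistics.mean(e for _, e in clusters)

# Representative pilot: 10 clusters drawn at random.
random_sample = random.sample(clusters, 10)
random_est = statistics.mean(e for _, e in random_sample)

# CSB scenario: pilot run in the 10 most fluid labour markets.
biased_sample = sorted(clusters, key=lambda c: c[0], reverse=True)[:10]
biased_est = statistics.mean(e for _, e in biased_sample)

print(f"true mean effect:   {true_mean:.2f}")
print(f"random-sample est.: {random_est:.2f}")
print(f"biased-sample est.: {biased_est:.2f}")
```

Under these assumptions, the biased pilot systematically overstates the effect that a national roll-out could deliver, which is the core worry behind CSB.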
2. Outcome reporting bias
The reporting of evaluation outcomes can be a point of contention between evaluators and policymakers when a given reform fails to fulfil its promises. Whereas evaluators are required to report outcomes in full, policymakers have a vested interest in framing these outcomes in a positive light – especially when they previously expressed a commitment to the reform. The current evidence base is limited to a survey of policy evaluators and observational studies investigating the influence of industry sponsorship on the reporting of clinical trials.
The objective of this study was twofold. Firstly, it aimed to assess the risk of outcome reporting bias (or ‘spin’) in pilot evaluation reports, using seven indicators developed by clinicians. Secondly, it sought to examine how the government’s commitment to a given reform may affect the level of spin found in the corresponding evaluation report.
To answer these questions, the contents of 13 evaluation reports were analysed, all of which found a non-significant effect of the intervention on its stated primary outcome. These reports were systematically selected from a dataset of 233 pilot and experimental evaluations spanning three policy areas and 13 years of government-commissioned research in the UK.
The results show that the risk of spin is real. Indeed, all studies reviewed here resorted to at least one of the presentational strategies associated with a risk of spin. This study also found a small, negative association between the seniority of the reform’s champion and the risk of spin in the evaluation of that reform. The publication of protocols and the use of reporting guidelines are recommended.
Status: revised paper submitted to PLOS ONE.
3. Time discounting and time preference for evidence
4. Sponsorship bias