Group Details
- Status: Recognised and Endorsed
- Chair(s): Jiban K. Pal, Christian Pagé, Leyla Jael Castro, Daniela Gawehns
- Secretariat Liaison: Bridget Walker
- TAB Liaison: Daniel Bangert
The goal of a discipline-agnostic, standardized reproducibility assessment checklist is to promote scientific integrity. It provides a practical framework for researchers, scientists, and organizations across all disciplines to systematically assess and document the reproducibility of data and computational science, helping to ensure that research and scientific outputs are reliable.
Reproducible data and code means that the final data and code are computationally reproducible within a tolerance interval or defined limits of precision and accuracy, i.e., a third party can verify the data lineage and processing, reanalyze the data, and obtain consistent computational results using the same input raw data, computational steps, methods, computer software and code, and conditions of analysis, in order to determine whether the same result emerges from the reprocessing and reanalysis. “Same result” can mean different things in different contexts: identical measures in a fully deterministic context, the same numerical results differing only in some irrelevant detail, statistically similar results in a non-deterministic context, or validation of a hypothesis. All data and code are made available for third-party verification of reproducibility.

Note that reproducibility is a distinct concept from replicability. In the latter case, the final published data are linked to sufficiently detailed methods and information for a third party to verify the results based on the independent collection of new raw data, using similar or different methods but leading to comparable results.
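The tolerance-based notion of “same result” described above can be illustrated with a minimal, hypothetical sketch in Python. The function name and tolerance parameters below are illustrative assumptions, not part of any checklist; the sketch only shows how a verifier might compare original and reproduced numerical outputs, either requiring bit-identical values (deterministic context) or allowing agreement within defined limits of precision (non-deterministic context).

```python
import math

def results_match(original, reproduced, rel_tol=1e-9, abs_tol=0.0):
    """Compare two sequences of numerical results within a tolerance.

    Hypothetical helper: in a fully deterministic context the tolerances
    can be set to zero, requiring identical values; otherwise they encode
    the defined limits of precision and accuracy for the comparison.
    """
    if len(original) != len(reproduced):
        return False
    return all(
        math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
        for a, b in zip(original, reproduced)
    )

# Identical results from a deterministic rerun: exact match required.
assert results_match([1.0, 2.0], [1.0, 2.0], rel_tol=0.0)

# Results that agree only within a tolerance, e.g. a different
# floating-point summation order on another platform.
assert results_match([0.3], [0.1 + 0.2])

# A genuinely different result fails the check.
assert not results_match([1.0], [1.1])
```

In practice a third-party verifier would apply such a comparison to the final outputs obtained by rerunning the documented computational steps on the same input raw data.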
The aim of the checklist is to standardize and simplify the process of documenting computational reproducibility in research and science across disciplines. By fostering a common understanding, transparency, and accountability, a checklist enhances trust in research outputs, supports compliance with open science mandates, and streamlines reproducibility assessments for researchers and scientists, institutions, funding agencies, publishers, conference organizers, and reviewers. Ultimately, the checklist aims to make reproducibility an integral and achievable standard in research and science.