Pilot 1 - Decision Aid

TIER2's Decision Aid will clarify the meaning, relevance, and feasibility of ‘reproducibility’ for researchers, helping them identify which type of reproducibility is relevant to their research and what they must consider when assessing how feasible such ‘reproducibility’ would be for them. The tool will be piloted with two researcher groups: qualitative researchers and machine learning researchers.

Stakeholders: Researchers, publishers, funders

Timeline: -

Objectives: 

  • To explore, through piloting, to what extent the decision aid (tool) is useful and efficient.

  • The ultimate goal of the tool is to facilitate ‘reproducibility’ where it is relevant and feasible, and to prevent demands for ‘reproducibility’ where it is irrelevant and/or unfeasible.

Pilot 2 - Reproducibility Management Plan (RMP)

The Reproducibility Management Plan (RMP) Pilot aims to create a prototype of key thematic subjects and questions that will serve as the starting point for supporting reproducibility at the planning stage of research. Work involves defining what an RMP is, integrating it into the ARGOS service, and testing its effectiveness with feedback from the community. The pilot engages researchers, beneficiaries, and funders to support its adoption.

Stakeholders: Researchers, research communities, funders, and service providers

Timeline: -

Objectives: 

  • To emphasise reproducibility activities within a research output management lifecycle;

  • To streamline reproducibility practices in publicly funded research projects;

  • To provide tools and guidance to adopt best reproducibility practices;

  • To generate case studies to promote a common understanding of reproducibility across various domains.

Pilot 3 - Reproducible Workflows

Computational experiments are essential for modern research, yet their complexity often hinders reproducibility. The TIER2 Pilot 3 publication “A Virtual Laboratory for Managing Computational Experiments” introduces SCHEMA lab, an open-source virtual environment that enables the design, execution, and tracking of containerised experiments with full provenance. By capturing configurations, datasets, software environments, and performance metrics, SCHEMA lab enhances reproducibility and transparency across disciplines. It supports individual researchers and research infrastructures in organising, comparing, and reusing computational workflows, fostering credible, reusable, and FAIR digital science practices.

Stakeholders: Life scientists, computer scientists 

Timeline: -

Objectives: The main goal was to customise and evaluate tools and practices for reproducible workflows in the life and computer sciences. Underlying objectives included extending the SCHEMA open-source platform to support reproducibility in both fields by leveraging software containerisation, workflow description languages (CWL, Snakemake), and experiment packaging specifications (RO-Crate), with particular emphasis on machine learning in computer science.
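The provenance capture that Pilot 3 describes — recording configurations, dataset fingerprints, software environments, and performance metrics for each run — can be illustrated with a minimal sketch. This is not the SCHEMA lab implementation; the function name, record fields, and example values below are illustrative assumptions only.

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone


def build_provenance_record(config: dict, dataset_path: str,
                            dataset_bytes: bytes, metrics: dict) -> dict:
    """Assemble a minimal provenance record for one experiment run.

    Illustrative only: field names are assumptions, not the SCHEMA lab schema.
    """
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "configuration": config,
        "dataset": {
            "path": dataset_path,
            # Content hash lets a later run verify it used identical data.
            "sha256": hashlib.sha256(dataset_bytes).hexdigest(),
        },
        "software_environment": {
            "python_version": sys.version.split()[0],
            "platform": platform.platform(),
        },
        "performance_metrics": metrics,
    }


if __name__ == "__main__":
    record = build_provenance_record(
        config={"learning_rate": 0.01, "epochs": 5},
        dataset_path="data/train.csv",
        dataset_bytes=b"id,label\n1,0\n2,1\n",
        metrics={"accuracy": 0.87, "runtime_seconds": 12.4},
    )
    print(json.dumps(record, indent=2))
```

In a containerised setup, a record like this would typically be serialised alongside the container image digest and bundled into an experiment package (e.g. an RO-Crate), so that configuration, data, and environment travel together.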

Pilot 4 - Reproducibility Checklists for Computational Social Science Research

In this Pilot, we aim to provide a structure of well-defined checklists and templates that can help computational social scientists review data and code reproducibility. The checklists and review templates cater to the specific needs of the three research phases: planning and data collection; processing and analysis; and, finally, sharing and archiving of research resources. This helps build trust and authority in the social science research community.

Stakeholders: Computational social scientists (research producers and consumers)

Timeline: -

Objectives: The goal of this Pilot is to enhance reproducibility of code and data through checklists across the research lifecycle.

Pilot 5 - Reproducibility Promotion Plans for Funders

The Reproducibility Promotion Plan for Funders (RPP) provides a comprehensive policy template with actionable recommendations to help funders foster reproducible research practices across three key areas of their work: evaluation and monitoring, policy and definitions, and incentives. The RPP serves as both guidance and inspiration for research funders to develop robust internal practices that strengthen their funding processes, while also managing external expectations by clearly communicating reproducibility requirements to researchers. Designed to be adaptable, the RPP offers best practice examples and practical tools that funders can tailor to their specific institutional contexts and levels of experience. Ultimately, the RPP aims to catalyse cultural and procedural change within funding organisations, empowering them to support researchers in conducting rigorous, transparent, and reproducible research.

Stakeholders: Funders

Timeline: -

Objectives: This Pilot aimed to help funders improve the quality of the research carried out in the projects and by the researchers they fund, in order to build and sustain trustworthy outcomes.

Pilot 6 - Reproducibility Monitoring Dashboard

The Reproducibility Monitoring Dashboard Pilot aims to develop tools that enable funding agencies to track and monitor the reusability of research artifacts across various projects, programs, topics, and disciplines. This auto-generated dashboard assesses the impacts of policies related to data and code sharing. Furthermore, we are establishing essential requirements to make the dashboard user-friendly for publishers.

Stakeholders: Research Performing Organisations (RPOs), Funders, Publishers and Researchers

Timeline: -

Objectives: Develop tools that allow funding agencies to track and monitor the reusability of research artifacts:

  • Develop, extend, and test a suite of tools for tracking major research artifacts in the computer sciences, such as datasets, software, and methods, with a particular focus on Artificial Intelligence.

  • Quantify and estimate reusability indicators based on various types of artifacts.

  • Design and implement a dashboard that allows funding agencies to track and monitor the reusability of research artifacts created in funded projects.
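The "reusability indicators" objective above can be made concrete with a small sketch. The artifact fields and the scoring rule below are hypothetical assumptions for illustration — the Pilot's actual indicators are not specified here. The idea shown: score each tracked artifact on whether it is identifiable and openly licensed, give a bonus for documented reuse, and average over a project's artifacts.

```python
from dataclasses import dataclass


@dataclass
class Artifact:
    """A research artifact (dataset, software, or method) tracked by the dashboard."""
    kind: str                 # e.g. "dataset", "software", "method"
    has_persistent_id: bool   # e.g. registered with a DOI
    has_open_license: bool    # license permits reuse
    reuse_count: int          # documented reuses outside the originating project


def reusability_indicator(artifacts: list[Artifact]) -> float:
    """Average per-artifact reusability score in [0, 1] (hypothetical rule).

    Each artifact earns 0.5 for a persistent identifier and 0.5 for an open
    license; documented reuse adds a 0.25 bonus, capped at 1.0.
    """
    if not artifacts:
        return 0.0
    total = 0.0
    for a in artifacts:
        base = 0.5 * a.has_persistent_id + 0.5 * a.has_open_license
        if a.reuse_count > 0:
            base = min(1.0, base + 0.25)
        total += base
    return total / len(artifacts)
```

A dashboard could compute such a score per project, programme, or discipline and plot it over time to assess the impact of data- and code-sharing policies.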


Pilot 7 - Editorial Workflows to Increase Data Sharing

This Pilot aims to increase data sharing in published work. Data sharing is an important building block for reproducibility and transparency, but current rates of sharing are low.

Stakeholders: Publishers

Timeline: -

Objectives: The pilot will improve our knowledge of data sharing through two activities:

- A randomised controlled trial of an intervention targeting data availability statements, with the aim of increasing deposition of data in trusted repositories.

- A Delphi study to gather consensus on the most pressing issues and the best paths to improve sharing of the research data underlying publications.

Pilot 8 - An Editorial Reference Handbook for Reproducibility and FAIRness

The Editorial Reference Handbook informs and assists scholarly publishers in supporting the sharing of digital research objects and in operationalising findable, accessible, interoperable and reusable (FAIR) research practices by addressing gaps in editorial workflows, policy implementation and stakeholder alignment. The Handbook comprises three interrelated components—a checklist, detailed guidance, and a flowchart—intended primarily for in-house editorial staff while also providing value to reviewers, authors, and service providers.  

The Pilot included representatives of Cambridge University Press, Cell Press, EMBO Press, F1000 (Taylor & Francis), GigaScience Press, Lancet, Oxford University Press, PLOS, Springer Nature, Wiley.

Stakeholders: Publishers

Timeline: -

Objectives: The Handbook is designed to help put the requirements of a journal's data policy into action:

  • journals that already have their own internal guidance will be able to use the handbook to validate and refine their existing methodology;
  • journals that do not yet have their own internal guidance should use it as an opportunity to define their own process.

This work will be of use to in-house editorial staff managing manuscripts, but it will also benefit reviewers and authors by clarifying what compliance with a journal's data policy may require, as well as developers seeking to tailor their service provision to publishers.

