Pilot 1 - Decision Aid

TIER2's Decision Aid will clarify the meaning, relevance, and feasibility of ‘reproducibility’ for researchers, helping them identify which type of reproducibility is relevant to their research and what they must consider when judging how feasible such ‘reproducibility’ would be for them. The tool will be piloted with two researcher groups: qualitative researchers and machine learning researchers.

Stakeholders: Researchers, publishers, funders

Timeline: -


Objectives:

  • To explore, through piloting, the extent to which the decision aid (tool) is useful and efficient.

  • The ultimate goal of the tool is to facilitate ‘reproducibility’ where it is relevant and feasible, and to prevent demands for ‘reproducibility’ where it is irrelevant and/or infeasible.

Pilot 2 - Reproducibility Management Plan (RMP)

The Reproducibility Management Plan (RMP) Pilot aims to create a prototype of key thematic subjects and questions that will serve as the starting point for supporting reproducibility at the planning stage of research. Work involves defining what an RMP is, integrating it into the ARGOS service, and testing its effectiveness with feedback from the community. The pilot engages researchers, beneficiaries, and funders to support its adoption.

Stakeholders: Researchers, research communities, funders, and service providers

Timeline: June 2023 - October 2025


Objectives:

  • To emphasise reproducibility activities within a research output management lifecycle;

  • To streamline reproducibility practices in publicly funded research projects;

  • To provide tools and guidance to adopt best reproducibility practices;

  • To generate case studies to promote a common understanding of reproducibility across various domains.

Pilot 3 - Reproducible Workflows

The Reproducible Workflows Pilot focuses on enhancing reproducibility in the life sciences and computer sciences by adapting the SCHEMA open-source platform, using technologies such as software containerisation, workflow description languages, and experiment packaging specifications to fit specific epistemic needs.

Stakeholders: Life scientists, computer scientists 

Timeline: September 2023 - November 2025

Objectives: The main goal is to customise and evaluate tools and practices for reproducible workflows in the life and computer sciences. The underlying objective is to extend the SCHEMA open-source platform to support reproducibility in both fields by leveraging software containerisation, workflow description languages (CWL, Snakemake), and experiment packaging specifications (RO-Crate), with particular emphasis on machine learning in computer science.
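To illustrate the kind of workflow description language the pilot builds on, the following is a minimal CWL (Common Workflow Language) tool definition. This is an illustrative sketch only, not part of the SCHEMA platform; the file name and task (counting lines in an input file) are invented for the example.

```yaml
# Minimal CWL CommandLineTool: counts the lines in an input file.
# Illustrative example; real SCHEMA workflows will be more elaborate.
cwlVersion: v1.2
class: CommandLineTool
baseCommand: wc
arguments: ["-l"]
inputs:
  input_file:
    type: File
    inputBinding:
      position: 1
outputs:
  line_count:
    type: stdout
stdout: line_count.txt
```

Declaring inputs, outputs, and the command in a machine-readable form like this is what allows a workflow engine to re-run the step identically elsewhere, which is the property the pilot relies on for reproducibility.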

Pilot 4 - Reproducibility Checklists for Computational Social Science Research

In this Pilot, we aim to provide well-defined checklists and templates that help computational social scientists review data and code for reproducibility. The checklists and review templates cater to the specific needs of three research phases: (i) planning and data collection, (ii) processing and analysis, and (iii) sharing and archiving of research resources. This builds trust and authority in the social science research community.

Stakeholders: Computational Social Scientists (Research Producers and consumers)

Timeline: November 2023 - November 2025

Objectives: The goal of this Pilot is to enhance the reproducibility of code and data through checklists across the research lifecycle.

Pilot 5 - Reproducibility Promotion Plans for Funders

The Pilot will develop a policy template with recommendations for funders to foster reproducible practices both in the research they fund (evaluation and monitoring) and in their internal practices.

Stakeholders: Funders

Timeline: March 2024 - April 2025

Objectives: This Pilot aims to help funders improve the quality of the research projects and researchers they fund, in order to build and sustain trustworthy outcomes.

Pilot 6 - Reproducibility Monitoring Dashboard

The Reproducibility Monitoring Dashboard Pilot aims to develop tools that enable funding agencies to track and monitor the reusability of research artifacts across various projects, programmes, topics, and disciplines. This auto-generated dashboard will assess the impacts of policies related to data and code sharing. Furthermore, we are establishing essential requirements to make the dashboard user-friendly for publishers.

Stakeholders: Research Performing Organisations (RPOs), Funders, Publishers and Researchers

Timeline: January 2024 - December 2025

Objectives: Develop tools that allow funding agencies to track and monitor the reusability of research artifacts:

  • Develop, extend, and test a suite of tools for tracking major research artifacts in Computer Science, such as datasets, software, and methods, with a particular focus on Artificial Intelligence.

  • Quantify and estimate reusability indicators based on various types of artifacts.

  • Design and implement a dashboard that allows funding agencies to track and monitor the reusability of research artifacts created in funded projects.

Pilot 7 - Editorial Workflows to Increase Data Sharing

This Pilot aims to increase data sharing in published work. Data sharing is an important building block for increased reproducibility and transparency, but current rates of sharing are low.

Stakeholders: Publishers

Timeline: January 2024 - September 2025

Objectives: The pilot will improve our knowledge of data sharing through two activities:

  • A randomised controlled trial of an intervention targeting data availability statements, with the aim of increasing the deposition of data in trusted repositories.

  • A Delphi study to gather consensus on the most pressing issues and the best paths to improve the sharing of research data underlying publications.

Pilot 8 - An Editorial Reference Handbook for Reproducibility and FAIRness

This Pilot will co-create and test an Editorial Reference Handbook that contributes to a common understanding of what is required to support reproducibility and FAIRness. The Handbook, identified as a priority in a workshop with publishers, will include two components. A structured section will provide an educational and practical set of checks, defined by reviewing existing material and harmonising and operationalising it; some journals have internal checks, but their type, richness, and stringency vary, and there is little or no consensus among publishers. A narrative component with a general framework will help improve internal processes, defined by describing an ideal process in which the checks should be applied; internal processes currently vary widely, and how, when, and by whom the checks are done can also affect the results.

The Pilot includes representatives of Cambridge University Press, Cell Press, EMBO Press, F1000 (Taylor & Francis), GigaScience Press, Lancet, Oxford University Press, PLOS, Springer Nature, Wiley.

Stakeholders: Publishers

Timeline: January 2024 - September 2025

Objectives: The Handbook is intended to help put the requirements of journal data policies into action:

  • journals that already have their own internal guidance will be able to use the Handbook to validate and refine their existing methodology;
  • journals that do not yet have their own internal guidance can use it as an opportunity to define their own process.

The planned intervention will target in-house editorial staff who manage manuscripts, but it will also benefit reviewers and authors, by clarifying what compliance with the journal data policy may require, as well as developers, by informing the services they provide to publishers.