The challenging FAIRisation route: how could assessment help?

27 Jun 2019

Group leading the application: 
Meeting agenda: 

Collaborative session notes: https://docs.google.com/document/d/1c4m2G-FhjjNZKZdCMIqkyhMqeyRmYeQuiZSz-EYhXZU/edit?usp=sharing

 

We will run a mixed informative and interactive session involving both speakers and the audience.

  • Objectives of the joint meeting; goals of the SHARC group’s project; towards recommendations / guidance, A. Cambon-Thomsen, L. Mabile, 5 min

  • FAIR criteria assessment survey results, Romain David 10 min

  • How can the FAIR criteria best be employed in guiding the researcher in the pre-FAIRification stage? Edit Herczog, RDA- FAIR Data Maturity Model group 10 min

  • FAIRsharing and the FAIR Evaluator - how to assess the right standards and the right repository for your data, P. McQuilton, 10 min

  • Potential of FAIR principles as tools in creating Open Science assessment frameworks: two examples, Heidi Laine, University of Helsinki, FI

 

Interactive discussion with audience (30 min), chaired by all speakers

Meeting objectives: 

We will run an informative and interactive session introducing the audience to the latest assessment work, the associated challenges and opportunities, and the resources that help support the process of FAIRisation. FAIRisation here includes pre-FAIRification; assessment before, during, and after project implementation; evaluation processes; and long-term support resources.

The scope of this session is to discuss the feasibility of, and methods for, adopting FAIR criteria in all FAIRisation steps related to evaluation processes, drawing in part on the results of the SHARC IG’s survey. This focus is complementary to the ongoing work of the FAIR Data Maturity Model WG.

The following questions will be addressed:

  • What is the place of FAIR assessment relative to the other elements of scientific activity involved in data sharing?

  • What steps are required of the various actors in preparing for FAIRisation, and how should the processes and related tools be chosen?

  • What is the role of repositories and standards providers (assessment for evaluation)? How can they ensure that their resources are visible and used by the community to enable FAIR data? How can FAIRsharing help? 

and specifically:

  • Should all data be FAIRified?

  • Should FAIR assessment criteria be part of the scientific evaluation process (grant applications, calls for projects, evaluation of individual scientists’ activities, recruitment and career steps, evaluation of teams or laboratories, assessment of institutional policies, other)?

  • Can FAIR assessment criteria be considered part of data quality assessment?

 

  • Which stakeholders need to be involved to prepare researchers for making their data FAIR? Which aspect of FAIR will each stakeholder cover?

  • How can researchers best be prepared in the pre-FAIRification stage for making their data FAIR? Is this a one-off effort, or are there aspects that will need to be revisited throughout the research lifecycle?

 

  • There is no agreed naming convention for classifying standards for reporting and sharing data, metadata, and other digital objects. Do we need such a common classification?

  • How can a repository increase its visibility to researchers and policymakers?

  • Showing which repositories implement a standard is essential for measuring its use and adoption, but how else can the adoption of standards be measured?

Short Group Status: 

RDA-SHARC IG: ongoing interest group since July 2017

FAIRsharing WG: maintenance group

FAIR Data Maturity Model WG: ongoing working group

Brief introduction describing the activities and scope of the group(s): 

The RDA-SHARC interest group is an interdisciplinary group set up to unpack and improve crediting and rewarding mechanisms in the data/resource sharing process. Its main objective is to encourage the adoption of criteria related to data sharing activities in research evaluation processes at the institutional, national, and European/international levels.

As a step forward, RDA-SHARC IG members have built a FAIR sharing assessment grid designed to be as understandable as possible to scientists, including those who are not experts in data science. The clarity and usability of this tool have been assessed through a survey, whose results will serve as a basis for co-constructing practical evaluation tools and processes with relevant users. The input of the FAIRsharing WG is essential for arriving at criteria that are implementable in real practice.

FAIRsharing WG has produced one of the 12 outputs recommended by RDA, encompassing: a registry of curated and interlinked records of standards (for identifying, reporting, and citing data and metadata), databases, repositories (and knowledge-bases) and data policies (from journals, publishers, funders and other organizations); and related recommendations to guide users and producers of standards and databases to select and describe these resources, or to recommend them in data policies. With a growing adoption list (a ‘live’ updated version can be found here: https://fairsharing.org/communities), the FAIRsharing WG is focusing on refining and expanding connections with other RDA IGs and WGs relevant to its mission and scope, as has already happened with several domain-specific (e.g. ELIXIR, Biodiversity, IGAD) and generic groups (e.g. Standardization of Journal Policies). The link with the SHARC IG is a natural progression, given the role FAIRsharing already plays with FAIR assessment tools.

Type of Meeting: 
Working meeting
Remote participation availability: 
Yes