Results of an Analysis of Existing FAIR Assessment Tools

FAIR Data Maturity Model WG

Group co-chairs: Edit Herczog, Vassilios Peristeras, Keith Russell

Supporting Output title:  Results of an Analysis of Existing FAIR Assessment Tools

Authors: FAIR Data Maturity Model WG

DOI: 10.15497/RDA00035

Citation:  TBC

Note: More information on the development of this document can be found in the WG's GitHub repository.

 

Abstract

This document is a first output of the FAIR Data Maturity Model WG. As a landscaping exercise, the editorial team of the WG analysed current and existing approaches to FAIR self-assessment tools. The analysis was based on publicly available documentation and an online survey. Questions and options stemming from these different approaches were classified according to the FAIR principles/facets. Comments were collected and incorporated. The result is five slide decks, combined in this PDF document, that make up this preliminary analysis.


Recommendation Status: 
Recommendations with RDA Endorsement in Process
Review period start: 
Monday, 27 May, 2019 to Thursday, 27 June, 2019
  • Author: Emilie Lerigoleur

    Date: 15 Jun, 2019

    This is a very nice initiative to compare and analyze these existing FAIR assessment tools.

    A few comments:

    - it would be interesting to describe the target audience in the background

    - please explain the first occurrence of the term "IRI"

    - what does "X4" mean on page 10?

    - the question on page 27 "Are standard vocabularies..." is truncated!

    - the question on page 29 "Please provide the URL..." is truncated!

    - it proved quite difficult to find an answer to the question on page 34: "Granularity of data entities in dataset is appropriate in Respect of Meta-Data Granularity"

    - the question on page 35 "Does the researcher provide..." is truncated!

    The next step is to identify core elements, without duplicates, for the evaluation of FAIRness, isn't it? I hope that as many of the core common metrics as possible will be measured automatically by machine, to ease the FAIRness assessment process.

     

  • Author: Christophe Bahim

    Date: 12 Aug, 2019

    Dear Emilie, 

    Many thanks for your comment, and apologies for the late reply. 

    Please find below the questions that were truncated in the document: 

    • Are standard vocabularies, thesaurus or ontologies used for all data types present in datasets, to enable interdisciplinary interoperability between well defined domains? If not, is a well-defined open data dictionary provided?
    • Please provide the URL to a formal Linkset or copy/paste the content of a formal linkset that describes at least a portion of the content at RESOURCE ID
    • Does the researcher provide information on methods and tools that permit the understanding, integrity, value and readability of data intended to be kept on the long-term ? (e.g. versioning, archival and long term reuse issue for protocols, softwares, required methods and contexts to create, read and understand data)

    Besides, this exercise served the sole purpose of comparing existing methodologies for measuring FAIRness; we looked at the questions and options they propose. For questions such as your first or second-to-last bullet, I would therefore suggest asking them directly on the dedicated GitHub, where the WG is very active. 

    Indeed, the next step is to identify core elements for the evaluation of FAIRness, an exercise we are currently undertaking. 

    I remain at your disposal for further clarifications. 

    Best, 

    Christophe
