Data Versioning WG


Group details

Jens Klump
Secretariat Liaison: Stefanie Kethers
TAB Liaison: Tobias Weigel

The demand for reproducibility of research results is growing, meaning it will become increasingly important for a researcher to be able to cite the exact extract of a data set that was used to underpin their research publication. The capacity of computational hardware infrastructures has grown to the point where online petabyte data stores are now common. This has encouraged the development of concatenated, seamless data sets from which users can select subsets via web services using spatial and temporal queries. Further, the growth in computing power means that higher-level pre-processed data products can be generated in very short time frames.


This means that data sets and data products need some systematic way of referencing the exact version of the data that was used to underpin research findings and/or to generate higher-level products. This was recognised by the RDA Working Group on Data Citation, whose final report identifies the need for data versioning. However, it offered no specifics on best practice for data versioning, particularly for large-volume, multi-terabyte and even petabyte-scale data sets.


There are two use cases for dynamic data. In the first, nothing is done to the existing data: new data are simply appended at identifiable occurrences. For this case, versioning is more straightforward.
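One way to see why the append-only case is straightforward is that each appended state can be identified by a content hash of the record sequence, since earlier records never change. The sketch below is purely illustrative and not an agreed practice of this group; the function name and record format are assumptions.

```python
import hashlib

def dataset_version(records):
    """Compute a content-based version tag for an append-only data set.

    Because existing records are never modified, hashing the record
    sequence uniquely identifies each appended state, and the tag for
    an earlier state remains valid as a citation of that exact extract.
    """
    h = hashlib.sha256()
    for record in records:
        h.update(record.encode("utf-8"))
        h.update(b"\x1e")  # record separator, so record boundaries are unambiguous
    return h.hexdigest()[:12]

# Hypothetical time-series records for illustration
records = ["2017-03-01,12.4", "2017-03-02,12.9"]
v1 = dataset_version(records)
records.append("2017-03-03,13.1")  # new data appended at an identifiable occurrence
v2 = dataset_version(records)
# v1 still identifies the earlier extract; v2 tags the appended state
```

A researcher citing the data set as of the earlier date can record `v1`; recomputing the hash over the first two records reproduces the same tag even after the appends.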


The second use case is more complex: existing data sets, models and derivative products are revised with new data, or the data themselves are revised as processing methods improve. For this case there do not appear to be agreed principles on how data should be versioned.


Versioning procedures and best practices are well established for scientific software and can be used to enable reproducibility of scientific results. Are these suitable for data sets, or do we need a separate suite of practices for data versioning?


Ultimately versioning will need to be attached to persistent identifiers.

The BoF initially emerged from a discussion at Plenary 8 in Denver.

Recent Activity

03 Apr 2017

Remote participation in IG Data Versioning RDA 9th Plenary meeting

Dear Members of the IG Data Versioning,
For those of you who cannot participate in person in the RDA 9th Plenary we have arranged the option of remote participation.
To access the remote meeting link for this session on April 5 from 14:00-15:30 titled "RDA Plenary 9: Data Versioning Interest Group" please go to

23 Mar 2017

Research Data Alliance DDPIG Interim Outputs for review and comment

Dear RDA Interest Group members,

We wish to share with you the draft outputs created by three of
the Task Force teams of the RDA Data Discovery Paradigms Interest
Group. We think one or more of these outputs are relevant to the
work your IG is doing. Your thoughts and feedback on the three
interim documents will be greatly appreciated:

15 Mar 2017


My name is Benno Lee. I am a PhD student at Rensselaer Polytechnic
Institute studying data set versioning. I was wondering if I could join
the conversation about new best practices for data sets. I am working
to produce a linked data model that may be useful.