

Tomasz Miksa

The idea behind modelling processes with BPMN was to showcase how certain activities relating to information contained in DMPs can be automated. In other words, it is a way to demonstrate the machine-actionability of DMPs.
Once our model is in place, institutions rolling out maDMPs will have to think about the process in which maDMPs are used – that is also why we held the first consultation, to find out who uses which information and when.
Depending on the research field and existing infrastructure and practices, the processes will be implemented differently. However, most of the processes will very likely be similar to some extent, e.g. getting recommendations on repositories, getting information on costs, etc. For this reason, a set of BPMN models is a useful set of ideas that enables us to have a more focused discussion on requirements for the model.
Yes, it is not the ambition of this working group to define processes – we define the model.
From: paul=***@***.*** [mailto:***@***.***] On Behalf Of paulwalk
Sent: Thursday, September 27, 2018 9:00 AM
To: jrocha; DMP Common Standards WG
Cc: ***@***.***
Subject: Re: [dmp-common] Machine-actionable DMPs – a disengaging administrative burden?
I’m not sure that I agree with expanding the scope of this activity to include developing a standard for describing business processes relating to DMPs. We can make a real difference if we focus on our mission to establish a standard for data interchange.
I think you are right to emphasise the importance of understanding the business processes which will use maDMPs, but I think we should stop short of trying to model these formally with something like BPMN. It is too soon to predict with any confidence precisely how maDMPs will be used, and we should, in my opinion, avoid over-engineering a solution at this stage. Rather, we should be aiming for broad interoperability, and for maintaining flexibility & future extensibility.
Paul Walk
Founder and Director, Antleaf Ltd
Antleaf provides Management Services to DCMI
On 27 Sep 2018, at 01:48, jrocha wrote:
Hi all,
I did not have the opportunity to attend the TPDL 2018 workshop, so at the risk of being outdated, here are my two cents on this until we can meet in Gaborone and discuss this in person.
1. maDMPs need an ecosystem of “automatable” software (Rules 1 and 2 of the 10 Simple Rules for maDMPs)
One important goal is to have the maDMP automate repetitive work, but people should be able to keep their own repositories and other software platforms in the maDMP workflow, or, at most, install an upgrade to these existing solutions. However, automation across different systems requires interoperability.
Thus, interoperability with existing data management tools is a must, because a maDMP is basically worthless if no software knows how to execute what it prescribes or enforces. An off-the-shelf workflow engine should be able to fire off the proper events to the data management software, with the necessary payload, which would itself comply with an agreed interoperability standard (this is where we could come in, I think). In a sense, we could work towards an API specification covering a core set of operations that should be supported by any repository or data staging platform that wants to be “maDMP Ready” in the RDA sense.
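To make the idea concrete, here is a minimal sketch of what such a core operation set could look like, expressed as an abstract interface. All names and method signatures are hypothetical illustrations, not an agreed RDA specification:

```python
from abc import ABC, abstractmethod

class MaDMPRepository(ABC):
    """Hypothetical core operations a repository or data staging platform
    might expose to count as 'maDMP Ready' -- illustrative only."""

    @abstractmethod
    def deposit(self, dataset_path: str, metadata: dict) -> str:
        """Stage a dataset with its metadata; return a persistent identifier."""

    @abstractmethod
    def validate_metadata(self, metadata: dict) -> list:
        """Return a list of validation errors (empty if compliant)."""

    @abstractmethod
    def estimate_costs(self, size_bytes: int, retention_years: int) -> float:
        """Estimate storage cost for the given volume and retention period."""
```

A workflow engine would then only need to know this interface, not the internals of each vendor's platform; each platform supplies its own concrete subclass.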
2. maDMPs should be modular
Every research project has its own way of handling data, but some needs are common, as shown by the recent tools that aid in building a DMP. A very strong interest of maDMPs thus lies in possible reuse, either as a whole or as building blocks. The maDMP should expose the subprocesses inside it in a modular way, with certain elementary validation and processing workflows being modeled and shareable using existing modeling languages, such as BPMN, as shown at the workshop. Other modeling languages could be used as long as they have both a standard visual representation for inclusion in the “printed” DMP, much like UML, and a machine-processable representation in XML. Such maDMP “building blocks”, so to speak, could then be reused in a project’s maDMP, and published in a maDMP “directory” for others to access.
People interested in a maDMP for their project could then:
a) Download and reuse an existing maDMP for a project that they know went well
b) Reuse only the maDMP “building blocks” for metadata validation, dataset availability, repository compliance, etc. (modeled using BPMN)
c) Build their own from scratch and share it to this “directory” of maDMPs for others to reuse
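The machine-processable side of such a building block can be illustrated with a few lines of Python that read a BPMN XML fragment and list the service tasks a workflow engine would dispatch. The process and task names below are made up for illustration; only the BPMN namespace is real:

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical BPMN fragment for a metadata-validation
# "building block" -- ids and names are illustrative, not from a real maDMP.
BPMN_XML = """<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="validate_metadata" isExecutable="true">
    <startEvent id="start"/>
    <serviceTask id="fetch_schema" name="Fetch metadata schema"/>
    <serviceTask id="run_validation" name="Validate dataset metadata"/>
    <endEvent id="end"/>
  </process>
</definitions>"""

BPMN_NS = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"
root = ET.fromstring(BPMN_XML)
# The service tasks are the automation points a workflow engine executes.
tasks = [t.get("name") for t in root.iter(BPMN_NS + "serviceTask")]
print(tasks)  # -> ['Fetch metadata schema', 'Validate dataset metadata']
```

Because the same XML also has a standard visual rendering, one artifact can serve both the “printed” DMP and the workflow engine.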
3. The code that actually executes the actions in the maDMP’s processes should travel with them (Rules 6, 7, 9)
If possible, the small pieces of code behind every step of the modeled workflows should be open-source and retrievable as needed by the workflow engine as it runs the BPMN processes. In this way, vendors could fork and adjust them to the existing APIs of their repository software, or of any other software that the maDMP needs to “remote-control” to execute the automation steps specified in the modeled process. This is somewhat similar to ETL tools, but fetching platform-specific code as needed. A commercial example of a graphical ETL tool is Pentaho Data Integration — by the way, I am not affiliated with them, I have only used the tool myself.
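A small sketch of the dispatch mechanism may help: a registry maps modeled task ids to pluggable handlers, which vendors could fork and adapt to their own APIs. Everything here (the task id, the payload fields, the registry itself) is a hypothetical illustration of the idea, not an existing engine's API:

```python
# Hypothetical registry mapping BPMN service-task ids to handlers;
# in practice these could be fetched as open-source plugins per platform.
STEP_HANDLERS = {}

def register_step(task_id):
    """Decorator: attach an implementation to a modeled workflow step."""
    def wrap(fn):
        STEP_HANDLERS[task_id] = fn
        return fn
    return wrap

@register_step("run_validation")
def validate_metadata(payload):
    # Vendor-specific logic would live here, e.g. calling a repository API.
    missing = [k for k in ("title", "license") if k not in payload]
    return {"ok": not missing, "missing": missing}

def execute(task_id, payload):
    """What a workflow engine would do on reaching a service task."""
    return STEP_HANDLERS[task_id](payload)

print(execute("run_validation", {"title": "My dataset"}))
# -> {'ok': False, 'missing': ['license']}
```

The point is the separation: the maDMP carries the process model and points at the step code, while each platform binds that code to its own API.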
João Rocha da Silva
Invited Assistant Professor — Dendro Lead Developer — Research Data Management
Faculty of Engineering of the University of Porto, Portugal
ORCID: GitHub:
On 26/09/2018, at 12:35, ***@***.*** wrote:
Thanks, Rob and Tomasz, for valuable feedback!
I fully agree with the idea and rationale behind making DMPs machine-actionable. If this is implemented in a meaningful way, it will surely reduce the administrative burden for researchers. As you say, Rob, this way “a complicated question can be split up into more easily answered closed questions that together provide the information after a proper consideration of all the options and issues”. I had a look at the Questionnaire demo of the Data Stewardship Wizard from ELIXIR. There, researchers are guided through the whole lifecycle of research data. But without such guidance and a sensible grouping of questions, filling in a DMP form may turn into a disengaging and frustrating duty. If you read my feedback on the easyDMP tool, you’ll probably understand what I mean.

Full post:
Manage my subscriptions:
Stop emails for this post: