Re: [rda-datamanagplans] A short note on DMPs

17 Aug 2015

That is why I have said it is multidimensional planning with late binding of resources.
During the proposal stage, the DMP has to comply with the funding framework.
During kick-off and enactment, the early project has to comply with the work plan and managerial structures as finalized in the grant agreement
and its attachments, e.g. the DOW (Description of Work).
During execution, the DMP has to comply with the available resources, which have to be assigned in terms of infrastructure, services,
and human resources.
Best regards
Prof. Dr.-Ing. Matthias L. Hemmje
FernUniversität in Hagen – Fakultät für Mathematik und Informatik – Lehrgebiet Multimedia und Internetanwendungen
Universitätsstrasse 1 – D-58097 Hagen – Germany
Email: ***@***.*** – Web:
Phone: +49 (2331) 987-304 – Mobile: +49 (172) 6840262 – Fax: +49 (2331) 987-4487 – Skype: Matthias.Hemmje
From: herman.stehouwer=***@***.*** [mailto:***@***.***] On behalf of Herman Stehouwer
Sent: Monday, 17 August 2015 11:00
To: ***@***.***; rduerr <***@***.***>; Active Data Management Plans IG <***@***.***>
Cc: mlangset <***@***.***>
Subject: Re: [rda-datamanagplans] A short note on DMPs
Hi David,
As Jamie already noted, the demands are not always consistent.
Furthermore, what a researcher needs from a DMP is quite different from what the funder needs.
That is, if a researcher has an overarching DMP for her research, it would still have to be adapted for every grant, which reduces its direct usefulness (though having one would still be useful and helpful for developing good data praxis).
On 14/08/15 22:11, ***@***.*** wrote:
That funders seek a DMP does not necessarily mean a one-to-one mapping with their grant, no? Cannot a single DMP be portable and reusable for all funders that request it?
Sent while mobile.
On Aug 14, 2015, at 3:55 PM, rduerr <***@***.*** > wrote:
The biggest problem I’ve always seen with even the concept of an active/adaptable DMP is the assumption that data sets and projects are related one-to-one, or maybe many-to-one, but not in the many-to-many fashion which is the way things really work. If I am a researcher who has been pursuing a line of research for 20 years (e.g., how does volcano plumbing work, or what’s going on with the Greenland ice sheet), I may well have a sizable collection of materials that, intellectually speaking, are one continuous, cohesive collection (e.g., XYZ’s geological study of Antarctica’s Dry Valleys, or Joe Blow’s 30-year record of XYZ measurements at Summit, Greenland); yet the odds that only one grant and one funding agency were involved are essentially zero. Yes, sure, maybe today being able to pursue a single line of research to a meaningful conclusion is more difficult; but I am not convinced that makes the situation better - I think it might actually make this disconnect worse!
Back in the pre-digital era, when that researcher retired and all of their material was handed to an archive, it would have been treated as a single collection. When descriptions of it are put online now (perhaps involving digitizing some analog materials), it would probably be split into sub-collections, not by grant but by categories based on scientific utility. For example, in the Antarctic case: Dry Valleys rock samples, Dry Valleys thin slices, Dry Valleys chemical assays, etc. In the Greenland case, something like a 30-year temperature record at Summit, Greenland, or 30 years of snow albedo at Summit, Greenland. Why would anyone want the data split into stuff collected under grant X, stuff collected under grant Y, etc.? Yet that is exactly what this active/adaptive DMP approach tries to do, which I think is exactly what Herman was saying in the first bullet… What researchers do and what DMPs aim to do are rather orthogonal at the moment… OK, yes, funding agencies might like to see things organized by grant; but that certainly would not make it easy to re-use those data - in fact, organization by grant rather defeats the purpose of maximizing the data’s value.
Now, if I had a DMP that actually discussed my line of research, one that was updated not only during a grant but also to include material coming in under any new grants, that might be more realistic.
My 2 cents…
On Aug 14, 2015, at 11:19 AM, mlangset <***@***.*** > wrote:
A couple of things that I have been thinking about with respect to active/adaptable DMPs:
It would be great if various disciplines could map DMP elements to corresponding elements in their metadata standards. That way if the DMP is kept up-to-date, the end product could be a metadata record describing the data. Tools would likely need to be developed with targeted questions for initially writing the DMP, an interface for updating the DMP, and a utility for outputting standard XML metadata records.
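As a purely illustrative sketch of the mapping idea above, the crosswalk from DMP questions to metadata elements could be a simple lookup table driving an XML serializer. All field and element names below are hypothetical examples, not drawn from any real DMP template or metadata standard.

```python
# Hypothetical sketch: mapping answers from a DMP onto a metadata record
# and serializing it as XML. The DMP field names, the element names, and
# the crosswalk itself are illustrative assumptions only.
import xml.etree.ElementTree as ET

# Toy crosswalk from DMP questions to metadata elements.
CROSSWALK = {
    "dataset_title": "title",
    "data_format": "format",
    "access_policy": "rights",
}

def dmp_to_metadata(dmp_answers: dict) -> str:
    """Render the DMP answers that have metadata equivalents as an XML record."""
    record = ET.Element("metadataRecord")
    for dmp_field, meta_element in CROSSWALK.items():
        if dmp_field in dmp_answers:
            ET.SubElement(record, meta_element).text = dmp_answers[dmp_field]
    return ET.tostring(record, encoding="unicode")

print(dmp_to_metadata({
    "dataset_title": "30-year temperature record at Summit, Greenland",
    "data_format": "NetCDF",
    "access_policy": "open",
}))
```

In a real tool, each discipline would supply its own crosswalk targeting its own metadata standard; the point is only that a kept-up-to-date DMP already contains much of what the eventual metadata record needs.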
With respect to keeping the DMPs updated, the tools could incorporate a schedule tracker. Prior to the start of the project, PIs would input an anticipated schedule for key milestones in the project where deviations from the initial DMP tend to happen. The tool could push notices to the project team asking if certain elements are still accurate and providing an easy way to edit.
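The schedule-tracker idea above could be as simple as checking each recorded milestone against the current date and emitting a reminder. The milestone names, dates, and the notification mechanism (here, just printing) are illustrative assumptions, not features of any existing DMP tool.

```python
# Hypothetical sketch: a milestone tracker that flags points in a project
# where the DMP should be reviewed. All data below is made up.
from datetime import date

milestones = [
    {"name": "first field campaign ends", "due": date(2015, 9, 1)},
    {"name": "data processing begins",    "due": date(2016, 1, 15)},
]

def due_reminders(today: date) -> list:
    """Return reminder messages for milestones reached by `today`."""
    return [
        f"Milestone '{m['name']}' reached: is the DMP still accurate?"
        for m in milestones
        if m["due"] <= today
    ]

for msg in due_reminders(date(2015, 10, 1)):
    print(msg)
```

A production version would push these notices to the project team by email and link each one to an edit form for the relevant DMP section, as described above.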
Dr. ir. Herman Stehouwer
Max Planck Computing and Data Facility (MPCDF)
RDA Secretariat
***@***.*** 0031-619258815
Skype: herman.stehouwer.mpi

  • Author: Herman Stehouwer

    Date: 17 Aug, 2015

    Yes, but with early declaration of expectations with respect to the
    infrastructure to be used and the resources needed.
    Some of the parties involved (at least STFC, but I am sure others as
    well) do forward planning based on the DMPs for projects in their
    However, overall this is starting to sound quite complex.
    How can we keep it manageable and useful to the average researcher?
    (After all, he/she is usually not an expert on data matters, let alone
    DMPs, and usually has no desire to become one.)
    You'll have to excuse me for starting to sound like a broken record.

  • Author: David Baker

    Date: 17 Aug, 2015

    In the CASRAI process we manage scope by limiting ourselves to one use case at a time. As more use cases are covered, you build up a modular data profile and can use each use case to test the requirements of any module. There will be a temptation to tackle all use cases at once, but this should be resisted: it becomes too complex and ignores the fact that different subject experts may be needed for different use cases.
    Sent while mobile.
