

IEA Wind Task 43 Q&A workshop on the topic of “data review” on March 26th

  • #64385

    Sarah
    Participant

    Hi all,
    On March 26th, IEA Wind Task 43 is running an online workshop on the topic of “data review” with members of the EU project consortia MERIDIONAL, FLOW and AIRE. If you are interested in this topic, or know anyone who could advise the project consortia on how to carry out “data review”, please let me know and I’ll send you the invite.
    Here’s some more information about what the consortia are looking for:

    As part of our EU MERIDIONAL project, we are setting up a ‘Knowledge and Data Hub’ jointly with two other EU projects (FLOW and AIRE, led by Jakob Mann (DTU) and Beatriz Mendez (CENER) respectively). This will both contain and link to datasets that will be used when implementing model chains for wind resource assessment, AEP prediction, loading calculations, etc. We considered that peer review of the data would be an important aspect of this so that users have confidence in the data. Do you have advice and/or guidelines that we can build on given your heavy involvement with digitalisation in the wind energy sector?
    Examples of datasets: lidar data at AWES test sites, kite performance data, LES simulation data, drone-based measurements.
    How we understand “peer review”: this first came up in terms of data quality, e.g. for kite-based data, how accurate is it, and how accurate are the inferred wind speed measurements? But the other aspects are also important to potential users of the data.
    How do we measure how well the data fulfils users’ needs? The purpose of our Knowledge and Data Hub is primarily to provide the boundary conditions (e.g. inflow) required to run the tool chain that we are developing. A use case might be to accurately simulate the output of an offshore wind farm, and the loads on its turbines, in the presence of convective rolls. So we need to ensure that users can easily extract and use the data for such a purpose, and have confidence that the results they obtain meet a stated level of accuracy.
    The hub will also include Jupyter notebooks for the use cases.
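    To make the data-quality angle concrete, a review workflow could start from automated sanity checks before human peer review. The sketch below is a minimal illustration only, assuming a wind speed time series as a plain NumPy array; the function name, thresholds, and report fields are hypothetical and not part of any project's actual tooling:

    ```python
    import numpy as np

    def review_wind_speed(series, min_ws=0.0, max_ws=50.0, max_gap_fraction=0.05):
        """Run basic automated checks on a wind speed time series (m/s).

        Returns a dict of simple quality indicators that a human reviewer
        could inspect before a dataset is accepted into a data hub.
        """
        arr = np.asarray(series, dtype=float)
        n = arr.size
        missing = np.isnan(arr)                                   # completeness check
        out_of_range = (~missing) & ((arr < min_ws) | (arr > max_ws))  # plausibility check
        report = {
            "n_samples": n,
            "missing_fraction": float(missing.mean()) if n else 1.0,
            "out_of_range_fraction": float(out_of_range.mean()) if n else 0.0,
        }
        report["passed"] = (
            report["missing_fraction"] <= max_gap_fraction
            and report["out_of_range_fraction"] == 0.0
        )
        return report

    # Example: a lidar-like series with one gap and one spurious spike
    sample = [7.2, 7.5, float("nan"), 8.1, 120.0, 7.9]
    print(review_wind_speed(sample))
    ```

    Checks like these only flag candidate problems; judging measurement accuracy (e.g. for inferred kite-based wind speeds) would still require domain expertise and comparison against reference data.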

    Hope to hear from you soon!
    Regards,
    Sarah
