
Research assessment and science integrity: A global approach with local impact for research policy and the development of digital tools

  • Creator: Francis Crawley
  • #133858

Draft meeting agenda; speakers are identified and will be confirmed later.


    Title and Speaker(s)


    Opening Remarks from the Session Chairperson
    Francis P. Crawley
    Co-chair, EOSC-Future/RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG)
    Co-chair, CoARA’s Working Group on Ethics and Research Integrity Policy in Responsible Research Assessment for Data and Artificial Intelligence (ERIP)


    Perspectives on the need for revising research evaluation
    Two early career researchers
    A global perspective from Europe
    A local perspective from Africa


    Developing responsible ethics and research integrity policy in research evaluation
A perspective from a national endeavor to transform research in Turkey
    A perspective from an international endeavor to transform research in Latin America


    A first interactive discussion with the audience on the need for revising research assessment with regard to data and AI


    Various proposals on tools for reforming research assessment
    From an open science approach
    From an industry approach
    From a cross-disciplinary approach


    A second interactive discussion with the audience on pathways for integrating AI governance models into research assessment reform


    Summary of the Session


    Close of the Working Session

    Additional links to informative material

    A Pathway towards Multidimensional Academic Careers: A LERU Framework for the Assessment of Researchers (Prof. Bert Overlaet, LERU position paper, January 2022)
    A Science|Business Special Report. Confidence in Science: How to ensure sustainable and trustworthy channels of scientific information? (July 2023)
    ACOLA Report. Research Assessment in Australia: Evidence for Modernisation. Australian Council of Learned Academies (2023)
    AI in Education: Enhancing Learning or Diminishing Reliability?
    AI, Machine Learning & Big Data Laws and Regulations 2023 | India
    Alberts B., Kirschner M. W., Tilghman S., Varmus H. (2014). Rescuing US biomedical research from its systemic flaws. Proceedings of the National Academy of Sciences of the United States of America, 111(16), 5773–5777.
    ALLEA European Code of Conduct for Research Integrity (revised edition 2023)
    Ancion, Zoé, Borrell-Damián, Lidia, Mounier, Pierre, Rooryck, Johan, and Saenen, Bregt. ‘Action Plan for Diamond Open Access’ (March 2022)
    Are numerical scores important for grant proposals’ evaluation? A cross sectional study [version 1; peer review: awaiting peer review]
    Assessing Ethics Education in Science and Engineering, Special Collection. Science and Engineering Ethics. Forthcoming.
    Bakiner, O. What do academics say about artificial intelligence ethics? An overview of the scholarship. AI Ethics 3, 513–525 (2023).
    Banks, G.C., Rogelberg, S.G., Woznyj, H.M. et al. Editorial: Evidence on Questionable Research Practices: The Good, the Bad, and the Ugly. J Bus Psychol 31, 323–338 (2016).
    Brey, P., Dainow, B. Ethics by design for artificial intelligence. AI Ethics (2023).
    BRIDGE2HE H2020 Project 101005071. Guiding notes to use the TRL self-assessment tool. No date.
    Bringula, R. What do academics have to say about ChatGPT? A text mining analytics on the discussions regarding ChatGPT on research writing. AI Ethics (2023).
Cape Town Statement Working Group. The Cape Town Statement on Fostering Research Integrity through Fairness and Equity (May 2019)
CESAER: Conference of European Schools for Advanced Engineering Education and Research. ‘Keeping science open? Current challenges in the day-to-day reality of universities’ (White paper, 18 October 2023)
    CHAI: Center for Human-Compatible AI ‘Coordinated global action on AI safety research and governance is critical to prevent uncontrolled frontier AI development from posing unacceptable risks to humanity’ (Ditchley Park, United Kingdom, 31 October 2023)
Charisi, V., Chaudron, S., Di Gioia, R., Vuorikari, R., Escobar Planas, M., Sanchez Martin, J.I. and Gomez Gutierrez, E., Artificial Intelligence and the Rights of the Child: Towards an Integrated Agenda for Research and Policy, EUR 31048 EN, Publications Office of the European Union, Luxembourg, 2022, ISBN 978-92-76-51837-2, doi:10.2760/012329, JRC127564.
ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations
    Chaudhry, M.A., Kazim, E. Artificial Intelligence in Education (AIEd): a high-level academic and industry note 2021. AI Ethics 2, 157–165 (2022).
    Checco A., Bracciale L., Loreti P., Pinfield S., Bianchi G. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(1).
    Chiarelli, A., Johnson, R., & Loffreda, L. (2022). Discussion Document – Indicators of Research Integrity: An initial exploration of the landscape, opportunities and challenges. Zenodo.
    COAR & SPARC. Good Practice Principles for Scholarly Communication Services (2019).
    CoARA: Agreement on Reforming Research Assessment (20 July 2022)

    Applicable Pathways
    Data Infrastructures – Organisational to Environments

    Avoid conflict with the following group (1)
    RDA / CODATA Data Systems, Tools, and Services for Crisis Situations WG

    Brief introduction describing the activities and scope of the group
The EOSC-Future/RDA Artificial Intelligence and Data Visitation Working Group (AIDV-WG) began work in September 2022 with the aim of addressing the ethical, legal, and social challenges posed by Artificial Intelligence (AI) and Data Visitation (DV) as state-of-the-art data technologies impacting scientific exchange in the context of data sharing and the European Open Science Cloud (EOSC). The AIDV-WG was established through a competitive call for proposals for RDA Working Groups focused on developing solutions for EOSC, working in conjunction with the European Commission-funded project EOSC-Future.
    Higher education institutions, academic/scientific journals, and generally all institutions involved in scientific research as well as the academics/scientists themselves share responsibility for ensuring adequate scientific and ethical standards in academic authorship, scientific integrity, and the production of knowledge.
CoARA ERIP applies the deliverables of the AIDV-WG to ELSI, policy, and governance models in which decisions regarding the use of data and AI in scientific research and its outputs, including publications, are based on well-defined roles, uses, and attributions of these new technologies to the development of the sciences, their uses, and their communication. Decisions to employ new AI technologies must be supported by an adequate understanding of their impact on the scientific method, scientific processes, and the results generated. In this context, questions can be posed concerning the use of AI in scientific research, its outputs, and its publications. ERIP examines the primary and fundamental values for the use of AI in the scientific research and publication process, with a focus on transparency, honesty, and diligent care. These three values, and the principles derived from them, provide the necessary ethical framework for the use of AI in the scientific field. At the same time, drawing on the experience of the AIDV-WG and other European and international projects, ERIP will develop policy as well as digital tools that can realise that policy for the reliable scientific evaluation of data and AI.

    Group chair serving as contact person
    Francis P. Crawley

    Meeting objectives
The main objective of this working meeting is to further develop the work of the EOSC-Future/RDA Artificial Intelligence & Data Visitation Working Group (AIDV-WG) in relation to the need for a transformation in how we value science and measure its contribution to research, education, and society generally. The AIDV-WG has laid the groundwork for investigating governance models for data and AI that can bring new perspectives to research assessment. Importantly, the AIDV-WG has engaged a global network with attention to local impact. This has contributed to building a truly international community for examining the ethical, legal, and social implications of data and AI in the transformation of research methodologies, outputs, and impacts. In February 2024 the Coalition for the Advancement of Research Assessment (CoARA) announced a new Working Group on Ethics and Research Integrity Policy in Responsible Research Assessment for Data and Artificial Intelligence (ERIP), promoted by the European Commission, ALLEA, and leading European and international research institutions, including RDA and RDA-Europe.
    This session builds on the AIDV-WG’s ELSI and governance outputs and relates them to the need for the integration of research ethics and research integrity into digital tools for the establishment of policy and governance in the evaluation of scientific research. It examines three key trajectories regarding the implementation of machine learning methods and artificial intelligence models for expanding traditional understandings of participation in, and contributions to, scientific outputs and communication. In particular, it looks at the need to develop the following areas:

    methods and tools to ensure the research ethics and integrity of scientific methods and outputs with the advancing use of data and the impact of AI;
    methods and tools to evaluate digital contributions to science/knowledge in research programs and assessment procedures; and
    innovative methodologies for employing data ecosystems and AI models for research assessment in digital environments with a focus on open science infrastructures.

    The session will demonstrate how data and AI governance, policy, and guidance can be integrated into digital tools for advancing research assessment that promote the role of, and define the ethical and integrity characteristics of, a responsible culture for the assessment of data and AI in research, fostering responsibility, transparency, and societal benefit. It will look at the relationship between research assessment policy and data and AI tools for developing new indicators and metrics in evaluating the contributions of science to the academic and research communities as well as society as a whole.

Please indicate at least three (3) breakout slots that would suit your meeting.
    Breakout 4, Breakout 7, Breakout 17
