

Shared AIDV WG Literature Update


    Natalie Meyers
    Participant

    Hello,
Below are recent highlights from the RDA AIDV-WG shared citation library. They include recent articles on civil society participation in AI standards development, news on the proposed ChatGPT development pause and Italy’s temporary ChatGPT ban, and an article on data sharing in the age of deep learning that asks: “How can we protect personal information and the integrity of artificial intelligence models when sharing data?”
    You can access the group’s full shared library online at Zotero:
    https://www.zotero.org/groups/4922635/aidv-wg/library

Ruschemeier H (April 7, 2023) Squaring the Circle. Verfassungsblog. https://verfassungsblog.de/squaring-the-circle/
The article explains what ChatGPT is, how the Italian DPA handled it, and what this tells us about the current state of EU data protection law and attempts to regulate ‘artificial intelligence’.
See also Ruschemeier H (2023) AI as a challenge for legal regulation – the scope of application of the artificial intelligence act proposal. ERA Forum, 23(3):361–376. https://doi.org/10.1007/s12027-022-00725-6

Ada Lovelace Institute (2023) Inclusive AI Governance: Civil Society Participation in Standards Development in AI. https://www.adalovelaceinstitute.org/report/inclusive-ai-governance/

OECD (2023) AI language models: Technological, socio-economic and policy considerations. OECD Digital Economy Papers, No. 352, OECD Publishing, Paris. https://doi.org/10.1787/13d38f92-en. See also an accompanying article: Perset K, Plonk A, Russell S (2023) As language models and generative AI take the world by storm, the OECD is tracking the policy implications. OECD.AI. https://oecd.ai/en/wonk/language-models-policy-implications

    Future of Life Institute (2023) Policymaking In The Pause. https://futureoflife.org/wp-content/uploads/2023/04/FLI_Policymaking_In_The_Pause.pdf 
FLI has called on AI labs to institute a development pause until they have protocols in place to ensure that their systems are safe beyond a reasonable doubt for individuals, communities, and society. Regardless of whether the labs heed the call, the policy brief provides policymakers with concrete recommendations for how governments can manage AI risks, stating that:
“The recommendations are by no means exhaustive: the project of AI governance is perennial and will extend far beyond any pause. Nonetheless, implementing these recommendations, which largely reflect a broader consensus among AI policy experts, will establish a strong governance foundation for AI.”
FLI’s Policymaking in the Pause recommendations:
    1. Mandate robust third-party auditing and certification. 
    2. Regulate access to computational power. 
    3. Establish capable AI agencies at the national level. 
    4. Establish liability for AI-caused harms. 
    5. Introduce measures to prevent and track AI model leaks. 
    6. Expand technical AI safety research funding. 
    7. Develop standards for identifying and managing AI-generated content and recommendations. 

A useful dashboard is available on the Future of Life Institute’s (2022) Global AI Policy page at https://futureoflife.org/resource/ai-policy/, which includes a National AI Strategy Radar dashboard and a companion “document view” dashboard.

The radar dashboard summarizes the distribution of AI documents published by governments, sorted by geography, year, and topic. It was created with the help of a natural language processing tool that categorized documents downloaded from the OECD’s AI governance database in February 2022. Further background on this initiative is available in a December 2022 blog post, Characterizing AI Policy using Natural Language Processing, and users can expect periodic updates to this resource.

The companion “document view” dashboard gives an in-depth look at each document individually, organized by country of origin and topic.

     
    More AI Policy Papers and news of interest: 
    Comparative

Golpayegani D, Pandit HJ, Lewis D (2023) Comparison and Analysis of 3 Key AI Documents: EU’s Proposed AI Act, Assessment List for Trustworthy AI (ALTAI), and ISO/IEC 42001 AI Management System. Artificial Intelligence and Cognitive Science, 189–200. https://doi.org/10.5281/zenodo.7277975

    Global South

    Sengupta N, Subramanian V, Mukhopadhyay A, Scaria AG (2023) A Global South perspective for ethical algorithms and the State. Nature Machine Intelligence, 5(3):184–186. https://doi.org/10.1038/s42256-023-00621-9

    China

    Zeng Y, Sun K, Lu E, Zhao F (2023) Voices from China on “Pause Giant AI Experiments: An Open Letter.” Center for Long-term Artificial Intelligence, https://long-term-ai.center/research/f/voices-from-china-on-pause-giant-ai-experiments-an-open-letter

    Zeng Y, Kang S (2023) Whether we can and should develop strong AI: a survey in China. Center for Long-term Artificial Intelligence, https://long-term-ai.center/research/f/whether-we-can-and-should-develop-strong-artificial-intelligence

    Liao R (2022) China’s generative AI rules set boundaries and punishments for misuse. TechCrunch, https://techcrunch.com/2022/12/13/chinas-generative-ai-rules-set-boundaries-and-punishments-for-misuse/

    Cao Y (2023) China invites public opinion on generative AI draft regulation. China Daily, https://s.chinadailyhk.com/Y3Y7Vb

Yang Y (2023) China launches special deployment of “AI for Science.” China Daily, https://global.chinadaily.com.cn/a/202303/28/WS64225d2ea31057c47ebb6fa5.html

    Europe

Colonna L (2023) Addressing the Responsibility Gap in Data Protection by Design: Towards a More Future-oriented, Relational, and Distributed Approach. Tilburg Law Review, 27(1):1–21. https://doi.org/10.5334/tilr.274

Spindler G (2023) Algorithms, credit scoring, and the new proposals of the EU for an AI Act and on a Consumer Credit Directive. Law and Financial Markets Review, ahead-of-print:1–23. https://doi.org/10.1080/17521440.2023.2168940

Hickok M, Rotenberg M, Caunes K (2023) The Council of Europe Creates a Black Box for AI Policy. Verfassungsblog. https://verfassungsblog.de/coe-black-box-ai/

Palmiotto F (2023) Preserving Procedural Fairness in the AI Era. Verfassungsblog. https://verfassungsblog.de/procedural-fairness-ai/

Hacker P, Engel A, List T (2023) Understanding and Regulating ChatGPT, and Other Large Generative AI Models. Verfassungsblog. https://verfassungsblog.de/chatgpt/

Helberger N, Diakopoulos N (2023) ChatGPT and the AI Act. Internet Policy Review, 12(1). https://doi.org/10.14763/2023.1.1682

    USA

Kompella K (2023) The AI Bill of Rights: A Small Step in the Right Direction. Information Today, 40(1):40. https://infotoday.com/it/jan23/Kompella--The-AI-Bill-of-Rights-A-Small-Step-in-the-Right-Direction.shtml

Roy PP, Agarwal A, Li T, Krishna Reddy P, Uday Kiran R (2023) Data Challenges and Societal Impacts – The Case in Favor of the Blueprint for an AI Bill of Rights (Keynote Remarks). 13773. https://doi.org/10.1007/978-3-031-24094-2_1

    National Telecommunications and Information Administration (2023) NTIA AI Accountability RFC. https://www.regulations.gov/document/NTIA-2023-0005-0001

    More Articles, Reports and news of interest: 

Payton FC, Chown E, Puri I, D’Cruz J (2023) Artificial Intelligence and Research Ethics. https://www.chronicle.com/events/virtual/artificial-intelligence-and-research-ethics

    Hajibabaei A, Schiffauerova A, Ebadi A (2023) Women and key positions in scientific collaboration networks: analyzing central scientists’ profiles in the artificial intelligence ecosystem through a gender lens. Scientometrics, 128(2):1219–1240. https://doi.org/10.1007/s11192-022-04601-5

    Wynsberghe A van, Vandemeulebroucke T, Bolte L, Nachid J (2023) Towards the Sustainability of AI; Multi-Disciplinary Approaches to Investigate the Hidden Costs of AI. https://doi.org/10.3390/books978-3-0365-6601-6

    Rotenberg M, Roschelle J (2023) Making AI Fair, and How to Use It. Communications of the ACM, 66(1):10–11. https://doi.org/10.1145/3570517

Editorial (2023) Data sharing in the age of deep learning. Nature Biotechnology, 41(4):433. https://doi.org/10.1038/s41587-023-01770-3

Visit the shared AIDV-WG citation library at:
    https://www.zotero.org/groups/4922635/aidv-wg/library
     
    Sincerely,
     
    Natalie Meyers
on behalf of the AIDV-WG
