
AI reads your mind, has a metadata problem, creates political attack ads, gets weird, and much much more


    Hello AIDV working group,
    Artificial Intelligence has been very much in the news lately – here’s an update on papers, posts, and news stories recently added to our shared citation library that may interest you. 
     
    Tang J, LeBel A, Jain S, Huth AG (May 1, 2023) Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience, https://doi.org/10.1038/s41593-023-01304-9 & Whang O (2023) A.I. Is Getting Better at Mind-Reading. The New York Times, https://www.nytimes.com/2023/05/01/science/ai-speech-language.html
    Huth and team have developed and are testing a non-invasive A.I. decoder that reconstructs continuous language from cortical semantic representations recorded using functional magnetic resonance imaging (fMRI). It’s a decoding A.I. that translates the private thoughts of human subjects by analyzing fMRI scans. Given novel brain recordings, the decoder generates intelligible word sequences that recover the meaning of perceived speech, imagined speech, and even silent videos.
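    For the technically curious, the paper’s approach can be summarized as a guided search: a language model proposes candidate word sequences, an encoding model predicts the fMRI response each candidate would evoke, and a beam search keeps the candidates whose predicted responses best match the recorded scans. Below is a minimal, purely illustrative Python sketch of that loop; the three helper functions are trivial stand-ins for the paper’s actual models, and nothing here is the authors’ code.

    ```python
    # Illustrative sketch of beam-search decoding as described in
    # Tang et al. (2023). The helpers below are trivial stand-ins for
    # the paper's language model, encoding model, and scoring metric.
    import random

    random.seed(0)

    def propose_continuations(words, k=3):
        # Stand-in for a language model proposing likely next words.
        vocab = ["the", "dog", "ran", "home", "quickly", "saw", "light"]
        return random.sample(vocab, k)

    def encoding_model(words):
        # Stand-in for a model predicting the brain response a word
        # sequence would evoke (one value per time step).
        return [(hash(w) % 100) / 100.0 for w in words]

    def similarity(predicted, recorded):
        # Negative squared error: higher means a closer match.
        return -sum((p - r) ** 2 for p, r in zip(predicted, recorded))

    def decode(recorded_fmri, beam_width=5):
        # Beam search over candidate word sequences, keeping those whose
        # predicted responses best match the recorded data so far.
        beam = [[]]
        for step in range(len(recorded_fmri)):
            scored = []
            for words in beam:
                for nxt in propose_continuations(words):
                    extended = words + [nxt]
                    score = similarity(encoding_model(extended),
                                       recorded_fmri[: step + 1])
                    scored.append((score, extended))
            scored.sort(key=lambda s: s[0], reverse=True)
            beam = [words for _, words in scored[:beam_width]]
        return " ".join(beam[0])

    print(decode([0.2, 0.7, 0.1, 0.9]))
    ```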
     
    Metz C. ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead. (May 1, 2023) The New York Times, https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html
     
    Grady P. Tech Panics, Generative AI, and the Need for Regulatory Caution. (May 1, 2023) Center for Data Innovation, https://datainnovation.org/2023/05/tech-panics-generative-ai-and-regulatory-caution/
     
    Khan LM. We Must Regulate A.I. Here’s How. (May 3, 2023) The New York Times, https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html 
     
    Lina Khan, chair of the Federal Trade Commission, writing on the agency’s oversight of the A.I. revolution, says of Web 2.0 that “What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security . . . The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”
     
    Klein E, with guest Erik Davis. The Culture Creating A.I. Is Weird. Here’s Why That Matters. (May 2, 2023) The New York Times, https://www.nytimes.com/2023/05/02/opinion/ezra-klein-podcast-erik-davis.html. They discuss how
    “programs like ChatGPT can profoundly unsettle our sense of reality and our own humanity, how the behaviors of A.I. systems reveal far more about humanity than we like to admit, why we might be in a “sorcerer’s apprentice moment” for artificial intelligence, why we often turn to myth and science fiction to explain technologies whose implications we don’t yet grasp . . .”
     
    Tellez M. RNC slams Biden reelection bid with AI generated ad. (April 26, 2023) FOX 11 Los Angeles, https://www.foxla.com/video/1212152
    In this story, telejournalist Marla Tellez reports on an attack/opposition ad that uses generative AI to inspire voter fear, and interviews Chris Mattmann, Division Manager of the AI, Analytics, and Innovative Development Organization in the Information Technology and Solutions Directorate and Chief Technology and Innovation Officer (CTIO) at NASA JPL.
     
    Montgomery C, Rossi F, New J. A Policymaker’s Guide to Generative AI. (May 1, 2023) IBM Policy Lab, https://newsroom.ibm.com/Whitepaper-A-Policymakers-Guide-to-Foundation-Models
    “The best way policymakers can meaningfully address concerns related to foundation models is to ensure any AI policy framework is risk-based and appropriately focused on the deployers of AI systems. This will guarantee all AI systems – including those based on foundation models – are subject to precise and effective governance to minimize risk.”   
     
    There’s a companion blog post, “What policymakers need to know about foundation models,” on the IBM Blog: https://www.ibm.com/blog/what-policymakers-need-to-know-about-foundation-models/
     
    To summarize for AIDV: the guide has four recommendations for policymakers, which offer points of discussion and consideration for our working group:
     
    1) Promote Transparency: to that end, IBM has developed AI FactSheets, a tool and examples to facilitate better AI governance and provide deployers and users with relevant information about how an AI model or service was created. (A hypothetical sketch of such a record appears after this list.)
     
    2) Leverage Flexible Approaches: this recommendation calls attention to the value of flexible soft-law approaches. The section recommends that “Given the variety of potential applications of generative AI and the different levels of control AI developers and deployers may want to have, policymakers should protect the ability for these actors to negotiate and define responsibilities contractually,” and that “Policymakers should also support national and international standards development work focused on establishing common definitions, specifications for risk management systems, risk classification criteria, and other elements of effective AI governance.”
     
    3) Differentiate Between Different Kinds of Business Models: here they recommend, among other things, that “policymakers should distinguish between developers and deployers. As mentioned, developers should be required to provide documentation like AI FactSheets. However, the focus of regulation should be on the end of the AI value chain, where deployers fine-tune foundation models and introduce AI systems into the world. Deployers have final say about when, where, and how to deploy AI systems and are best positioned to address the risks.”
     
    4) Carefully Study Emerging Risks: this section bundles several recommendations, including that “policymakers should devote significant resources to identify and understand emerging risks posed by increasingly powerful AI”; that there is “the potential for IP challenges posed by generative AI, particularly the confusion about ownership rights, licensing, and downstream obligation”; that “Policymakers should invest in creating a common research infrastructure”; and that “policymakers should support developing better scientific evaluation methodologies for foundation models.”
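    To make the FactSheets idea in recommendation 1 concrete, here is a hypothetical example of the kind of structured record such a tool might capture. The field names and values are illustrative guesses at typical model-documentation content, not IBM’s actual FactSheets schema.

    ```python
    # Hypothetical minimal "AI FactSheet"-style record. All field names
    # and values are illustrative only, not IBM's actual schema.
    factsheet = {
        "model_name": "example-sentiment-classifier",  # hypothetical model
        "version": "1.0",
        "developer": "Example Corp",
        "intended_use": "Classifying customer reviews as positive or negative",
        "out_of_scope_uses": ["medical, legal, or employment decisions"],
        "training_data": {
            "sources": ["public product-review corpus"],
            "known_gaps": ["non-English reviews underrepresented"],
        },
        "evaluation": {
            "metric": "accuracy",
            "result": 0.91,  # illustrative number
            "test_set": "held-out reviews",
        },
        "risks_and_mitigations": ["may misread sarcasm; human review advised"],
        "contact": "ai-governance@example.com",
    }
    ```

    A record like this is what would let a deployer or regulator answer basic provenance questions about a model before putting it in front of users.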
     
    Metz C. What Exactly Are the Dangers Posed by A.I.? (May 1, 2023) The New York Times, https://www.nytimes.com/2023/05/01/technology/ai-problems-danger-chatgpt.html
     
    Atleson M. The Luring Test: AI and the engineering of consumer trust. (April 28, 2023) FTC Business Blog, https://www.ftc.gov/business-guidance/blog/2023/05/luring-test-ai-engineering-consumer-trust
     
    In The Luring Test, the US FTC’s Michael Atleson describes the risk of “generative AI tools and their built-in advantage of tapping into unearned human trust” and how a “key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment.”
     
    He writes that “Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don’t comprise a class of people protected by anti-discrimination laws.” 
     
    Chan W. Can existing laws regulate AI? The federal government and experts say yes. (April 28, 2023) Fast Company, https://www.fastcompany.com/90889000/can-existing-laws-regulate-ai-the-federal-government-and-experts-say-yes
     
    Nelson A. Education professor explores ChatGPT as a tool for research, learning. (April 26, 2023) UKNow: University of Kentucky News, https://uknow.uky.edu/professional-news/education-professor-explores-chatgpt-tool-research-learning
     
    Pascal Heus, in his April 24 post titled “AI has a metadata problem” (https://plgah.medium.com/ai-has-a-metadata-problem-78b30ca1936b), asks:
     
    “Would we accept statements from anonymous speakers, press releases from unknown sources, or scientific papers without authors or peer review? Such analogies illustrate the current state of AI, where the absence of metadata poses a significant challenge.”
     
    The conclusion reminded me of why many of us commit to the work of largely volunteer efforts like RDA’s AIDV WG. He writes: “By elevating metadata management and transparency to a priority and adopting a collaborative approach towards the establishment of documentation standards and best practices, we can work towards a future where AI models are not only powerful but also transparent, accountable, and trustworthy.”
     
    Thanks for taking a look!
    Sincerely,
     
    Natalie
