Co-located with NeurIPS 2024
December 14, 2024
East Meeting Room 16, Vancouver Convention Center
Speakers
James Zou (Stanford University)
Tanveer Syeda-Mahmood (IBM Research)
Daguang Xu (NVIDIA)
David Sontag (MIT & Layer Health)
Zachary Lipton (CMU & Abridge)
Sarita Joshi (Google Cloud)
Ayo Adedeji (Google Cloud)
Su-In Lee (University of Washington, Seattle)
Bo Li (University of Chicago & Virtue AI)
Yuyin Zhou (University of California, Santa Cruz)
Sanmi Koyejo (Stanford University)
Connor T. Jerzak (University of Texas at Austin)
Snehalkumar 'Neil' Gaikwad (University of North Carolina at Chapel Hill)
TIME | EVENT & PRESENTERS |
---|---|
8:15 am - 8:20 am | Welcome by the organizers |
8:20 am - 8:40 am | Keynote: Daguang Xu (NVIDIA) |
8:40 am - 9:00 am | Keynote: Tanveer Syeda-Mahmood (IBM) |
9:00 am - 9:20 am | Keynote: James Zou (Stanford) |
9:20 am - 9:50 am | Coffee Break |
9:50 am - 10:10 am | Keynote: David Sontag (MIT & Layer Health) |
10:10 am - 10:30 am | Keynote: Zachary Lipton (CMU & Abridge) |
10:30 am - 10:40 am | Oral (Research Track): Demographic Bias of Expert-Level Vision-Language Foundation Models in Medical Imaging |
10:40 am - 10:50 am | Oral (Research Track): PATIENT-Ψ: Using Large Language Models to Simulate Patients for Training Mental Health Professionals |
10:50 am - 11:00 am | Oral (Research Track): Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare |
11:00 am - 11:10 am | Oral (Position Paper): Participatory Assessment of Large Language Model Applications in an Academic Medical Center |
11:10 am - 11:30 am | Keynote: Sarita Joshi and Ayo Adedeji (Google Cloud) |
11:30 am - 12:10 pm | Panel: Generative AI for Health in Industry. Panelists: Daguang Xu, Tanveer Syeda-Mahmood, James Zou, David Sontag, Zachary Lipton, Sarita Joshi, and Ayo Adedeji. Moderator: Jason Alan Fries |
12:10 pm - 1:00 pm | Lunch Break (Poster Session Preparation) |
1:00 pm - 1:50 pm | Poster Session |
1:50 pm - 2:00 pm | Invited Talk: Morse & ARCLab (Winners of the NeurIPS 2024 LLM Privacy Challenge) |
2:00 pm - 2:20 pm | Safety Keynote: Bo Li (UChicago & Virtue AI) |
2:20 pm - 2:40 pm | Safety Keynote: Yuyin Zhou (UCSC) |
2:40 pm - 3:00 pm | Ethics Keynote: Sanmi Koyejo (Stanford) |
3:00 pm - 3:30 pm | Policy Keynote: Connor T. Jerzak (UT Austin) |
3:30 pm - 3:50 pm | Policy Keynote: Snehalkumar 'Neil' Gaikwad (UNC) |
3:50 pm - 4:10 pm | Keynote: Su-In Lee (UW) |
4:10 pm - 4:20 pm | Coffee Break |
4:20 pm - 4:50 pm | Poster Session |
4:50 pm - 4:55 pm | Award Ceremony |
4:55 pm - 5:30 pm | Panel: Safety, Ethics, and Policy for Generative AI in Health. Panelists: Bo Li, Yuyin Zhou, Sanmi Koyejo, Connor T. Jerzak, Snehalkumar 'Neil' Gaikwad, and Su-In Lee. Moderator: Ying Ding |
Generative AI (GenAI) has emerged as a powerful tool with the potential to revolutionize healthcare and medicine. Yet public trust in using GenAI for health is not well established, owing to its potential vulnerabilities and insufficient compliance with health policies. This workshop aims to gather machine learning researchers and healthcare/medicine experts from both academia and industry to explore the transformative potential of GenAI for health. We will delve into the trustworthiness risks and mitigations of cutting-edge GenAI technologies applicable to health, such as large language models and multimodal large models. By fostering multidisciplinary communication with experts in government policy, the workshop seeks to advance the integration of GenAI in healthcare, ensuring safe, effective, ethical, and policy-compliant deployment that enhances patient outcomes and clinical research.
We invite submissions of previously unpublished papers on, but not limited to, the following topics.
For example, surveys of GenAI in healthcare; methodologies for using GenAI in data synthesis, simulation (e.g., digital twins), and preliminary studies; and applications that improve diagnostic accuracy, assist treatment, or enable digital therapies.
For example, novel benchmarks of GenAI safety in specific or general health use cases, potential misuse, safeguarding techniques, reliability, and ethical disparities.
For example, reviews of the latest policies at the intersection of AI and health, evaluations of current GenAI applications' compliance, and pipelines for coordinating policymakers, GenAI developers, and security experts.
Papers may be submitted to one of three tracks: demonstration papers showcasing GenAI health applications, research papers on policy-compliant GenAI trustworthiness in health or on methodologies for using GenAI in health, and position papers discussing policies and solutions for technical compliance. We encourage authors to involve multidisciplinary experts, especially the health community (e.g., stakeholders) and policymakers, in writing their papers, so that the developed methods address emerging concerns of stakeholders and policymakers. Accepted papers will be non-archival and presented on the workshop website.
Submission Site Open | July 25, 2024 |
Paper Submissions | September 10, 2024, AoE (extended from August 30) |
Paper Notifications | October 11, 2024, AoE |
Camera-ready Submission | November 11, 2024, AoE |
Last Chance for Registration Refund | November 15, 2024, 11:00 PM PST |
Workshop date | December 14, 2024 |
Organizers from the University of Texas at Austin, Harvard University, Stanford University, and the University of California, San Francisco.
Volunteer Organizer for Local Events
Contact: jiaweixu@utexas.edu