The use of artificial intelligence and machine learning in the healthcare industry is no longer an abstract idea that lives in the pages of science fiction novels. This observation was made by Southern Methodist University Professor Nathan Cortez to kick off the SMU Science and Technology Law Review’s 2022 symposium, “Medicine + AI: The Emerging Legal and Ethical Frameworks for Artificial Intelligence.” The symposium was presented by the SMU Dedman School of Law, in partnership with Perkins Coie and the Tsai Center for Law, Science and Innovation.
Innovations in AI have rapidly changed the healthcare landscape, but the pace at which AI has advanced, and continues to advance, has created practical, legal and ethical dilemmas, particularly with respect to bias in the development and deployment of healthcare AI, the use of AI in practice and the regulation of AI at the state, national and global levels. The symposium gathered leaders in the legal, academic, healthcare and medical device communities to examine innovations in medical AI as well as the legal and ethical dilemmas that arise in the design, development and use of medical AI.
Panel 1: Designing and Developing Medical AI
Moderated by Samantha Ettari, senior counsel in Perkins Coie’s Privacy and Security group, the first panel introduced the concepts of AI, ML, algorithmic fairness and privacy by design to the audience and then focused on medical AI and ML specifically, highlighting healthcare innovations that the panelists have had a hand in developing.
Dr. Uzma Samadani, founder and scientific advisor of Oculogica, and Dr. Rosina Samadani, president and CEO of Oculogica, discussed the Oculogica EyeBOX, a Food and Drug Administration (FDA)-cleared, patented technology that uses ML to assess ocular motility and other domains of brain function to help clinicians determine the presence of concussions.

Dr. Vishal Ahuja, assistant professor of operations management at the SMU Cox School of Business and adjunct assistant professor of internal medicine at the University of Texas Southwestern Medical Center, discussed his research and role in bringing various healthcare and patient care platforms to market and their focus on data-driven analytic tools, with an emphasis on addressing chronic diseases such as diabetes. He discussed the exponential growth in capital dedicated to this space in the past three years, particularly as a result of the Covid-19 pandemic and the need for nontraditional patient care options.

Dominique Shelton Leipzig, a partner at Perkins Coie, chair of Perkins Coie’s Global Data Innovation team, and co-chair of Perkins Coie’s Ad Tech Privacy & Data Management practice, rounded out the panel by discussing the panoply of domestic and international privacy and other laws that govern the use of AI, the need for privacy by design and algorithmic fairness, and the legal implications of generating astronomical amounts of data to create and sustain innovations in medical AI. Shelton Leipzig observed that the 2.5 quintillion bytes of data being generated globally every day require particular attention from companies in the health and AI space. The issues of data privacy and data stewardship go directly to brand, value and trust. From a practical standpoint, healthcare AI startups have an interest in getting data privacy right, if for no other reason than to strengthen their prospects of being acquired and to avoid price reductions, or lost opportunities altogether, caused by overlooking this issue.
The current healthcare regulatory landscape, although advancing rapidly, is extremely fragmented and complicated, with varying privacy regulations across the United States and globally complicating the landscape even more. The panelists agreed that data privacy is not an issue adjacent to medical AI and ML use but rather a core ideal that is critical to the development of the healthcare sector. The speed at which medical AI is advancing puts the healthcare industry ahead of the curve, and in many instances ahead of the regulatory and statutory frameworks that seek to govern it and ensure its safety and fairness. This means that the industry itself has great potential to take the lead in creating ethical frameworks and serving as a positive influence on future federal, and even global, regulations. With the general understanding that most governments have a keen interest in protecting citizens while also aiming to encourage innovation, the panelists noted that companies can stay ahead of the curve and mitigate the inevitable weight of federal regulations by contemplating issues of data privacy, security, bias and algorithmic fairness in the development and design phases of medical AI, rather than retroactively addressing them when seeking regulatory clearance or capital infusions.
Panel 2: Applying Medical AI
The second panel, moderated by Perkins Coie Corporate & Securities Partner Jill Louis, examined practical applications of medical AI in the healthcare setting. The panel focused on assessing whether standardization of medical AI is possible. On this point, panelist Dr. Herbert Zeh, professor and chair of the department of surgery, UT Southwestern Medical Center, noted the practical use of medical AI in two key ways: (1) algorithms that aid in the facilitation of practice (e.g., a fully robotic bowel reconstruction surgery), and (2) “over the shoulder” technology that practitioners use to supplement or augment their practice. Dr. Zeh noted that the question for medical AI should center not on concrete standardization but on examining thresholds that are acceptable for different outcomes based on the given inputs. Building on Dr. Zeh’s observations, panelist Dean Harvey, partner at Perkins Coie and co-chair of Perkins Coie’s Artificial Intelligence, Machine Learning & Robotics practice, noted that because there is no prevalent standardization of medical AI yet, principles of explainability, transparency and accountability are critical to gaining patient trust and to moving medical AI forward. With respect to the legal landscape of medical AI, Harvey stated that legal practitioners advising in the medical AI space are charged with guiding clients in the successful discharge of their continued duty of competence regardless of automation, which means that lawyers must understand the actual technology in addition to the laws and regulations in the medical AI space.
The second panel also discussed the brittleness of medical AI models, specifically with respect to retraining models while simultaneously trying to eliminate or minimize bias or noise in outcomes. On this point, panelist Dr. Teodor Grantcharov—professor of surgery at the University of Toronto, Keenan chair in surgery at St. Michael’s Hospital in Toronto and the inaugural director of the International Centre for Surgical Safety—noted how shifting data types and contexts can impact models to the extent that models may not perform to expected standards. To account for that, modelers need to be constantly aware of and monitor model outputs to ensure that models will perform as projected based on the inputs. Panelist Dr. Steve Miff, president and CEO of the Parkland Center for Clinical Innovation, rounded out this point by noting that modeling is a journey. With every application and evolution of data, modelers need to analyze and retrain models with clinical usefulness in mind and to be extremely cautious of filtering or screening data in ways that may introduce unanticipated bias even when the aim is to do good.
Keynote
The keynote address took a deep dive into regulatory trends and issues concerning medical AI. Keynote speaker Bakul Patel, the newly appointed chief digital health officer of global strategy and innovation at the FDA Center for Devices and Radiological Health, highlighted several initiatives that the FDA has introduced to lead on regulating medical AI use in both the United States and in other jurisdictions to create a common vernacular and framework. Patel’s keynote address focused on how the FDA is working with industry to streamline its regulation of AI while maintaining its standards of rigorous review.
One experimental program the FDA has rolled out is the Digital Health Software Precertification Program. The program, currently in its pilot phase, is designed to establish a “regulatory model that will provide more streamlined and efficient regulatory oversight of software-based medical devices developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the U.S. market.”
Patel has prioritized the development of global strategies to advance health equity and digital health. Over the years, Patel has worked to create a common set of definitions for AI and ML in the healthcare regulatory space that minimizes the fundamental disconnect that can occur with varied language, an effort that has resulted in the global ubiquity of the “software as a medical device” vocabulary. Additionally, in October 2021, the FDA, along with Health Canada and the United Kingdom’s Medicines and Healthcare products Regulatory Agency, published Good Machine Learning Practice for Medical Device Development: Guiding Principles. The publication identifies 10 ML practices for the advancement of “high quality artificial intelligence/machine learning enabled medical device development.” The publication sets forth advice for developing AI and ML and signals that regulators across the globe are in active collaboration.
Panel 3: Shaping Medical AI
The third panel, moderated by SMU’s Professor Cortez, further examined regulatory trends in medical AI. Each panelist discussed regulatory priorities and observations in the field. First, panelist Nicholson Price, professor of law at the University of Michigan Law School, noted that as regulations begin to fall into place for medical AI, such regulations should aim to make data sharing easier and more transparent to encourage continued innovation. Panelist I. Glenn Cohen, deputy dean and faculty director of the Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics at Harvard Law School, examined some of the challenges the FDA may face in its review of medical AI, particularly with respect to the dynamic and adaptive nature of AI and ML. For instance, medical AI products have the potential to change more quickly than a standard healthcare product, which prompts the question of how the FDA is accommodating these rapid changes—e.g., how much change is enough to trigger another review of a particular medical AI by the FDA? Panelist Dr. Colleen Flood, a University of Ottawa professor, university research chair in health law and policy and inaugural director of the University of Ottawa Centre for Health Law, Policy and Ethics, touched upon challenges in Canada’s regulatory approach to AI, noting that the multiple regulators and the overlap and gaps in Canada’s regulatory regime present struggles for innovators. Dr. Flood noted that the gaps in Canada’s regulatory regime and the general lack of cohesive regulation could undermine the trust of patients, providers, consumers and the public and thwart innovation in medical AI.
Zubin Khambatta, partner at Perkins Coie, rounded out the panel by emphasizing the importance of regulators incentivizing companies to innovate safe and equitable medical AI, given the potential for medical AI to increase the efficiency and accuracy of diagnosis and treatment while overcoming obstacles to the comprehensive and equitable access to care.
Conclusion
The SMU Science and Technology Law Review and SMU Dedman School of Law’s Tsai Center for Law, Science and Innovation host this annual symposium to discuss “cutting-edge issues in the science and technology field.” This year’s symposium featured a new partner and co-host, Perkins Coie LLP, a leading international law firm known for providing high-value, strategic solutions, particularly in the privacy, artificial intelligence, machine learning, data rights, technology and healthcare fields. By combining resources to present this year’s symposium, SMU and Perkins Coie were able to bring a dynamic, insightful and meaningful conversation regarding the legal and ethical frameworks surrounding medical AI to a wider audience, ensuring that this year’s symposium was a rousing success.
Kiaria Sewell is an associate in the technology transactions and privacy law practice at Perkins Coie in Dallas.
Samantha V. Ettari is a senior counsel at Perkins Coie in Dallas, where she counsels clients on privacy, data security and data management.