March 05, 2024

Life Sciences AI Summit: Industry Leaders Highlight Key Opportunities and Urge Caution as AI’s Prominence Grows

At a Glance

  • The inaugural ACI Life Sciences AI Summit focused on the impact and unique challenges AI brings to the life sciences industry.
  • A theme of uncertainty and caution emerged from the conference, as presenters and attendees repeatedly noted the general absence of concrete AI-related laws and regulations in this space.

The American Conference Institute hosted its inaugural Life Sciences AI Summit in New York on February 21 and 22. The two-day conference brought together key stakeholders, including regulatory players such as the FDA and the USPTO, in-house counsel from medical device and pharmaceutical companies, outside counsel, and companies developing health care-related AI products and services. The conference offered a comprehensive agenda of panel discussions focused on the impact of AI and the unique challenges it poses to the life sciences industry.

Given the ever-growing public, governmental and regulatory interest in AI, a theme of uncertainty and caution emerged from the conference, as presenters and attendees repeatedly noted the general absence of concrete AI-related laws and regulations in this space. Below are some key highlights from the program.

  • No AI Rulebook: Leaders from across the life sciences industry acknowledged that there is no single, unified rulebook for AI, but rather a patchwork of existing regulations, evolving guidelines and proposed legislation, both domestically and abroad. Regulatory bodies, such as the FDA, are primarily concerned with ensuring the safety and effectiveness of products, which includes setting baseline standards for the responsible use of data and health information. As industry stakeholders navigate evolving regulatory requirements and compliance, life sciences companies should consider that the U.S. seems poised to follow a trajectory similar to that of privacy regulation a few years ago, when the EU took the lead by enacting the GDPR and the U.S. followed. Companies will have to choose whether to adhere globally to the most rigorous regimes or to tailor their compliance programs to the jurisdictions in which they operate. For many, including those in the life sciences space, adhering to the most rigorous standards available will be the likely path forward.
  • U.S. and EU Legislation Updates: A panel featuring in-house counsel at Amazon Web Services discussed parallels and disparities between legislative developments in the U.S. and EU. With no clear path to federal legislation in the U.S. at this point, regulation of AI has been driven primarily by the sweeping Executive Order issued by the White House in October 2023 and by budding legislation in individual states; that state legislation is largely expected to be consumer-minded in left-leaning states and business-friendly in right-leaning ones. The EU, in contrast, has approved the landmark EU AI Act, with full enactment expected later this year. That legislation, which applies to AI systems placed on the EU market whether developed in the EU or not, employs risk-based categories as well as outright prohibitions against certain types of AI. Companies devising compliance programs will need to be well-versed in all potential regulatory schemes affecting their business, products and services.
  • Spotlight on Health Care: Two panel discussions centered on the use of AI in health care. The role of digital health technologies is evolving as AI-powered remote patient monitoring improves wearables, predictive analytics and personalized medicine. In addition to its diagnostic capabilities in digital health and software-based medicine, AI is also enhancing clinical decision-making by assisting health care providers with treatment decisions. The FDA is closely monitoring the ethical and regulatory implications of AI’s role in patient data management and clinical decision support, particularly the implications for patient care and data privacy.
  • Generative AI Market Expansion: In-house counsel from Dyno Therapeutics and Pfizer discussed the vast potential, and risks, of using generative AI in the life sciences space. Though the use of generative AI in this space is largely untested and not fully realized, there are opportunities for the technology to be employed across the life cycle of a medical device or pharmaceutical, from drug design and clinical trials to manufacturing, marketing and optimizing the product supply chain. With the ability to create new data models or generate biological structures, generative AI goes beyond analyzing existing data and requires careful navigation of related legal, ethical and business implications. Given the rapid evolution of generative AI innovations, the life sciences industry is seeing market expansion opportunities in key areas including drug discovery and design, medical imaging, synthetic biology and disease modeling.
  • Litigation Forecast: In-house counsel from Medtronic highlighted some of the potential litigation challenges facing life sciences companies in a continually evolving legal landscape. With AI driving advancements in drug discovery and medical devices, life sciences companies should expect potential legal liabilities to rise in the following areas: innovation and IP; quality control and assurance; accountability and transparency; data integrity and privacy; and product liability. At this point, there are perhaps more questions about these liabilities than there are answers, and some of the fundamental questions the industry can expect to encounter as AI-related litigation increases include: Who is responsible and may be sued, and how will fault be attributed and apportioned across the life cycle of a particular AI technology? What claims may be brought, and what defenses may be available? Will litigation be limited to existing causes of action, or will the industry see an explosion in novel theories of law and liability? In the face of these uncertainties, life sciences companies should be employing a holistic and integrated risk management structure focused on how the organization uses AI (both internally and externally), how its third-party vendors use AI (and how to build adequate controls and risk management into those contracts), and how to develop AI-related principles and policies that align with the company’s business processes.
  • Key IP Complexities and Considerations: A panel that included speakers from Novartis and BIO explored the relationship between AI and intellectual property in the life sciences industry, specifically as it relates to inventorship, patentability, IP rights and ownership. The discussion highlighted distinct complexities of IP challenges related to drug development, trade secrets, data privacy, data generation, and trademark and copyright infringement, as well as ethical considerations and more. Industry players should continue to focus on assessing IP protections and on mitigating risks and liabilities related to AI-driven innovations.

With panelists and discussions raising just as many thought-provoking questions as the agenda attempted to answer, Faegre Drinker’s health and life sciences-focused AI team will be keeping a close eye on developments in this space throughout 2024 and beyond. To receive more information and insights from our AI-X team, please visit the Faegre Drinker Subscription Center to join our mailing list.