Artificial Intelligence Briefing

Data Security Program, Medicare Advantage, Colorado Insurance Requirements & More

April 30, 2025

This month’s briefing covers DOJ’s approach to implementing its final rule on what’s now called the Data Security Program. The Trump administration also published a final rule on Medicare Advantage (MA), choosing not to define “artificial intelligence” or “automated system.” Meanwhile, Colorado commenced rulemaking proceedings to revise and expand existing governance and risk management requirements for insurers using external consumer data and information sources. Read on for a deeper dive into these and more key updates.

Regulatory, Legislative & Litigation Developments

DOJ “Data Security Program” Goes Into Effect

On April 11, 2025, the National Security Division (NSD) of the Department of Justice announced its approach to implementing and enforcing the DOJ’s recent final rule on “Preventing Access to U.S. Sensitive Personal Data and Government-Related Data by Countries of Concern or Covered Persons” (28 C.F.R. Part 202), which NSD now calls the “Data Security Program” or “DSP.” The rule, which took effect on April 8, 2025, prohibits or restricts transactions that would allow access to government-related data or bulk U.S. sensitive personal data by any “covered person” or “country of concern.” The DSP includes both civil and criminal enforcement mechanisms and imposes substantial compliance obligations related to exports, cybersecurity and sensitive data handling. The rule also identifies access by entities in countries of concern to sensitive data used to train artificial intelligence-based applications, whether that access is deliberate or inadvertent due to poor controls, as a type of transaction that is either prohibited entirely or subject to significant cybersecurity, reporting and other requirements.

National Security Commission Proposes Strategies to Counter China’s Biotechnology Surge, Emphasizes AI Integration and Workforce Development

On April 8, 2025, the National Security Commission on Emerging Biotechnology, chaired by Senator Todd Young (R-IN), released recommendations to counter China’s emerging biotechnology surge. The commission highlights biotechnology as crucial for national security, economic power and global influence, noting that its intersection with AI accelerates its impact across various industries. The commission’s strategic recommendations include prioritizing biotechnology nationally, mobilizing the private sector and the strength of U.S. allies, maximizing biotechnology for defense, out-innovating competitors, and building a robust biotechnology workforce. The recommendations emphasize the need for congressional action on AI standards set by the National Institute of Standards and Technology to ensure high-quality, standardized biological data for AI applications, underscoring AI’s role in advancing biotechnology research and innovation.

CRS Report Details GenAI Issues

The Congressional Research Service on April 2, 2025, released a report on generative artificial intelligence (GenAI), detailing its development, capabilities and potential issues for Congress to consider. The report highlights technical advances that have significantly improved AI performance. It also addresses concerns related to GenAI, such as misinformation, biases, security risks and impacts on the labor market. The report outlines several options for Congress to consider in managing these risks, including regulatory oversight and whether to require independent testing of GenAI models.

Trump Administration Issues Final Rule on Medicare Advantage, Excludes AI and DEI Provisions From Biden Era

The Trump administration published its first final rule addressing Medicare Advantage (MA), which now covers about 33 million Medicare enrollees. CMS opted not to finalize several proposals from the November 2024 proposed rule developed by the Biden administration. Among these was a proposal to clarify what is and is not “artificial intelligence” or an “automated system” by defining those terms in regulation and specifying that such tools, if utilized, must be used in a manner that preserves equitable access to MA services (proposed 42 CFR 422.112(a)(8)). The proposed definitions were based on the Biden administration’s Blueprint for an AI Bill of Rights and the National Artificial Intelligence Initiative Act of 2020. Perhaps unsurprisingly, given the rescission of the Biden administration’s executive order on AI and its review and reframing of DEI and health equity initiatives, CMS under the Trump administration announced it would not finalize these provisions.

Despite this move, stakeholders should be aware that CMS acknowledges “broad interest” in the regulation of AI and may engage in further rulemaking in this area. Moreover, nothing in the rule changes previously finalized regulations at 42 CFR 422.101 addressing the use of prior authorization and the application of Medicare coverage criteria to basic benefits, and the Trump administration has not rescinded February 2024 guidance stating that an algorithm or artificial intelligence tool must comply with all applicable rules governing how MA organizations make coverage determinations.

Trump Prioritizes AI in Innovation Agenda Letter to OSTP Director Kratsios

President Trump outlined his vision for a “Golden Age of American Innovation” in a recent letter to Office of Science and Technology Policy (OSTP) Director Michael Kratsios, emphasizing artificial intelligence as a cornerstone technology for maintaining U.S. technological leadership. The president’s letter builds upon his first-term initiatives, including the American Artificial Intelligence Initiative, and challenges Kratsios to secure America’s position as the “unrivaled world leader in critical and emerging technologies” through accelerated R&D, reduced regulatory barriers, strengthened supply chains and increased private sector investment. The letter echoes principles from the Vision for American Science & Technology (VAST) Task Force report, which identified AI among five crucial domains “that will have pivotal roles in defining the future.” The VAST Task Force, comprising leaders from the public, private and social sectors, offers recommendations in its report for unleashing America’s science and technology enterprise, building the workforce, driving breakthroughs, and strengthening national security.

FDA Issues Draft Guidance on Use of AI in Drug and Biological Product Development

In January 2025, the U.S. Food and Drug Administration (FDA) issued draft guidance for public comment, titled “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products — Guidance for Industry and Other Interested Parties.” The document was jointly authored by the FDA centers for drugs, biologics, medical devices and veterinary medicine, along with the FDA Oncology Center of Excellence, Office of Combination Products, and Office of Inspections and Investigations. The guidance recommends that sponsors first establish the questions to be addressed through AI tools, then define the context of use and assess the risk of the selected AI model. It provides detailed recommendations for establishing the credibility of the AI model, from developing and executing a plan and documenting the results to assessing the adequacy of the AI model. More than 100 comments on the draft guidance have been submitted by organizations and individuals, ranging from the highly technical to the highly philosophical. In parallel with the publication of the draft guidance, FDA also updated its previously published discussion papers.

Separately, in a memo dated April 3, 2025, the White House encouraged all federal agencies to remove barriers to innovation and to accelerate AI adoption.

EIOPA Hearing on AI Opinion

On April 8, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) held a public hearing to discuss its Opinion on Artificial Intelligence Governance and Risk Management, currently out for consultation through May 12. EIOPA clarified during the hearing that the purpose of the opinion is to provide guidance on the use of AI systems in insurance that are not prohibited practices or considered high-risk under the EU AI Act. The presentation covered expectations relating to fairness and ethics; data governance; documentation and recordkeeping; transparency and explainability; human oversight; and accuracy, robustness and cybersecurity.

Colorado Division of Insurance Launches Formal Rulemaking on Proposed Amended ECDIS Governance and Risk Management Regulation

On April 22, 2025, the Colorado Division of Insurance (Division) commenced formal rulemaking proceedings on its draft proposed amended Regulation 10-1-1 concerning governance and risk management requirements for certain insurers authorized to do business in Colorado that use external consumer data and information sources (ECDIS), including algorithms and predictive models using ECDIS. The formal rulemaking follows the Division’s release of an earlier draft and receipt of comments from interested parties in December 2024. If adopted, the amended Regulation will revise existing ECDIS governance and risk management obligations applicable to life insurers and expand the reach of those obligations to private passenger automobile insurers and health benefit plan insurers. The Division has scheduled a virtual rulemaking hearing for June 2, 2025, at which oral comments on the draft may be presented; written comments may be submitted to the Division until June 5.

Texas Attorney Faces Recommended $15,000 Sanction and Disciplinary Referral for AI-Generated Fake Case Citations

In yet another cautionary tale of unchecked AI use, a Texas attorney faces a recommended $15,000 sanction after submitting multiple briefs in an Indiana ERISA case containing nonexistent case citations generated by AI. U.S. Magistrate Judge Mark J. Dinsmore issued the recommendation after the attorney, a solo practitioner, admitted he used generative AI but failed to verify the citations, claiming he was unaware AI could “hallucinate” case law. The matter escalated on March 6, 2025, when the court referred the lawyer to the Indiana Attorney Disciplinary Commission for potential violations of Professional Conduct Rules 1.1 (competence), 3.1 (meritorious claims) and 3.3 (candor toward the tribunal). In his March 7 response, the lawyer argued that objections were “moot” due to the “irreversible harm” his professional reputation had already suffered, while emphasizing that his actions stemmed from “oversight and a lack of understanding of the technology’s limitations, rather than any intentional misconduct.” Despite these arguments, Judge Dinsmore maintained that with readily available research tools, “there is simply no reason for an attorney to fail to fulfill this obligation.”

What We’re Reading

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
