Trump’s EO Seeks to Eliminate ‘Unconstitutional’ Regulations; EU Commission Scraps AI Liability Directive; Lawsuit Alleges Racial Discrimination by Meta’s Ad Algorithms for Higher Education; and More
Artificial Intelligence Briefing
This month’s briefing covers President Trump’s executive orders, as well as the EU Commission’s decision not to proceed with the AI Liability Directive. Meanwhile, an advocacy organization’s lawsuit alleges Meta’s ad algorithms discriminate against Black users seeking higher education. Read on for a deeper dive into these and other key updates.
Regulatory, Legislative & Litigation Developments
Trump Signs EO to Eliminate “Unconstitutional” Regulations, Potentially Impacting AI Innovation
On February 19, 2025, President Trump signed an executive order (the Order) instructing federal agencies to identify regulations within each agency’s jurisdiction that are “unconstitutional” or “unlawful” delegations of legislative power. The Order provides a laundry list of the types of regulations that should be targeted. While AI is not specifically mentioned, the Order identifies at least three classes of regulations that could directly impact AI regulation:
- “regulations that impose significant costs upon private parties that are not outweighed by public benefits;”
- “regulations that harm the national interest by significantly and unjustifiably impeding technological innovation …;”
- “regulations that impose undue burdens on small business and impede private enterprise and entrepreneurship.”
Within 60 days of the Order, agencies must provide a list of such regulations to the Office of Management and Budget for rescission or modification. Further, agencies are instructed to “preserve their limited enforcement resources” by de-prioritizing enforcement of regulations that are “based on anything other than the best reading of a statute.”
Trump Administration Announces “America First Investment Policy” to Boost Economic Security With Foreign Capital
On February 21, 2025, the Trump administration published the America First Investment Policy, aimed at strengthening U.S. economic security through foreign capital. The policy seeks to preserve an open investment environment and to ensure that emerging technologies, such as artificial intelligence, are “built, created, and grown” in the United States. To that end, the policy calls for expedited review by the Committee on Foreign Investment in the United States of investments from foreign allies, while investments from or to the People’s Republic of China and other foreign adversaries will face restrictions and heightened scrutiny.
EU Commission Scraps the AI Liability Directive
On February 12, 2025, the EU Commission decided not to proceed with the AI Liability Directive, which was originally intended to establish harmonized rules for fault-based claims involving AI systems — including a rebuttable presumption of causation between the fault of the AI deployer and the output produced by the AI system. The Commission cited a lack of consensus and a need for regulatory simplification. Industry groups welcomed the decision, arguing that the EU can remain competitive only by ensuring its digital and tech framework does not become an unworkable patchwork. Critics, however, warned that the move will weaken consumer protections. The decision aligns with the EU’s broader shift, emphasized at the Paris AI Summit, toward balancing AI innovation with streamlined regulation.
Lawsuit Alleges Meta’s Ad Algorithms Discriminate Against Black Users Seeking Higher Education
In Equal Rights Center v. Meta Platforms, Inc., filed February 11, 2025, in the Superior Court of the District of Columbia, a nonprofit advocacy organization alleges that Meta violates the D.C. Human Rights Act by disproportionately steering advertisements for for-profit colleges to Black users of Facebook and Instagram while directing public university ads more frequently to white users. The complaint, supported by academic research that includes a 2024 study, contends that Meta’s advertising algorithms perpetuate educational inequality by denying Black users equal access to information about public higher education opportunities. According to the plaintiffs, Meta acknowledged in a January 2023 report that its personalization systems could lead to unfair outcomes if they incorrectly predict interest in certain ad types based on demographic groups. The lawsuit claims these practices constitute “digital redlining” that reinforces historical barriers to economic mobility for Black students, and it seeks declaratory and injunctive relief, civil penalties, damages and attorneys’ fees.
Texas Legislature Introduces Four New Bills Addressing AI Use in Education and Health Care
The current Texas legislative session has seen the introduction of four new AI-related bills. SB 382 would prohibit the use of AI for certain classroom instruction. SB 815 would prevent AI-based algorithms from serving as the sole determinant in decisions to deny, delay, or modify health care services based on medical necessity or appropriateness. Similarly, HB 2922 would prohibit the use of AI-based algorithms as the sole basis for medical-necessity decisions in utilization reviews. Additionally, SB 1411 would add Subchapter O to the Texas Insurance Code, prohibiting health benefit plan issuers from “discriminating based on race, color, national origin, gender, age, vaccination status, or disability through the use of clinical AI-based algorithms in the issuer’s decision.” There is some debate over whether existing state and federal laws already address the types of discrimination covered by SB 1411, and it remains unclear whether the bill would apply to administrators or third parties making benefit decisions.
Congresswoman Waters Requests GAO Study on AI’s Impact on Insurance
On February 10, 2025, Congresswoman Maxine Waters (D-CA), ranking member of the House Financial Services Committee, sent a letter to the Comptroller General of the United States, who heads the Government Accountability Office (GAO), requesting a study of AI’s impact on the insurance industry. Among other issues, Waters asked GAO to study: (1) how property/casualty and life insurers use AI in their underwriting and claims adjustment processes; (2) how the use of AI affects the pricing and availability of insurance coverage; and (3) how state regulators oversee insurers’ use of AI and what role federal AI-related regulations play.
New Jersey Adopts NAIC AI Bulletin
New Jersey has adopted the National Association of Insurance Commissioners’ (NAIC) AI model bulletin, setting forth the state’s expectations for insurers that use artificial intelligence systems. So far, 23 jurisdictions have adopted the bulletin.
EIOPA AI Consultation
On February 10, 2025, the European Insurance and Occupational Pensions Authority (EIOPA) released for public consultation its opinion on AI governance and risk management in insurance. The opinion is addressed to insurance supervisors and covers AI use cases that do not involve life and health insurance underwriting or pricing. (Those use cases are already governed by the EU’s AI Act, so there was no need for EIOPA to weigh in.) For insurers that are up to speed on the NAIC’s AI model bulletin, Colorado’s ECDIS governance regulation and the New York circular letter, much of the EIOPA opinion will sound familiar. If not, maybe this weekend would be a good time to catch up. Comments on the opinion are due May 12, 2025.
Microsoft Disrupts Generative AI Abuse by Storm-2139 Cybercrime Network
Microsoft’s Digital Crimes Unit recently published a blog post describing its efforts to disrupt an international cybercrime network responsible for hacking and misusing certain generative AI services. The group, known as Storm-2139, exploited compromised credentials to access the AI services, altered the services’ capabilities, and resold access to malicious actors, who used the services to generate harmful and illicit content, including nonconsensual and sexually explicit content. Microsoft filed a civil lawsuit against the then-unnamed “John Doe” actors in December 2024 and, through subsequent investigation, has identified several group members by their true identities. Microsoft plans to refer these bad actors to U.S. and foreign law enforcement.
Upcoming Events
- March 27, 2025, Webinar: AI Beyond Scratching the Surface — Innovations and Insights for Stakeholders
- April 10, 2025, Program: The Colorado AI Act & Other AI Legislation: What Legal Teams Need to Know
In Case You Missed It
- Virginia Legislature Passes High-Risk AI Regulation Bill
- Permission to Record: Considerations for AI Meeting Assistants
What We’re Reading