AI After the Election: Potential Differences Between a Harris Administration and a Second Trump Administration for Artificial Intelligence Regulation
At a Glance
- Over the four years of a Harris administration, we would likely see a push for strong government oversight of AI.
- Former President Trump, during a political rally, promised to “cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.”
To help health and life science companies remain current on artificial intelligence-related legal and regulatory expectations, we have published many updates on the United States and European Union AI initiatives. Even when expressed by federal agencies charged with overseeing industry utilization of AI in different areas — e.g., labor and employment, intellectual property, drug development and marketing, and insurance — the shape and size of these expectations are not yet concrete or clear. Today we add another dimension to the picture by exploring what might happen to U.S. AI regulation based on the outcome of the November 2024 presidential election.
Harris’s First-Term AI Regulation
If Vice President Kamala Harris wins the election, she would presumably continue the momentum of the Biden administration's current AI initiatives. As vice president, Harris has played a significant role in the administration's AI efforts and has even been called the "AI czar." In her tenure as vice president, she has met with CEOs from Microsoft, Google and other big tech companies on advancing responsible AI innovation; been involved with the October 2023 AI executive order; represented the U.S. at the 2023 Global Summit on AI Safety; and pushed Congress to adopt AI legislation. In a White House statement, Harris remarked, "I believe we have a moral, ethical, and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits." Over the four years of a Harris administration, we would likely see a push for strong government oversight of AI and federal regulations governing both the private and public sectors' use of AI.
Trump’s Second-Term AI Regulation
If former President Donald Trump wins the election, what about the U.S.’s current AI oversight scheme might change? Would it stop or reverse course?
For this speculation, there are few relevant primary-source documents. We'll start with what Trump says in his platform, then explore the steps he took during his first term, and then ask whether and how recent Supreme Court and federal district court rulings could play a part. Finally, we'll offer some prognostication about the future of federal AI regulation in the United States.
Republican Platform
In chapter 3 of the 2024 Republican Party platform (“Build the Greatest Economy in History”), Trump lists a focus on “Champion[ing] Innovation” and articulates his position on artificial intelligence as follows:
We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.
Moreover, as reported by the Washington Examiner on December 2, 2023, Trump alleged during a political rally that Homeland Security Secretary Alejandro Mayorkas had used AI to censor political speech in the United States, and promised: “When I’m reelected, I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one.”
Of note, it was reported that Mark Zuckerberg, CEO of Meta, the parent company of Facebook, Instagram and WhatsApp, wrote a letter dated August 26, 2024, to Rep. Jim Jordan, the Republican chair of the House Judiciary Committee, communicating his grievances about pressure from senior Biden administration officials to moderate content, including humor and satire, on Meta's social media platforms.
First-Term Trump Administration Actions on AI
In February 2019, then-President Trump issued his own AI executive order, “Maintaining American Leadership in Artificial Intelligence.” Noting AI’s “strategic importance [ ] to the Nation’s future economy and security,” the Trump administration summarized the EO as identifying:
… five key lines of effort, including increasing AI research investment, unleashing Federal AI computing and data resources, setting AI technical standards, building America’s AI workforce, and engaging with international allies. These lines of effort were codified into law as part of the National AI Initiative Act of 2020.
In historic actions, the Administration committed to doubling AI research investment, established the first-ever national AI research institutes, issued a plan for AI technical standards, released the world’s first AI regulatory guidance, forged new international AI alliances, and established guidance for Federal use of AI.
In President Joe Biden's October 2023 Executive Order, equity of outcome is a central theme. By contrast, Trump's AI EO focused on strengthening the economy and national security, together with the following proviso:
The ongoing adoption and acceptance of AI will depend significantly on public trust. Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law and the goals of Executive Order 13859.
Since he left office, Trump has been vocal and vociferous in his objections to diversity, equity and inclusion (DEI) mandates in a number of contexts. Reinstituting his previous EO at the same time he revokes Biden’s EO could fit the bill for a quick and effective solution.
Murthy v. Missouri: SCOTUS Rules Plaintiffs Including “Healthcare Activist” Lack Standing
The recent U.S. Supreme Court decision in Murthy v. Missouri, 603 U.S. ___ (2024), is also worth noting. In Murthy, a preliminary injunction action was brought by several plaintiffs (two states, three medical doctors, a news website proprietor and a "healthcare activist" (majority opinion, p. 3)) against federal government actors (White House, Surgeon General, Centers for Disease Control and Prevention (CDC), FBI, and Cybersecurity and Infrastructure Security Agency (CISA) officials). The plaintiffs claimed that these federal government actors engaged in censorship by using public statements and threats of regulatory action to induce social media platforms to suppress conservative-leaning speech related to COVID-19 and the 2020 election. The First Amendment protects Americans' speech against government interference, not against the decisions of private parties, and the platforms are private corporations, not government actors. Accordingly, the plaintiffs needed to prove that the actions of government officials transformed the platforms' content-moderation decisions into state action.
In the majority opinion, written by Trump appointee Justice Amy Coney Barrett (joined by Chief Justice Roberts and Justices Kagan, Kavanaugh, Sotomayor and Jackson), the Court found that the alleged government coercion of the social media platforms was not sufficiently traceable to government actors and that the plaintiffs had not demonstrated causation or a likelihood of future harm; thus, the plaintiffs lacked standing. In his dissenting opinion (joined by Justices Thomas and Gorsuch), Justice Samuel Alito vehemently disagreed with the majority, providing great detail about the Biden administration's specific and frequent communications to Facebook alleging lax guardrails and failures to prevent misinformation from being shared, the tacit and explicit threats of retaliation in those communications, and the resulting evidence that Facebook did what the administration asked it to do.
Prognostication for a Second-Term Trump Administration
Why is the Murthy opinion important to the future of federal oversight of AI? Because one might speculate that the former president, unhappy with the Supreme Court's ruling, and now supported by Zuckerberg's public letter, would use a reclaimed position of authority to take actions designed to revive and fortify an open marketplace of ideas, including by preventing the government from wielding coercive authority over social media platforms. AI may be the natural home base for such an effort. A new Trump administration might couch it as supporting the independence of virtual soapboxes, thereby acknowledging the new world of communication — i.e., that most Americans now receive the news of the day on web-based platforms like Facebook, X and TikTok, facilitated about 90% of the time by one search engine, Google.
Noteworthy, as well, is the August 5, 2024, federal court ruling against Google in the Department of Justice's antitrust suit, United States v. Google. D.C. District Judge Amit Mehta, in a 277-page opinion, held that the company illegally monopolized the general search and general search text advertising markets: "Google is a monopolist, and it has acted as one to maintain its monopoly" (p. 4). The opinion dives deep into AI topics, including the use of algorithms to perfect or tailor search results in ways that could amount to anticompetitive action.
Given the concentration of the search engine and social media platform markets, Zuckerberg's public account of government pressure to restrict speech on Meta's social media platforms, and now the Google ruling, one might envision a new Trump administration consulting the tax, finance and legal experts on his team to explore the government's legal options in various realms. It is unclear how Trump's residual interest in Truth Social, the social media platform owned by Trump Media & Technology Group (assuming divestiture), might figure in.