Politics are best discussed in private, and I’m not in the business of swaying opinions; however, with a new President-elect chosen and months until he takes office, it’s worth stepping back to understand his perspective on AI regulation. In the long run, it affects us all. So, if you’re looking for an unbiased take on the future of AI under the incoming administration, this article is for you. With the 2024 U.S. election cycle concluded, Donald Trump is set to return to office, potentially reshaping the AI regulatory landscape. For the AI industry, which has been operating under a blend of voluntary guidelines and light-touch federal policies, this shift introduces both uncertainty and opportunity. This article explores what Trump’s administration might mean for AI, from federal oversight to state-level initiatives and international trade implications.
On this page
Biden’s AI Policy: The Legacy and Its Discontents
Trump’s Proposed AI Direction: Free Speech and Minimal Oversight
State-Level Innovation: Filling the Federal Void?
Global Trade and AI: Implications of a Trump Administration
What AI Leaders Should Watch for in Trump’s Administration
Preparing for an Uncertain Future
Stay Informed, Stay Ahead
In October 2023, President Biden established the first comprehensive U.S. policy on AI through an executive order (AI EO) that offered guidelines on ethical AI use, especially concerning IP protection, security, and bias mitigation. Biden’s executive order aimed to balance innovation with safeguards for public and national interests, and it led to the creation of the U.S. AI Safety Institute (AISI) under the National Institute of Standards and Technology (NIST) to assess AI risks, particularly in defense and public safety.
While widely regarded as a step forward, the AI EO has faced criticism from Trump allies who argue its reporting requirements could impede innovation. Critics in Congress, such as Representative Nancy Mace (R-SC), contend that disclosing training data and security protocols stifles AI development and risks exposing proprietary information. Similarly, NIST’s role in promoting “safe” and “fair” AI has sparked concerns, with Senator Ted Cruz (R-TX) calling it an encroachment on free-market principles. Trump’s campaign promises suggest a potential rollback of these measures, but what would replace them remains to be seen.
The Trump campaign signaled support for AI policies emphasizing free speech and market-driven development. This approach would likely reduce regulatory pressures by prioritizing AI’s physical safety risks over broader social concerns, leaving companies more freedom in model development while potentially narrowing NIST’s oversight responsibilities.
Some expect Trump’s administration to dissolve or scale back the AISI, which monitors AI risks and collaborates with industry and research partners to set standards. Without federal mandates, Trump’s approach may lean heavily on industry self-regulation, relying on private sector expertise to establish best practices and safeguard innovation. Additionally, Trump’s platform includes a focus on AI R&D, drawing from earlier executive orders issued in his last term, which promoted workforce development and AI “rooted in American values.”
Should Trump relax federal oversight, states may step in to address public safety and ethical concerns around AI. California, Colorado, and Tennessee are leading state-level initiatives with legislation focused on AI’s impact on consumer rights, risk-based AI deployment, and ethical data use. Notably, California recently introduced AI safety laws requiring greater transparency in AI model development, suggesting that states, particularly Democratic-led ones, may expand their own regulatory frameworks in the absence of stringent federal mandates.
This growing patchwork of state regulations could result in fragmented compliance requirements, making it more challenging for companies operating across state lines. Yet, the trend toward state-led AI legislation also offers a testing ground for effective policies and could serve as a blueprint for potential future federal standards.
Trump’s potential AI regulatory approach extends beyond U.S. borders, with a focus on protectionism likely to shape the international AI market. Hamid Ekbia, a professor of public affairs at Syracuse University, suggests Trump’s trade policies may restrict access to AI technologies, particularly for China. Such restrictions would tighten AI export controls, possibly limiting international AI collaboration and supply chains and creating challenges for companies sourcing AI technologies from abroad.
However, this approach could spur innovation domestically by directing resources toward building U.S.-based AI infrastructure. For industry leaders, the question becomes how to balance compliance with new trade restrictions and ensure supply chain resilience in a volatile geopolitical landscape.
While the exact shape of Trump’s AI policy remains unclear, industry stakeholders can prepare by tracking a few key areas: potential changes to federal oversight and the AISI, the spread of state-level AI legislation, and new trade restrictions affecting AI supply chains.
For industry leaders, the road ahead requires adaptability and a readiness to pivot as policies evolve. Trump’s approach signals a probable shift to industry-led governance, amplifying the importance of ethics, transparency, and robust internal controls in the responsible deployment of AI. Yet, even amid regulatory uncertainty, the AI industry remains a powerful catalyst for transformation. Leaders who champion responsible AI practices and prioritize both security and innovation will drive the next wave of progress, ultimately shaping the future of AI on a global scale.
This is a pivotal moment for AI. As regulations evolve, so must our strategies. At my company, we specialize in helping businesses navigate complex AI landscapes, ensuring compliance, security, and innovation at every turn. Explore our insights and solutions tailored for today’s AI challenges. valere.io/AI