Will Harris or Trump Regulate AI?

Regulation will only happen if Congress passes it. That probably takes a crisis.

EXPERT OPINION BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

SEP 27, 2024

Kamala Harris and Donald Trump. Photos: Getty Images

In November, the U.S. will vote for a new president, and either Kamala Harris or Donald Trump will take the oath of office in January 2025. The future of the country and the world will differ significantly depending on who takes that oath.

Rather than speculating about that broader question, I’m going to focus on a much narrower topic: how regulation of generative artificial intelligence might vary depending on the election outcome. That’s because next month I will speak at a Boston University conference on the future of AI regulation, focusing specifically on executive orders concerning AI.

Having recently published Brain Rush: How to Invest and Compete in the Real World of Generative AI, I have given thought to how AI chatbots ought to be regulated. Indeed, I devoted a chapter of Brain Rush to generative AI’s light and dark sides and proposed some principles on which to base the regulation of the technology.

For now, there is very little generative AI regulation in the U.S. At issue following the November election is the fate of an executive order signed by President Joe Biden on October 30, 2023.

Biden’s EO focused most heavily on safeguarding the government’s own use of generative AI “and setting standards that could foster commercial adoption,” according to the Associated Press. While the European Union has passed broad rules on AI, such regulations could exist in the U.S. only if Congress passed them, AP noted.

What will happen to AI regulation under Harris or Trump?

If Harris ascends to the presidency, we have a clearer idea of what values she might bring to regulating generative AI. Two days after Biden signed his EO, she gave a speech expressing a need “to codify protections quickly without stifling innovation,” AP noted.

Harris’s speech centered on protecting individuals from the harms of AI gone rogue. “When a senior is kicked off his health care plan because of a faulty AI algorithm, is that not existential for him?” Harris asked a crowd in London last November, according to AP. “When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her?”

Were Trump to take the oath of office, much less is clear about how he would regulate AI. To be sure, repealing Biden’s EO is a plank in the 2024 Republican Party Platform. At the same time, in his final months in office, Trump signed an EO promoting the use of “trustworthy” AI in the federal government, which remains in effect, AP noted.

A crisis is needed to build support for Congress to pass AI regulation

AI regulation in the U.S. is not robust. Yet that fits the pattern of how regulation happens here. My reading of history is that it takes a crisis to build the political support Congress needs to pass regulation.

America is not good at passing laws proactively to protect the public from harm. In my view, regulation is generally not needed to encourage investors and entrepreneurs to seize new opportunities. However, if the country fears it is falling behind a rival, Congress may pass rules to favor domestic economic actors.

I do not know what kind of generative-AI-caused crisis would be bad enough to rise to the top of Congress’s list of legislative priorities.

However, were such a crisis to emerge, I imagine the companies that provide the AI products and services behind it would lobby to shift responsibility onto consumers. Congress should counteract that effort by ensuring that companies have a financial incentive to make AI safe.

More specifically, as I wrote in Brain Rush, I envision a future need to protect consumers and companies from the following categories of AI-related harm:

  • Misinformation that harms consumers, such as the deepfake example Harris cited
  • Risk to chatbot owners of using copyrighted material to train chatbots without compensating authors
  • Risk of liability due to employees sharing proprietary company information with chatbots such as ChatGPT
  • Risk of reputational damage and possible liability resulting from AI chatbots — say, in customer support — providing incorrect information on which consumers rely to their detriment
  • Risk of massive worker displacement causing harm to employees, their families, and the overall economy

Congress should pass regulations to protect against these dark sides of generative AI. However, such legislation will take a back seat to many other priorities — such as immigration, health care, housing, food prices, student loan repayment, and other issues — that matter more to politicians seeking to get and hold on to power. It will take a crisis clearly caused by rogue AI for Congress to regulate AI.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
