AI Regulatory Roundup: U.S. Pushes a UN Measure as Europe’s Rules Take Effect

Safeguards, guidelines, and access all get addressed as policymakers try to catch up with fast-developing technology.

BY ASSOCIATED PRESS

MAR 13, 2024

The United Nations headquarters in New York. Photo: Getty Images

The United States is spearheading the first United Nations resolution on artificial intelligence, aimed at ensuring the new technology is “safe, secure, and trustworthy” and that all countries, especially those in the developing world, have equal access. Also on Wednesday, European Union lawmakers approved the first law governing artificial intelligence, a measure that will likely influence government policies on AI worldwide.

The draft General Assembly resolution aims to close the digital divide between countries and make sure they are all at the table in discussions on AI — and that they have the technology and capabilities to take advantage of its benefits, including detecting diseases, predicting floods, and training the next generation of workers.

The draft recognizes the rapid acceleration of AI development and use, and stresses “the urgency of achieving global consensus on safe, secure, and trustworthy artificial intelligence systems.” It also recognizes that “the governance of artificial intelligence systems is an evolving area” that needs further discussions on possible governance approaches.

U.S. National Security Advisor Jake Sullivan said the United States turned to the General Assembly “to have a truly global conversation on how to manage the implications of the fast-advancing technology of AI.”

The resolution “would represent global support for a baseline set of principles for the development and use of AI and would lay out a path to leverage AI systems for good while managing the risks,” he said in a statement to the Associated Press.

If approved, Sullivan said, “this resolution will be an historic step forward in fostering safe, secure and trustworthy AI worldwide.”

The United States began negotiating with the 193 U.N. member nations about three months ago, spending hundreds of hours in direct talks with individual countries and 42 hours in negotiations, and accepting input from 120 nations, a senior U.S. official said. The resolution went through several drafts, achieved consensus support from all member states this week, and will be formally considered later this month, said the official, who spoke on condition of anonymity because he was not authorized to speak publicly.

Unlike Security Council resolutions, General Assembly resolutions are not legally binding, but they are an important barometer of world opinion.

A key goal, according to the draft resolution, is to use AI to help spur progress toward achieving the U.N.’s badly lagging development goals for 2030, including ending global hunger and poverty, improving health worldwide, ensuring quality secondary education for all children, and achieving gender equality.

The draft resolution encourages all countries, regional and international organizations, technical communities, civil society, the media, academia, research institutions, and individuals “to develop and support regulatory and governance approaches and frameworks” for safe AI systems.

It warns against “improper or malicious design, development, deployment, and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law.”

Lawmakers in the European Union just gave final approval to the world’s first comprehensive AI rules on Wednesday. Countries around the world, including the U.S. and China, or global groupings like the Group of 20 industrialized nations, are also moving to draw up AI regulations.

The U.S. draft calls on the 193 U.N. member states and others to assist developing countries to access the benefits of digital transformation and safe AI systems. It “emphasizes that human rights and fundamental freedoms must be respected, protected, and promoted throughout the life cycle of artificial intelligence systems.”

U.S. Ambassador Linda Thomas-Greenfield recalled President Joe Biden’s address to the General Assembly last year where he said emerging technologies, including AI, hold enormous potential. She said the resolution, which is co-sponsored by dozens of countries, “aims to build international consensus on a shared approach to the design, development, deployment, and use of AI systems,” particularly to support the 2030 U.N. goals.

The resolution responds to “the profound implications of this technology,” Thomas-Greenfield said, and, if adopted, it will be “an historic step forward in fostering safe, secure and trustworthy AI worldwide.”
  

EU rules approved: What you need to know 

European Union lawmakers gave final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year.

Lawmakers in the European Parliament voted overwhelmingly in favor of the Artificial Intelligence Act, five years after regulations were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.

“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.

Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it can’t comply with the AI Act — before backtracking to say there were no plans to leave.

Here’s a look at the world’s first comprehensive set of AI rules:

How does the AI Act work?

Like many EU regulations, the AI Act was initially intended to act as consumer safety legislation, taking a “risk-based approach” to products or services that use artificial intelligence.

The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems are expected to be low-risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.

High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements, like using high-quality data and providing clear information to users.

Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, and some types of predictive policing and emotion recognition systems in schools and workplaces.

Other banned uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.

What about generative AI?

The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning resumes and job applications. The astonishing rise of general-purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.

They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.

Developers of general-purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.

AI-generated deepfake pictures, video, or audio of existing people, places, or events must be labeled as artificially manipulated.

There’s extra scrutiny for the biggest and most powerful AI models that pose “systemic risks,” which include OpenAI’s GPT-4 — its most advanced system — and Google’s Gemini.

The EU says it’s worried that these powerful AI systems could “cause serious accidents or be misused for far-reaching cyberattacks.” They also fear generative AI could spread “harmful biases” across many applications, affecting many people.

Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone’s death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use.

Do Europe’s rules influence the rest of the world?

Brussels first suggested AI regulations in 2019, taking a familiar global role in ratcheting up scrutiny of emerging industries, while other governments scramble to keep up.

In the U.S., President Joe Biden signed a sweeping executive order on AI in October that’s expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation.

Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which apply to text, pictures, audio, video, and other content generated for people inside China.

Other countries, from Brazil to Japan, as well as global groupings like the United Nations and Group of Seven industrialized nations, are moving to draw up AI guardrails.

What happens next?

The AI Act is expected to officially become law by May or June, after a few final formalities, including a blessing from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules enter the lawbooks.

Rules for general-purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.

When it comes to enforcement, each EU country will set up its own AI watchdog, where citizens can file a complaint if they think they’ve been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general-purpose AI systems.

Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7 percent of a company’s global revenue.

This isn’t Brussels’s last word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law. More AI-related legislation could be ahead after summer elections, including in areas like AI in the workplace, which the new law partly covers, he said.
 

Copyright 2024. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
