US Plans More Curbs On Core AI Tech Exports To Russia, China

The American intelligence community, think tanks and academics are increasingly concerned about risks posed by foreign bad actors gaining access to advanced AI capabilities.

WASHINGTON: The Biden administration is contemplating guardrails around the core software of artificial intelligence (AI) systems like ChatGPT to prevent Russia and China from exploiting them.

The Commerce Department is considering fresh restrictions on the export of proprietary or closed source AI models, three people familiar with the matter said.

This would complement a series of measures put in place over the last two years to block the export of sophisticated AI chips to China, in an effort to slow Beijing’s ability to use the technology for military purposes.

The Chinese Embassy described the move as a “typical act of economic coercion and unilateral bullying, which China firmly opposes,” adding that it would take “necessary measures” to protect its interests.

Currently, nothing stops U.S. AI giants like Microsoft-backed OpenAI, Alphabet’s Google DeepMind and rival Anthropic from selling this technology to almost anyone in the world.

Government and private sector researchers worry U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons.

One of the sources said any new export control would likely target Russia, China, North Korea and Iran.

Microsoft said in a February report that it had tracked hacking groups affiliated with the Chinese and North Korean governments as well as Russian military intelligence and Iran’s Revolutionary Guard, as they tried to perfect their hacking campaigns using large language models.

To develop an export control on AI models, the sources said, the U.S. may turn to a threshold contained in an AI executive order issued last October that is based on the amount of computing power it takes to train a model.

When that level is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.

That computing power threshold could become the basis for determining what AI models would be subject to export restrictions, according to two U.S. officials and another source briefed on the discussions.

If used, it would likely only restrict the export of models that have yet to be released, since none are thought to have reached the threshold yet, though Google’s Gemini Ultra is seen as being close, according to EpochAI, a research institute tracking AI trends.

The agency is far from finalizing a rule proposal, the sources stressed. But the fact that such a move is under consideration shows the U.S. government is seeking to close gaps in its effort to thwart Beijing’s AI ambitions, despite serious challenges to imposing a muscular regulatory regime on the fast-evolving technology.

Researchers at Gryphon Scientific and the RAND Corporation noted that advanced AI models can provide information that could help create biological weapons.

The Department of Homeland Security said cyber actors would likely use AI to “develop new tools” to “enable larger-scale, faster, efficient, and more evasive cyber attacks” in its 2024 homeland threat assessment.

To address these concerns, the U.S. has taken measures to stem the flow to China of American AI chips and the tools to make them. It has also proposed a rule requiring U.S. cloud companies to tell the government when foreign customers use their services to train powerful AI models that could be used for cyber attacks.

But so far it hasn’t addressed the AI models themselves.
(REUTERS)