
Tech Giants Seek Light Touch By EU In Codes Underlying AI Act


The world’s biggest tech giants have embarked on a final push to persuade
the European Union to take a light-touch approach to regulating artificial intelligence as they seek to fend off the risk of billions of dollars in fines.

EU lawmakers in May agreed the AI Act, the world’s first comprehensive set of rules governing the technology, following months of intense negotiations between different political groups.

But until the law’s accompanying codes of practice have been finalised, it remains unclear how strictly rules around “general purpose” AI (GPAI) systems, such as OpenAI’s ChatGPT, will be enforced, and how many copyright lawsuits and multi-billion-dollar fines companies may face.

The EU has invited companies, academics, and others to help draft the code of practice, receiving nearly 1,000 applications, an unusually high number, according to a source familiar with the matter who requested anonymity because they were not authorised to speak publicly.

The AI code of practice will not be legally binding when it takes effect late next year, but it will provide firms with a checklist they can use to demonstrate their compliance. A
company claiming to follow the law while ignoring the code could face a legal challenge.

“The code of practice is crucial. If we get it right, we will be able to continue innovating,” said Boniface de Champris, a senior policy manager at trade organisation CCIA Europe, whose members include Amazon, Google, and Meta.

“If it’s too narrow or too specific, that will become very difficult,” he added.

Data Scraping

AI companies such as Stability AI and OpenAI have faced questions over whether using bestselling books or photo archives to train their models without their creators’ permission is a breach of copyright.


Under the AI Act, companies will be obliged to provide “detailed summaries” of the data used to train their models. In theory, a content creator who discovered their work had been used to train an AI model may be able to seek compensation, although this is being tested in the courts.

Some business leaders have said the required summaries should contain only scant detail in order to protect trade secrets, while others say copyright holders have a right to know whether their content has been used without permission.

OpenAI, which has drawn criticism for refusing to answer questions about the data used to train its models, has also applied to join the working groups, according to a person
familiar with the matter, who declined to be named.

Google has also submitted an application, a spokesman told Reuters. Meanwhile, Amazon said it hopes to “contribute our expertise and ensure the code of practice succeeds”.

Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the non-profit organisation behind the Firefox web browser, expressed concern that companies are “going out of their way to avoid transparency”.

“The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box,” he said.

With Reuters inputs