The Spanish Presidency of the Council of the EU has asked for feedback on a number of less controversial points after negotiations on the Artificial Intelligence Act with the European Parliament hit a wall over foundation models.
The Artificial Intelligence Act is a legislative proposal to regulate artificial intelligence based on its capacity to cause harm. The file is currently in the final phase of the legislative process, the so-called trilogues, in which the EU Council, Parliament and Commission negotiate the final text.
On Friday (10 November), Euractiv reported that representatives of the European Parliament walked out of a technical meeting after the Spanish presidency, under pressure from France and Germany, tried to backtrack on the agreed approach to regulating foundation models.
EU countries had until Monday to submit written comments ahead of a debate on the issue at a meeting of the Telecommunications Working Party, the Council’s technical body, on Friday (17 November). Some options are expected to be distributed before then.
Euractiv understands that the presidency is mediating directly with the countries concerned to find a solution acceptable to the European Parliament. Meanwhile, the impasse disrupts an already busy agenda, as a chapter on foundation models was due to be agreed at a technical meeting on Thursday.
At the same time, the Spanish presidency circulated a consultation document covering some of the European Parliament's less politically charged proposals, to gather feedback from member states and gauge their flexibility.
The deadline for submitting written comments on these topics was Tuesday (November 14).
Responsibilities along the AI value chain
The most significant aspect of the consultation document relates to responsibilities along the artificial intelligence value chain.
Interestingly, the document was shared before France and Germany came out vehemently against any obligations on foundation models. However, the presidency's approach seems to be to keep this part separate from the chapter on foundation models.
The Commission’s original proposal detailed the obligations of distributors, importers and users in a separate article, which the Council deleted in favor of a provision setting out the conditions under which these operators would be subject to providers’ obligations.
Lawmakers kept the original article and expanded it with obligations meant to ensure that downstream economic operators that adapt a general-purpose AI system such as ChatGPT can meet the requirements of the AI Act.
The presidency noted that this approach goes beyond the Council’s version, but could be important to ensure that providers of high-risk AI systems can meet the legal requirements.
Spain proposed several options in between. One would be to accept Parliament’s version but introduce references to the interplay with relevant EU harmonization legislation from the Council’s mandate.
Another option involves deleting the obligations for foundation models, as these would apply under the new foundation model approach anyway.
Finally, the presidency proposed deleting the obligation for the Commission to develop model contractual terms, as well as the references to trade secrets, from the MEPs’ text.
Unfair contract terms
The relationship between general-purpose AI providers and downstream economic operators is further shaped by provisions proposed by Parliament to prevent the former from imposing unfair contractual terms on the latter.
“Although the intention is to avoid abuses by large companies against smaller ones, the article seems to be outside the scope of the Regulation. This statement is also based on initial feedback from the delegations,” the paper continues.
Here the options are only to accept or reject the proposal.
Assessment of the impact on fundamental rights
Left-wing members of the European Parliament proposed an obligation for users of high-risk artificial intelligence systems to carry out an impact assessment on fundamental rights. Spain agreed to a softened version of this proposal, but only for public bodies.
However, the question of whether private companies should also be covered remains open, with some EU countries reportedly preferring this wider scope. In return, the European Parliament might agree to drop the requirement that users carry out a public consultation with potentially affected groups.
General principles

The European Parliament’s mandate introduces a set of general principles that all AI operators should make every effort to follow when developing and using AI systems. These principles would also be incorporated into the requirements for technical standards.
“The presidency believes that member states may have concerns about this article because its provisions could undermine the risk-based approach and place an unnecessary burden on the standardization process,” the document said.
Madrid has also expressed skepticism about the measure, arguing that some of these principles are already covered by existing legislation and that it is not clear why they should apply to every artificial intelligence system.
Options include accepting or rejecting these principles in their entirety, agreeing to include them only in the preamble of the law, or considering them to guide the development of a code of conduct. Separately, the EU countries are asked whether it is acceptable for the principles to be included in the requirements for standardization.
AI literacy

MEPs presented wording requiring the EU and national institutions to promote measures to develop sufficient AI literacy, while also obliging providers and deployers of AI systems to ensure that their staff have sufficient knowledge about them.
Again, in addition to accepting or rejecting this proposal, the presidency suggested moving it to the preamble of the text, which is not legally binding. The paper also asks whether it would be acceptable to move these AI literacy provisions to other parts of the text, such as those on transparency, human oversight or codes of conduct.
(Edited by Nathalie Weatherald)