Where are we in terms of regulating Artificial Intelligence?

This year has witnessed both the accelerated development of Artificial Intelligence (AI) and extensive discussion of its real and potential benefits and risks. In previous installments I have written about some of them, so I now think it appropriate to reflect on where we stand in terms of AI regulation.

Broadly speaking, there are three major models of AI regulation, which largely define how all other countries adapt: the European, the American, and the Chinese. Given their legal systems, most countries in Latin America will follow the European or American model, or a compromise between the two. What do these models establish?

The most extensive and ambitious in scope is, without a doubt, the European one. At the beginning of this month, agreement was announced on the European Union's Artificial Intelligence Act, the first of its kind in the world, whose dual purpose is to reduce the risks of AI use and to promote Europe's own research, development, and innovation in the field. Three aspects of this Act should be highlighted.

First, it establishes mandatory rules on ethics and transparency with the aim of protecting human rights. Among other things, technology companies will be required to inform users when they are interacting with AI; to label AI-generated content; to carry out risk assessments; and to allow users to file complaints about the operation of these systems and to receive an explanation from the companies. Fines for non-compliance will range between 1.5% and 7% of a company's global revenue, depending on the violation and the company's size.

Second, the Act regulates uses, not technologies, and establishes four levels of risk. a) Unacceptable, prohibited outright: for example, emotion-recognition systems in educational and workplace settings, social-credit systems, or predictive policing systems (except when they are strictly supervised by people and have a specific and limited use). b) High, which may operate under strict human supervision and subject to risk and transparency testing: for example, systems for selecting personnel, systems applied to the administration of justice, or critical-infrastructure systems, among others. There was great debate about this level of risk because the Act states that companies themselves will determine which of their products could entail it. c) Limited, which requires generating reports, notifying users of their interactions with AI, and making databases transparent; chatbots would fall here. d) Minimal, practically exempt from the Act, such as video games and spam filters.

Third, the Act establishes an AI Office to supervise the proper implementation of the regulation, harmonize standards, suggest adjustments to the Act itself, handle legal matters and, with the help of experts, detect risks and better classify AI models. Although the Act will not come into force until 2025 – and companies would still have an adjustment period of between six months and two years – the European Union will ask companies to comply with the rules voluntarily and gradually in the meantime (without penalties for now).

In contrast, the United States has taken a less detailed and less strict position, leaving companies much greater freedom to self-regulate, under the assumption that any limitation on technological development could leave the country behind in its competition with China. Since 2022, however, the government – and recently Congress – have tried to promote greater discussion, especially on the issue of safety in the use of AI. Accordingly, at the end of October the government published its Executive Order on AI, with the intention of establishing certain principles of transparency, generating standards (especially for labeling AI-generated content) and, of course, not slowing development and innovation. The two most important aspects of this Executive Order are, first, that the Department of Commerce will establish a series of general criteria for labeling content, which companies will be able to adopt as voluntary standards for their own products; and second, that companies must share safety-test results with the government when there are risks to national security. In the opinion of many experts, the two greatest weaknesses of this Executive Order are that none of it is mandatory and that, since it is an administrative act, the next government can revoke it.

In Latin America, some countries are already beginning to look to both models for guidance. In our country, although some legislators have begun to think about the issue, the truth is that we are still far from having a serious debate about it. Unfortunately, the electoral calendar and the imminent end of the current legislature make it practically impossible to expect a substantive discussion before the fall of 2024, if even then, on an issue of great complexity, but one that, like few others, will have a growing impact on people's lives.
