Balancing Innovation and Regulation: Insights from the EU AI Act
For both individuals and states, artificial intelligence represents a rapidly evolving and often challenging technology that has become ever-present in daily life. On March 13th, 2024, the European Parliament voted in favour of a bill considered the world’s first major legislation on artificial intelligence: the European Union (EU) Artificial Intelligence (AI) Act. Members of the European Parliament (MEPs) passed the bill decisively, with 523 votes in favour, 46 against, and 49 abstentions. The act underscores the countless discussions occurring across the globe as people question where to draw the line when it comes to AI. Artificial intelligence, or AI, as it is more commonly known, is a form of advanced technology used in widely varying contexts, from helping students quickly draft an essay to powering advanced biomechanics programs. The EU AI Act defines an AI system as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
This definition touches on the crucial points of AI’s ability to work autonomously and to adapt and retain knowledge as it continues improving its services. The growth of AI has split public opinion: while some view it as a necessary and impressive development, others see it as an act of hubris, as though humans are flying too close to the sun.
As of now, this act is the first of its kind and represents a turning point in AI-related policy. While certain regions, such as California and Vermont in the United States, have made strides towards legislating AI, none of their efforts is as comprehensive as this. The policy is by far the most expansive legislation of its kind, and its particular significance lies in its citizen-focused provisions, which protect the rights of consumers and the public more generally from the risks that AI may entail.
Examining the bill in more depth, it classifies AI systems into four categories depending on the risk they present: unacceptable risk, high risk, limited risk, and minimal risk. Systems in the unacceptable-risk category, which includes social scoring systems and manipulative AI, are completely prohibited under the act. Most notably, this covers AI systems designed for ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces, a prohibition that extends even to law enforcement agencies, save for a set of narrowly defined exceptions. High-risk systems include those that profile individuals by processing their personal data; providers of these systems are subject to strict rules governing the enforcement and limitation of their technology. These obligations are designed to keep providers accountable precisely because high-risk systems, properly designed, can operate autonomously. By the same token, limited-risk systems and their developers are also under clear guidelines: developers and deployers must ensure that end-users are aware of any AI presence, such as interactions with AI chatbots and deepfakes. The last category, minimal-risk AI systems, is largely unregulated and represents the majority of AI systems users are most familiar with, such as spam filters or video games.
However, not everyone is content with these safeguards on AI. In fact, many in the technology industry call the act a regressive piece of legislation that limits the progress currently being made across Europe. It will no doubt come as a setback to many businesses, which must now reconfigure their infrastructure and plans to align with the provisions laid out in the act, such as its transparency requirements. On the other hand, many are celebrating the security the act will provide, heralding its promotion of fairness, transparency, safety, and privacy.
While the bill has been formally voted on by MEPs, it must still undergo final checks before it is formally enacted. Going forward, the EU AI Act must be published in the Official Journal of the EU, where all EU legislation is recorded; once published, it will become legally binding. The act will become fully applicable two years after it enters into force, with some notable exceptions. These exceptions pertain mostly to high-risk and unacceptable-risk systems: prohibitions on unacceptable-risk systems will apply as soon as six months after the act’s appearance in the Official Journal.
At this point, it is important to consider how this monumental policy might influence AI-related legislation in Canada. As of now, the Canadian AI policy landscape is quite limited, consisting mainly of the proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022 as part of Bill C-27 and not yet passed into law. This legislation is less comprehensive than the EU AI Act, as it pertains mostly to “high-impact” AI systems and therefore covers fewer systems than its European counterpart. In this regard, Canada has fewer protective measures in place for its citizens, insofar as Bill C-27 lacks major restrictions on practices such as biometric identification and social scoring.
It will be interesting to see how the global policy landscape develops as the world adapts to the ongoing progress of artificial intelligence. It is clear that the EU has positioned itself as a leader among states: its act stands as a strong example of placing users’ rights above corporate interests.