The European Parliament has given final approval to the most comprehensive AI law yet seen in the world, with its terms now slated to gradually take effect over the next several years.
Formal adoption by the Council, expected to be a formality at this point, is not slated to take place until April 2024. From there, individual components of the regulation go into effect anywhere from six to 36 months after that date.
New AI law establishes risk, safety and transparency requirements
Though the rollout is gradual, the new AI law will eventually touch just about every organization developing, selling or using an AI system in the bloc.
A ban on AI systems that pose an “unacceptable risk” is the first provision to go into force, six months from the anticipated April 2024 start date. That will be followed by codes of practice at nine months and by transparency requirements for general-purpose AI systems at 12 months. “High-risk” systems are being given an extended compliance period of up to 36 months from the start date.
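For teams planning compliance work, those staggered deadlines reduce to simple date arithmetic. Below is a minimal Python sketch assuming the anticipated April 2024 start date; the milestone labels and month offsets come from this article’s summary, not from the regulation’s own text:

```python
from datetime import date

# Assumed entry-into-force date; the binding date is whatever is
# published in the EU's Official Journal, not this placeholder.
START = date(2024, 4, 1)

# Month offsets for each milestone, per the timeline above.
MILESTONES = {
    "Ban on 'unacceptable risk' systems": 6,
    "Codes of practice": 9,
    "Transparency rules for general-purpose AI": 12,
    "'High-risk' system compliance": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamped to the 1st of the month."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

for label, offset in MILESTONES.items():
    print(f"{add_months(START, offset).isoformat()}  {label}")
```

Run as written, the sketch puts the ban in force around October 2024 and full high-risk compliance around April 2027.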
The current crop of chatbots, such as ChatGPT, will not meet that “high-risk” threshold, but they will soon face new requirements all the same. Generative AI models will have to disclose that content was made by AI, apply safeguards to prevent users from creating illegal content, and publish summaries of the data used to train the model. However, GPT-4 (the model behind ChatGPT) and some others will be labeled as “high impact” and will be subject to additional “thorough” evaluations to determine what level of systemic risk they pose. These models must also report any “serious” incidents to the European Commission.
All of the components of the new AI law track back to a “risk level” assigned to each model; for the most part, generative AI models will slot into a risk category based on their function. At the far end of this scale are “unacceptable risk” systems that legislators see as posing a direct threat to people: those that manipulate the behavior of vulnerable groups, assign people a “social score” based on qualities of their personal identity, categorize them based on biometric information, or engage in real-time biometric identification (such as facial recognition systems). These are generally banned, with certain limited exceptions for law enforcement purposes.
Systems may remain legal but be considered “high risk” if they fall into one of two categories: those used in products that fall under existing EU safety legislation (such as vehicles and medical devices), and those in certain fields that require registration in an EU database. Examples of the latter include critical infrastructure, essential private services, public services and benefits, employment and worker management, border control, the administration of justice, and law enforcement. Under the new AI law these systems must complete an initial assessment before being brought to market and will be subject to periodic reviews thereafter.
New transparency and disclosure requirements coming for most organizations
The full terms of the new AI law will be in legal force by mid-2026. Each EU member state will have its own AI watchdog specializing in these issues, collectively headed up by a Brussels-based AI Office that will focus on regulating large general-purpose systems, like ChatGPT, that span the bloc.
Though the reporting and oversight mechanisms are similar to those of the General Data Protection Regulation (GDPR), the penalties for violating the AI law are potentially steeper: organizations face maximum fines of up to 35 million euros or 7% of annual global turnover, whichever is higher.
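As a back-of-the-envelope illustration of how that ceiling scales with company size, here is a minimal sketch; the function name is hypothetical, and the “whichever is higher” rule applies to the most serious violations, with lower tiers carrying smaller caps:

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious AI Act violations:
    the greater of a 35 million euro flat cap or 7% of annual
    global turnover, per the figures cited above."""
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# For a firm with 2 billion euros in turnover, the 7% prong (140M) dominates.
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR
```

For any company with more than 500 million euros in annual turnover, the percentage prong exceeds the flat cap.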
The AI law may also resemble the GDPR in that other countries may take it as a baseline for developing their own regulations. Its impact will likely reach beyond the EU’s borders as well, as international companies roll out changes tailored to the bloc’s regulations and apply them globally.
The two other nations likely to have the strongest impact on global AI development, the US and China, are both still in the development stages of their own AI laws. China introduced initial regulations specifically for generative AI systems in August 2023, but is still working on a more comprehensive AI law that is expected to surface (at least in draft form) sometime this year. The Biden administration issued a blueprint for the future of AI regulation around the same time in 2023, but thus far individual states have been taking the lead in terms of concrete action.
Peter Sandkuijl, VP, EMEA Engineering and Evangelist at Check Point Software Technologies, notes that the full scope of the EU law will likely take years beyond the initial rollout to develop: “The initial attention will fall on the hefty fines imposed, however that should not be the main focus; as laws are accepted, they will still be tested and tried in courts of law, setting precedents for future offenders. We need to understand that this will take time to materialize, which may actually be more helpful, though not an end goal. The rapid speed of AI adoption demonstrates that legislation alone cannot keep pace and the technology is so powerful that it can and may gravely affect industries, economies and governments. My hope for the EU AI law is that it will serve as a catalyst for broader societal discussions, prompting stakeholders to consider not only what the technology can achieve but also what the effects may be. By establishing clear guidelines and fostering ongoing dialogue, it paves the way for a future where AI serves as a force more for good, underpinned by ethical considerations and societal consensus.”