The EU Artificial Intelligence Act
The AI Act brings about substantial new obligations for both developers and users of artificial intelligence. While the analogy is not precise, the Act can be seen as a type of ‘product safety’ legislation. As such, it leaves a wide range of topics to be dealt with in other EU and/or national laws, or by the parties to a specific transaction.
Although we can now rest assured that the AI Act will be adopted by the European Union, there is still work to be done. Most importantly, the Act needs to go through a final lawyer-linguist check and be formally endorsed by the Council. Once all this is done, it will enter into force on the 20th day following its publication in the Official Journal of the European Union.
As with the GDPR, the AI Act will not become applicable immediately. Instead, organisations are generally given 24 months to prepare for it. The AI Act is therefore likely to start applying in spring 2026, with some exceptions, most notably:
- the prohibitions on AI practices causing ‘unacceptable risk’ will apply six months after the date of entry into force; and
- the obligations concerning general-purpose AI governance will apply 12 months after the date of entry into force (illustrated in the sketch below).
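To make the staggered timeline more concrete, the following is a minimal sketch in Python that converts the deadlines above into calendar dates. The entry-into-force date used here is purely hypothetical, since the actual date depends on publication in the Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 to keep the date valid)."""
    month_index = d.month - 1 + months
    return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, assumed here for illustration only.
entry_into_force = date(2024, 6, 1)

# Months after entry into force at which each set of rules starts to apply.
milestones = {
    "Prohibitions on 'unacceptable risk' AI practices": 6,
    "General-purpose AI governance obligations": 12,
    "General application of the AI Act": 24,
}

for rules, months in milestones.items():
    print(f"{rules}: applicable from {add_months(entry_into_force, months)}")
```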
Zooming in on the AI Act itself, here are some thoughts on what companies should be focusing on in 2024.
Understand what is covered by the definitions of ‘AI system’ and ‘general-purpose AI model’ and create an AI Governance Strategy
One of the legislators’ most challenging tasks was finding consensus on what types of systems should be regulated as ‘artificial intelligence’. Although the definitions are broad, the Act’s scope of application is also subject to certain limitations.
Map your use of AI and the risks involved with it
The adopted text takes a risk-based approach to AI. The higher the risk, the more stringent the obligations. The AI Act even ended up banning the use of certain AI systems due to the unacceptable risk they are seen to pose to health, safety and fundamental rights. While low-risk systems are subject to rather lenient obligations that revolve primarily around transparency, high-risk systems must also comply with numerous other provisions regarding (among other things):
- risk management;
- impact assessments;
- data governance requirements;
- technical documentation and record-keeping;
- cyber security;
- human oversight; and
- system robustness and accuracy.
To comply with these new obligations, organisations need to assess their current level of compliance and perform a gap analysis to define a roadmap for meeting the new requirements. To do so, you must first know the risks involved in the AI you are using (or developing).
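As a starting point for such a mapping exercise, a simple inventory of AI systems, each tagged with a risk tier and its open compliance items, can make the gap analysis tangible. The sketch below is illustrative only: the risk tiers loosely mirror the Act’s risk-based approach, and the system names, classifications and open items are hypothetical, since classification under the Act requires a case-by-case legal assessment.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers loosely mirroring the AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited (transparency)"
    MINIMAL = "minimal"

@dataclass
class AISystemEntry:
    """One row of a hypothetical internal AI inventory used for gap analysis."""
    name: str
    purpose: str
    risk_tier: RiskTier
    open_items: list = field(default_factory=list)  # obligations not yet met

# Illustrative entries only; real classification requires legal analysis.
inventory = [
    AISystemEntry("CV screening tool", "Pre-selecting job applicants", RiskTier.HIGH,
                  open_items=["human oversight procedure", "technical documentation"]),
    AISystemEntry("Customer chatbot", "First-line customer support", RiskTier.LIMITED,
                  open_items=["disclosure that users are interacting with AI"]),
]

# Prioritise the roadmap by addressing high-risk systems first.
for entry in sorted(inventory, key=lambda e: e.risk_tier is RiskTier.HIGH, reverse=True):
    print(f"{entry.name} ({entry.risk_tier.value}): open items -> "
          f"{', '.join(entry.open_items) or 'none'}")
```

Sorting the inventory by risk tier gives a natural order of priority for the compliance roadmap described above.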
Prepare to communicate transparently
As mentioned above, regardless of what type of AI system you are using, you will most likely be subject to transparency obligations regarding your AI use. Therefore, prepare to communicate openly with your employees, customers and other stakeholders about your use of AI technologies and the effects such use has on those individuals.
Integrate AI Act compliance into your existing workflows
You know what they say: ‘Don’t fix it if it’s not broken’. Instead of creating completely new compliance processes for AI, it is often more efficient to adapt your existing ones. Data protection, procurement and data security processes in particular often form a sturdy foundation for the use of AI systems as well. When choosing the ‘building blocks’ of your internal AI compliance work, look especially to your policies, processes, training material, training events, and monitoring and supervision activities to ensure they all take AI into account.
Reduce, reuse, recycle: ensure you take good care of your data assets
AI is often only as good as the data it relies on. To ensure that your AI tools can be used to their full potential and that you stay compliant not only with the AI Act itself but also with all other applicable laws, it is essential to take an early look at what data you have at your disposal, evaluate its quality and content, and ensure that you have sufficient rights to use it.
Update your contracts and templates
Collaboration with your AI partners is key to ensuring compliance with the AI Act. In many situations, such collaboration is best supported by a clear contractual framework that sets out unambiguous obligations for each party.
Takeaway for Employers
The AI Act creates new business opportunities and calls for innovative new services in an EU-wide harmonised market. Are you up for the challenge?