Faegre Drinker Biddle & Reath LLP, a Delaware limited liability partnership
July 12, 2024

The EU Artificial Intelligence Act Comes Into Force on 1 August 2024

At a Glance

  • Various obligations will come into force in phases — between 6 and 36 months after 1 August 2024. While the timetable for compliance may seem relatively long, developing an AI compliance program is a time-consuming process, and businesses will need to start as soon as possible.
  • Many of the obligations under the AI Act are drafted in relatively high-level terms, and detailed guidance is not likely to emerge for several months. In the meantime, businesses should press on with their compliance programs by taking a risk-based approach and benchmarking against emerging industry norms and the general practice of industry peers.

The European Union Artificial Intelligence Act (the AI Act) was published on 12 July 2024 in the Official Journal of the European Union and will enter into force on 1 August 2024. The AI Act is one of the most keenly anticipated pieces of legislation globally and has been subject to intense negotiation and revision since its inception in 2021. We considered the provisions of the AI Act in detail in our previous article: EU Artificial Intelligence Act — Final Form Legislation Endorsed by European Parliament.

Publication in the Official Journal means that the dates for compliance are now confirmed. The AI Act will become fully applicable on 2 August 2026. However, some notable provisions will apply before that date:

  • 2 February 2025 — The ban on prohibited AI systems that are deemed to pose an unacceptable risk will apply. While many of the prohibitions are aimed largely at governmental entities (e.g., social scoring systems), others will be relevant to businesses — for example, biometric categorization systems, facial recognition databases and emotion recognition systems.
  • 2 August 2025 — Provisions regulating general-purpose AI systems will apply.
  • 2 August 2026 — AI systems specifically designated as high-risk by the European Commission and listed in Annex III will be subject to the AI Act. These include AI systems used for recruiting and managing staff, biometrics, and access to services (including credit scoring and eligibility for emergency health care).
  • 2 August 2027 — High-risk systems categorised pursuant to Annex I (AI systems that are subject to existing EU health and safety legislation) will be subject to the AI Act. This includes a wide range of products — for example, medical devices, machinery, radio equipment, toys, and motor and agricultural vehicles.

The EU AI Act is by no means the only piece of legislation regulating AI — more than 20 countries globally have implemented some form of AI regulations or guidelines. However, given the importance of the EU market to many international businesses, the relative size of the EU market in relation to the global economy, and the significant extra-territorial effect of the EU AI Act, it is likely to be a key benchmark (if not the default standard) for global businesses.

While the timetable for compliance may seem relatively long (particularly given the extremely fast pace at which technology is developing), businesses that have not already started to implement their global AI compliance programs will need to make this a matter of priority.

Key steps include:

  1. Implementing a governance framework, sponsored at board level and drawing on specialists from multiple business functions, including IT, data privacy and ethics, business operations, R&D, HR, procurement, legal, and diversity professionals.
  2. Assessing the current use of AI within the business, producing an inventory of AI deployed internally and provided by third-party vendors.
  3. Identifying risk management principles and procedures, reflecting the overall risk tolerance of the business.
  4. Drafting appropriate policies and procedures relating to AI including procurement and vendor management policies, and updating related policies including data privacy, IT and security policies.
  5. Assessing the geographical use of AI within the business and determining whether each business function should comply with the strictest global standards across the board or apply potentially more permissive rules in certain jurisdictions in which it operates. A one-size-fits-all approach will not necessarily be appropriate, and there may be business, regulatory and cultural factors that require different regional approaches.
  6. Implementing the policies and procedures (with suitable regional variations) in all aspects of the business and supply chains.
  7. Training staff and updating senior leadership on the benefits and risks of using AI.

Many of the obligations under the AI Act are drafted in relatively high-level terms, and detailed guidance is not likely to emerge for several months. In the meantime, businesses are pressing on with their compliance programs and benchmarking against the general practice of industry peers. We will continue to provide practical insights in our clients’ key market sectors over the coming months.


The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
