
Understanding the EU's AI Act

The ECCT's Technology committee hosted a lunch on the topic "Understanding the European Union's AI Act from a Taiwan perspective" featuring guest speakers Tseng Ken-Ying, Partner at Lee and Li, Attorneys-at-Law and Co-Chair of the Technology committee, and Isabel Hou, Secretary General of the Taiwan AI Academy and Steering Committee Member of the AI Product and System Evaluation Center under Taiwan's Ministry of Digital Affairs. At the event, Tseng Ken-Ying provided a comprehensive overview of the recently approved EU AI Act and its legal framework, delving into its key provisions and explaining its requirements for the development and application of AI technologies. Isabel Hou then shared her insights on the EU AI Act from a Taiwan perspective and on the AI governance framework for Taiwan.

In her presentation, Tseng Ken-Ying noted that the final version of the AI Act has not yet been published but is expected to be within the next few months, after which it will be implemented in phases. The objective of the act is to improve the functioning of the EU's internal market and promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety and fundamental rights, including democracy and the rule of law. A key term here is "human-centric": the act aims to protect individuals, ensuring that their fundamental rights, health and safety are safeguarded against the potential negative impacts of AI systems. In addition, the act aims to protect the environment and promote innovation. In essence, the act seeks to establish a comprehensive regulatory framework for AI, balancing innovation and risk management within the European Union.

The act is, in essence, a special type of product safety law. It covers AI systems, including general-purpose AI (GP-AI) models. These models are typically trained on large amounts of data through various methods, such as self-supervised, unsupervised or reinforcement learning. Although AI models are essential components of AI systems, they do not constitute AI systems on their own; they require the addition of further components, such as a user interface, to become AI systems. The act provides specific rules for GP-AI models that pose systemic risks, which also apply when these models are integrated into, or form part of, an AI system. It excludes certain areas, such as national security, military and defence, scientific research, and personal, non-professional AI usage. It applies to a range of entities within the EU, including providers, deployers, manufacturers, authorised representatives, importers, distributors, and affected persons, with a focus on managing unacceptable-risk, high-risk, limited-risk and minimal-risk AI.

The act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, capable of generating outputs that influence physical or virtual environments. It outlines the rules for GP-AI models and those posing systemic risks, emphasising the need for compliance with risk-based approaches and specific obligations for high-risk AI. The act categorises AI systems based on their level of risk. Unacceptable-risk AI (such as social scoring, live surveillance, emotion recognition systems in work or educational settings, and predictive policing) is banned. High-risk AI requires conformity assessments (this includes AI used in machinery, toys, medical devices, educational or vocational training, employment, access to services, vehicle safety, law enforcement, biometrics and profiling). Limited-risk AI is subject only to transparency obligations, while minimal-risk AI faces no obligations.

Compliance for high-risk AI involves a risk management system, data governance, technical documentation, record-keeping, and the implementation of human oversight. Transparency and labelling requirements are also emphasised for limited-risk AI, such as notifying users of AI interactions and marking AI-generated content. Tseng noted that EU regulators do not appear to be too concerned about deepfakes and chatbots, since these are categorised as "limited risk" and only require labelling, a requirement that also applies to AI-generated content (including audio, video, images and text).

Moreover, the act introduces a systemic risk threshold for GP-AI models, necessitating notification to the European Commission and evaluations to identify and mitigate systemic risks. It also requires measures to ensure adequate cybersecurity protection. In future, it may be necessary for companies to hire third-party experts to evaluate their AI for compliance purposes. Non-compliant AI may be subject to penalties, although warnings will be issued in advance and time will be granted to make revisions before fines are imposed.

Besides guarding against harm, the AI Act also aims to encourage innovation through EU harmonised standards and a regulatory sandbox.

The act is set to become effective in phases. The ban on prohibited AI will come into effect in 2024. The GP-AI provisions are scheduled to become applicable in the summer of 2025. Most of the rules on high-risk AI under Annex III will become applicable by the summer of 2026, while the rules on high-risk AI under Annex I will become applicable in 2027.

Both speakers noted that some items in the AI Act remain ambiguous and subject to interpretation, which means that Taiwanese firms that do business in Europe should monitor developments carefully, particularly future clarifications and guidance from the EU on ambiguous provisions. Isabel Hou cited examples in which a product or service would appear to be subject to the AI Act but could also be regarded as exempt from the rules. For example, if an entity places AI systems on the EU market, or if the AI's output is used in the EU, the entity would be considered a "provider" for the purposes of the legislation and subject to the relevant provisions. However, the act also states that AI components provided "under free and open source licences" are excluded from the AI Act, leading to some ambiguity.