The EU AI Act in conjunction with IDW PS 861


Published on 7 October 2024 | Reading time approx. 3 minutes

 

The EU AI Act is the world's first comprehensive legislation regulating Artificial Intelligence (AI). Its primary objective is to mitigate potential risks to health, safety, and fundamental rights arising from the deployment of AI systems. The Act establishes obligations for providers, operators, importers, and distributors of AI systems to ensure that those systems are trustworthy and reliable. With staggered compliance periods ranging from 6 to 36 months, companies that interact, or may interact, with AI systems should familiarize themselves with the new requirements in a timely manner.




The EU AI Act adopts a risk-based approach, categorizing AI systems into three risk levels: 

 

(1) Unacceptable Risk: AI systems that pose an unacceptable risk are prohibited from use within the EU. This includes, for example, AI systems used for social scoring to evaluate individual behavior or those designed to manipulate human behavior in an impermissible manner. 

 

(2) High Risk: AI systems classified as high-risk encompass applications in areas such as critical infrastructure, healthcare, and finance. These systems must meet strict requirements to be permitted for operation within the EU. 

 

(3) Limited Risk: AI systems with limited risk are subject to less restrictive regulations, primarily involving transparency and information obligations.  
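The three-tier classification above can be sketched as a simple lookup. This is only an illustrative model built from the examples named in this article, not a legal classification tool; the "chatbot" entry is an added assumption of a typical limited-risk use case.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers described above."""
    UNACCEPTABLE = "prohibited in the EU"
    HIGH = "strict requirements apply"
    LIMITED = "transparency and information obligations"

# Illustrative, non-exhaustive mapping based on the examples in this article.
# "chatbot" is an added assumption, not an example from the text.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "behavioral_manipulation": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "healthcare": RiskTier.HIGH,
    "finance": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier recorded for a known example use case."""
    try:
        return EXAMPLE_USE_CASES[use_case]
    except KeyError:
        raise ValueError(f"no example tier recorded for {use_case!r}")

print(classify("social_scoring").value)  # prohibited in the EU
```

In practice, classification under the Act depends on the system's intended purpose and context of use, so any real assessment requires a case-by-case legal analysis rather than a static lookup.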

 

In addition to the risk classification, the EU AI Act imposes transparency obligations: AI-generated or AI-processed content must be explicitly identified as such, enhancing user awareness when interacting with AI systems. Furthermore, the Act specifies additional documentation requirements for providers of General Purpose AI, such as the current classes of Generative AI models.

 

The EU AI Act aims to balance the promotion of innovation with risk prevention. While comprehensive requirements apply to high-risk AI systems, a degree of flexibility is granted in the realm of research and development. The Act thereby establishes a framework for AI regulation, serving as a potential global blueprint, that is intended to ensure the safe and responsible use of AI within the EU.

 

While the EU AI Act sets out regulatory requirements and frameworks for AI systems, the IDW Auditing Standard: Audit of AI Systems (IDW PS 861) (03.2023) provides criteria for the uniform auditing of AI systems by auditors. The standard thus addresses the growing demand for standardized AI audits. Reports resulting from an audit under IDW PS 861 are based on the International Standard on Assurance Engagements 3000 (Revised), Assurance Engagements other than Audits or Reviews of Historical Financial Information. An audit according to IDW PS 861 requires the assessment of criteria such as ethical and legal requirements, traceability, IT security, and the performance of AI systems. During such a standardized AI audit, the auditor focuses on the description of the AI system together with the accompanying statement provided by the company's legal representatives.

 

Companies are advised to consider the impact of the EU AI Act on their business at an early stage and to assess the benefits of a uniform audit of their AI systems in accordance with IDW PS 861.

Contact

Frank Reutter, Partner, +49 221 949 909 316
Tassilo Föhr, Manager, +49 731 96260 14
