The AI Act in Italy: impacts and opportunities. How to get ready?


published on 11 October 2024 | reading time approx. 6 minutes


Artificial intelligence ('AI') is a rapidly evolving group of technologies that uses the inputs it receives to create outcomes such as predictions, content, recommendations or decisions, thus contributing to the achievement of a wide range of economic, environmental and societal benefits across the spectrum of industrial and social activities.


In other words, it is the ability of a machine to display human capabilities such as reasoning, learning, planning and creativity. Or, better still, it is a set of sophisticated technologies that allows systems to 'read' their environment through data, relate to the inputs they perceive, solve problems and act towards a specific goal, based on specific algorithms.

AI has been recently regulated at the European level by EU Regulation No. 2024/1689 of 13 June 2024 (‘AI Act’), which came into force on 1 August 2024 and constitutes the world's first regulation concerning the responsible and safe use of Artificial Intelligence.

With this legislation, the European Union ('EU') set out to regulate this complex category of technologies in order to improve the functioning of the internal market by establishing a uniform legal framework for the development, placing on the market, distribution and use of AI systems; to promote the deployment of anthropocentric, trustworthy and welfare-enhancing AI; and to ensure a high level of protection of the health, environment, safety and fundamental rights of individuals and businesses, as well as of democracy and the rule of law.

This Regulation, on the one hand, aims to promote innovation and make the EU a leader in the adoption of trustworthy AI; on the other hand, it tackles AI-related risks and impacts.

Through the AI Act, the EU is in fact creating the first comprehensive global legal framework to reap the economic benefits and mitigate the risks associated with AI, including generative AI (GenAI), i.e. general-purpose AI systems capable of generating audio, image, video or synthetic text content.

As a matter of fact, artificial intelligence undoubtedly offers great opportunities, for instance in terms of predictivity (the improvement of forecasts), optimisation (of operations, productivity and resource allocation), tailoring (the customisation of digital solutions available to individuals and organisations), and competitiveness (key competitive advantages for businesses and organisations). These opportunities could prove successful in fields such as healthcare, agriculture, food security, education and training, media, sports, culture, infrastructure management, transport and logistics, public services, security, justice, energy and resource efficiency, environmental monitoring, biodiversity and ecosystem conservation and restoration, and climate change mitigation and adaptation.

Nevertheless, AI also presents new challenges for our organisations and societies to be dealt with.

The increasing use of AI systems may harm public interests and fundamental rights protected by Union law. Such harm can be both material and immaterial, including physical, psychological, social or economic harm, and may take the form of, for instance, misuse, liability issues, product damage, and threats to democracy, security and employment.

The EU recommends addressing these aspects by adopting a risk-based approach, i.e. a risk management system that starts with the mapping of AI models and systems; continues with their classification on the basis of risk (unacceptable, high, limited, minimal and systemic) and with the assessment of those risks; and concludes with regular remediation and post-market monitoring actions.

Noteworthy among these initiatives is the AI Pact, which allows companies that voluntarily adhere to it to implement the obligations of the Regulation in advance, even before they become formally applicable. Among the many advantages associated with early adherence to the Pact are an increase in the company's visibility and credibility, as well as the safeguards put in place to demonstrate the reliability of its AI systems.

But when does the European Union propose to achieve these important goals? Through a step-by-step approach.

In particular, the applicability of the obligations of the Regulation follows a specific timeline dictated by the legislation itself:
  • rules banning prohibited AI practices, together with AI-literacy obligations, will apply as of 2 February 2025;
  • the rules on general-purpose AI models (so-called GPAI models) will apply from 2 August 2025;
  • all other obligations under the Regulation, such as those for high-risk AI systems not already subject to product safety legislation (e.g. AI used for recruitment or credit assessment), will apply from 2 August 2026;
  • finally, obligations for AI systems embedded in products already subject to conformity assessment will apply from 2 August 2027 (e.g., in medical devices or industrial machinery).

In the event of non-compliance, penalties can be very severe, up to 7 per cent of an organisation's global annual turnover.

The regulation of AI does not end here.

In this regard, the Italian legislator has also reacted: the Italian government has welcomed the introduction of a common framework of rules on AI, stressing the importance that the new regulation protect fundamental rights and impose obligations and sanctions proportionate to the risk. In line with the European legislation, the Council of Ministers has therefore recently approved a draft law that will establish the guiding principles for the use of artificial intelligence in Italy, on which further details are forthcoming.

In order to support the Government in defining national legislation and strategies related to this technology, a Committee of experts prepared the Italian Strategy for Artificial Intelligence 2024-2026 on 22 July 2024. The Strategy is a crucial step for Italy, which aims to take a leading role in AI and the technology transition, also thanks to the important role it is playing as chair of the G7. The document reflects the government's commitment to creating an environment in which AI can develop in a safe, ethical and inclusive manner, maximising the benefits and minimising the potential adverse effects.

In this regulatory context, what should companies and organisations do to comply with the AI Act's obligations if they have developed, placed on the market, implemented, deployed or imported these technologies (or intend to do so)?

Companies and organisations will, necessarily and as a first step, have to adopt a risk management system aimed at first assessing the level of risk associated with the specific AI system and/or model adopted, and then estimating the necessary remediation and monitoring actions.

This will occur by designing the first phase of the Methodology, i.e. the Risk Assessment tools, aimed at: system risk classification; robustness assessment; the Fundamental Rights Impact Assessment (FRIA); ethical risk assessment (environment and ESG); as well as an interdisciplinary impact assessment, which may involve a whole range of areas of interest (e.g. intellectual property, labour law, healthcare, criminal law and product liability, compliance 231, food & pharma, tax, and more).

Second, by planning the launch of the next step of the Methodology, i.e. Remediation, aimed at identifying the most suitable technical and organisational measures to mitigate the identified risks.
Finally, the last phase, i.e. the periodic Monitoring of the implemented measures (e.g. by appointing, possibly on an outsourced basis, a Chief AI Officer).