The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU).
The AI Act (Regulation (EU) 2024/1689, laying down harmonised rules on artificial intelligence) provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).
The AI Act is part
of a wider package of policy measures to support the development of trustworthy
AI, which also includes the AI Innovation Package and the Coordinated Plan on
AI. Together, these measures guarantee the safety and fundamental rights of people
and businesses when it comes to AI. They also strengthen uptake, investment and
innovation in AI across the EU.
The AI Act is the
first-ever comprehensive legal framework on AI worldwide. The aim of the new
rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI
systems respect fundamental rights, safety, and ethical principles and by
addressing risks of very powerful and impactful AI models.
The AI Act:
·         Establishes a risk-based approach: AI applications are categorized into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk.
·         Protects fundamental rights: it aims to ensure that AI systems respect fundamental rights and ethical principles.
·         Promotes trustworthy AI: it aims to address the risks of powerful AI models.
·         Introduces transparency requirements: systems such as chatbots must clearly inform users that they are interacting with a machine.
·         Includes penalties: infringements carry fines set as a percentage of the company's global annual turnover or a fixed amount, whichever is higher, up to EUR 35 million or 7% of global annual turnover for the most serious violations.
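The "higher of a fixed cap or a share of worldwide turnover" pattern used for fines can be sketched as a toy calculation. The cap and percentage are passed in as parameters here because the exact figures vary by type of infringement; the numbers in the usage line are hypothetical and this is an illustration, not legal guidance:

```python
def penalty_ceiling(global_turnover_eur: float,
                    fixed_cap_eur: float,
                    turnover_share: float) -> float:
    """Maximum fine under the AI Act's general pattern: the greater
    of a fixed cap or a share of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# Hypothetical figures: a company with EUR 1bn turnover,
# a EUR 30m cap and a 7% turnover tier.
print(penalty_ceiling(1_000_000_000, 30_000_000, 0.07))
```

For large companies the turnover-based figure dominates; for smaller ones the fixed cap becomes the binding ceiling.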
The EU aspires to be the global
leader in safe AI. By developing a strong regulatory framework based on human
rights and fundamental values, the EU can develop an AI ecosystem that benefits
everyone. This means better healthcare, safer and cleaner transport, and improved public services for citizens. For businesses, it brings innovative products and services, particularly in energy, security, and healthcare, along with higher productivity and more efficient manufacturing. Governments, in turn, can benefit from cheaper and more sustainable services such as transport, energy and waste management.
Recently, the Commission has
launched a consultation on a Code of Practice for providers of general-purpose
Artificial Intelligence (GPAI) models. This Code, foreseen by the AI Act, will
address critical areas such as transparency, copyright-related rules, and risk
management. GPAI providers with operations in the EU, businesses, civil
society representatives, rights holders and academic experts are invited to
submit their views and findings, which will feed into the Commission's upcoming
draft of the Code of Practice on GPAI models.
The provisions on GPAI will enter into application 12 months after the AI Act's entry into force. The Commission
expects to finalise the Code of Practice by April 2025. In addition, the
feedback from the consultation will also inform the work of the AI Office,
which will supervise the implementation and enforcement of the AI Act rules on
GPAI.
These acts and initiatives aim to:
1. Ensure AI transparency and explainability
2. Prevent bias and discrimination
3. Protect personal data and privacy
4. Promote ethical AI development
5. Foster innovation and competitiveness
The AI
Act ensures that Europeans can trust what AI has to offer. While most AI
systems pose limited to no risk and can contribute to solving many societal
challenges, certain AI systems create risks that we must address to avoid
undesirable outcomes.
Although
existing legislation provides some protection, it is insufficient to address
the specific challenges AI systems may bring.
The new rules:
·         Address risks specifically created by AI applications
·         Prohibit AI practices that pose unacceptable risks
·         Determine a list of high-risk applications
·         Set clear requirements for AI systems for high-risk applications
·         Define specific obligations for deployers and providers of high-risk AI applications
·         Require a conformity assessment before a given AI system is put into service or placed on the market
·         Put enforcement in place after a given AI system is placed on the market
·         Establish a governance structure at European and national level
The EU's
AI regulatory framework is evolving, and organizations operating in the EU
should stay informed about the latest developments and ensure compliance.
Unacceptable risk AI systems are those that pose significant harm and are strictly prohibited. Examples include:
·         AI systems that manipulate human behaviour through subliminal techniques or exploit vulnerabilities related to age, disability, or socio-economic status.
·         Biometric categorization systems that infer sensitive attributes like race, political opinions, or sexual orientation.
These prohibitions aim to protect individuals from harmful AI practices and ensure ethical AI development and deployment within the EU. Such AI systems are prohibited from entering the EU market.
High-risk AI systems: The concept of "high-risk AI system" is not explicitly defined. Instead, AI systems are classified as high-risk when certain conditions are met. The high-risk areas include:
·         Biometrics
·         Critical infrastructures
·         Employment, workers management and access to self-employment
·         Education and vocational training
·         Law enforcement
·         Migration, asylum and border control management
·         Administration of justice and democratic processes
It must
be emphasized, however, that not every AI system in these categories is
considered high-risk. There are sub-paragraphs to each of these fields, which
must be examined in detail to determine whether a given AI system indeed is
considered high-risk or not.
Obligations for high-risk AI systems:
·         Establishment of a risk management system
·         Data governance
·         Technical documentation
·         Record-keeping
·         Transparency and information provision to users
·         Human oversight
·         Accuracy, robustness, and cybersecurity
Compliance with these requirements is mandatory for providers of high-risk AI systems.
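As a sketch of how a provider might track these obligation areas internally, the list above can be treated as a checklist against which compliance evidence is gathered. This is hypothetical tooling for illustration, not anything the Act itself mandates:

```python
# The seven requirement areas for high-risk AI systems, paraphrased
HIGH_RISK_REQUIREMENTS = (
    "risk management system",
    "data governance",
    "technical documentation",
    "record-keeping",
    "transparency and information to users",
    "human oversight",
    "accuracy, robustness and cybersecurity",
)

def outstanding(evidenced: set) -> list:
    """Return the requirement areas not yet backed by evidence."""
    return [r for r in HIGH_RISK_REQUIREMENTS if r not in evidenced]

# Example: only two areas documented so far
print(outstanding({"data governance", "record-keeping"}))
```

A real compliance process would of course attach documentation and audit trails to each area rather than a simple done/not-done flag.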
Low-risk AI systems under the EU AI Act are those that pose minimal threats to rights or safety. These systems are subject to minimal regulatory requirements, primarily focusing on transparency. Examples include spam filters and AI-enabled video games.
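Putting the tiers together, one common reading of the Act's risk-based approach maps each tier to its regulatory treatment. The tier names and one-line treatments below paraphrase the sections above and are a simplified sketch, not the regulation's exact wording:

```python
# Simplified mapping of risk tier to regulatory treatment
RISK_TIERS = {
    "unacceptable": "prohibited from the EU market",
    "high": "strict requirements plus conformity assessment",
    "limited": "transparency obligations, e.g. chatbot disclosure",
    "minimal": "little or no additional regulation",
}

def treatment(tier: str) -> str:
    """Look up the regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]

print(treatment("high"))
```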