EU Announces New Rules on Artificial Intelligence; Focuses on Human Rights and Equality
The use of AI-based technologies is already a reality, and their widespread implementation in all areas of life is our imminent future.
Although we might be unaware of it, artificial intelligence systems and their algorithms are already part of our everyday lives: smartphone apps using voice and facial recognition technologies, translation apps and websites, product selection and ranking algorithms, social networks’ news feeds and online client communication in the banking sector are just some of the examples.
Thus, it is important to legally regulate this area as soon as possible in order to ensure human rights protection and prevent discrimination. In Croatia, national legislation on AI will be based on and follow the normative frameworks adopted by the European Union and the Council of Europe.
Following the adoption of the EU Commission’s White Paper on AI, a number of resolutions adopted by the European Parliament in October 2020 (on the ethical aspects of AI, the civil liability regime for AI, AI and copyright issues, and the implementation of AI in criminal proceedings, education, culture and the audio-visual sector), as well as the work performed by the High-Level Expert Group on Artificial Intelligence (HLEG), on 21 April 2021 the Commission submitted its Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act).
At the same time, the Council of Europe is discussing its future normative framework on AI, with the aim of ensuring the protection of human rights, democracy and the rule of law in the face of the new challenges ushered in by the use of artificial intelligence systems and technologies. In December 2020, its Ad Hoc Committee on Artificial Intelligence (CAHAI) published a Feasibility Study mapping out the Council of Europe’s future normative framework in the field of AI, based on its legal standards and existing legally binding instruments such as the European Convention on Human Rights. Following this, CAHAI launched a multi-stakeholder consultation in order to fill the normative gaps identified by the Study and to formulate solutions for the potential negative impacts of AI use on human rights, democracy and the rule of law. It is also intended to devise suitable instruments for gender mainstreaming in the field of AI and for the promotion and protection of the rights of the most vulnerable groups, including persons with disabilities and minors. The Ombudswoman of the Republic of Croatia, Tena Šimonović Einwalter, currently contributes to the work of CAHAI in her role as a co-representative of the European Commission against Racism and Intolerance (ECRI).
European Commission’s Key Challenges: Trust and Responsibility
Artificial intelligence, as the Commission emphasizes in its proposed regulation, is a fast-growing field with a host of potential economic and social benefits. However, its use needs to be harmonized with EU legislation and the Union’s main principles and values, and aimed at achieving a high level of health protection and the protection of citizens’ safety and their fundamental rights and freedoms. Optimized prediction processes and operations, better resource management and the personalization of services can contribute to advancements in the health, agricultural, educational, security, public services and other sectors, as well as to a more successful approach to combating climate change. For example, during the current pandemic, AI systems such as contact-tracing apps can significantly assist epidemiological services in tracking infected persons.
On the other hand, enhancing citizens’ trust in AI technologies and the question of responsibility for their possible harmful effects are some of the key challenges identified by the Commission. It is especially important to adequately address these issues in the fields in which human rights face the highest level of risk connected with technology use, such as the justice system and health care.
Taking all of this into account, in its proposal the Commission has adopted a human-centric approach focused on human rights protection: the document’s provisions are based on the principles of safety, transparency, accountability, human agency and oversight, protection from bias and discrimination, the right to legal protection, social and environmental responsibility, and respect for privacy and personal data protection. Its Article 5, for instance, bans certain practices that can amount to human rights violations, such as the use of apps for mass surveillance, systems that allow “social scoring” by governments, and those that utilize citizens’ personal data to make predictions that can negatively affect them, especially members of various vulnerable groups. As an exception, governments are set to be allowed to use mass surveillance systems for the purpose of protecting public safety. This exception, however, will have to be implemented with due caution and only when necessary, in order to prevent the excessive collection of citizens’ personal data and the resulting violations of the right to privacy and of other fundamental rights, such as the right to public assembly and freedom of expression.
Focus on the High-Risk Systems
The Commission’s proposal introduces rules for the development, uptake and use of AI, with a special focus on high-risk systems and the criteria they have to fulfill prior to entering the market. These types of technologies have the most significant impacts on human rights and can have potentially discriminatory effects. Because the data fed into AI systems are never neutral and their algorithms are not able to take into account all of the contextual information, their use can create social inequalities.
High-risk AI systems include:
- those used by emergency services, such as firefighting or emergency medicine
- software used in recruitment procedures and workforce evaluation
- systems used in education and vocational training, which can influence citizens’ access to those services
- credit scoring systems used by banks
- systems for the distribution of social benefits
- systems used in law enforcement for crime prevention and criminal prosecution, in the field of asylum and migration in the procedures to grant or deny visas and asylum statuses, and those used in the judiciary to assist judges in their work
When AI systems interact with citizens or use data categorization or emotion recognition to process their personal information, citizens need to be informed of these facts. If AI is used to generate or edit images, sound or video content that appreciably resembles existing persons, objects, places or events, the content created in this manner must include a notice that it was artificially generated, except when such use is aimed at the protection of public safety or another type of public interest.
Rules Obligatory for the High-Risk Systems, Voluntary for the Others
The use of many AI systems comes with specific risks, such as a lack of transparency (the so-called “black box” effect), complexity, unpredictability and partial system autonomy, which can complicate the implementation of the EU’s human rights legislation or lead to its violation. The proposed regulation is thus set to encourage the adoption of voluntary codes of conduct for non-high-risk AI and the implementation of the rules developed for high-risk systems even for those falling outside of that category, thereby facilitating the development of trustworthy and human-centric AI systems.
With the aim of gaining users’ trust, the proposal obliges AI developers, especially those developing systems used in the health sector, to acquire the EU’s “CE” certificate marking the product’s compliance with the EU’s safety, health and environmental protection legislation, and foresees the establishment of an EU-level registry of high-risk AI systems, run by the European Commission. With a view to ensuring a greater level of transparency, the proposal envisages the publication of the technical documentation, the algorithms, and the techniques and methods used to “train” the AI systems, as well as of information on how the systems function and instructions for their use. At the same time, all of this information must be formulated in a clear and comprehensible way, so that citizens are able to understand the AI-generated decisions pertaining to them. Simultaneously, it is important to put in place mechanisms for the analysis and clarification of the criteria such decisions are based on.
Sanctions for the Negative Consequences of AI Use and Violations of the EU Charter of Fundamental Rights
The use of AI can have significant benefits, such as making products and processes safer, but it can also cause damage. The proposed legislation thus regulates responsibility for the potential damages incurred through the use of high-risk AI systems, both when it comes to harm to individuals’ health and safety, such as the loss of life or damage to property, and when it comes to possible negative effects on society as a whole, such as affecting the financial, educational or professional chances of various groups of citizens or interfering with their access to public services. It also foresees responsibility for violations of the human rights enshrined in the EU Charter of Fundamental Rights, e.g. the right to privacy, personal data protection, freedom of expression, of assembly and of association, the right to an effective legal remedy, to a fair trial, to international protection and to protection from discrimination.
Taking into account that, in principle and with certain exceptions, the EU data protection rules forbid the processing of biometric data to identify individuals, the proposed regulation allows the collection of biometric data in public places using AI systems only in certain cases and subject to special authorization. Additionally, it envisages caution in the implementation of systems using personal data to generate predictions, which have the potential to lead to racial profiling or other harmful effects on individuals or specific vulnerable groups. To avoid this, it is crucial to ensure that high-risk AI systems are not fed biased data.
You can find more information on the potential impacts of AI use on equality in the Equinet publication “Regulating for an Equal AI: a New Role for Equality Bodies”.
When it comes to the scope of the proposed regulation, in the event the Parliament adopts it in its current form, its provisions will apply not only to AI systems developed and used in the EU, but also to those developed abroad and imported into the Union, and will impose certain obligations on the certified company representatives, importers and distributors of such products.
Member States to Play a Key Role in the Implementation
With the aim of facilitating and coordinating the implementation of the future regulation, the Commission has proposed the establishment of a European Artificial Intelligence Board. However, the main responsibility would lie with the national authorities, which are expected to designate one or several national bodies to monitor and coordinate the implementation of the regulation, but also to serve as contact points between the national governments on one side and the Board and the European Commission on the other.
For the benefits of AI to be fully realized and as widely accessible as possible, it is important for the member states to adopt efficient regulatory frameworks, as well as to undertake various activities aimed both at enabling the growth and development of the business sector and of AI-based solutions and at protecting fundamental human rights. AI can enable better and faster decision-making, which can benefit both small and large enterprises within the business sector, as well as the public administration.
Croatia: Slow Progress Towards AI Implementation
The public administration system in Croatia is only beginning to explore the benefits of AI use and has only started introducing it into certain areas, such as the health sector. The wider implementation of these technologies requires a certain level of public awareness but also continuous research on the functioning of the AI systems and their impact on social development, human rights and the incidence of discrimination.
Public awareness campaigns aimed at end-users are necessary to strengthen their understanding of how AI systems function and of the benefits their use can bring to society. At the same time, in line with the EU Commission’s proposal, the national oversight bodies must be staffed with a sufficient number of employees, provided with continuous training and competent both in the areas of AI, data management and cloud computing and in the area of fundamental rights and human rights standards, so that incidences of human rights violations and discrimination caused by technology use can be kept to a minimum.
It will also be necessary to provide training for the developers of AI systems on the legal aspects of AI implementation and on the possible impacts of the use of AI technologies on equality and citizens’ human rights. In that sense, forming multidisciplinary teams consisting of scientists, AI developers and ombudsman institutions would facilitate the creation of solutions aimed at avoiding the drawbacks of AI use and contributing to the prevention of discriminatory outcomes of AI-facilitated processes. More specifically, certain algorithms function by recognizing behavioral patterns and making predictions based on connections between the various data entered into them, and their results can be completely different from what a human perspective would produce. In certain cases, even if some of the population characteristics that can serve as grounds for discrimination are left out of the information fed into the system, other variables correlated with them can still act as proxy data on which the system bases its predictions, which can then result in indirect discrimination.
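The proxy-data mechanism described above can be illustrated with a minimal, purely hypothetical sketch: the invented data, group labels and postcodes below are not from any real system. Even though the protected attribute (group) is dropped before “training”, a correlated proxy variable (postcode) lets a naive scoring rule reproduce the historical bias.

```python
from collections import defaultdict

# Hypothetical, invented historical records: (group, postcode, approved).
# Past decisions were biased against group B, whose members mostly live
# in postcode "20000"; the postcode thus encodes the protected attribute.
history = [
    ("A", "10000", 1), ("A", "10000", 1), ("A", "10000", 1), ("A", "10000", 0),
    ("B", "20000", 0), ("B", "20000", 0), ("B", "20000", 1), ("B", "20000", 0),
]

# "Train" without the protected attribute: learn only the per-postcode
# approval rate from the historical decisions.
totals, approvals = defaultdict(int), defaultdict(int)
for group, postcode, approved in history:
    totals[postcode] += 1
    approvals[postcode] += approved
score = {pc: approvals[pc] / totals[pc] for pc in totals}

# Two otherwise identical applicants receive very different scores purely
# because postcode acts as a proxy for group membership.
print(score["10000"])  # 0.75 -> mostly group A's postcode
print(score["20000"])  # 0.25 -> mostly group B's postcode
```

Although "group" never enters the score, the disparity between the two groups persists, which is exactly the indirect-discrimination risk the text describes; auditing only the input feature list would not reveal it.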
The courts are set to play the main role when it comes to monitoring the effects of AI use on citizens’ human rights and protecting those rights, and should thus be adequately staffed and equipped. Ombudsman institutions, on the other hand, will be tasked with raising public awareness of the human rights impacts and potential discriminatory effects of AI use, with overseeing the public administration’s activities related to the introduction and implementation of AI, and with advocating for the greatest possible level of transparency in AI use.
Finally, it is crucial that Croatia pass its AI Strategy, a document it was set to adopt, in line with the EU’s Coordinated Plan on AI, as far back as late 2019. Along with the accompanying subordinate legislation providing ethical guidelines for the use of AI technologies, it will provide a normative framework for the implementation of the proposed EU regulation in Croatia, especially for its application to the public administration system and the work of equality bodies and national human rights institutions. It is important that the strategy be harmonized with the national antidiscrimination legislation and that it include ethical guidelines, provisions on obligatory human rights impact assessments of AI systems and applications, and effective legal remedies, as well as that it regulate the responsibility of the developers and distributors of AI systems and the transparency of their work.