Artificial intelligence is becoming an integral part of our lives, and this trend will only continue to grow. In Croatia, it already helps patients receive accurate diagnoses, supports students during university admissions, provides customers with quick answers to frequently asked questions, and selects the content we see on social media platforms.

Despite the benefits of using artificial intelligence, there are also significant risks of misuse and unintentional harm, including violations of human rights and discrimination (examples are provided below).

These risks are expected to be significantly reduced across the EU, following the entry into force of the Artificial Intelligence Act on August 1, 2024.

What Is the Artificial Intelligence Act?

This is the first EU legal framework for the use of artificial intelligence. The Act sets minimum standards that all Member States, including Croatia, are required to implement. At the same time, individual countries may go beyond the Act’s requirements to further strengthen the protection of citizens.

The European Commission states that “the aim of the Artificial Intelligence Act is to provide developers and deployers of AI with clear requirements and obligations” and “reduce administrative and financial burdens for businesses, especially small and medium-sized enterprises.” It also emphasizes that the Act builds public trust in AI by introducing “specific transparency obligations” to ensure that citizens are adequately informed.

More specifically, the Act bans AI systems that pose unacceptable risks, defines a list of high-risk applications, and establishes a governance structure at both the European and national levels. Oversight and enforcement of the Act are managed by the European Artificial Intelligence Office, which began operations in early 2024 within the European Commission. With the Act now in effect, the EU has become a global leader in protecting citizens from the risks associated with artificial intelligence.

Examples of Risks

Numerous examples from around the world show what can happen when the use of artificial intelligence is not properly prepared, especially in relation to human rights and equality.

Amazon discontinued the use of a recruitment algorithm after discovering it favored candidates who used words like “execute” or “assertive” in their resumes—terms more commonly used by male applicants.

Studies conducted in the United States and the United Kingdom revealed that AI underestimated the risk of serious illness in younger patients and overestimated it in older ones. As a result, some young patients did not receive adequate treatment, while some older individuals underwent unnecessary procedures or therapies.

One of the most frequently cited risks concerns public safety, particularly automated facial recognition, which has higher error rates for people of color. This can lead to discrimination and erode trust between different communities and the police.

How the Act Will Protect Citizens from Various Risks

The Act classifies AI systems into four levels of risk:

  • Unacceptable risk: Systems that pose a clear threat to safety, lives, or rights are prohibited. For example, voice-activated toys that encourage dangerous behavior.
  • High risk: Systems that must meet strict requirements to be used (e.g., in transportation, exams in schools and universities, robotic surgery, automated visa application processing, credit scoring in banking, resume filtering during recruitment, and others).
  • Limited risk: Systems whose main risk is a lack of transparency about the use of AI; specific transparency obligations apply (e.g., website visitors must be informed they are communicating with a chatbot rather than a human customer service agent, and AI-generated content on topics of public interest must be clearly labeled as such).
  • Minimal or no risk: The Act allows the free use of low-risk systems, which includes the vast majority of AI currently in use across the EU (e.g., email spam filters).

Now that the Artificial Intelligence Act has come into effect, Croatia, as an EU Member State, is obliged to implement it while also working to increase public trust in AI.

What Croatian Citizens Think About Artificial Intelligence

In September 2023, the first national survey on perceptions of AI in Croatia was conducted with a sample of 1,300 people, as part of the academic-professional project Perception of Artificial Intelligence in the Republic of Croatia. The results showed that 42% of respondents had tried or were using AI tools, 19.3% reported a high level of knowledge about AI, while 26.3% reported a low level. For 60% of those surveyed, AI evokes feelings of uncertainty or concern, while 30% consider it useful. In addition, 49% of respondents expressed concern about losing their jobs within the next five to ten years, and 72% believe AI will negatively affect interpersonal relationships.

The MojPosao job portal also conducted a survey of 800 participants on perceived job loss. Results showed that roughly one in five respondents (22%) believe they could lose their jobs in the next decade due to technological advancement, and 10% of workers have already experienced changes in their job duties due to automation. According to a survey by the Croatian Chamber of Economy and Best Advisory Ltd. involving 342 Croatian companies, 50% of large enterprises use some form of AI tools, mostly for business process automation, but only 5% of companies regularly educate their employees on AI.

Activities of the Ombudswoman

As the national human rights institution and central body for combating discrimination, our office has been monitoring the impact of artificial intelligence on human rights since 2019 and advocating for improved regulations governing its use.

It has also been designated as one of the bodies responsible for safeguarding fundamental rights in accordance with the Act, alongside the Ombudsperson for Children, the Gender Equality Ombudsperson, the Ombudsperson for Persons with Disabilities, the Personal Data Protection Agency, the State Electoral Commission, and the Agency for Electronic Media.

Artificial intelligence is covered in a dedicated chapter of the Ombudswoman's annual reports to the Croatian Parliament. The latest report includes two recommendations to the Ministry of Economy: to establish a registry of AI systems used in the public sector, and to develop a National Artificial Intelligence Development Plan. Croatia does not yet have such a national plan, despite the increasing use of AI in recent years.

Ombudswoman Tena Šimonović Einwalter, as a member of the European Commission against Racism and Intolerance (ECRI) of the Council of Europe, has participated in the work of the Ad hoc Committee on Artificial Intelligence (CAHAI) and the Committee on Artificial Intelligence (CAI), which drafted the Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024). She also represents ECRI in the Council of Europe's Committee of Experts on Artificial Intelligence, Equality, and Discrimination.