On 15 January 2026, the “Law & Tech” conference was held, organized by students of the Faculty of Law at the University of Zagreb. The event aimed to bring together experts in law and information technology and to highlight the impact of modern technologies, including artificial intelligence (AI), on citizens, their rights, freedoms, equality, and broader social development.
Ombudswoman Tena Šimonović Einwalter also participated in the conference as one of the panelists in the discussion on the EU Artificial Intelligence Act and the adoption of a national law on its implementation. In addition to the Ombudswoman, the discussion featured Andrea Čović Vidović, Deputy Head of the European Commission Representation in Croatia, and attorney Stefan Martinić, with moderation by lawyer Natalija Perić.
The Ombudswoman emphasized the importance of having legislation governing the use of artificial intelligence systems both at the European and national levels, stressing that detailed regulation is key to protecting citizens’ rights from the negative impacts of artificial intelligence.
Reflecting on the use of AI systems in Croatia, she cited several examples from the work of the Ombudswoman’s institution in which there were suspicions or findings of rights violations resulting from the use of AI. For instance, she referred to a case in which a company required employees to install a specific application on their mobile phones to monitor them for the purpose of increasing work efficiency. She also mentioned issues related to the automatic synchronization of data from the land cadastre and land register into the Land Data System.
The Ombudswoman noted that AI systems are used in both the private and public sectors in Croatia, but that their use is still not sufficiently transparent. In particular, there is a lack of publicly available information on where such systems are applied in the public sector. She also highlighted the recommendation to establish a public register of their use, covering not only the high-risk systems for which the EU Artificial Intelligence Act prescribes such registration.
Speaking about other challenges in this area, the Ombudswoman highlighted the need to harmonize definitions within the legal framework. She emphasized that, for the successful adoption and effective implementation of such legislation, it is essential not only to understand national and European law, but also to have a clear understanding of the technological foundations of AI systems.
She also highlighted certain risks to human rights and the potential for discrimination arising from the use of AI systems. These risks are not limited to the right to privacy or the protection of personal data, but may extend to all human rights, including the prohibition of discrimination, the right to health, the right to education, the right to life, and others.
In the area of combating discrimination, there are already cases in which rights violations resulting from the use of AI systems have been legally established in other countries. Examples include employment, facial recognition systems, the provision of banking services, prediction of social benefit misuse, sick‑leave management, and similar contexts.
In conclusion, the Ombudswoman offered recommendations on the use of AI systems. Above all, she emphasized the need for transparency, meaning that it must be clear where, when, and how such systems are deployed, together with human oversight, enhanced legal protection, and informing those whose rights have been violated, many of whom may not even be aware of it. She also stressed that it is essential to educate children and young people about these systems and the associated risks from the earliest age, as well as to provide professional and technical training.
As part of the conference, an introductory lecture was also held on visions for the use of artificial intelligence and the future of society, followed by discussions on a new criminal offence related to AI systems, the analysis of digital evidence, the SKY application, and the presentation “AI vs. Student.”
Overall, the topic of artificial intelligence in Croatia is still not being addressed with sufficient seriousness at all levels of government—including among decision‑makers—and its impact on society as a whole, as well as on citizens’ human rights, freedoms, and equality, is not being examined in sufficient depth.
Therefore, with the aim of protecting citizens’ rights, it is crucial to strengthen the necessary capacities so that the state can monitor technological developments in this area and respond to the challenges posed by the use of artificial intelligence in everyday life, particularly by the private sector.