Organised by the European Union Agency for Fundamental Rights (FRA), the Fundamental Rights Forum (FRF), the largest European event in the field of human rights and equality, was held in Vienna on 11 and 12 March 2024.
This year’s edition took place under the slogan “Rights in motion: Embracing human rights for Europe’s future”, with a focus on three key topics – ensuring digitalisation that respects human rights, shaping a socially and environmentally sustainable Europe, and safeguarding democracy and civic space on the continent.
The event brought together Ombudswoman Tena Šimonović Einwalter and numerous leaders and experts from the European Union, the Council of Europe, the OSCE and the United Nations, as well as representatives of academia, local authorities, civil society, the business sector, and the fields of sport, the arts and religious communities.
In the segment of the Forum dedicated to ensuring digitalisation that respects rights, Ombudswoman Šimonović Einwalter participated in a thematic panel discussion titled “From Code to Conscience: Ensuring Digitalisation that Respects Rights”, together with, among others, Alexandra Xanthaki, UN Special Rapporteur in the field of cultural rights, Teresa Ribeiro, OSCE Representative on Freedom of the Media, and Menno Ettema, Head of the Hate Speech and Hate Crime Division and Artificial Intelligence Unit at the Council of Europe.
Panel participants concluded that a greater level of transparency, coherence and cooperation is needed in the implementation of existing and new standards in this field, such as the EU Artificial Intelligence Act (AIA).
The AIA is the world’s first comprehensive law on artificial intelligence, and is intended to close existing regulatory gaps. It addresses risks associated with the use of artificial intelligence through a set of requirements and obligations designed to safeguard health, safety and fundamental rights in the EU Member States, and is expected to have a significant impact on global AI governance.
The Act, for instance, imposes strict requirements on so-called high-risk AI systems and prohibits certain uses deemed to pose unacceptable risk, meaning that companies providing high-risk AI systems will have to meet specific EU requirements. It also establishes a public EU register of such systems, intended to enhance transparency and improve the enforcement of these obligations.
It is crucial to define the responsibility of both states and the private companies that develop AI-based systems for any adverse impacts of their use. Education and the strengthening of digital and algorithmic literacy are also essential, especially among young people, older persons and other vulnerable groups: marginalised communities are already disproportionately at risk of having their rights violated by algorithmic decision-making tools that fail to reflect their perspectives and interests.
As panel participants agreed, however, education cannot substitute for purposeful and inclusive regulation by the state, nor should responsibility be shifted solely to citizens as users of digital products and services.
They emphasised the central role of human rights in this process, as a vision and a guide for regulating the use of digital products and services in a way that protects human rights, combats discrimination, prevents abuse and guarantees access to legal remedies. This is particularly important at a time of increasing securitisation, which heightens the risk of undermining human rights and the equality of citizens.
More information about the 2024 Fundamental Rights Forum is available on the FRA's website.