Every time we use a smartphone app, we consent to a whole host of conditions, such as data and location tracking performed autonomously by our devices.
Artificial intelligence systems have been an integral part of our lives for some time now. Their use, for example in location and contact-tracing apps, has nevertheless been marked by a pronounced lack of clarity and understanding, especially when it comes to their human rights implications. To discuss some of these issues, the University of Rijeka’s Faculty of Law and the Rijeka chapter of the European Law Students’ Association (ELSA) organised a forum on the use of AI-enabled technologies on 25 November. The participants included Deputy Ombudswoman and EQUINET Chair Tena Šimonović Einwalter; Mislav Malenica, head of the Croatian AI Association (CroAI); Sanja Barić, Chair of the Department of Constitutional Law at the University of Rijeka’s Faculty of Law and member of the Ombudswoman’s Human Rights Council; Đorđe Gardašević, associate professor of constitutional law at the Faculty of Law of the University of Zagreb; and Marijana Šarolić Robić.
Legal Practice in the European Union
At the event, Deputy Ombudswoman Šimonović Einwalter discussed the impact of AI-enabled technologies on fundamental rights and equality, as well as the activities currently underway in the European Union and the Council of Europe aimed at curbing their potential negative effects. EU law does not regulate health-based discrimination extensively, and only a handful of countries, including Croatia, list it among the legally mandated discrimination grounds, she noted. Epidemiological services can benefit from AI systems in their efforts to track and curb the spread of the pandemic, for example by tracing infected persons’ contacts. For such measures to be effective, however, the apps need to attract a sufficient number of users, which may prove a challenge given citizens’ lack of trust in both official institutions and private digital service providers when it comes to the handling of their personal data.
App Use Guidelines
To harmonise the use of COVID apps across the European Union, the European Commission issued data protection guidelines. Although these are not legally binding, member states are expected to implement them in order to bring their own practices in line with EU data and privacy protection legislation.
According to the guidelines, the use of such apps must be limited strictly to health protection purposes, voluntary and based on citizens’ informed consent, of limited duration, and subject to oversight by independent bodies. Before an app is deployed, a Data Protection Impact Assessment (DPIA) needs to be carried out in order to detect potential privacy risks and propose appropriate protection measures. Additionally, in order to map out the potentially discriminatory effects of COVID app use, it would also be beneficial for states to conduct prior Equality Impact Assessments.

One of the issues surrounding the use of such technologies is citizens’ lack of trust in both institutions and private service providers, as indicated by research conducted by the EU Fundamental Rights Agency. According to its results, the majority of people in the EU (72%) know about the privacy settings on at least some of the apps on their smartphones, but less than half (41%) know the privacy settings on all the apps they use. In Croatia, 54% of citizens are able to adjust the privacy settings on all of the apps they use, 22% only on some of them, while 19% cannot do so at all. 77% of EU and 82% of Croatian citizens know how to turn off their phone’s location settings, while 15% of Croats do not. Worryingly, 33% of respondents never read the terms and conditions when using online services, and 27% of those who do find them difficult to understand.
Educating the Citizens
Given that AI-enabled technologies develop rapidly, it is important to include the public in the various stages of decision-making, and to encourage education and cooperation among stakeholders, including academia and civil society. It is also crucial to promote citizens’ AI literacy and to educate them about how these systems function, since citizens will be the ones making decisions on their use in the future.
Read more about the potential equality impacts of AI in the EQUINET publication Regulating for an Equal AI: A New Role for Equality Bodies.