Dear participants, dear organizers, dear distinguished speakers,
I am speaking on behalf of Equinet, the European Network of Equality Bodies, representing 49 equality bodies in 36 European countries. In the next couple of minutes, I will focus on two things: first, the importance of equality and strong redress mechanisms in any future EU regulation on AI; and second, the role that we, as equality bodies across the EU, play in making sure AI systems are equality-compliant.
Why pay special attention to equality in the new proposal for an EU regulation on AI? The proposal itself addresses non-discrimination, explicitly and prominently, as one of the leading fundamental rights concerns related to the impact of AI systems.
Now, why pay special attention to equality bodies in the new proposal for an EU regulation on AI? First, who are equality bodies? Equality bodies are national public institutions specialized in non-discrimination law and in providing redress to victims of discrimination. EU equal treatment directives oblige all EU countries to set them up, to promote equality and fight discrimination on the grounds of race, ethnic origin and sex.
Equality bodies bring significant added value:
- they can often work with the private sector, including through oversight and capacity-building;
- they can bring cases to court or themselves decide on complaints like quasi-judicial tribunals, in some cases with legally binding decisions and sanctions;
- they provide independent legal support to victims of discrimination;
- and they provide legal and policy advice to governments.
When it comes to work relating to AI, some of our members have already brought legal cases (concerning both the private and the public sector) related to AI, have contributed to the development of national strategies on AI or have given inputs to national legal reforms. They are also building national partnerships across sectors with relevant national regulators such as national data protection authorities, national financial supervisory authorities, etc. More examples are presented in Equinet’s Report “Regulating for an Equal AI: A New Role for Equality Bodies”, mentioned in the European Commission’s White Paper on AI.
With that said, and given that the specific focus of this Forum is the EU's future AI Regulation, I want to make two suggestions.
My first suggestion is to make equality bodies part of the national supervisory mechanism. This would remedy the fact that the draft AI Regulation provides no mechanism for consumer or citizen complaints and no provision for redress. Denying access to justice to potential victims of discrimination is especially problematic in the context of an AI-specific regulation.
Designating equality bodies as part of national supervisory authorities would also ensure a more harmonized regulatory approach across EU jurisdictions. It can even contribute to the consistent interpretation of existing national human rights law provisions, including those on equality and non-discrimination.
Of course, this role should only be assigned once equality bodies are guaranteed adequate and meaningful powers, along with the human, financial and technical resources needed to address the new challenges posed by AI.
My second suggestion is to lay the ground for a cooperation framework, in order to enable collaboration between the various national competent authorities under the proposed AI Regulation and to develop the needed capacity. AI systems are complex in nature and cross-sectoral in use, which means that respecting equality requires active collaboration and partnerships. There is no doubt that working together is the way to go. Equality bodies should be part of this national governance framework, as they are key partners in the enforcement of national human rights legislation.
Of course, an important first step to facilitate and streamline the European effort on AI is the adoption of national strategies on AI, a step that many member states still have to take.
Finally, this new proposal on AI and timely and meaningful discussions, like the one today, are essential to what we all hope will be a groundbreaking and positive European approach to AI.
It is imperative to place equality and its effective protection through empowered equality bodies at the heart of any such approach.
Taking the equality implications of AI seriously only after public cases of discrimination by AI would risk building distrust of AI and technology among the general public. By including equality experts early on, on the other hand, we can ensure that the development and use of AI in the EU contributes to the wellbeing of all.
Thank you for your attention.
Tena Šimonović Einwalter
Equinet Chair & Ombudswoman of the Republic of Croatia
A video of the speech can be found here.