Discrimination risks related to the use of artificial intelligence (AI) have already been well documented in several sectors, including employment and law enforcement. It has become clear, however, that health care is not immune to the discriminatory effects of AI either.

On 27-29 January 2021, Brussels hosted the Computers, Privacy and Data Protection (CPDP) conference under the title “Enforcing Rights in a Changing World”. CPDP is the EU’s main multistakeholder forum where academics, lawyers, industry, government and civil society discuss privacy, data protection and ICT. On that occasion, Tena Šimonović Einwalter – Deputy Ombudswoman, Chair of the European Network of Equality Bodies (EQUINET) and co-representative of ECRI to the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) – participated in a panel on artificial intelligence and discrimination risks in the health sector.

Speaking at the panel, Šimonović Einwalter stressed that discrimination in the health sector is already a complex issue, and that the use of AI-enabled systems complicates it further. Discrimination in this field is specific in that patients typically lack the medical knowledge needed to understand the details of their diagnoses and treatment, or the functioning of the health care system itself. Nor do they have access to other patients’ medical data and records. All of this makes potential discrimination harder to detect. At the same time, when AI is used in medicine, neither the medical staff nor the patients know exactly what the technological “black box” contains – how the technology works and what criteria its algorithmic decision-making relies on – making this a sort of “double black box” situation.

Many people are already reluctant to report discrimination in health care because failing health and the need for medical attention put them in a vulnerable position, and the use of AI systems creates additional risks in the form of reduced transparency of diagnostic procedures and treatment decisions. All of this makes it harder for patients to spot discrimination and seek protection of their rights. The main difference between health care and the other sectors using AI-enabled technologies, Šimonović Einwalter underscored, is that it deals with human life and health, which makes the stakes even higher.

There are many potential benefits to using AI in health care, such as greater accuracy of diagnosis and treatment – it can, for example, help identify skin cancer from large sets of images. It also carries risks, however, such as leaving certain groups of patients without care or with a wrong diagnosis. AI can be useful in the medical field, but in all of its areas of implementation it must be underpinned by a sound normative framework that protects human rights and prevents discrimination. Certain sectors, such as health care, additionally require a targeted approach and stricter sector-specific regulation. The rules for the use of AI in health care need to be adapted to its specificities as a high-risk domain, Šimonović Einwalter concluded.