European controls to mitigate bias in artificial intelligence systems in health are insufficient
2025/05/26 Elhuyar Zientzia Source: Elhuyar aldizkaria

European controls to mitigate bias in health systems based on artificial intelligence are insufficient. This is the conclusion reached by Iñigo de Miguel Beriain (UPV/EHU) and Guillermo Lazcoz Moratinos (Instituto de Salud Carlos III) in a study published in the journal Bioethics. The researchers also propose alternative policies to address the problem of bias in these systems.
Artificial intelligence systems are increasingly used to help make diagnoses or to recommend the treatments best suited to each patient. But these systems also carry risks, and one of the main problems is bias. Suppose, for example, that a system is trained mainly on people with very light skin: it will carry a marked bias, because it will not work as well on darker skin. Such biases are a serious problem in healthcare, as they not only reduce accuracy but also disproportionately affect certain sectors of the population.
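To make the skin-tone example concrete, the following minimal Python sketch is purely illustrative (it is not taken from the study, and the labels and group names are made up): computing a model's accuracy separately for each subgroup shows whether it performs worse on darker skin.

    # Purely illustrative: made-up predictions for a hypothetical diagnostic model.
    def accuracy_by_group(y_true, y_pred, groups):
        """Fraction of correct predictions for each subgroup (e.g. skin tone)."""
        per_group = {}
        for g in set(groups):
            pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
            per_group[g] = sum(t == p for t, p in pairs) / len(pairs)
        return per_group

    # Hypothetical labels: 1 = disease present, 0 = absent.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
    groups = ["lighter"] * 4 + ["darker"] * 4
    print(accuracy_by_group(y_true, y_pred, groups))
    # -> lighter: 0.75, darker: 0.5 (the model is clearly less accurate on darker skin)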
De Miguel and Lazcoz studied the bias-mitigation policies included in the new European regulation on artificial intelligence (the AI Act) and in the European Health Data Space (EHDS). They found that the European rules on medical devices may be inadequate to address this challenge, which is not only technical but also social; indeed, many of the verification procedures for medical devices date back to a time when AI did not yet exist.
The researchers argue that simply increasing the amount of data is not the best way to correct bias problems: in their view it is a reductionist solution, and it also carries risks, particularly regarding privacy. They warn that when more data really is needed, a careful analysis is required of where and how those data are processed.
In their article, the researchers also propose introducing mandatory validation mechanisms, to be applied not only during the design and development phases but also after a system is launched. Just as the performance of new drugs is first tested on a small scale, they argue, AI systems should also be tested on a small scale, for example in a single hospital, rather than being deployed widely at once. Only once they have been shown to work and to be safe should they be rolled out to other places.
