Authors: Laura Ación, Laura Alonso Alemany, Amalia Guaymás Canavire, Sabrina López, Maximiliano Maito, Félix Penna and Daniel Yankelevich.
Tuesday, 14 June 2022
Let's imagine a story. Clara works as a gynecologist in several health institutions in a province in central Argentina. One day, she is contacted by the coordinators of a preventive health program of the provincial Ministry of Health. In order to formulate public policies that take into account the reality and needs of the population, health professionals are asked to provide data on their patients: results of annual gynecological check-ups, gender identity, sexual preferences, number of pregnancies and abortions. The information is not associated with personal data but, even so, Clara has her doubts: “I see patients in small towns. Could my patients end up being identified? Could the data be used for other purposes? I agree with making the realities of my patients visible, but… what if built-in prejudices end up harming or stigmatizing those who entrust me with their gynecological care?”
Clara's story is fictional, but it portrays genuine doubts grounded in reality. Argentina’s health system is becoming increasingly digitized: from the notes taken by those in charge of our health care to ultrasound images and the results of clinical tests, a multitude of data is recorded, stored and made available electronically. This digitization makes it possible, for example, to access the results of a blood test within a few hours of the sample being taken, without commuting between facilities or joining queues. It also facilitates cross-consultations between specialists in different places. But it is not just about convenience. In the large databases that contain the medical histories of thousands of patients, patterns can be discovered that allow early detection of dangerous health situations, such as dengue outbreaks, or the rollout of preventive medicine programs that improve our quality of life. Used well, digital tools can place people at the center of their own health care.
However, making data available also has disadvantages and, if the data are not handled properly, it can have a negative impact on our lives. In this scenario, the challenge is to manage digital health data in ways that deliver tangible benefits while minimizing potential risks.
Let's start with a new example, this time from the patients' side: imagine Gisela, a 23-year-old teacher who goes to a health center to be tested for COVID-19. At the counter, she is asked to provide her ID and cell phone number. While waiting for the result, to pass the time, Gisela decides to try a quick-diagnosis app that analyzes the sound of her cough. Immediately, she wonders: “Will the system take into account that I am asthmatic? Is the cough heard by health professionals, or is it a totally automated process? And now that I think about my COVID test, who will see my ID and my address? Will they have access to other data about my health? Will the result of my test be confidential? If my result is positive and available to the public administration, will they renew the teaching hours at the school that took me so much effort to obtain?”
We often hear great promises about the application of artificial intelligence to health: automated systems that detect cancers invisible to the human eye, control of epidemics by anticipating their spread, or prevention of diseases by discovering patterns in large amounts of data. These promises fill us with a hope that often makes us overlook small details with the potential to become big problems.
We are therefore at a crossroads between great benefits and great risks. Reading these stories from an individual’s standpoint, we have seen some of these conflicts: the need to record information to provide adequate care while balancing privacy and the protection of sensitive data; rapid access to health care weighed against the quality of the services received; and a wider availability of data that makes it harder to guarantee that access to these data will not result in discrimination or harm.
These conflicts pose challenges that we, as a society, want to overcome. What can health data regulation do to protect people's privacy and prevent the misuse of sensitive data? How should we implement informed consent so that it is genuinely informed, so that people know, understand and can fully decide on the different uses of their health data? In what ways can we anonymize health data to ensure people's privacy? We have made great progress in raising awareness of these problems, and a good number of organizations have already implemented strategies to address vulnerabilities or to guarantee security in the way information is made available.
In the effort to improve access to the right to health, digital health data today is presented both as part of the solution and as part of the problem. Taking these concerns seriously and reflecting on their most problematic dimensions is the first step toward improving how such data are used. This presents us with an ethical challenge, since advances in science and technology occur within a social context that must be taken into account. Hence, certain questions are essential to us and guide the work we do: What are the intended uses of these tools? What responsibilities do their development and use entail? What procedures and practices should we prioritize? What risks should we raise awareness of and minimize? These are issues with practical implications for communities and the broader population, whose well-being is the ultimate goal of what we do.
Tags: Digital health data, COVID-19, Artificial Intelligence
This note was originally published in Spanish as part of a collaboration between Fundar and Arphai in Argentina, the latter of which is part of the Global South AI4COVID Program.