
ICT4D Work and Migration: Strengthening the Privacy Protection of Undocumented Migrants

Sara Vannini (Department of Communication, University of Washington) reflects on the research ethics involved in working with undocumented migrants, and presents five recommendations to strengthen privacy protection in this type of research.

As researchers in ICT4D, we often celebrate digitalization, digital presence, and online voice as development successes. We have looked at the ways big data can foster socio-economic development in the Global South, advocated for the benefits of digital money exchanges, applauded the rapid diffusion of mobile phones and mobile connectivity, and generally promoted access and connectivity. Less often, we have considered the consequences of the digital traces we have helped create, and how they relate to safety and ethics in our research (the work of Kleine and Dearden has been an important effort in this regard – see: https://ictdethics.wordpress.com/).

ICTs and the digitization of data raise important concerns about data privacy and security. When working with non-dominant, underserved, or otherwise oppressed communities, these concerns are amplified (see Eubanks, 2018; Marwick and Boyd, 2018). In the context of migration, they include governments categorizing, profiling, and monitoring migrants’ identities (see Ajana, 2015) – at a political moment when the growing number of socio-economic migrants, refugees, and forcibly displaced people around the world has been met with increased surveillance and datafication.

I started engaging with topics of information and migration in 2015, in collaboration with my colleagues Ricardo Gomez (University of Washington Information School) and Bryce Newell (University of Oregon School of Journalism and Communication). We have been observing the work and information practices of small humanitarian organizations that help irregular migrants survive, live, and thrive in the United States. These organizations frequently do not have the resources or expertise to fully address the implications and challenges of collecting, storing, and using data about the populations they serve. The chances that their information practices will exacerbate the threats migrants already face are very high: in the U.S. undocumented migration context, data disclosure can mean detention and deportation. At the southern border between the U.S. and Mexico, it can also heighten people’s vulnerability to human traffickers, smugglers, and drug cartels.

The current paradigm under which humanitarian actors operate is insufficient for the challenges of the digital space: the principle of “do no harm” that guides humanitarian action is not reflected in the way they secure data (see also: Sandvik and Raymond, 2017), for three main reasons:

1) Humanitarian organizations’ informational practices present both technological and human-related risks, ranging from data breaches, hacks, and leaks to negligence and the lack of data-handling protocols.

2) Their current expectation of privacy self-management/informed consent by vulnerable populations is problematic. The concept is based on the ideal of a person who is (i) informed; (ii) able to make decisions in their best self-interest about giving or withholding consent to the collection, use, and disclosure of personal data, including weighing the short- and long-term consequences of such consent; and (iii) in a position to freely choose whether or not to consent to data disclosure, without facing consequences.

3) There is a lack of robust, actionable, privacy-related guidelines specific to humanitarian information activities, particularly when applied to transnational, irregular migration.

Photo: Portion of the border wall dividing Arizona (US) and Sonora (Mexico) at Sasabe

The principle of “do no harm” that guides humanitarian action needs, then, to be reframed to guide humanitarian information activities in the digital space. Based on field observations and interviews at several U.S.-based humanitarian organizations, higher education institutions, and nonprofit organizations that provide support to undocumented migrants (see Gomez et al., 2020; Vannini et al., 2019a, 2019b), we have drafted five recommendations to strengthen the privacy protection offered to undocumented migrants and other communities that might be disenfranchised by systems of oppression (see: https://sites.uw.edu/rgomez/mind-the-five/):

1) Exercise prudence: collect as little personal information as possible, and avoid collecting information when there is no identified and immediate use for it. Practice data minimalism (a concrete sketch covering this and the following recommendation appears after this list).

2) Protect and secure personal information: develop strategies and expertise to protect digital data from both technical and human risks of disclosure.

3) Provide training about data privacy and security: staff and volunteers, even short-term ones, should be trained in secure data collection and management. Do not rely solely on staff members’ and volunteers’ prior expertise; develop expertise internal to your organization.

4) Share alike: collaborate with other similar organizations that share the same concerns and principles.

5) Practice non-discrimination: make sure you can serve even those individuals who decide not to consent to sharing their information, whether digitally or at all.
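
To make the first two recommendations concrete, here is a minimal sketch of what data minimalism and encrypted storage might look like in practice. It is illustrative only: the allow-listed fields, record structure, and helper names are hypothetical, and it assumes Python with the `cryptography` package; any real deployment would need proper key management and its own threat model.

```python
import json
from cryptography.fernet import Fernet

# Hypothetical allow-list: keep only fields with an identified,
# immediate use (recommendation 1); never collect direct identifiers
# such as names, addresses, or immigration status.
ALLOWED_FIELDS = {"service_requested", "visit_date", "language"}

def minimize(record: dict) -> dict:
    """Drop every field that is not on the allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Encrypt the little that is stored (recommendation 2), so a stolen
# laptop or leaked file does not expose records in the clear.
key = Fernet.generate_key()  # in practice, keep the key off the data device
cipher = Fernet(key)

def store(record: dict) -> bytes:
    """Minimize, serialize, and encrypt a record for storage."""
    return cipher.encrypt(json.dumps(minimize(record)).encode("utf-8"))

def load(token: bytes) -> dict:
    """Decrypt and deserialize a stored record."""
    return json.loads(cipher.decrypt(token).decode("utf-8"))

# Example: the name is dropped before anything touches the disk.
intake = {"name": "J. Doe", "service_requested": "legal aid",
          "visit_date": "2019-06-01", "language": "es"}
print(load(store(intake)))  # {'service_requested': 'legal aid', ...}
```

The point is not the particular library, but the habit it encodes: decide up front what you will not collect, and assume that whatever you do store can be lost.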

These recommendations are not directed only at humanitarian organizations. As researchers in ICT4D, we continually deal with data from non-dominant, underserved, and oppressed communities. Research has shown that ICTs tend to perpetuate and, in some cases, amplify social structures of oppression. Furthermore, since the Cambridge Analytica scandal, it has become clear that digital data can be aggregated and combined in unexpected and unprecedented ways. As ICT4D researchers, we need to actively ensure that our best intentions do not expose people to greater risks, now and in the future.

Author Biography: Sara Vannini is a Lecturer at the University of Washington, Department of Communication. She has been working on this research as part of a team with Ricardo Gomez (Associate Professor at University of Washington Information School) and Bryce Newell (Assistant Professor at University of Oregon, School of Journalism and Communication).


References

Ajana, B. (2015). Augmented borders: Big data and the ethics of immigration control. Journal of Information, Communication and Ethics in Society, 13(1), 55–78. https://doi.org/10.1108/JICES-01-2014-0005

Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.

Gomez, R., Newell, B. C., & Vannini, S. (2020). Empathic humanitarianism: Understanding the motivations behind humanitarian work with migrants at the US-Mexico border. Journal on Migration and Human Security. https://doi.org/10.1177/2331502419900764

Marwick, A. E., & Boyd, D. (2018). Understanding privacy at the margins: Introduction. International Journal of Communication, 12, 9.

Sandvik, K. B., & Raymond, N. A. (2017). Beyond the protective effect: Towards a theory of harm for information communication technologies in mass atrocity response. Genocide Studies and Prevention: An International Journal, 11(1), 9–24. https://doi.org/10.5038/1911-9933.11.1.1454

Vannini, S., Gomez, R., Lopez, D., Mora, S., Morrison, C., Tanner, J., Youkhana, L., Vergara, G., & Moreno Tafurt, M. (2019a). Humanitarian organizations’ information practices: Procedures and privacy concerns for serving the undocumented. The Electronic Journal of Information Systems in Developing Countries (EJISDC).

Vannini, S., Gomez, R., & Newell, B. C. (2019b). Mind the Five: Guidelines for data privacy and security in humanitarian work with irregular migrants and vulnerable populations. Journal of the Association for Information Science and Technology (JASIST). https://doi.org/10.1002/asi.24317