Talk at the 10th annual Symposium on Ethics in the Age of Smart Systems 2021-04-19.
Speakers
- Petra Molnar, Refugee Law Lab, York University, Canada, and Migration and Technology Monitor
- Jan Tobias Muehlberg, imec-DistriNet, Department of Computer Science, KU Leuven, Belgium
Abstract
Experiments with new technologies in migration management are increasing. From Big Data predictions about population movements in the Mediterranean, to Canada’s use of automated decision-making in immigration and refugee applications, to artificial intelligence lie detectors deployed at European borders, States are keen to explore the use of new technologies, yet often fail to take into account profound human rights ramifications and real impacts on human lives.
This talk builds on our work on automated decision-making and surveillance technologies used in migration in Canada [1] and Europe [2] and examines how technologies used in the management of migration impinge on human rights with little international regulation. We argue that this lack of regulation is deliberate, as States single out populations on the move as a viable testing ground for new technologies. Making people who cross borders more trackable and intelligible justifies the use of more technology and data collection under the guise of national security, or even under tropes of humanitarianism and development.
These technologies are largely unregulated, developed and deployed in opaque spaces with little oversight or accountability. Now, as governments move toward bio-surveillance to contain the spread of the COVID-19 pandemic, we are seeing an increase in tracking projects and automated drones. If previous use of technology is any indication, refugees and people crossing borders will be disproportionately targeted and negatively affected. Proposed tools such as ‘virus-targeting’ robots, cellphone tracking, and AI-based thermal cameras have all been used against people crossing borders, with far-reaching human rights impacts. In addition to violating the rights of the people subject to these technological experiments, the interventions themselves do not live up to the promises and arguments used to justify them. This use of technology to manage and control migration is also shielded from scrutiny because of its emergency framing. Moreover, the basic protections available to politically more powerful groups, which have access to mechanisms of redress and oversight, are often not available to people crossing borders. The current global digital rights space also does not sufficiently engage with migration issues, at best tokenising the involvement of both migrants and the groups working with this community.
The way technology operates offers a useful lens on State practices, democracy, notions of power and accountability, and the responses of marginalised communities. Technology is not inherently democratic, and its human rights impacts are particularly important to consider in humanitarian and forced migration contexts. An international human rights law framework is especially useful for codifying and recognising potential harms, because technology and its development are inherently global and transnational. More oversight and issue-specific accountability mechanisms are needed to safeguard the fundamental rights of migrants, including freedom from discrimination, privacy rights, and procedural justice safeguards such as the right to a fair decision-maker and the right of appeal.
References
- [1] https://ihrp.law.utoronto.ca/sites/default/files/media/IHRP-Automated-Systems-Report-Web.pdf
- [2] https://edri.org/wp-content/uploads/2020/11/Technological-Testing-Grounds.pdf
Last modified: 2022-05-09 22:20:35 +0200