Around the world, large numbers of people are on the move, fleeing their home countries due to conflict, civil unrest, and instability. Over the last decade, the number of people seeking asylum in the European Union (EU) has risen sharply.
Europe’s refugee crisis has been a topic of heated debate and has revealed serious fault lines in the EU’s migration project.
Member-state governments have been unable to reach a consensus on how to share responsibility for welcoming asylum seekers, and they continue to blame one another for the seemingly uneven distribution of arrivals. In response to this crisis, the EU is now conducting wide-ranging technological experiments, such as using Big Data to predict movements of people, implementing automated decision-making to process immigration applications, and deploying Artificial Intelligence-based lie detectors and risk scoring at EU borders.
The EU justifies the use of these technologies and the digitalisation of border control by claiming that they help ensure the ‘efficient management’ of migration and the fair distribution of asylum seekers. Technology has thus become a central focus of European migration policy. In September 2020, the EU Commission published the “Pact on Migration and Asylum”, which discussed a study “on the technical feasibility of adding a facial recognition software…for the purposes of comparing facial images, including of minors” for migration management. The Pact also proposed several other measures, such as a
“pre-entry screening process with biometric data, security, health, and vulnerability checks; and strengthening the mandate of FRONTEX – the European Border and Coast Guard Agency by equipping it with new technological tools.”
However, the EU has failed to take into account the negative impact of these technological experiments on human lives. Greece, Italy, and Spain have become spaces where new AI-based surveillance technology, aerostats, and drones are piloted and tested to make borders harder to cross and to automate different facets of the refugee regime, with little thought for human dignity or procedural fairness. According to a recent report, Technological Testing Grounds, “Biometrics like iris scanning are increasingly being rolled out in humanitarian settings – where refugees, on top of their already difficult living conditions, are required to get their eyes scanned in order to eat. Not even private information is safe – social media scraping and mobile phone tracking to screen immigration applications is becoming common practice.”
Such surveillance and data collection not only raises privacy concerns; relying on information gathered from people’s electronic devices to verify identity and assess credibility is itself deeply worrying. More troubling still, the data collected is increasingly used by states to predict population movements and is then misused by governments to justify anti-immigration policies.
The EU has also piloted a project called iBorderCtrl, which essentially places an AI-based lie detector at border checkpoints, eliminating the need for human guards at the border. The system monitors the faces of asylum seekers for anomalies while they answer questions. This is problematic because lie detectors are, in general, unreliable. More importantly, it is unclear how the system can account for differences in cross-cultural communication, the impact of trauma on memory, or the fact that stories cannot always be recalled in a linear way.
Concerns have been raised over the inability of these systems to detect trauma suffered by asylum applicants and the possibility of exacerbating racial and ethnic inequalities in immigration enforcement. Petra Molnar, author of Technological Testing Grounds, has rightly pointed out that, “Refugee claims and immigration applications are filled with nuance and complexity, qualities that may be lost on automated technologies, leading to serious breaches of internationally and domestically protected human rights in the form of bias, discrimination, privacy breaches, and due process and procedural fairness issues, among others.”
Ultimately, migration management as a project is about making people trackable, identifiable, and controllable. These practices map onto the historical ways in which certain communities, such as people of colour, have been marginalised, while the state remains the powerful entity that determines which priorities count. Meanwhile, the regulatory and legal regime around the use of these technologies remains murky and weak, marked by discretionary decision-making, uncertainty, and a lack of oversight.
There is no universal regulatory regime governing the use of new technologies for migration management. While ethics is often invoked to develop principles for governing technology, a framework based on ethics alone is inadequate to address misuse by state governments: ethical principles are vaguely defined, lack enforcement mechanisms, and carry no concrete legal duties.
What is needed is a robust regulatory framework with oversight and accountability mechanisms, guided by international human rights law and attentive to the high-risk nature of technologies deployed for migration management. A holistic framework with human rights at its centre is possible only when all stakeholders affected by border-control technologies are involved. Affected communities, civil society organisations, and human rights lawyers must therefore take part in building the regulatory regime alongside governments and private-sector actors. Involving people who have been negatively affected by such technology and who have experienced displacement will help shape the discussion of how new technology should be integrated into border security and management.
Creating a regulatory framework will be a time-consuming process; in the meantime, states must ban the use of all automated technologies for migration management. The ban should remain in place until proper human rights impact assessments of these technologies become possible. Lastly, any new regulation must recognise the power imbalances inherent in technological migration management and commit to addressing racial bias and discrimination.