The European Parliament has voted to back a total ban on biometric mass surveillance.
AI-powered remote surveillance technologies such as facial recognition have huge implications for fundamental rights and freedoms like privacy, yet they are already creeping into public use across Europe.
To respect “privacy and human dignity”, MEPs said that EU lawmakers should pass a permanent ban on the automated recognition of individuals in public spaces, saying citizens should only be monitored when suspected of a crime.
The parliament has also called for a ban on the use of private facial recognition databases — such as the controversial AI system created by US startup Clearview (also already in use by some police forces in Europe) — and said predictive policing based on behavioural data should also be outlawed.
MEPs also want to ban social scoring systems which seek to rate the trustworthiness of citizens based on their behaviour or personality.
Back in April, the EU’s executive presented draft legislation for regulating high-risk uses of artificial intelligence technology, which included a ban on social scoring and an in-principle prohibition on the use of remote biometric surveillance in public.
However, civil society groups, the European Data Protection Board and European Data Protection Supervisor, and a number of MEPs quickly warned that the Commission’s proposal did not go far enough.
The parliament as a whole has now made it clear it also wants stronger safeguards for fundamental rights.
In a resolution adopted last night — with the parliament voting 377:248 in favor of the LIBE committee’s report on Artificial Intelligence in criminal law — parliamentarians sent a strong signal over what they will accept in the upcoming negotiations between EU institutions that will nail down the details of the Artificial Intelligence Act.
The relevant paragraph on remote biometric surveillance calls on the Commission to:
…implement, through legislative and non-legislative means, and if necessary through infringement proceedings, a ban on any processing of biometric data, including facial images, for law enforcement purposes that leads to mass surveillance in publicly accessible spaces; calls further on the Commission to stop funding biometric research or deployment or programmes that are likely to result in indiscriminate mass surveillance in public spaces
The resolution also takes aim at algorithmic bias, calling for human supervision and strong legal powers to prevent discrimination by AI, especially in law enforcement and border-crossing contexts.
Human operators must always make the final decisions, MEPs agreed, saying that subjects monitored by AI-powered systems must have access to remedy.
To ensure fundamental rights are upheld when using AI-based identification systems — which MEPs noted have been shown to misidentify minority ethnic groups, LGBTI people, seniors and women at higher rates — algorithms should be transparent, traceable and sufficiently documented, they also said.
They also called for public authorities to use open-source software wherever possible, in order to be more transparent.
MEPs also targeted a controversial EU-funded research project — to create a ‘smart’ lie-detector based on analyzing facial expressions — saying the iBorderCtrl project should be discontinued.
Commenting in a statement, rapporteur Petar Vitanov (S&D, BG) said: “Fundamental rights are unconditional. For the first time ever, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on the use of AI as well as any processing of biometric data that leads to mass surveillance. This is a huge win for all European citizens.”
The Commission has been contacted for comment on the vote.
The parliament’s resolution also calls for a ban on AI assisting judicial decisions, another highly controversial area where automation is already being applied, risking the cementing and scaling of systemic biases in criminal justice systems.
Global human rights charity Fair Trials welcomed the vote, calling it a “landmark result for fundamental rights and non-discrimination in the technological age”.