06 Jul
Artificial Intelligence AIDA

MEPs demand human oversight of AI systems, open algorithms and public audits

The use of Artificial Intelligence (AI) in law enforcement and the judiciary should be subject to strong safeguards and human oversight, says the Civil Liberties Committee.

In a draft report adopted on 29 June 2021 with 36 votes in favour, 24 against and 6 abstentions, MEPs highlight the need for democratic guarantees and accountability in the use of Artificial Intelligence (AI) in law enforcement.


MEPs worry that the use of AI systems in policing could lead to mass surveillance, breaching the key EU principles of proportionality and necessity. The committee also warns that otherwise lawful AI applications could be repurposed for mass surveillance.

The draft resolution highlights the potential for bias and discrimination in the algorithms on which AI and machine-learning systems are based.

Because a system’s results depend on its inputs (such as training data), algorithmic bias must be taken into account.

Currently, AI-based identification systems are often inaccurate and can misidentify members of minority ethnic groups, LGBTI people, seniors and women, among other groups. In addition, AI-powered predictions can amplify existing discrimination, a particular concern in the context of law enforcement and the judiciary.

Use of facial recognition 

Addressing specific techniques available to the police and the judiciary, the committee notes that AI should not be used to predict behaviour based on past actions or group characteristics.

On facial recognition, MEPs note that different systems have different implications. They demand a permanent ban on the use of biometric features such as gait, fingerprints, DNA or voice to recognise people in publicly accessible spaces.

The committee wants to ban law enforcement from using private facial recognition databases, such as Clearview AI, which is already in use. MEPs also call for a ban on AI-based social scoring of citizens, stressing that it would violate the principle of human dignity. Finally, MEPs state that facial recognition should not be used for identification until such systems comply with fundamental rights.

The use of biometric data for remote identification is of particular concern to MEPs. For example, automated recognition-based border control gates and the iBorderCtrl project (a “smart lie-detection system” for traveller entry to the EU) are problematic and should be discontinued, say MEPs, who urge the Commission to open infringement procedures against member states if necessary.

Next steps

The non-legislative report will be up for debate and vote during the September plenary session (13-16 September 2021).

By: Estela Martín

LinkedIn Top Voices Spain 2020. Communications Director & CSR at ...
