The Brussels Privacy Hub has launched an appeal calling for a fundamental rights 
impact assessment to be included in the EU law on artificial intelligence. The 
appeal has gathered more than 100 signatures from distinguished academics.



The Brussels Privacy Hub and over 100 academics from across Europe and beyond 
are asking that the upcoming EU Artificial Intelligence Regulation (AI Act) 
include a requirement for a fundamental rights impact assessment (FRIA). The 
European Parliament's proposal already leans in this direction, but it risks 
being watered down during the trilogue, when the three institutions will meet 
to finalise the text. In the Parliament's proposal, the FRIA requirement 
applies only to high-risk AI systems, in both the public and private sectors.

The appeal was launched by Brussels Privacy Hub Co-Director Gianclaudio 
Malgieri (Leiden University), Alessandro Mantelero (Politecnico di Torino), 
and Vincenzo Tiani (Brussels Privacy Hub). The more than 100 signatories, 
recognised authorities on AI, data protection, and fundamental rights, many of 
whom already advise national, European, and international institutions, call 
for maintaining the Parliament's version and, in particular, for ensuring:

1) clear parameters for assessing the impact of AI on fundamental rights;

2) transparency about the results of the impact assessment through meaningful 
public summaries;

3) participation of affected end-users, especially those in a position of 
vulnerability;

4) involvement of independent public authorities in the impact assessment 
process and/or auditing mechanisms.


The letter is available here:

https://brusselsprivacyhub.com/2023/09/12/brussels-privacy-hub-and-other-academic-institutions-ask-to-approve-a-fundamental-rights-impact-assessment-in-the-eu-artificial-intelligence-act/
_______________________________________________
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa