Hub Co-Director Gianclaudio Malgieri and Frank Pasquale publish Working Paper ‘From Transparency to Justification: Toward Ex Ante Accountability for AI’. 

Abstract of the Working Paper:

“At present, policymakers tend to presume that AI used by firms is legal, and only investigate and regulate when there is suspicion of wrongdoing. What if the presumption were flipped? That is, what if a firm had to demonstrate that its AI met clear requirements for security, non-discrimination, accuracy, appropriateness, and correctability, before it was deployed? This paper proposes a system of “unlawfulness by default” for AI systems, an ex-ante model where some AI developers have the burden of proof to demonstrate that their technology is not discriminatory, not manipulative, not unfair, not inaccurate, and not illegitimate in its legal bases and purposes. The EU’s GDPR and proposed AI Act tend toward a sustainable environment of AI systems. However, they are still too lenient, and the sanction in case of non-conformity with the Regulation is a monetary sanction, not a prohibition. This paper proposes a pre-approval model in which some AI developers, before launching their systems into the market, must perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment proves that these systems are high-risk, an approval request (to a strict regulatory authority, like a Data Protection Agency) should follow. In other terms, we propose a presumption of unlawfulness for high-risk models, while the AI developers should have the burden of proof to justify why the AI is not illegitimate (and thus not unfair, not discriminatory, and not inaccurate). Such a standard may not seem administrable now, given the widespread and rapid use of AI at firms of all sizes. But such requirements could be applied, at first, to the largest firms’ most troubling practices, and only gradually (if at all) to smaller firms and less menacing practices.”

You can read the Working Paper at the following link: