The International Observatory on Vulnerable People in Data Protection

Resources

Disclaimer: The resources are arranged in thematic sections to ease consultation. We are aware that these sections intersect and overlap, and that AI is generating new categories beyond the traditional ones. We are also aware that pre-identifying categories of vulnerable people in data processing is not possible, since vulnerability is a largely contextual and elusive concept. We do not claim to be exhaustive; rather, we aim to initiate a discussion.

Do you have any suggestions on how to improve the repository? Have you come across a resource that should be included in it? Contact us.

Non-discrimination, AI and data protection

- Academic writings

Van Bekkum, M., & Zuiderveen Borgesius, F. (2023). Using sensitive data to prevent AI discrimination: Does the EU GDPR need a new exception? Computer Law & Security Review, 48, 105770. https://doi.org/10.1016/j.clsr.2022.105770

Buolamwini, J. (2023). Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Random House.

Caplan, R., et al. (2018). Algorithmic Accountability: A Primer. Data & Society report. https://datasociety.net/library/algorithmic-accountability-a-primer/

Costa, R. S., & Kremer, B. (2022). Inteligência artificial e discriminação: desafios e perspectivas para a proteção de grupos vulneráveis frente às tecnologias de reconhecimento facial [Artificial intelligence and discrimination: Challenges and perspectives for the protection of vulnerable groups facing facial recognition technologies]. Revista Brasileira de Direitos Fundamentais & Justiça, 16(1). https://doi.org/10.30899/dfj.v16i1.1316

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press. https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/

Floridi, L. (2023). The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press.

Kamiran, F., Calders, T., & Pechenizkiy, M. (2013). Techniques for Discrimination-Free Predictive Models. In Custers et al. (Eds.), Discrimination and Privacy in the Information Society. Heidelberg: Springer. https://doi.org/10.1007/978-3-642-30487-3_12

Kasy, M., & Abebe, R. (2021, March). Fairness, equality, and power in algorithmic decision-making. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 576-586. https://doi.org/10.1145/3442188.3445919

Krupiy, T. (2021). Why the Proposed Artificial Intelligence Regulation Does Not Deliver on the Promise to Protect Individuals from Harm. European Law Blog. https://europeanlawblog.eu/2021/07/23/why-the-proposed-artificial-intelligence-regulation-does-not-deliver-on-the-promise-to-protect-individuals-from-harm/

Krupiy, T. (2020). A Vulnerability Analysis: Theorising the Impact of Artificial Intelligence Decision-Making Processes on Individuals, Society and Human Diversity from a Social Justice Perspective. Computer Law & Security Review, 38, 105429. https://doi.org/10.1016/j.clsr.2020.105429

Leavy, S. (2018). Gender Bias in Artificial Intelligence: The Need for Diversity and Gender Theory in Machine Learning. 2018 IEEE/ACM 1st International Workshop on Gender Equality in Software Engineering (GE), 14–16.

Mesch, G. S., & Dodel, M. (2018). Low self-control, information disclosure, and the risk of online fraud. American Behavioral Scientist, 62(10), 1356–1371. https://doi.org/10.1177/0002764218787854

Mullaney et al. (Eds.) (2021). Your Computer Is on Fire. MIT Press. https://direct.mit.edu/books/book/5044/Your-Computer-Is-on-Fire

Patton, D. U., Brunton, D. W., Dixon, A., Miller, R. J., Leonard, P., & Hackman, R. (2017). Stop and frisk online: Theorizing everyday racism in digital policing in the use of social media for identification of criminal conduct and associations. Social Media + Society, 3(3). https://doi.org/10.1177/2056305117733344

Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4, 100005. https://doi.org/10.1016/j.jrt.2020.100005

Tzanou, M. (2020). The Future of EU Data Privacy Law: Towards a More Egalitarian Data Privacy. Journal of International and Comparative Law, 7(2), 449–470. https://ssrn.com/abstract=3710528

Van Bekkum, M., & Zuiderveen Borgesius, F. (2022). Using sensitive data to prevent discrimination by artificial intelligence: Does the GDPR need a new exception? Available at SSRN: https://ssrn.com/abstract=4104823 or http://dx.doi.org/10.2139/ssrn.4104823

- NGO Reports & Articles

- Data Protection Authorities' Guidance

- Laws

- Case Law

- European Court of Human Rights

- Court of Justice of the European Union

- Policy documents

- Global Developments

- Others