Informed consent and algorithmic discrimination – is giving away your data the new vulnerable?
Research Article
Hauke Behrendt, Wulf Loh
Review of Social Economy, 1 March 2022
Abstract
This paper discusses various forms and sources of algorithmic discrimination. In particular, we explore the connection between the – at first glance – ‘voluntary’ sharing or selling of one’s data on the one hand and the potential risks of automated decision-making based on big data and artificial intelligence on the other. We argue that the implementation of algorithm-driven profiling or decision-making mechanisms will, in many cases, disproportionately disadvantage certain vulnerable groups that are already disadvantaged by many existing datafication practices. We call into question the voluntariness of these mechanisms, especially for certain vulnerable groups, and claim that members of such groups are oftentimes more likely to give away their data. If these existing datafication practices exacerbate prior disadvantages, they ‘compound historical injustices’ (Hellman, 2018) and thereby constitute forms of morally wrong discrimination. To make matters worse, members of these groups are even more prone to further algorithmic discrimination based on the additional data collected from them.
