
Carnegie Mellon University: End Users Deserve “Right to Explanation” About How Algorithmic Decision-Making Models Profile Them

Social media and the internet advertising industry now run almost entirely on algorithmic decision-making models that attempt to determine who the end user is, how their mind works and what they will be most receptive to (and engage with). Researchers at Carnegie Mellon University, fresh off an analysis of these models published in Business Ethics Quarterly, are now advocating for a “right to explanation” to shed light on these secretive models, which influence the mood, behavior and even actions of millions of people around the world each day.

The researchers examine this proposed right within the framework of existing General Data Protection Regulation (GDPR) rules, drawing a comparison to the established “right to be forgotten” (also a feature of certain other national data protection laws). Among other ideas, the paper imagines a new position of “data interpreter” to serve as a good-faith liaison between the public and the output of these opaque algorithmic decision-making models.

Right to explanation championed by researchers

The use of algorithmic decision-making is well documented among social media platforms and search tools such as Google and Bing, but it is also used for matching and pricing systems in ridesharing and delivery apps. It also has a variety of functions in ad tech, not the least of which is profiling people as they move about the internet and determining what ads are most relevant to them. Another point of concern is its use in job search systems, where there have been claims of bias based on ethnicity and other demographic categories.

As the study points out, the European Union’s GDPR does directly regulate these systems. However, much of this regulation consists of broad language, and it does not directly address a “right to explanation” for data subjects; the closest it comes is requiring, in certain circumstances, some disclosure of how collected data is used.

The Carnegie researchers believe that the existing “right to be forgotten” provisions in the GDPR can be interpreted in such a way that EU data subjects should be entitled to a right to explanation. The “right to be forgotten,” established primarily under Article 17 of the GDPR, grants EU data subjects the right to have certain types of personal data deleted upon request; the companion Article 15 grants the right to access that data.

A cornerstone of both Article 15 (the right to access personal data) and Article 17 is consent: the data subject’s consent serves as a lawful basis for collecting and processing personal data. Some researchers take the position that one cannot meaningfully consent to data collection and use when it is carried out by algorithmic decision-making technology whose workings the user is not allowed to see or understand. This is doubly true when the algorithm changes over time, perhaps drifting beyond the scope of what the end user originally consented to.

Technology companies are naturally resistant to a right to explanation of their algorithmic decision-making, in no small part because these models are valuable trade secrets. But as the study notes, some have voluntarily provided at least a limited form of explanation in the interest of public relations. One example is Facebook, which in late 2019 added a “Why am I seeing this ad?” link to targeted advertisements. This reveals very general connections, such as locations the user may have checked in at or pages and ads they have previously interacted with, but it does not delve into the technical workings of profiling or how ads are prioritized for an individual.

A legal basis for unmasking algorithmic decision-making?

Though a number of researchers agree that the GDPR’s present terms support the idea of a right to explanation, there is much disagreement on exactly what form it could legally take.

There is some debate over whether the right to explanation should take the form of an explanation of system functionality, a mechanical breakdown of the algorithmic decision-making that could very well encompass trade secrets, or a “post facto,” decision-based explanation in which the variables that led to an individual outcome are explained to the user upon request, somewhat similar to what Facebook has already put in place (a simplified sketch of this approach follows).
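
To make the second option concrete, here is a minimal sketch of what a “post facto,” decision-based explanation might look like for a simple linear scoring model. Everything here is hypothetical: the feature names, weights, and threshold are invented for illustration and do not describe any real platform’s system.

```python
# Hypothetical sketch: a decision-based ("post facto") explanation that
# reports only the factors behind one outcome, not the model's full mechanics.

def explain_decision(features, weights, threshold=0.5, top_k=3):
    """Return the decision and the features that contributed most to it."""
    # Per-feature contribution to the overall score for this one user/ad pair.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "show ad" if score >= threshold else "suppress ad"
    # Rank features by the magnitude of their influence on this decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)
    return decision, ranked[:top_k]

# Invented profile data for one user and one ad.
features = {"visited_advertiser_page": 1.0, "age_in_target_range": 1.0,
            "checked_in_nearby": 0.0, "clicked_similar_ads": 1.0}
weights = {"visited_advertiser_page": 0.4, "age_in_target_range": 0.1,
           "checked_in_nearby": 0.3, "clicked_similar_ads": 0.2}

decision, reasons = explain_decision(features, weights)
print(decision)  # "show ad"
for name, contribution in reasons:
    print(f"{name}: {contribution:+.2f}")
```

An explanation of this kind discloses only the variables that drove a single outcome rather than the workings of the model itself, which is why it is often viewed as more compatible with trade secret protections than a full system-functionality disclosure.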

The paper ultimately does not take a side on exactly what the GDPR’s terms allow, instead making a case for a moral obligation to provide explanations both before and after algorithmic decision-making models process an individual’s data. However, it also acknowledges a company’s right to protect its technology and trade secrets. Among other suggestions, a specialist “data interpreter” at each organization might bridge this gap, ensuring that “right to explanation” duties are fulfilled in a way that balances both needs while providing an ongoing, central point of contact for public concerns.
