
Council of Europe Drawing the Lines on Algorithmic Decision Making

On 8 April the Council of Europe published its guidelines on how to use algorithms and automation while protecting human rights.

The Council of Europe (CoE) is made up of 47 member states – no, it’s not a European Union institution, but a larger grouping with less legal authority and a strong track record of making robust recommendations to protect civil liberties, in particular data protection and the right to privacy.

The CoE opted to take a “precautionary approach” to “the development and use of algorithmic systems and adopt legislation, policies and practices” to ensure that they fully respect human rights. The “Precautionary Principle” is one that crops up a lot in EU environmental legislation. Essentially, it holds that where scientific evidence about a practice is insufficient, inconclusive or uncertain, and there are reasonable grounds for concern about potentially dangerous effects on the environment or on human, animal or plant health, stringent protections may be put in place without 100% scientific proof of harm. In short: better safe than sorry.

According to the European Commission: “Incomplete information, inconclusive evidence and public controversy can make it difficult to achieve consensus over the appropriate response to hazardous substances or activities, but these are precisely the sorts of conditions that often demand hard and fast decisions. The precautionary principle is designed to assist with decision-making under uncertainty.”

Although we are not talking about environmental law per se, the labels “incomplete information, inconclusive evidence and public controversy” can all apply to algorithmic decision making. The question of harm or damage applies equally when thinking about human rights.

In its recommendation, the CoE’s Committee of Ministers called on governments to ensure that they “do not breach human rights through their own use, development or procurement of algorithmic systems.”

The recommendation acknowledges the vast potential of algorithmic processes to foster innovation and economic development in numerous fields, including communication, education, transportation, governance and health systems, but says that “as regulators, [governments] should establish effective and predictable legislative, regulatory and supervisory frameworks that prevent, detect, prohibit and remedy human rights violations, whether stemming from public or private actors.”

Recently, there has been a lot of concern about how tracking and surveillance may gain a foothold due to the need to tackle the current COVID-19 pandemic. The CoE acknowledges that “algorithmic systems are being used for prediction, diagnosis and research on vaccines and treatments. Enhanced digital tracking measures are being discussed in a growing number of member States – relying, again, on algorithms and automation.”

“At the same time, the recommendation warns of significant challenges to human rights related to the use of algorithmic systems, mostly concerning the right to a fair trial; privacy and data protection; freedom of thought, conscience and religion; the freedoms of expression and assembly; the right to equal treatment; and economic and social rights,” continues the statement.

“Given the complexity, speed and scale of algorithmic development, the guidelines stress that member states must be aware of the human rights impacts of these processes and put in place effective risk-management mechanisms. The development of some systems should be refused when their deployment leads to high risks of irreversible damage or when they are so opaque that human control and oversight become impractical. Serious and unexpected consequences may occur due to the growing interdependence and interlocking of multiple algorithmic systems that are deployed in the same environments,” said the CoE.

However, what is often forgotten is that the EU’s General Data Protection Regulation (GDPR) already provides clarity on so-called “automated decision making.”

Article 22 states: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”

This means that, in effect, the GDPR restricts businesses from making solely automated decisions, including those based on profiling, that have a “significant” effect on individuals. “Significant” may be a vague term, but in keeping with the precautionary principle, a legitimate concern about potential harm may be enough to bring a decision within the restriction.
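To make the Article 22 restriction concrete, here is a minimal, purely illustrative sketch of how a business might gate “significant” automated decisions behind human review. Nothing here is prescribed by the GDPR or the CoE; the `Decision` type, the `significant` flag and the `human_review` callback are hypothetical names chosen for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str    # pseudonymous reference to the data subject
    outcome: str       # e.g. "approve" or "deny"
    significant: bool  # produces legal or similarly significant effects?

def finalise(decision: Decision,
             human_review: Callable[[Decision], Decision]) -> Decision:
    """Route decisions with significant effects to a human reviewer,
    so they are never based *solely* on automated processing."""
    if decision.significant:
        return human_review(decision)  # reviewer confirms, amends or overrides
    return decision                    # purely automated path is acceptable

# Usage: a credit denial is significant, so it must pass through a reviewer.
auto = Decision(subject_id="A123", outcome="deny", significant=True)
final = finalise(auto, human_review=lambda d: d)  # stub reviewer accepts as-is
```

Whether a given decision counts as “significant”, and whether the human involvement is meaningful rather than a rubber stamp, remain legal judgements that no flag in code can settle.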

Matters become more complicated outside the scope of the GDPR, where governments and state authorities are trying to battle a significant threat to the greater public good. However, the CoE says: “As a matter of principle, states should ensure that algorithmic systems incorporate safety, privacy, data protection and security safeguards by design. States must further carefully consider the quality and provenance of datasets, as well as inherent risks, such as the possible de-anonymisation of data, their inappropriate or de-contextualised use, and the generation of new, inferred, potentially sensitive data through automated means.”
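As one concrete illustration of what a “by design” safeguard might look like, a dataset could replace direct identifiers with keyed hashes before any algorithmic processing. This is a minimal sketch under my own assumptions, not a technique the CoE text specifies; and as the recommendation itself warns, pseudonymisation alone does not rule out de-anonymisation through the remaining attributes.

```python
import hashlib
import hmac
import os

# Hypothetical per-deployment secret; it must be stored separately from
# the dataset, otherwise reversing the pseudonyms becomes trivial.
SECRET_KEY = os.urandom(32)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed (HMAC-SHA256) hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# The stored record carries no direct identifier, only the pseudonym
# and coarsened attributes (an age band rather than a date of birth).
record = {"user": pseudonymise("alice@example.com"), "age_band": "30-39"}
```

Keyed hashing addresses only the most direct re-identification route; the dataset quality and provenance questions the CoE raises, such as inferred sensitive data, require review of the whole processing pipeline.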