AI: Ethics and Algorithms
by Jennifer Baker, EU Policy Correspondent

What will shape the EU’s plans and who’s pushing back?

In October, Germany's Data Ethics Commission published its position on ethics and AI. With the new European Commission tipped to put forward a comprehensive AI policy within its first 100 days in office, the proposals have attracted considerable scrutiny.

Klaus Müller, CEO of the German consumer rights organisation vzbv, explained: “We expect this report – which was drafted by experts for the German Ministries of the interior and justice/consumers – to influence the plans of the European Commission. At the presentation of the report people already said that its findings should now become part of the EU-level debate.”

There are several reasons to take the report seriously. Incoming Commission President Ursula von der Leyen has been explicit about her plans for “horizontal” legislation on artificial intelligence and, as the former German defence minister, was part of the government that commissioned the report.

The leader of that government, German Chancellor Angela Merkel, has also gone on the record saying that the EU should establish strong AI rules building on the success of the General Data Protection Regulation (GDPR). Even Silicon Valley has one eye on proceedings as an indicator of which way the EU will jump.

In his parliamentary hearing, Justice Commissioner-designate Didier Reynders also promised draft AI legislation within 100 days. His boss, Commission Vice President Margrethe Vestager, has likewise said that ethics must be at the heart of AI policy.

So far, so good. The digital rights NGO European Digital Rights (EDRi) welcomed the Commissioners-designate's focus on fundamental rights when implementing AI-based technologies.

According to the German proposals, AI systems fall into three distinct levels of algorithmic involvement in decision-making, based on how tasks are divided between the human and the machine:

  • Decisions are human decisions, based either in whole or in part on information obtained using algorithmic calculations.
  • Decisions are human decisions shaped by the outputs of algorithmic systems in such a way that the human’s factual decision-making abilities and capacity for self-determination are restricted.
  • Algorithm-determined decisions trigger consequences automatically and no provision is made for a human decision in the individual case.
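
For readers who think in code, these three levels can be restated as a simple classification. The sketch below is purely illustrative: the class and member names are my own shorthand for the report's categories, not terminology drawn from the report itself.

```python
from enum import Enum


class AlgorithmicInvolvement(Enum):
    """Illustrative shorthand for the report's three levels of algorithmic
    involvement in decision-making (names paraphrased, not official)."""

    # A human decides, drawing in whole or in part on information
    # obtained using algorithmic calculations.
    ALGORITHM_INFORMED = 1

    # A human formally decides, but the system's outputs shape the decision
    # in a way that restricts the person's factual decision-making ability
    # and capacity for self-determination.
    ALGORITHM_SHAPED = 2

    # The algorithm-determined decision triggers consequences automatically,
    # with no provision for a human decision in the individual case.
    ALGORITHM_DETERMINED = 3
```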

The report also applies categories of risk, from level 1 (no risk) to level 5 (high risk).

According to the proposals, level 1 applications, those with zero or negligible potential for harm, would require no special measures. Level 2 applications, those with some potential for harm, would warrant transparency obligations, publication of a risk assessment, or monitoring and audit procedures. Level 3 applications, those with regular or significant potential for harm, would need additional measures such as ex-ante approval procedures.

Applications at level 4 are those defined as having “serious potential for harm” and would need additional monitoring, “such as live interface for always on oversight by supervisory institutions.” Level 5 applications with “an untenable potential for harm” would face a complete or partial ban of the system.
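
To make the tiering concrete, the sketch below restates the five risk levels and their associated measures as a simple lookup table. It is an illustrative sketch only: the structure and names such as RISK_LEVELS and measures_for are my own, and the wording is paraphrased from the summary above rather than taken from the report or any legal text.

```python
# Illustrative mapping of the report's five risk levels to the kinds of
# measures described above (wording paraphrased for illustration only).
RISK_LEVELS = {
    1: {"harm": "zero or negligible", "measures": ["no special measures"]},
    2: {"harm": "some", "measures": ["transparency obligations",
                                     "publication of a risk assessment",
                                     "monitoring and audit procedures"]},
    3: {"harm": "regular or significant",
        "measures": ["ex-ante approval procedures"]},
    4: {"harm": "serious",
        "measures": ["continuous oversight by supervisory institutions"]},
    5: {"harm": "untenable",
        "measures": ["complete or partial ban of the system"]},
}


def measures_for(level):
    """Return the measures associated with a risk level from 1 to 5."""
    if level not in RISK_LEVELS:
        raise ValueError("risk level must be between 1 and 5")
    return RISK_LEVELS[level]["measures"]


if __name__ == "__main__":
    for level, info in RISK_LEVELS.items():
        print(f"Level {level} ({info['harm']} potential for harm): "
              + "; ".join(info["measures"]))
```

The point of the table is simply that obligations scale with the assessed potential for harm, which is exactly the part of the proposal, levels 4 and 5, that industry finds contentious.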

It is these last two levels that have provoked the strongest reaction from industry lobbyists, who worry that the new law could curtail the use of such systems.

Germany’s digital association Bitkom is worried about what it calls “excessive regulation.” The association says the proposals for the regulation of algorithms are too far-reaching, and that any obligation to provide access to data should be the exception rather than the rule.

Bitkom president Achim Berg said that the German expert group should have identified a narrow group of high-risk algorithms instead of putting all algorithms under suspicion. He feels that a risk assessment for every algorithm would be both impossible to implement and unnecessary. “To avoid abuse or corruption, existing law such as consumer and contractual law, anti-discrimination and liability rules, or data protection laws should be applied. What we need is a better understanding of algorithms, not their prohibition,” he said.

However, it is worth pointing out that the report’s “pyramid of risk” places the vast majority of algorithms in the level 1 category, with only a tiny number expected to fall into the banned level 5 category.

Berg is also worried about mandatory data access. “We cannot force companies to provide their data, for example, to competitors. Our international competitors would be delighted to get wide access to our data, but it should only ever be targeted and well justified,” he added.

Meanwhile, the Center for Data Innovation takes the view that “ethics cannot be enforced.” CDI Senior Policy Analyst Eline Chivot explained: “The German Data Ethics Commission’s recommendations on AI send a worrying signal to businesses that they risk adopting AI at their own peril. Businesses are already subject to the often stringent rules and impractical requirements set by the GDPR, and adding another layer of regulation for how companies use automated systems will have a chilling effect on AI adoption in the EU. The German recommendations, particularly calls for economy-wide rules on AI rather than industry-specific ones, are yet another cue that policymakers intend to treat firms using AI as a liability rather than an asset.”

She continued: “Europe wants to be more competitive in the digital economy. But it cannot substitute regulation for innovation. Rather than trying to achieve competitiveness in AI through policies designed to disadvantage foreign providers and promote European digital sovereignty, European policymakers should instead focus on developing an AI strategy that invests in people, data, and digital infrastructure, and creates a more innovation-friendly regulatory environment, so that European firms can better compete with China and the United States.”

But according to BEUC, the European consumer umbrella group: “AI is changing the way markets and societies function. AI evokes big promises to make our lives easier and our societies better. It is powering a whole range of new types of products and services, from digital assistants to autonomous cars and all sorts of ‘smart’ devices. All this can bring benefits for consumers, but the widespread use of AI also raises many concerns. Consumers are at risk of being manipulated and becoming subject to discriminatory treatment and arbitrary, non-transparent decisions.”

The new European Commission is currently stalled for political reasons, but at least we have a potential sneak preview of what the eventual EU AI draft could look like.