AUTHOR: Alison Berthet

It’s clear that regulation of AI must start now, but why do emerging frameworks primarily talk about ethics rather than law and human rights?

Artificial intelligence. Everyone talks about it, many pretend to understand it, but how many of us truly measure its disruptive potential? Without being as alarmist as Yuval Noah Harari—who identifies the rise of AI and bioengineering as an existential threat to humankind on a par with nuclear war and climate change—the potential for AI to affect how we lead our lives is very real, for better and for worse.

It’s clear that regulating the development of such disruptive technology must start now, and many initiatives are already emerging within companies and civil society to guide the “ethical” development and use of AI. But why are we talking about ethics rather than law and human rights?

Ethical guidelines for AI

Many voices have called for a human rights approach to AI regulation, culminating notably in the Toronto Declaration on protecting the right to equality and non-discrimination in machine learning systems (May 2018), which was promisingly heralded as “a first step towards making the human rights framework a foundational component of the fast-developing field of AI and data ethics”. 

However, the discussion to date has predominantly been framed in terms of “ethical” guidance rather than a legal or rights-based framework. In a geopolitical context of intense technological competition between Europe, the United States and China, the challenge is to devise a regulatory framework that controls potential excesses without stifling innovation. This competitive pressure not to constrain innovation may explain the preference for more general “ethical” guidelines. 

Although the EU’s Ethics Guidelines for Trustworthy AI, drafted by its High-Level Expert Group on AI (HLEG), recognize the global reach of AI and encourage work towards a global rights-based framework, they are also expressly intended to foster European innovation and leadership in AI. In that light, the language of ethics may have been more encouraging than talk of human rights protection. According to the HLEG, the priority was speed, in order to keep pace with and inform the wider AI debate, and the EU Guidelines were not intended as a substitute for regulation (on which the HLEG was separately tasked with making recommendations to the EU Commission – see its “Policy and Investment Recommendations” published on 26 June 2019).

But if we agree that AI must be regulated (through existing or new laws), this seems a missed opportunity for the EU to have framed the upcoming regulatory debate in human rights terms. 

A human rights-based approach?

It is of course encouraging that ethical principles are being adopted with support from the AI industry. But a human rights-based approach would offer a more robust framework for the lawful and ethical development and use of AI. 

Human rights are an internationally agreed set of norms that represent the most universal expression of our shared values, in a shared language and supported by mechanisms and institutions for accountability and redress. As such, they offer clarity and structure and the normative power of law. 

For example, human rights law provides a clear framework for balancing competing interests in the development of technology: its tried and tested jurisprudence requires restrictions to human rights (like privacy or non-discrimination) to be prescribed by law, pursue a legitimate aim, and be necessary and proportionate to that aim. Each term is a defined concept against which actions can be objectively measured and made accountable. 

The EU Guidelines suggest that adherence to ethical principles sets the bar higher than formal compliance with laws (which are not always up to speed with technology or may not be well suited to addressing certain issues). But the reality is that ethics are much more easily manipulated to support a given company’s or government’s agenda, with limited recourse for anyone who disagrees.

Human rights offer a holistic framework for comprehensively assessing the potential impact of a disruptive technology like AI on all our rights and freedoms (civil, political, economic, cultural and social), leaving no blind spots. 
