AUTHOR: Nick Ismail (adapted repost)

AI ethics must move beyond lists of ‘principles’, says a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence.

AI ethics should be a universally accepted practice.

AI is only as good as the data behind it, so that data must be fair and representative of all the world’s people and cultures. The technology must also be developed in accordance with international law, and we must tread carefully with the integration of AI into weaponry. All of this fits into the idea of AI ethics: is it moral, is it safe, is it right?

Efforts are being made by individual companies such as Digital Catapult (which last year unveiled plans to increase the adoption of ethics in artificial intelligence), as well as by individuals, academic committees and governments. But more could and should be done, and this requires industry- and government-wide collaboration.

AI should be without prejudice — and that’s down to the developers and coders

Indeed, ‘an ethical approach to the development and deployment of algorithms, data and AI (ADA) requires clarity and consensus on ethical concepts and resolution of tensions between values,’ according to a new report from the Nuffield Foundation and the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

Organisations and governments need help, and this report provides a broad roadmap for work on the ethical and societal implications of ADA-based technologies.


Dr Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said: “In recent years, there has been a lot of attention on how to manage these powerful new technologies. Much of it has centred on agreeing ethics ‘principles’ like fairness and transparency.

“Of course, it’s great that corporations, governments and others are talking about this, but principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it, because they are vague and come into conflict in practice. They also risk distracting from developing measures with real bite, like regulation.”

AI ethics: the principles

To address these gaps in AI ethics, the roadmap sets out detailed questions and principles for research organised around three main tasks:

• Uncovering and resolving the ambiguity inherent in commonly used terms, such as privacy, bias, and explainability

• Identifying and resolving tensions between the ways technology may both threaten and support different values

• Building a more rigorous evidence base for discussion of ethical and societal issues
