Human and Machine Collaboration for Your Reading Selections

Readings for this portal will be selected through a symbiotic collaboration between expert human curation and machine algorithms.

Over time, machines have inevitably taken over the role of information gatekeeper, with their scoring algorithms deciding what is most “relevant” and “reputable.” Even academic literature reviews, a key component of the research process, have devolved into keyword searches of Google Scholar, leaving it to Google’s algorithms to arbitrate what counts as the most relevant and significant work in a field’s literature.

In this portal, however, we will use both human and machine curation to select text, media, data, and contributions. The entries below illustrate our current selection criteria.

We focused our research efforts on both “grey literature” and academic publishing and distribution channels. Grey literature consists of materials and research produced by organizations outside the traditional commercial or academic publishing channels; common examples include reports, working papers, government documents, white papers, and evaluations. Both academic and grey literature are critically evaluated before being considered for inclusion.
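To make this workflow concrete, here is a minimal sketch of how a machine-generated relevance score and an expert curator’s rating could be blended into a single ranking. It is purely illustrative: the Candidate fields, the 40/60 weighting, and the sample scores are hypothetical assumptions, not the portal’s actual selection criteria.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    machine_relevance: float   # algorithmic score, e.g. search ranking or embedding similarity (0-1)
    curator_rating: float      # expert reviewer's assessment (0-1)
    is_grey_literature: bool   # report, working paper, white paper, evaluation, etc.

def curation_score(c: Candidate, machine_weight: float = 0.4) -> float:
    """Blend machine relevance with expert curation into a single ranking score.

    The 40/60 weighting is illustrative only; actual inclusion decisions
    rest with human reviewers.
    """
    return machine_weight * c.machine_relevance + (1.0 - machine_weight) * c.curator_rating

# Hypothetical candidates, ranked by the blended score.
candidates = [
    Candidate("Ethics guidelines landscape", 0.82, 0.90, False),
    Candidate("Standards for AI Governance", 0.65, 0.85, True),
]
for c in sorted(candidates, key=curation_score, reverse=True):
    print(f"{c.title}: {curation_score(c):.2f}")
```

The point of the weighting is simply that the machine proposes and the human disposes: algorithmic scores narrow the field, while the curator’s judgment carries the greater weight in the final selection.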

The Governance of Artificial Intelligence: The Case of Interstate Cyber Conflicts

Mariarosaria Taddeo – an MP3 lecture

Classification

Author:
Taddeo, Mariarosaria
Date of publication:
01.04.2019
DOI:
10.17176/20190722-103830-0
URN:
urn:nbn:de:0301-20190722-103830-0-0
Language:
English
Link to full text:
https://www.rechtimkontext.de/veranstaltungen/veranstaltung/the-governance-of-artificial-intelligence-the-case-of-interstate-cyber-conflicts/

FOR FULL DETAILS CLICK HERE

Artificial Intelligence: the global landscape of ethics guidelines

Anna Jobin, Marcello Ienca, Effy Vayena

* Corresponding author: effy.vayena@hest.ethz.ch

Health Ethics & Policy Lab, ETH Zurich, 8092 Zurich, Switzerland (preprint version)

© The authors 2019

Abstract

In the last five years, private companies, research institutions as well as public sector organisations have issued principles and guidelines for ethical AI, yet there is debate about both what constitutes “ethical AI” and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analyzed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented. Our findings highlight the importance of integrating guideline development efforts with substantive ethical analysis and adequate implementation strategies. 

MAIN ARTICLE 

Introduction 

Artificial Intelligence (AI), or the theory and development of computer systems able to perform tasks normally requiring human intelligence, is widely heralded as an ongoing “revolution” transforming science and society altogether. While approaches to AI such as machine learning, deep learning and artificial neural networks are reshaping data processing and analysis, autonomous and semi-autonomous systems are being increasingly used in a variety of sectors including healthcare, transportation and the production chain. In light of its powerful transformative force and profound impact across various societal domains, AI has sparked ample debate about the principles and values that should guide its development and use [5,6]. Fears that AI might jeopardize jobs for human workers [7], be misused by malevolent actors, elude accountability or inadvertently disseminate bias and thereby undermine fairness have been at the forefront of the recent scientific literature and media coverage. Several studies have discussed the topic of ethical AI, notably in meta-assessments or in relation to systemic risks and unintended negative consequences like algorithmic bias or discrimination.

FOR FULL TEXT PRESS HERE.

A Unified Framework of Five Principles for AI in Society

Luciano Floridi and Josh Cowls

June 22, 2019

Abstract

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question: ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.

FOR FULL TEXT PRESS HERE.
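To make the kind of overlap and convergence analysis described in the abstract above concrete, one simple way to quantify it is to compare the principles named in any two guidelines pairwise (for example with a Jaccard index) and to count how often each core principle recurs across the corpus. The sketch below is a hypothetical illustration of that idea, not the authors’ coding methodology; the guideline names and principle assignments are invented for the example.

```python
from itertools import combinations

# Hypothetical data: guideline names and their principle assignments are
# invented for illustration and are not taken from the papers above.
FIVE_PRINCIPLES = {"beneficence", "non-maleficence", "autonomy", "justice", "explicability"}

guidelines = {
    "Guideline A": {"beneficence", "justice", "explicability"},
    "Guideline B": {"non-maleficence", "justice", "autonomy"},
    "Guideline C": {"beneficence", "non-maleficence", "justice", "explicability"},
}

# Pairwise Jaccard overlap between the principle sets of any two guidelines.
for (name_a, set_a), (name_b, set_b) in combinations(guidelines.items(), 2):
    jaccard = len(set_a & set_b) / len(set_a | set_b)
    print(f"{name_a} vs {name_b}: overlap = {jaccard:.2f}")

# How often each core principle appears across the corpus.
for principle in sorted(FIVE_PRINCIPLES):
    count = sum(principle in principles for principles in guidelines.values())
    print(f"{principle}: {count}/{len(guidelines)} guidelines")
```

In practice, analyses of this kind first have to reconcile differing terminology across documents before any such counting or overlap measure is meaningful.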

TECHNICAL REPORT  

Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development
Peter Cihon   

Research Affiliate, Center for the Governance of AI, Future of Humanity Institute, University of Oxford

petercihon@gmail.com    

April 2019  

Executive Summary  

Artificial Intelligence (AI) presents novel policy challenges that require coordinated global responses. Standards, particularly those developed by existing international standards bodies, can support the global governance of AI development. International standards bodies have a track record of governing a range of socio-technical issues: they have spread cybersecurity practices to nearly 160 countries, they have seen firms around the world incur significant costs in order to improve their environmental sustainability, and they have developed safety standards used in numerous industries including autonomous vehicles and nuclear energy. These bodies have the institutional capacity to achieve expert consensus and then promulgate standards across the world. Other existing institutions can then enforce these nominally voluntary standards through both de facto and de jure methods.

AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these ongoing standards efforts primarily focus on standards to improve market efficiency and address ethical concerns, respectively. There remains a risk that these standards may fail to address further policy objectives, such as a culture of responsible deployment and use of safety specifications in fundamental research. Furthermore, leading AI research organizations that share concerns about such policy objectives are conspicuously absent from ongoing standardization efforts.

Standards will not achieve all AI policy goals, but they are a path towards effective global solutions where national rules may fall short. Standards can influence the development and deployment of particular AI systems through product specifications for, i.a., explainability, robustness, and fail-safe design. They can also affect the larger context in which AI is researched, developed, and deployed through process specifications. The creation, dissemination, and enforcement of international standards can build trust among participating researchers, labs, and states. Standards can serve to globally disseminate best practices, as previously witnessed in cybersecurity, environmental sustainability, and quality management. Existing international treaties, national mandates, government procurement requirements, market incentives, and global harmonization pressures can contribute to the spread of standards once they are established. Standards do have limits, however: existing market forces are insufficient to incentivize the adoption of standards that govern fundamental research and other transaction-distant systems and practices. Concerted efforts among the AI community and external stakeholders will be needed to achieve such standards in practice.

FOR FULL TEXT PRESS HERE.

ROADMAP REPORT  

Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research
  

Leverhulme Centre for the Future of Intelligence via the Nuffield Foundation

Executive Summary  

The aim of this report is to offer a broad roadmap for work on the ethical and societal implications of algorithms, data, and AI (ADA) in the coming years. It is aimed at those involved in planning, funding, and pursuing research and policy work related to these technologies. We use the term ‘ADA-based technologies’ to capture a broad range of ethically and societally relevant technologies based on algorithms, data, and AI, recognising that these three concepts are not totally separable from one another and will often overlap. 

A shared set of key concepts and concerns is emerging, with widespread agreement on some of the core issues (such as bias) and values (such as fairness) that an ethics of algorithms, data, and AI should focus on. Over the last two years, these have begun to be codified in various codes and sets of ‘principles’. 

Agreeing on these issues, values and high-level principles is an important step for ensuring that ADA-based technologies are developed and used for the benefit of society. However, we see three main gaps in this existing work: 

(i) a lack of clarity or consensus around the meaning of central ethical concepts and how they apply in specific situations; 

(ii) insufficient attention given to tensions between ideals and values; 

(iii) insufficient evidence on both (a) key technological capabilities and impacts, and (b) the perspectives of different publics. 

In order to address these problems, we recommend that future research should prioritise the following broad directions (more detailed recommendations can be found in section 6 of the report): 

1. Uncovering and resolving the ambiguity inherent in commonly used terms (such as privacy, bias, and explainability), 

2. Identifying and resolving tensions between the ways technology may both threaten and support different values, 

3. Building a more rigorous evidence base for discussion of ethical and societal issues

FOR FULL TEXT PRESS HERE.
