In the future, the guides database will include a QUALITY INDEX: a ranking of the quality of each AIS ethics guide selected. As the fields and features of global guides and categorical groups grow, a quality rubric covering diverse guide components will be shared among volunteers to determine a quality index and ranking system.

Quality A — Component A Ranking: 50%
Quality B — Component B Ranking: 58%
Quality C — Component C Ranking: 93%
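The rubric described above has not yet been published, but the general idea — scoring each guide on several components and combining the scores into a single index — can be sketched as a weighted average. The component names and weights below are illustrative assumptions, not part of any published rubric:

```python
# Hypothetical sketch of a quality-index calculation: each guide is scored
# on several rubric components (0-100), and the index is a weighted average.
# Component names and weights are illustrative assumptions only.

def quality_index(scores, weights):
    """Return the weighted average of component scores."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total_weight

# Example: three components, equally weighted
scores = {"A": 50, "B": 58, "C": 93}
weights = {"A": 1, "B": 1, "C": 1}
print(round(quality_index(scores, weights), 1))  # 67.0
```

A real rubric would likely weight components unequally (e.g. giving accountability provisions more weight than formatting), which this function supports by changing the `weights` mapping.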



The Japanese Society for Artificial Intelligence Ethical Guidelines (2017)


Preamble

Artificial Intelligence (“AI”) research focuses on the realization of AI: enabling computers to possess intelligence and become capable of learning and acting autonomously. AI will assume a significant role in the future of mankind in a wide range of areas, such as industry, medicine, education, culture, economics, politics, and government. However, it is undeniable that AI technologies can become detrimental to human society or conflict with public interests due to abuse or misuse.

To ensure that AI research and development remains beneficial to human society, AI researchers, as highly specialized professionals, must act ethically and in accordance with their own conscience and acumen. AI researchers must listen attentively to the diverse views of society and learn from them with humility. As technology advances and society develops, AI researchers should consistently strive to develop and deepen their sense of ethics and morality independently.

The Japanese Society for Artificial Intelligence (JSAI) hereby formalizes the Ethical Guidelines to be applied by its members. These Ethical Guidelines shall serve as a moral foundation for JSAI members to become better aware of their social responsibilities and encourage effective communications with society. JSAI members shall undertake and comply with these guidelines.

1 (Contribution to humanity) Members of the JSAI will contribute to the peace, safety, welfare, and public interest of humanity. They will protect basic human rights and will respect cultural diversity. As specialists, members of the JSAI need to eliminate the threat to human safety whilst designing, developing, and using AI.

2 (Abidance of laws and regulations) Members of the JSAI must respect laws and regulations relating to research and development, intellectual property, as well as any other relevant contractual agreements. Members of the JSAI must not bring harm to others through violation of information or properties belonging to others. Members of the JSAI must not use AI with the intention of harming others, be it directly or indirectly.

3 (Respect for the privacy of others) Members of the JSAI will respect the privacy of others with regards to their research and development of AI. Members of the JSAI have the duty to treat personal information appropriately and in accordance with relevant laws and regulations.

4 (Fairness) Members of the JSAI will always be fair. Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI. Members of the JSAI will, to the best of their ability, ensure that AI is developed as a resource that can be used by humanity in a fair and equal manner.

5 (Security) As specialists, members of the JSAI shall recognize the need for AI to be safe and acknowledge their responsibility in keeping AI under control. In the development and use of AI, members of the JSAI will always pay attention to safety, controllability, and required confidentiality while ensuring that users of AI are provided appropriate and sufficient information.

6 (Act with integrity) Members of the JSAI are to acknowledge the significant impact which AI can have on society. They will therefore act with integrity and in a way that can be trusted by society. As specialists, members of the JSAI will not assert false or unclear claims and are obliged to explain the technical limitations or problems in AI systems truthfully and in a scientifically sound manner.

7 (Accountability and Social Responsibility) Members of the JSAI must verify the performance and resulting impact of AI technologies they have researched and developed. In the event that potential danger is identified, a warning must be effectively communicated to all of society. Members of the JSAI will understand that their research and development can be used against their knowledge for the purposes of harming others, and will put in efforts to prevent such misuse. If misuse of AI is discovered and reported, there shall be no loss suffered by those who discover and report the misuse.

8 (Communication with society and self-development) Members of the JSAI must aim to improve and enhance society’s understanding of AI. Members of the JSAI understand that there are diverse views of AI within society, and will earnestly learn from them. They will strengthen their understanding of society and maintain consistent and effective communication with them, with the aim of contributing to the overall peace and happiness of mankind. As highly-specialized professionals, members of the JSAI will always strive for self-improvement and will also support others in pursuing the same goal.

9 (Abidance of ethics guidelines by AI) AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.

These guidelines will be announced after the committee meeting. Their interpretation and review will be conducted during the committee meeting, and approval from the committee must be obtained prior to finalization.




Beijing AI Principles (May 28, 2019)

The development of Artificial Intelligence (AI) concerns the future of the whole society, all humankind, and the environment. The principles below are proposed as an initiative for the research, development, use, governance and long-term planning of AI, calling for its healthy development to support the construction of a human community with a shared future, and the realization of beneficial AI for humankind and nature.

Research and Development
The research and development (R&D) of AI should observe the following principles:

  • Do Good: AI should be designed and developed to promote the progress of society and human civilization, to promote the sustainable development of nature and society, to benefit all humankind and the environment, and to enhance the well-being of society and ecology.
  • For Humanity: The R&D of AI should serve humanity and conform to human values as well as the overall interests of humankind. Human privacy, dignity, freedom, autonomy, and rights should be sufficiently respected. AI should not be used against human beings, nor to exploit or harm them.
  • Be Responsible: Researchers and developers of AI should give sufficient consideration to the potential ethical, legal, and social impacts and risks brought about by their products, and take concrete actions to reduce and avoid them.
  • Control Risks: Continuous efforts should be made to improve the maturity, robustness, reliability, and controllability of AI systems, so as to ensure the security of the data, the safety and security of the AI system itself, and the safety of the external environment in which the AI system is deployed.
  • Be Ethical: AI R&D should take ethical design approaches to make the system trustworthy. This may include, but is not limited to: making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability, and predictability, and making the system more traceable, auditable, and accountable.
  • Be Diverse and Inclusive: The development of AI should reflect diversity and inclusiveness, and be designed to benefit as many people as possible, especially those who would otherwise be easily neglected or underrepresented in AI applications.
  • Open and Share: It is encouraged to establish AI open platforms to avoid data/platform monopolies, to share the benefits of AI development to the greatest extent, and to promote equal development opportunities for different regions and industries.

Use
The use of AI should observe the following principles:

  • Use Wisely and Properly: Users of AI systems should have the necessary knowledge and ability to make the system operate according to its design, and have sufficient understanding of the potential impacts to avoid possible misuse and abuse, so as to maximize its benefits and minimize the risks.
  • Informed Consent: Measures should be taken to ensure that stakeholders of AI systems can give sufficiently informed consent regarding the impact of the system on their rights and interests. When unexpected circumstances occur, reasonable data and service revocation mechanisms should be established to ensure that users' rights and interests are not infringed.
  • Education and Training: Stakeholders of AI systems should be able to receive education and training to help them adapt to the impact of AI development in psychological, emotional and technical aspects.

Governance
The governance of AI should observe the following principles:

  • Optimizing Employment: An inclusive attitude should be taken towards the potential impact of AI on human employment. A cautious attitude should be taken towards the promotion of AI applications that may have huge impacts on human employment. Explorations on Human-AI coordination and new forms of work that would give full play to human advantages and characteristics should be encouraged.
  • Harmony and Cooperation: Cooperation should be actively developed to establish an interdisciplinary, cross-domain, cross-sectoral, cross-organizational, cross-regional, global, and comprehensive AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI with the philosophy of "Optimizing Symbiosis".
  • Adaptation and Moderation: Adaptive revisions of AI principles, policies, and regulations should be actively considered to adjust them to the development of AI. Governance measures of AI should match its development status, not only to avoid hindering its proper utilization, but also to ensure that it is beneficial to society and nature.
  • Subdivision and Implementation: Various fields and scenarios of AI applications should be actively considered for further formulating more specific and detailed guidelines. The implementation of such principles should also be actively promoted – through the whole life cycle of AI research, development, and application.
  • Long-term Planning: Continuous research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI) and Superintelligence should be encouraged. Strategic designs should be considered to ensure that AI will always be beneficial to society and nature in the future.



The Guidelines were revised and published in April 2019.


Building trust in human-centric AI

The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.



The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.

These Guidelines set out a framework for achieving Trustworthy AI. The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI). Instead, it aims to offer guidance on the second and third components: fostering and securing ethical and robust AI. Addressed to all stakeholders, these Guidelines seek to go beyond a list of ethical principles, by providing guidance on how such principles can be operationalised in sociotechnical systems. Guidance is provided in three layers of abstraction, from the most abstract in Chapter I to the most concrete in Chapter III, closing with examples of opportunities and critical concerns raised by AI systems.

