Building trust in human-centric AI

The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.

Languages available:

BG | CS | DA | DE | EL | EN | ES | ET | FI | FR | HR | HU | IT | LT | LV | MT | NL | PL | PT | RO | SK | SL | SV


The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations by the group in light of discussions on the European AI Alliance, a stakeholder consultation and meetings with representatives from Member States, the Guidelines were revised and published in April 2019.

In parallel, the AI HLEG also prepared a revised document elaborating on the definition of Artificial Intelligence used for the purpose of its deliverables.

Languages available:

BG | CS | DA | DE | EL | EN | ES | ET | FI | FR | HR | HU | IT | LT | LV | MT | NL | PL | PT | RO | SK | SL | SV

Next steps

Based on fundamental rights and ethical principles, the Guidelines list seven key requirements that AI systems should meet in order to be trustworthy:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination and fairness
  6. Societal and environmental well-being
  7. Accountability

Aiming to operationalise these requirements, the Guidelines present an assessment list that offers guidance on each requirement's practical implementation. This assessment list will undergo a piloting process in which all interested stakeholders can participate, in order to gather feedback for its improvement. In addition, a forum for exchanging best practices on the implementation of Trustworthy AI has been created.

European Commission welcomes the Guidelines

In the Commission’s human-centric approach, as set out in its Communication of 8 April 2019, AI is seen as a tool operating in the service of humanity and the public good, aiming to increase individual and collective human well-being. Since people can only confidently and fully reap the benefits of a technology they trust, the trustworthiness of AI must be ensured.

To create an environment of trust for the successful development, deployment and use of AI, the Commission encourages all stakeholders to implement the seven key requirements of the Guidelines.

Moreover, the Commission will bring the Union’s human-centric approach to the global stage and aims to build an international consensus on AI ethics guidelines.

Not yet a member of the European AI Alliance?

The European AI Alliance is a multi-stakeholder forum for engaging in a broad and open discussion of all aspects related to AI policy in Europe. It is steered by the AI HLEG.

Become a member of the European AI Alliance by following these two steps:
