The European Commission has presented a draft of ethical principles for the use of artificial intelligence, whose final version will be released in March 2019.
The debate on robotics and artificial intelligence is becoming increasingly prominent from a social point of view, given its economic consequences. For this reason, ethics has begun to be considered as the basis for future legislation on artificial intelligence.
The three laws of robotics formulated in 1942 by the science fiction author Isaac Asimov can be considered an ethical basis for the current development of intelligent machines: "A robot may not harm a human being or, by inaction, allow a human being to come to harm. A robot must obey the orders given by human beings, except where such orders would conflict with the first law. A robot must protect its own existence as long as such protection does not conflict with the first or second law."
When artificial intelligence is discussed, therefore, enthusiastic voices arise alongside anxious ones. What is beyond doubt is that this technology is part of the current transformation of society.
The first draft on ethics in artificial intelligence
The European Union, aware of this debate, has been working for months on a European-level technology policy to regulate artificial intelligence. Along these lines, the European Commission has released in Brussels the first draft of ethical principles for "reliable artificial intelligence" centered on the human being.
The European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) has been meeting for several months to draft guidelines for the use of this technology based on principles such as human oversight, respect for privacy and transparency.
The group is made up of 52 experts from academia, business and civil society. Together they have established the basic rights, emphasizing that artificial intelligence "should be developed, deployed and used for an ethical purpose". Similarly, Andrus Ansip, Vice-President of the European Commission, has pointed out that "for people to accept and use systems based on artificial intelligence they need to trust them and know that their privacy is respected".
Fundamental principles
The 37-page document is organized into several chapters on artificial intelligence: first, respect for fundamental rights, principles and values; second, the requirements needed to develop a trustworthy artificial intelligence.
Regarding the first, as the draft points out, this technology must respect human rights and regulation. In addition, it must be ensured that its use will not cause unintentional harm:
- Principle of beneficence: “do good”.
- Principle of non-maleficence: "do no harm".
- Principle of autonomy of human beings.
- Principle of justice.
- Principle of transparency.
On the other hand, to ensure trust in artificial intelligence, the European Commission lists a number of necessary requirements:
- Responsibility.
- Data governance.
- Design for everyone.
- Governance of AI autonomy. Human supervision.
- Non-discrimination.
- Respect for human autonomy.
- Respect for privacy.
- Technological robustness.
- Security.
- Transparency.
The final document of AI ethical principles will be published in March 2019. During the presentation of this draft, the European Commission also highlighted its plan to encourage member countries to adopt strategies on the use of this technology. To this end, it has set the goal that governments and companies invest nearly 20 billion euros annually in AI research and development from 2020.