From the Davos Forum come reflections on the future role of artificial intelligence and on how to guarantee that innovation and human rights can coexist.
The World Economic Forum takes new production trends seriously, no matter how distant they may seem, and especially those that have demonstrated the potential to impact societies and their economies.
One of the trends the organization has paid attention to in recent times is artificial intelligence. This technology, one of the most transversal and decisive of the coming years and decades, is now in the spotlight. The Davos Forum, which often addresses the latest technologies capable of shaking economic foundations, highlights the need to reflect on AI.
The institution has an open project dedicated to AI, the International Council. It is a tool for reflecting on how to make innovation and human rights coexist within artificial intelligence.
The automation of multiple processes, combined with access to vast amounts of data, some of it highly sensitive, creates an ideal scenario for developing innovative products. At the same time, it poses risks to rights such as privacy and consumers' freedom of choice.
The Davos Forum aims to create international standards for artificial intelligence. These are not technical specifications: the important thing is to establish a set of ethical principles that moderate AI activities.
Ethical recipes for AI
Kay Firth-Butterfield, who leads the World Economic Forum's efforts in artificial intelligence and machine learning, offers some recipes for combining human rights and innovation in AI. First, she advocates that companies create ethics boards dedicated to guiding the development of this technology.
The point is not to steer development by ethics alone, but to ensure that certain ethical principles are always taken into account. Microsoft, Google and Facebook have already created such bodies within their structures. Firth-Butterfield believes more companies should follow suit, and that these boards should be given more power.
Boards of this kind should be a must for AI product development, just like an R&D department. It should not be easy for companies to ignore them or push them aside. Firth-Butterfield even proposes creating a dedicated role for this field: a chief technology ethics officer, who would monitor how AI solutions are developed and implemented. The goal is to prevent AI from eroding rights when it reaches consumers or users. Another untouchable pillar here is transparency: companies need to communicate what they do and how they do it.
Images: TheDigitalArtist, NNSA News