Researchers have proposed Falcon, a privacy-friendly protocol for training artificial intelligence algorithms.
A team of researchers from Princeton University, Microsoft, the Algorand Foundation, and the Technion has proposed Falcon, a framework for training artificial intelligence algorithms with guarantees of privacy and security.
The proliferation of artificial intelligence has led part of the industry to worry about its future: not so much about the algorithms themselves as about the data they handle and, ultimately, about the people that data so often comes from.
The concern has reached such a point that the European Commission itself has proposed a set of ethical principles that AI must adhere to. Some of them, such as privacy and transparency, are understood and endorsed by all. These limiting factors should come into play not only when AI is applied, but also when it is trained.
This is what the creators of Falcon propose. The new protocol is aimed at protecting the data being worked with. At the same time, its design automatically aborts training if it detects malicious inputs or attackers. It is a way to prevent an attack from undermining an algorithm's effectiveness by influencing its training.
Two kinds of users
The Falcon protocol divides the users involved in an AI usage scenario into two groups. The first are the data holders, those who own the data sets used for training. The second are the 'query users', who will ask the model questions once it has been trained.
For training to be secure and private, data holders share their information with the servers where the model will be trained. From there, the 'query users' can submit questions to the system. In between, Falcon ensures that the data sets remain private and that queries to the algorithm also stay secret.
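Falcon keeps the data holders' inputs private by secret-sharing them among the training servers. As a minimal sketch of the general idea (not Falcon's actual protocol, which uses more elaborate sharing over three parties), here is how a value can be split into additive shares so that no single server learns anything from its share alone:

```python
import secrets

MOD = 2**32  # illustrative ring size; the exact parameters here are an assumption


def share(value: int, n: int = 3) -> list[int]:
    """Split `value` into n additive shares modulo MOD.

    Any n-1 shares together are uniformly random and
    reveal nothing about the original value.
    """
    parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
    last = (value - sum(parts)) % MOD
    return parts + [last]


def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the original value."""
    return sum(shares) % MOD


# A data holder splits its input among three servers.
shares = share(42)
assert reconstruct(shares) == 42
```

Each server stores one element of `shares`; only by pooling all of them can the value be recovered, which is what lets training proceed without any single server seeing the raw data.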
Falcon uses 'semi-honest' protocols, in which all parties must follow specific rules, ensuring that neither the contributions to the training nor its results are tampered with. In addition, the framework reduces communication complexity, making it more efficient to operate on smaller data types.
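A toy illustration of why rule-following parties can compute without seeing each other's inputs: with additive secret sharing (a simplified stand-in for Falcon's actual protocols, and an assumption of this sketch), servers can add two secrets by each adding only their local shares.

```python
import secrets

MOD = 2**32  # illustrative ring size, not Falcon's exact parameters


def share(value: int, n: int = 3) -> list[int]:
    """Split `value` into n additive shares modulo MOD."""
    parts = [secrets.randbelow(MOD) for _ in range(n - 1)]
    return parts + [(value - sum(parts)) % MOD]


def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the value."""
    return sum(shares) % MOD


# Two data holders share their secret inputs among three servers.
a_shares = share(10)
b_shares = share(32)

# Each server adds the shares it holds; no server ever sees 10 or 32.
sum_shares = [(a + b) % MOD for a, b in zip(a_shares, b_shares)]

assert reconstruct(sum_shares) == 42
```

As long as the servers follow the protocol honestly (the semi-honest assumption), the result is correct, yet the individual inputs are never revealed to any single party.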