The US Air Force (USAF) and MIT will launch an Artificial Intelligence incubator to develop technologies that optimize everyday processes.
Artificial Intelligence (AI) is the set of technologies aimed at creating systems, supported by ICT, that can sense, understand and act in a way similar to the human brain.
Associating this type of technology with the military often provokes some rejection, but the reality is that public bodies and technology companies are increasingly committed to the ethical use of technology.
The U.S. Air Force has pledged not to use AI for the development and creation of autonomous weapons. The two organizations have signed an agreement to develop technological projects aimed at "doing the common good," such as humanitarian work or the simplification of daily tasks.
Heather Wilson, Secretary of the Air Force, told MIT Technology Review that AI would be a key component of its Science and Technology strategy. Applying this technological potential within the military is a major effort to optimize and rethink the capabilities of the American armed forces.
The Air Force’s investment in AI
Today, the Air Force funds a multitude of research and development projects with more than 10,000 different entities, investing 2,240 million euros per year in basic and early-stage research and a further 22,400 million euros in applied-technology R&D.
The U.S. Department of Defense already operates a research center at MIT, which will join this new project. The new incubator expands the relationship between the two organizations: the USAF is expected to contribute 13.4 million euros per year to research, along with eleven of its members, who will work hand in hand with MIT professors and students.
According to MIT vice president Maria Zuber, speaking to MIT Technology Review, "no one will be forced to collaborate." Anyone concerned that this collaboration involves developing destructive weapons technology should not be alarmed, as she stressed that "MIT doesn’t do weapons research."
Previous attempts mired in controversy
The military’s past attempts to collaborate with the tech industry have sparked considerable public controversy. Such was the case with Project Maven, which aimed to use cloud technology to identify objects in aerial images. The project created enough tension that Google chose not to renew its contract, and the technology giant published its own AI code of ethics, which included a ban on work involving weapons.
Last February, the Pentagon published a document outlining its strategy for becoming the world’s military leader in the application of Artificial Intelligence.
If you want to keep reading about Artificial Intelligence, don’t miss this post about how Aura, Telefónica’s Artificial Intelligence, thinks.
SOURCE: MIT Technology Review