The singularity proposes a scenario that sounds like science fiction, but given the speed at which science and computing advance, a world controlled by machines no longer sounds completely crazy.
The great master of science fiction, Isaac Asimov, for many a prophet of the times we live in, formulated the Three Laws of Robotics in his 1942 story Runaround: rules devised by the writer that robots would have to follow to keep humans safe.
- A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Laws that sounded like pure fantasy in 1942 make so much sense today that they have already been extended and modified to fit the real world: standards for the responsible use of robotic technologies have been proposed, and ethical principles adapted to our reality have been established. So far, the robots we know assemble products such as cars, sophisticated machinery, and microprocessors; others wait tables in restaurants in Japan. The question of whether machines could replace humans has already been answered: many jobs have ceased to exist because it is no longer necessary for a person to do them when a machine is more efficient and cheaper in the long run. The next question is: could they one day replace us completely?
Moore's Law, postulated by Gordon Moore in 1965, is an empirical law that has successfully anticipated the behavior of microprocessors: roughly every 18 months, the number of transistors in an integrated circuit doubles while their cost falls. Moore's Law has held for more than half a century, and if it continues to hold, will technological progress be infinite?
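The compounding that this doubling implies can be sketched in a few lines of Python. The starting count and the 18-month doubling period below are illustrative assumptions, not historical data; the point is only how quickly repeated doubling adds up:

```python
def transistors(years_elapsed, start_count=2_300, doubling_months=18):
    """Project a transistor count after `years_elapsed` years,
    assuming one doubling every `doubling_months` months."""
    doublings = (years_elapsed * 12) / doubling_months
    return start_count * 2 ** doublings

# A single decade at an 18-month doubling period means about
# 6.7 doublings, i.e. roughly a hundredfold increase.
growth = transistors(10) / transistors(0)
print(f"Growth over 10 years: about {growth:.0f}x")
```

At that pace, two decades would mean a factor of roughly ten thousand, which is why even a distant physical limit feels close.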
The fact that technology can advance at this speed without stopping has made many wonder what will happen when such accelerated growth reaches the artificial intelligence of machines that are already capable of so much today.
Although we may be nearing the end of Moore's Law, a limit imposed only by the laws of physics, which cap how far circuits can shrink, many believe this wall to the pace of advancement we have experienced for years can be bypassed thanks to nanotechnology. But once miniaturization reaches the atomic scale, what comes next?
The technological singularity is a hypothesis suggesting that the rapid pace of technological progress will sooner or later cause artificial intelligence to exceed the intellectual capacity of humans, and with it the control we have over it. This would forever change civilization, or end it. An incredibly interesting idea, and quite a frightening one at the same time. The singularity, if we think about it, is the theme of films like The Matrix or Terminator, which pose a scenario in which civilization has reached a state where machines have surpassed human intelligence and taken control.
If Moore's Law continues to hold and artificial intelligence reaches a point where robots begin to create new, better, smarter robots, why wouldn't they discard whatever variant of Asimov's laws humans had managed to implement in their programming? After all, they would be machines creating machines, and humanity could become an obsolete species that does not deserve to be preserved. Goodbye, John Connor.
Although all of these possibilities are probably impossible to comprehend, let alone predict, many have already dared to claim that in 20 or 30 years we will reach a level of superintelligence that will allow the singularity to happen. And even if it all sounds like something out of a science fiction book, if we look closely at the world we live in, we realize that not everything is funny coincidence, and that much of it is grounded in reality.