Artificial Intelligence has its risks

Artificial Intelligence has been with us for many years, but the field has moved from expert systems, programs that could supposedly be considered smart, to the paradigm that is revolutionizing this science: neural networks. Combined with machine learning, reinforcement learning, and the availability of "big data" on almost any topic as the key element for training, these networks have surprised us with their results.

DeepMind, the company behind AlphaGo, the program that defeated the best player in the complex Eastern game of Go, and AlphaZero, which supposedly learned to play like a chess super-grandmaster by playing millions of games against itself and drawing conclusions that we humans had not noticed in 500 years, now speaks to us about the risks that the science of artificial intelligence may pose in the future.

"There are a lot of interesting and challenging philosophical questions… which we will have to answer about how to control these systems, what values we want them to have, how to put them into action, and what we want to use them for," said Demis Hassabis in an interview a couple of days ago.


Hassabis spoke about the documentary on AlphaGo that has been circulating. AlphaGo is the neural-network and Artificial Intelligence system that literally astonished the world in 2016 by defeating the strongest player in the Chinese strategy game of Go.

In response to a question raised at the end of his talk at University College London, he said that AI is "an amazing tool to accelerate the discovery of scientific knowledge," adding: "We believe it will be one of the most beneficial technologies for the human race." However, like any other powerful technology, "there are risks," he said, adding: "It depends on how we as a society decide to put it into practice to resolve the challenges of the future."

It is evident that there are ethical problems. Should AI be used to control other human beings? Do we have to listen to the suggestions of intelligent systems and forget our own solutions, perhaps based on intuition? Hassabis indicates that these questions "are at the front of our minds" at DeepMind, which he founded in 2010 and which is now part of Google.

In all of this there is a hint of educated speculation, because no one can predict the future. What is clear is that we must be prepared to deal with the ethical implications of intelligent systems; indeed, these discussions have been going on for years. Consider an example: suppose there is a computerized medical diagnosis system based on the deepest neural network imaginable. Should we believe its diagnosis over that of a human doctor? When it proposes a treatment, should we apply it without asking anything more? Who will be responsible for possible errors in treatment, which might even jeopardize a patient's life? There are no simple answers to these obvious questions.
