Thousands of scientists have signed a pledge committing them to play no role in building AI systems that have the ability to kill without human oversight.
When many people think about artificial intelligence, their minds jump to the rogue AIs seen in science fiction movies, such as the infamous Skynet in The Terminator.
In an ideal world, artificial intelligence would never be used in any military capacity. However, it will almost certainly be developed in one form or another, given the advantage it would confer over an opponent without similar capabilities.
Russian President Vladimir Putin, when recently asked for his thoughts on AI, said: “Whoever becomes the leader in this sphere will become the ruler of the world.”
Putin's words stoked fears of an AI development race similar to the nuclear arms race, and one that could prove just as reckless.
Rather than attempting to halt the military development of AI, a more achievable goal is to at least ensure that any AI decision to kill is subject to human oversight.
Demis Hassabis of Google DeepMind and Elon Musk of SpaceX are among more than 2,400 scientists who have signed a pledge not to develop AI or robots that kill without human oversight.
The pledge was created by the Future of Life Institute and calls on governments to agree on norms and regulations that stigmatize and effectively prohibit the development of killer robots.
“We, the undersigned, agree that the decision to take a human life should never be delegated to a machine,” the pledge reads. It warns that lethal autonomous weapons, “selecting and engaging targets without human intervention,” would be “dangerously destabilizing for every country and individual.”
Programming humanity
Human compassion is difficult to program, and we are certainly many years away from it being possible. Yet it is vital when it comes to matters of life or death.
Consider an AI missile defense system built to protect a nation. Based on pure logic, it could determine that destroying another nation that is starting a missile program is the best way to protect its own. Humans would take into account the lives at stake and seek alternatives such as diplomatic resolutions.
Robots may one day be used for policing to reduce the risk to human officers. They could be armed with firearms or tasers, but the responsibility for firing should always fall to a human operator.
Although it will improve over time, AI has been shown to have a serious problem with bias. A 2010 study by researchers from NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia were better at recognizing East Asian faces, while those designed in Western countries were more accurate at detecting Caucasian faces.
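The disparity the study describes can be made concrete with a few lines of code. The sketch below is a toy illustration, not code or data from the study; the group labels and numbers are invented purely to show how a per-group accuracy gap would surface in testing:

```python
# Toy sketch: measuring a face matcher's accuracy per demographic group.
# All names and numbers here are invented for illustration.
from collections import defaultdict

def per_group_accuracy(results):
    """results: iterable of (group, correct) pairs from a matching test."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        if ok:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Invented numbers that loosely mirror the kind of East/West gap reported.
results = ([("east_asian", True)] * 88 + [("east_asian", False)] * 12
           + [("caucasian", True)] * 96 + [("caucasian", False)] * 4)

print(per_group_accuracy(results))
# -> {'east_asian': 0.88, 'caucasian': 0.96}
```

A gap like that between groups is exactly the kind of defect that should rule out acting on the matcher's output without a human check.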
An armed robot that mistakes one person for another could end up killing an innocent person simply because of a flaw in its algorithms. Requiring a human operator to confirm the AI's assessment may be enough to prevent such a disaster.
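In software terms, “human confirmation” amounts to a gate between the AI's assessment and any action. The minimal sketch below is hypothetical; every name in it (TargetAssessment, operator_confirms, handle) is invented, and the point is only that the system may propose but can never act on its own:

```python
# Minimal human-in-the-loop sketch: the system proposes, a human decides.
# All names are hypothetical, invented for this illustration.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    target_id: str
    confidence: float  # matcher confidence in [0, 1]

def operator_confirms(assessment: TargetAssessment) -> bool:
    """Blocks until a human operator explicitly approves or rejects."""
    answer = input(f"Confirm {assessment.target_id} "
                   f"(confidence {assessment.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def handle(assessment: TargetAssessment) -> None:
    # The AI's assessment alone is never sufficient to act: without an
    # explicit human "yes", the default is to stand down.
    if operator_confirms(assessment):
        print("Action authorized by human operator.")
    else:
        print("No human authorization -- standing down.")

handle(TargetAssessment(target_id="unknown-07", confidence=0.91))
```

Note the default: anything other than an explicit confirmation results in no action, so a flawed identification stops at the operator rather than at the trigger.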
Read more: INTERPOL examines how AI will impact crime and policing
Do you agree with the scientists' pledge? Let us know in the comments.