In January Elon Musk donated $10 million to the Future of Life Institute, and now a large portion of that money has been allocated to help research teams develop ways of keeping AI technologies safe.
Thirty-seven projects, out of a total of around 300 applicants, were selected and will receive grants from the Open Philanthropy Project and Elon Musk. The $7 million budget will be spent over the next three years on studies that will help humanity keep control over artificial intelligence.
An interesting coincidence is that the new "Terminator Genisys" movie arrives in the same period as the research begins; it is scheduled for release in Singapore on June 25, 2015.
However, the research is aimed at more practical and immediate issues related to AI technology, spanning law, economics, computer science, and philosophy. Some studies will focus on robotics research, while others will concentrate on more theoretical areas such as AI-human interaction.
This type of research needs to begin soon, not out of fear of AI-controlled assaults on humans, but because the learning algorithms now being developed should include an integrated component that deals with questions of morality and practicality.
Technologies now deployed in combat are part of a wave of drastic changes to warfare strategy sweeping the globe. Recent wars have been characterized by the increased use of drones and other machines that do not require humans to be in the direct vicinity of battle.
While using more robots to attack or defend strategic locations in war reduces the number of human casualties, the practice might raise several moral issues.
Using robots could decrease the level of thought and feeling put into a decision and result in a less morally defensible strategy. Robot drones and tanks could soon be programmed to react to movement and assess threats before using on-board weaponry, and such crucial decisions need to be made with moral arguments taken into consideration.
The identity and age of the target, and the level of threat it poses to the machine, should all be introduced as factors in the algorithm that an AI-based war machine might use.
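To make the idea concrete, the factor-based decision described above could be sketched as follows. This is purely an illustration: the class, function names, thresholds, and weights are all assumptions for the example, not any real system's logic.

```python
# Hypothetical sketch: encoding moral constraints as explicit, hard
# factors in a threat-assessment routine, so that a high threat score
# alone can never override identity or age checks.
from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool   # identity: combatant vs. civilian
    age: int             # estimated age of the detected person
    threat_level: float  # 0.0 (no threat) .. 1.0 (direct attack)


def may_engage(target: Target,
               min_adult_age: int = 18,
               threat_threshold: float = 0.8) -> bool:
    """Return True only if every moral precondition is met."""
    if not target.is_combatant:     # never engage civilians
        return False
    if target.age < min_adult_age:  # never engage minors
        return False
    # Only then does the threat assessment itself matter.
    return target.threat_level >= threat_threshold
```

The design point is that identity and age act as vetoes checked before the threat score, rather than being averaged into a single number that a sufficiently high threat reading could outweigh.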
For now, Elon Musk and the group he sponsors are trying to develop better engineering practices and basic ethical rules to be applied when building "self-learning" machinery.
Sources say that after the three-year funding period ends, the most promising teams and ideas will continue to receive grants, furthering their advances.
Image Source: dailygalaxy.com