The non-profit Future of Life Institute (FLI) has released a mini-film calling for a ban on the use of "killer robots". The video highlights the risks posed by autonomous weapons and the steps viewers can take to prevent their proliferation.
The video, Slaughterbots – if human: kill(), is styled as a news broadcast. Its authors depict several scenarios in which robots could be used to kill people:
- a bank robbery;
- an attack on police stations;
- the hunting down of military personnel.
The video also shows autonomous robots with weapons mounted on their backs.
"In the case of a drone, the decision to strike is made by a human operator, while an autonomous weapon decides on its own who lives and who dies," the video says.
FLI, together with the International Committee of the Red Cross, has proposed prohibiting algorithms from independently deciding to eliminate a target. They cited four reasons for restricting such systems:
- the likelihood that widespread use of AI will escalate conflicts;
- the mass proliferation of autonomous weapons due to the low cost of the technology;
- the unpredictability of machine learning algorithms in real combat conditions;
- the ability of robots to selectively choose the targets of an attack.
"We urgently need new international legislation to prohibit autonomous weapons from killing people and to impose restrictions on other types of AI weapons," the closing credits read.
Recall that in December, at the opening of the Review Conference of the Convention on "inhumane" weapons in Geneva, UN Secretary-General António Guterres called for action against "killer robots".
China backed the Secretary-General and was among the first countries to oppose the use of AI for military purposes.
In late December, it emerged that the participating countries had failed to reach a consensus on regulating autonomous weapons and agreed to continue discussions.