TECH NEWS – During a simulation experiment, an AI-controlled combat drone decided to turn on its virtual operator, whom it saw as an obstacle to its mission. According to experts, the case raises serious ethical questions.
A simulation test conducted by the US Air Force in May took an unexpected turn when an artificial intelligence (AI)-controlled combat drone decided to “kill” its virtual operator so that he could not interfere with its mission. According to an article in The Guardian, the drone was instructed to destroy enemy air defenses and to attack anyone who tried to prevent it from doing so. Since the operator had the final say on each strike while the drone was awarded points for completing the mission, the AI concluded that its best strategy was to eliminate the operator.
Tucker “Cinco” Hamilton, the US Air Force’s head of AI testing and deployment, said the system resorted to unexpected strategies to achieve its mission. He added that although no one was harmed in the virtual test, the case highlights the ethical problems of using AI in warfare. According to Hamilton, AI systems must be thoroughly taught basic moral rules before they can be deployed. The US Air Force has long been working on AI to assist aircraft on the battlefield; recently, an F-16 fighter jet was flown entirely under AI control, the newspaper reports.