Last month, the U.S. Army asked private companies for ideas about how to improve its planned semi-autonomous, AI-driven targeting system for tanks. "In its request, the Army asked for help enabling the Advanced Targeting and Lethality Automated System (ATLAS) to 'acquire, identify, and engage targets at least 3X faster than the current manual process,'" reports Gizmodo. "But that language apparently scared some people who are worried about the rise of AI-powered killing machines. And with good reason."

Slashdot reader darth_borehd summarizes the U.S. Army's response: Robot (or, more accurately, drone) tanks will always have a human "in the loop," just like the drone plane program, according to the U.S. Army. The new robot tanks, officially called the Multi-Utility Tactical Transport (MUTT), will use the Advanced Targeting and Lethality Automated System (ATLAS). The Department of Defense assures everyone that it will adhere to "ethical standards."

Here's the language the Defense Department used: "All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."

Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," which is sometimes called having a human "in the loop," as mentioned above.