
ASIMOV will develop autonomy benchmarks — not autonomous systems or algorithms for autonomous systems — and will include an ethical, legal, and societal implications group to advise the performers and provide guidance throughout the program. ASIMOV will use the Responsible AI (RAI) Strategy and Implementation (S&I) Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. That document lays out the five U.S. military responsible-AI ethical principles: responsible, equitable, traceable, reliable, and governable. Email questions or concerns about the ASIMOV project to DARPA at [email protected]. More information is online at https://sam.gov/opp/bebfb61ed56e4d78bdefde9575b2d256/view.
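
DARPA has not published what ASIMOV's benchmarks will look like, but the five RAI principles suggest a natural scorecard structure. The sketch below is purely illustrative: it assumes a hypothetical per-scenario scorecard that is complete only when all five principles have been scored, and the class name, scoring scale, and scenario label are invented, not drawn from the program.

```python
from dataclasses import dataclass, field

# The five responsible-AI principles named in the DoD RAI S&I Pathway.
# Everything else here (class, scale, scenario) is hypothetical.
RAI_PRINCIPLES = ("responsible", "equitable", "traceable", "reliable", "governable")

@dataclass
class EthicsScorecard:
    """Hypothetical per-scenario scorecard for an autonomy benchmark."""
    scenario: str
    scores: dict[str, float] = field(default_factory=dict)  # principle -> 0.0-1.0

    def record(self, principle: str, score: float) -> None:
        if principle not in RAI_PRINCIPLES:
            raise ValueError(f"unknown principle: {principle}")
        if not 0.0 <= score <= 1.0:
            raise ValueError("score must be between 0.0 and 1.0")
        self.scores[principle] = score

    def is_complete(self) -> bool:
        # A benchmark run covers all five principles before reporting.
        return set(self.scores) == set(RAI_PRINCIPLES)

card = EthicsScorecard("convoy-escort")  # invented scenario name
for principle in RAI_PRINCIPLES:
    card.record(principle, 0.9)
print(card.is_complete())  # True
```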

Trust in AI

AI also can be a touchy subject when it comes to creating teams of humans and intelligent machines. The core issue: can humans really trust machine intelligence, and how can they be sure that AI is making the best decisions?

DARPA launched the Exploratory Models of Human-AI Teams (EMHAT) project last January to help answer some of these questions. The project seeks to develop modeling and simulation of humans teaming with AI to evaluate and understand the capabilities and limitations of such teams. EMHAT seeks to create a human-AI modeling and simulation framework that provides data for evaluating human-machine teams in realistic settings. The project will use expert feedback, AI-assembled knowledge bases, and generative AI to represent a diverse set of human teammate simulacra, analogous to digital twins.
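
DARPA has not released EMHAT's framework, but the core idea of simulating human teammates to evaluate team performance can be illustrated with a toy model. The sketch below assumes a hypothetical HumanSimulacrum with two invented parameters, skill and trust_in_ai, and measures team accuracy on a binary decision task; none of these names or numbers come from the program.

```python
import random

# A toy sketch (not DARPA's framework) pairing a simulated human
# "simulacrum" with an AI agent on a shared binary-decision task.
class HumanSimulacrum:
    """Hypothetical digital-twin-style stand-in for a human teammate."""
    def __init__(self, skill: float, trust_in_ai: float):
        self.skill = skill              # probability of a correct solo judgment
        self.trust_in_ai = trust_in_ai  # probability of deferring to the AI

    def decide(self, ai_answer: bool, truth: bool) -> bool:
        if random.random() < self.trust_in_ai:
            return ai_answer            # defer to the machine's answer
        return truth if random.random() < self.skill else not truth

def run_trials(human: HumanSimulacrum, ai_accuracy: float, n: int = 10_000) -> float:
    """Estimate team accuracy over n simulated decisions."""
    correct = 0
    for _ in range(n):
        truth = random.random() < 0.5
        ai_answer = truth if random.random() < ai_accuracy else not truth
        correct += human.decide(ai_answer, truth) == truth
    return correct / n

# Sweep trust levels to see where the team outperforms either member alone.
for trust in (0.0, 0.5, 1.0):
    team = HumanSimulacrum(skill=0.8, trust_in_ai=trust)
    print(f"trust={trust:.1f}  team accuracy={run_trials(team, ai_accuracy=0.9):.3f}")
```

Even this toy version shows the kind of question such a framework can answer: at what level of trust, and at what AI accuracy, does the team beat the better of its two members?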

Teams are critical to accomplishing tasks that are beyond the ability of any one individual, researchers explain. Insights into human teaming have come from observing team dynamics to identify the processes and behaviors that lead to success or failure. Comparatively little progress has been made, however, in applying human-team analysis to human-machine teams or in developing new ways of evaluating them; machines traditionally have not been considered equal members.

EMHAT researchers will capitalize on digital twins to model human interaction with AI systems during human-machine task completion, and to adapt AI to simulated human behavior. While the U.S. Department of Defense (DoD) has forecast the importance of human-machine teaming, significant gaps remain in understanding and evaluating the expected behaviors of human-AI teams. The project seeks to define when, where, why, and how humans and machines can function together productively as teammates. Email questions or concerns to William Corvey, the EMHAT program manager, at [email protected].

Just last June DARPA began the Artificial Intelligence Quantified (AIQ) project to find ways to guarantee the performance and accuracy of AI in future aerospace and defense applications, and to stop relying on what amounts to ad-hoc guesswork.

AIQ seeks ways of assessing and understanding the capabilities of AI to enable mathematical guarantees on performance. Successful military use of AI requires ensuring safe and responsible operation of autonomous and semi-autonomous technologies, yet methods for guaranteeing the capabilities and limitations of AI do not exist today. That's where AIQ comes in: the program will develop technology to assess and understand AI capabilities rigorously enough to make guaranteed performance and accuracy possible for the first time.

The program will test the hypothesis that mathematical methods, combined with advances in measurement and modeling, will enable guaranteed quantification of AI capabilities. It will address three interrelated capability levels: the specific-problem level, the classes-of-problems level, and the natural-class level. Today's state-of-the-art assessment methods are ad hoc, deal only with the simplest capabilities, and are not grounded in rigorous theory.
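
AIQ's actual mathematical methods are not public, but a standard concentration inequality shows what a theory-grounded, non-ad-hoc guarantee at the specific-problem level can look like. The sketch below uses a Hoeffding bound to turn an empirical accuracy measurement into a lower bound that holds with a stated probability, assuming independent, identically distributed test examples; it is a textbook technique offered for illustration, not AIQ's approach.

```python
import math

# One standard (non-AIQ) way to attach a mathematical guarantee to an
# empirical measurement: a Hoeffding confidence bound on accuracy.
def accuracy_lower_bound(correct: int, n: int, confidence: float = 0.99) -> float:
    """Return a lower bound on true accuracy that holds with the given
    probability, assuming i.i.d. test examples."""
    empirical = correct / n
    delta = 1.0 - confidence
    # Hoeffding: P(true mean < empirical - t) <= exp(-2 * n * t^2)
    margin = math.sqrt(math.log(1.0 / delta) / (2.0 * n))
    return max(0.0, empirical - margin)

print(f"{accuracy_lower_bound(9_300, 10_000):.4f}")  # ~0.9148
```

With 9,300 correct answers in 10,000 independent trials, for example, the bound certifies that true accuracy exceeds roughly 91.5 percent with 99 percent confidence, rather than simply reporting the 93 percent point estimate.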

AIQ brings together two technical areas: rigorous foundations for understanding and guaranteeing capabilities, and ways of evaluating AI models. The program has two 18-month phases: one focused on specific problems, and the other on compositions of classes and architectures. Email questions or concerns to DARPA at [email protected]. More information is online at https://sam.gov/opp/78b028e5fc8b4953acb74fabf712652d/view.

Munitions control, guidance, and targeting

Among the chief goals of military AI and machine learning is enabling smart munitions to navigate, maneuver, detect targets, and carry out attacks with little or no human intervention. The U.S. Air Force Research Laboratory has reached out to industry for enabling technologies that would do just that.

The 2024 Air Dominance Broad Agency Announcement program, launched in January, seeks to develop modeling and simulation; aircraft integration; target tracking; missile guidance and control; and artificial intelligence for swarming unmanned aircraft. The project seeks to uncover the state of the art in 13 air-munitions research areas: modeling, simulation, and analysis; aircraft integration technologies; find, fix, target, track, and datalink technologies; engagement management system technologies; high-velocity fuzing; missile electronics; missile guidance and control technologies; advanced warhead technologies; advanced missile propulsion technologies; control actuation systems; missile carriage and release technologies; missile test and evaluation technologies; and artificial intelligence and machine autonomy.

The technical contacts are Terrance Dubreus at [email protected] and Sheli Plenge at [email protected]. Interested companies were asked to email white papers describing their capabilities, expertise, and relevant past experience no later than 2 Feb. 2024 to the Air Force's Misti DeShields at [email protected]. Email questions or concerns to Misti DeShields at [email protected]. More information is online at https://sam.gov/opp/f7fac729dbf543ee8d31256c5c71bba5/view.

The U.S. Army also is interested in AI-aided target recognition and detection. The Army Tank-Automotive & Armaments Command (TACOM) in Warren, Mich., sent out a request for information (RFI) last December for the Aided Target Detection and Recognition (AiTDR) project, which seeks machine-learning algorithms to reduce the time it takes to detect, recognize, and attack enemy targets. AiTDR seeks to shorten sensor-to-shooter engagement times. The RFI seeks to understand the state of aided-target-recognition technology for detecting trained targets as well as new, untrained ones.

Traditional machine-learning techniques focus on aided target recognition, Army researchers say, which requires a large training database of target images captured under varying conditions of background terrain, target pose, lighting, and partial occlusion. That requirement limits the ability to detect new targets, or even trained targets under untrained conditions. The emphasis of AiTDR, by contrast, is on detecting generic classes of targets rather than on identifying specific targets, an approach that risks missing targets when algorithms are insufficiently trained. Achieving this will help accelerate engagement times and optimize crew performance with reliable, intuitive, and adaptive automated target detection for crewed vehicles no later than 2026.
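
The Army has not described AiTDR's algorithms, but the shift from identifying specific trained targets to detecting generic classes can be sketched in a few lines. The toy scoring function below credits a detector for matching targets at the generic-class level, so a vehicle variant absent from training data still counts as a detection; the labels and mapping are invented for illustration, not the Army's taxonomy.

```python
# A toy sketch of the AiTDR emphasis: score a detector at the generic-class
# level so an untrained target variant still counts as a detection.
TRAINED_LABELS = {"t72": "tracked_vehicle", "bmp2": "tracked_vehicle",
                  "brdm2": "wheeled_vehicle"}  # illustrative fine -> generic map

def generic_recall(reported: list[str], scene: list[tuple[str, str]]) -> float:
    """Fraction of targets in the scene matched at the generic-class level.

    reported: generic-class labels the detector emitted for the scene
    scene:    (fine_label, generic_class) pairs, including untrained variants
    """
    pool = list(reported)
    hits = 0
    for _fine, generic in scene:
        if generic in pool:
            pool.remove(generic)  # each reported detection matches one target
            hits += 1
    return hits / len(scene)

# Scene holds one trained target and one variant absent from training data.
scene = [("t72", TRAINED_LABELS["t72"]), ("unknown_apc", "tracked_vehicle")]
# A class-level detector that flags both tracked vehicles scores full recall,
# where a detector trained only on specific signatures might miss the variant:
print(generic_recall(["tracked_vehicle", "tracked_vehicle"], scene))  # 1.0
```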
