10 March 2024

Air Force provides more details about plans for ‘battle management’ of AI

JON HARPER

The Air Force updated a broad agency announcement and offered additional insights into the service’s vision for adapting AI capabilities on the battlefield.

The amendment to the BAA, published Feb. 29, revises a key technical area and adds three subsections to the document that was originally issued in August 2023 regarding “artificial intelligence and next generation distributed command and control.”

The changes provide more information about the operational context for the Air Force’s pursuit of new techniques to tailor and replace faulty algorithms on deployed systems, as well as its plans for addressing related challenges. They also come as the service is working to develop and field new AI and machine learning tools for a variety of tasks, including target identification and operating autonomous drones known as collaborative combat aircraft.

Officials recognize that some artificial intelligence capabilities developed in a lab might not be up to snuff when they’re sent into a warzone.

“Because models are trained a priori on data (and simulations) in an anticipatory fashion, AI-based systems encounter situations in the real world that are incompatible with training feature distributions and parameterization of employed algorithms. The result is degradation to model performance that can negatively impact mission effectiveness and safety. Therefore, the Air Force requires new battle management processes to monitor the performance of AI-based systems and update incumbent models in response to changing battlespace conditions,” the amended BAA states in the section for Technical Area 1, which deals with command and control of artificial intelligence systems to achieve mission-tailored AI.

“In the trivial case, operators will simply repurpose a pretrained model that fortuitously fulfills unanticipated mission requirements. In the extreme case, operators will coordinate a distributed workflow, known as an AI COA, to retrain, test, and deploy new models in line with mission execution, so that dependent systems can continue to function as intended with minimal loss of service,” it added.

As an example of a use case where this type of “battle management” of AI could be applied, the document described a hypothetical intelligence, surveillance and reconnaissance mission being carried out by an unmanned aerial vehicle operating in bad weather.

“A new kind of battle manager within the forward tent, deemed the AI Interface Officer, monitors the performance of computer vision models hosted on UAVs and looks for cases of ‘AI drift’ — unexpected behavior caused when the domain of the learned function is no longer compatible with input data. In this case, an object detection model is no longer performant with live sensor data due to significant changes from a weather event. The AI Safety officer must evaluate the risk to mission success posed by continued employment of the model. If the risk is deemed too high, the AI Safety Officer will coordinate with remote operators via cloud-based services to determine the root cause of the drift and propose new AI adaptation strategies (e.g., replace model, fine tune, transfer learn, etc.) and deployment options (e.g., ‘use uplink to replace model on UAV at 1500 hours’) that accommodate the environment while also adhering to imposed mission timelines,” the BAA explains.
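The "AI drift" monitoring the BAA describes — noticing that live sensor inputs no longer resemble the data a model was trained on — can be sketched roughly as follows. This is a minimal illustration, not anything specified in the announcement; the class name, the z-score test, and all thresholds are assumptions for the example.

```python
import statistics

class DriftMonitor:
    """Illustrative sketch: flag drift when live feature statistics
    diverge from the training distribution beyond a set threshold."""

    def __init__(self, train_mean, train_stdev, z_threshold=3.0):
        self.train_mean = train_mean
        self.train_stdev = train_stdev
        self.z_threshold = z_threshold
        self.live_values = []

    def observe(self, value):
        self.live_values.append(value)

    def drift_detected(self, window=50):
        # Compare the mean of recent live inputs to the training mean,
        # measured in training standard deviations (a crude z-score test).
        recent = self.live_values[-window:]
        if len(recent) < window:
            return False  # not enough evidence yet
        z = abs(statistics.mean(recent) - self.train_mean) / self.train_stdev
        return z > self.z_threshold


# Example: a sensor statistic shifts sharply after a weather event.
monitor = DriftMonitor(train_mean=0.5, train_stdev=0.1)
for _ in range(50):
    monitor.observe(0.52)   # nominal conditions
assert not monitor.drift_detected()
for _ in range(50):
    monitor.observe(0.95)   # heavy weather skews the sensor feed
assert monitor.drift_detected()
```

In practice the statistic monitored would be something richer than a single scalar (feature embeddings, detection confidence distributions), but the pattern — baseline statistics, a rolling window of live observations, and a divergence threshold that trips an alert — is the same.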

In another use case, the software on combat drones could be tweaked if too many of the platforms are getting shot down, not attacking enemy targets effectively, or coming up short in other ways.

“For example, operators could update control policies onboard autonomous collaborative platforms … with improved skills to evade adversary forces or provide cover fire. Some alert mechanism, perhaps triggered by unacceptable platform attrition rates or poor mission performance, should help operators decide when and how to update the autonomy,” the document states.
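The alert mechanism the document mentions could be as simple as a threshold on observed attrition. A hypothetical sketch, with the function name and 10% threshold invented for illustration:

```python
def attrition_alert(sorties_flown, platforms_lost, max_rate=0.10):
    """Illustrative: trigger an autonomy-update review when the platform
    attrition rate exceeds an acceptable threshold (here 10%)."""
    if sorties_flown == 0:
        return False  # no data yet, nothing to alert on
    return platforms_lost / sorties_flown > max_rate

assert not attrition_alert(sorties_flown=40, platforms_lost=2)  # 5%, acceptable
assert attrition_alert(sorties_flown=40, platforms_lost=8)      # 20%, review autonomy
```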

To get after the challenges related to implementing its vision, the Air Force added three technical subareas to the BAA related to decision-making aids, manufacturing and deployment processes, and situational awareness.

For the first subarea, the Air Force wants capabilities that can use a government-provided inventory of available datasets, algorithms, models and host platforms to perform tradeoff analyses, rank different adaptation and deployment options according to various factors, and provide an interface for battle managers to review options and make their selections.

“When an automatic target capability … exhibits aberrant behavior, an operator must choose the best adaptation and deployment options to resolve the drift. Because each step has a local cost, the technology should help operators understand the overall compounded cost as it pertains to fulfillment of mission requirements and resource availability … For example, the system could communicate tradeoffs in terms of model accuracy versus deployment readiness, assuming better models take longer to train and evaluate,” the BAA states.
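The accuracy-versus-readiness tradeoff described above can be made concrete with a toy ranking function. Everything here — the candidate options, their numbers, the scoring weights, and the deadline — is invented for the example; the BAA only asks for tools that surface such tradeoffs to operators.

```python
# Hypothetical adaptation options an operator might weigh after drift is detected.
options = [
    {"name": "replace model", "expected_accuracy": 0.70, "hours_to_deploy": 1},
    {"name": "fine tune", "expected_accuracy": 0.85, "hours_to_deploy": 6},
    {"name": "retrain from scratch", "expected_accuracy": 0.92, "hours_to_deploy": 24},
]

def score(option, deadline_hours=8, accuracy_weight=0.7):
    """Penalize options that cannot be fielded before the mission deadline;
    otherwise blend expected accuracy against deployment readiness."""
    if option["hours_to_deploy"] > deadline_hours:
        return 0.0  # cannot meet the imposed mission timeline
    readiness = 1 - option["hours_to_deploy"] / deadline_hours
    return accuracy_weight * option["expected_accuracy"] + (1 - accuracy_weight) * readiness

ranked = sorted(options, key=score, reverse=True)
# With an 8-hour deadline, the quick model swap outranks the slower,
# more accurate retraining options.
assert ranked[0]["name"] == "replace model"
```

A real decision aid would fold in many more cost factors (data availability, compute on hand, test time), but the core idea — each step has a local cost, and the tool compounds them into a mission-level comparison — is what the BAA is asking offerors to build.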

The Air Force is looking for “novel presentation approaches” to facilitate this type of selection process.

For the second subarea, which is related to the manufacturing, test and deployment of AI models, the service notes that it needs operators to be able to coordinate workflow execution at next-gen battle management stations.

“Once a newly manufactured AI model is approved and ready for deployment, the system should post model updates onto the host platforms, perhaps while in motion assuming sufficient comms. Platforms should expose software interfaces that enable operators to dynamically read, update, and delete models,” per the BAA, which notes that offerors will be expected to develop adaptors and processes around open platform interfaces furnished by the Defense Department.
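The read/update/delete interface the BAA asks host platforms to expose maps naturally onto a small model store. The sketch below is an in-memory stand-in with assumed names; a fielded version would sit behind the DoD-furnished open platform interfaces the document references, with staging and verification around each operation.

```python
class ModelHost:
    """Illustrative in-memory stand-in for a platform's model store."""

    def __init__(self):
        self._models = {}

    def read(self, name):
        return self._models.get(name)

    def update(self, name, payload):
        # Deploying a new or replacement model is a plain overwrite here;
        # a real platform would verify and hot-swap the payload.
        self._models[name] = payload

    def delete(self, name):
        self._models.pop(name, None)


# Example: push a fine-tuned detector to a UAV over an uplink, then retire it.
uav = ModelHost()
uav.update("object-detector", payload=b"v2-finetuned-weights")
assert uav.read("object-detector") == b"v2-finetuned-weights"
uav.delete("object-detector")
assert uav.read("object-detector") is None
```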

For the third subarea, focused on situational awareness, the Air Force wants capability trackers that provide location data and other critical information about the status of mobile weapons platforms and the AI models they’re loaded with.

It also needs monitoring tools that enable “plug-and-play integration of drift detection algorithms and live performance feedback” from operators, so that battle managers can be alerted to underperforming AI and consider courses of action to address the problem.

“Because drift detection research is still in its infancy, [this announcement] solicits frameworks to integrate existing (possibly rudimentary) drift detection methods as opposed to development of new state-of-the-art algorithms,” the BAA states.
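The "plug-and-play" framing suggests a framework where detection methods are registered interchangeably rather than built in. A minimal sketch of that shape, with all names and the toy detectors assumed for illustration:

```python
class MonitoringFramework:
    """Illustrative: drift detectors register as plain callables, so
    rudimentary methods can be swapped for better ones later."""

    def __init__(self):
        self.detectors = {}

    def register(self, name, detector):
        """detector: callable taking a batch of observations, returning bool."""
        self.detectors[name] = detector

    def check(self, batch):
        # Return the names of every detector reporting drift on this batch.
        return [name for name, d in self.detectors.items() if d(batch)]


fw = MonitoringFramework()
# A rudimentary detector: mean model confidence has collapsed.
fw.register("low-confidence", lambda batch: sum(batch) / len(batch) < 0.5)
# A stub for the live operator-feedback channel the BAA mentions.
fw.register("operator-flag", lambda batch: False)

assert fw.check([0.9, 0.8, 0.85]) == []                 # confidences look healthy
assert fw.check([0.2, 0.3, 0.1]) == ["low-confidence"]  # alert the battle manager
```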

It notes that the Air Force isn’t soliciting proposals for new platforms, communications hardware, communications networks, or AI development frameworks. Rather, it seeks “innovation for how to connect existing AI development frameworks within battle management control stations and workflows to configure the behavior of AI” as outlined in the document.

The Defense Department plans to conduct live experiments with companies’ solutions at various military exercises to assess completion of technical milestones.

The BAA — which includes five other technical areas — runs through fiscal 2028. However, the Air Force is encouraging contractors to submit white papers by March 15 if they want to align with projected funding opportunities for fiscal 2025.
