6 April 2026

Stimulating Creativity in Human-Machine Teams

Luke M. Herrington

Introducing HWIT: Using Prompt Engineering to Stimulate Creativity in Human-Machine Teams

The question of how best to use artificial intelligence (AI) resources in the military, including for writing, is an important one. AI marketing materials (and other rhetorical efforts) frame AI and large language models (LLMs) as tools that enhance human creativity, or even as creative in their own right. Some of this hype stems from the desire to sell these products to writers, including those in the military. As a practical matter, however, AI's creative potential remains a subject of debate. Most ideas generated by AI are conventional, for example, and AI's ability to achieve surprise or novelty is limited. Additionally, while some research finds that AI improves individual performance and creativity, it also finds that AI stifles the creativity of larger groups. For my part, while I find value in employing AI in the classroom, I too am skeptical about its inherent "creativity."

If AI is not actually all that creative, it poses a real problem for military writing, especially as the U.S. military progressively turns to AI to maintain its competitive warfighting advantage over our adversaries. This is because working with LLMs is often associated with cognitive offloading, whereby human users shift the burden of creative and critical thought to AI. Yet creative and critical thinking require higher-order cognition, and the arts, including the art of military planning, are fundamentally creative and critical processes. It is one thing to offload tasks that can and should be automated; it would be another thing entirely to shift creative tasks requiring higher-order thought to an AI. Using AI to summarize meeting notes or format briefing slides, for instance, can be invaluable to staff officers drowning in paperwork, just as it can be for students checking their work for spelling, grammar, and formatting mistakes.

But what about using AI to draft orders, to craft options for complex military problems, or even to write in an academic setting? The risk in shifting the burden of creative expression to an AI is regression to the mean. LLMs are built on powerful statistical models and massive datasets that treat frequency as synonymous with importance, so their output generally reflects the underlying averages in their training data. This is, at least in part, the origin both of AI hallucination and of the proclivity of AI-generated writing to overuse certain words and punctuation marks. An AI that anchors on a term is likely to produce output correlated with related terms in its data, regardless of either the relevance of those associations to a given prompt or the veracity of any truth claims in its output.
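The pull toward the average can be seen even in a toy model. The sketch below (my own illustration, with an invented miniature corpus; real LLMs are vastly more sophisticated) builds a simple bigram frequency table and decodes greedily. Because it equates frequency with importance, the common phrasing always wins and the rare, novel phrasing is never produced:

```python
from collections import Counter, defaultdict

# Invented toy corpus: the conventional phrase "the usual plan"
# appears far more often than the novel phrase "the quick fox".
corpus = ["the", "usual", "plan"] * 8 + ["the", "quick", "fox"] * 2

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Greedy decoding: emit the single most frequent continuation."""
    return bigrams[word].most_common(1)[0][0]

# "the" is followed by "usual" 8 times and "quick" only 2 times,
# so the model always regresses to the conventional continuation.
print(most_likely_next("the"))
```

The rare continuation ("quick") still exists in the table, but under frequency-driven decoding it is never selected; producing it would require deliberately sampling away from the mode, which is one intuition behind prompt-engineering techniques that push a model off its statistical center.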
