28 March 2019

Should supercomputers design the Pentagon’s next prototypes?

By: John Walker  

With a computational capacity of 7 billion processor core hours, 100 petabytes of storage and classified networks moving data at 40 gigabytes per second, the Department of Defense commands assets few organizations can match. Perhaps more importantly, U.S. Army-managed programs like Engineered Resilient Systems and the High-Performance Computing Modernization Program are charged with helping marshal these capabilities to accelerate system development. The Office of the Secretary of Defense, for its part, has placed a heavy emphasis on rapid prototyping and fast-to-fail philosophies to reimagine research and development processes. At the same time, academia, industry and government are pushing advanced manufacturing forward to upend production timelines.

What has not happened yet is the marrying of these capabilities into a seamless, connected view of development. What would that look like? Imagine supercomputers churning through millions of possible configurations for a high-speed, high-payload drone using physics-based modeling and simulation tools, then sending the most promising designs over secure networks to rapid prototyping assets. These machines would replicate the designs in near real time at any scale while simultaneously completing finish-machining operations. Moreover, 3D printing allows pressure sensors and other instrumentation to be embedded during the build, essentially delivering a fully instrumented prototype. The prototypes would then undergo testing and evaluation in an adjacent facility. This approach enables a new, highly fluid form of design evolution. More than just co-located or connected assets, the approach is focused on accelerated experimentation, innovation and rapid learning, all leading to faster cycle times and more resilient designs.
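To make that loop concrete, here is a minimal Python sketch of the design-space exploration step described above. The drone parameters, the scoring formula and the simulate() stand-in are hypothetical placeholders chosen for illustration, not any actual DoD tool or solver.

```python
# A minimal sketch of the envisioned design-space exploration loop.
# All parameters and the scoring model are illustrative assumptions.
import itertools
import random

def simulate(wingspan_m, payload_kg, cruise_speed_ms):
    """Placeholder for a physics-based simulation returning a figure
    of merit (higher is better). A real run would invoke an HPC
    solver; here we fake a score for illustration."""
    lift_margin = wingspan_m * 9.5 - payload_kg * 0.8
    drag_penalty = (cruise_speed_ms / 100.0) ** 2 * wingspan_m
    return lift_margin - drag_penalty + random.gauss(0, 0.5)

# Enumerate candidate configurations across a coarse grid.
candidates = list(itertools.product(
    [2.0, 3.0, 4.0, 5.0],   # wingspan (m)
    [5, 10, 20],            # payload (kg)
    [40, 70, 100],          # cruise speed (m/s)
))

# Score every candidate, then keep the most promising designs --
# in the envisioned pipeline these would be handed to the printers.
scored = sorted(candidates, key=lambda c: simulate(*c), reverse=True)
for design in scored[:5]:
    print("send to rapid prototyping:", design)
```

In practice the grid would be far finer and the scoring far more expensive, which is exactly why supercomputer-scale capacity matters for this step.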

Sound too good to be true? There are people within the DoD, academia and industry who have begun to operationalize this concept. Why has it taken so long? The DoD capability development process has grown too large and organizationally complex. The spirit of entrepreneurship and innovation has been submerged in layers of bureaucracy. Furthermore, risk-taking in the early stages is poorly understood (even though it clearly saves money when performance and life-cycle costs are considered), and development processes do not encourage it. It takes special people with a special mindset to spot opportunity, and these individuals can be few and far between. It’s easier in many ways to do the same old thing rather than experiment with new approaches, yet this is precisely what it will take to accelerate development.

Some within the DoD are trying to approach the problem differently: rather than following a step-wise process, they are developing approaches that run activities in parallel and iteratively to speed risk reduction and optimize performance. What does it take to speed program development timelines? There is certainly no shortage of academic writing on the subject, and plenty of industry bluster, but increasingly the DoD and its partners are taking a more pragmatic approach.
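One way to picture the parallel-and-iterative idea is a loop that evaluates a batch of design variants concurrently, then narrows the search around the best performer. The sketch below is hedged and uses assumed names: evaluate() stands in for an expensive solver run, and the refinement rule is purely illustrative.

```python
# A sketch of parallel, iterative design refinement.
# evaluate() is a hypothetical stand-in for a simulation run.
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate(design):
    # Stand-in for an expensive solver; returns a score (higher is better).
    return -(design - 3.7) ** 2 + random.gauss(0, 0.01)

def refine(center, spread, n=8):
    # Propose n variants around the current best design.
    return [center + random.uniform(-spread, spread) for _ in range(n)]

if __name__ == "__main__":
    center, spread = 0.0, 5.0
    for iteration in range(4):
        candidates = refine(center, spread)
        with ProcessPoolExecutor() as pool:     # evaluate the batch in parallel
            scores = list(pool.map(evaluate, candidates))
        center = candidates[scores.index(max(scores))]
        spread /= 2                             # tighten the search each pass
        print(f"iteration {iteration}: best design ~ {center:.2f}")
```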

In DoD circles, people joke about PowerPoint development programs. In real life, capabilities must operate in a world governed by physical properties. This demands a computational platform capable of accurately modeling the laws of aerodynamics, electromagnetics and numerous other complexities. Processor capacity is nice, but what users really need are software tools that can make sense of very complex trade-offs in motion, entropy, control and system performance. The smallest change in design can have far-reaching implications across the complex set of interactions that exist in any system. High-performance computing can tame the cascade of uncertainty that is set off every time a design is modified. To be clear, this capability exists today. What is truly new, however, is moving from the theoretical and computational world to the pragmatic world of actually building things, and doing it all in near real time.
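As a small illustration of that cascade, the sketch below shows how a modest design change propagates through a coupled performance quantity. The stall-speed relation is the standard first-order formula; the mass, wing area and lift-coefficient values are assumptions chosen only for illustration.

```python
# A toy illustration of how a small design change ripples through
# coupled performance quantities. Numbers are illustrative only.
import math

RHO = 1.225     # air density at sea level, kg/m^3
CL_MAX = 1.4    # assumed maximum lift coefficient

def stall_speed(mass_kg, wing_area_m2):
    """V_stall = sqrt(2 * W / (rho * S * CL_max))."""
    weight_n = mass_kg * 9.81
    return math.sqrt(2 * weight_n / (RHO * wing_area_m2 * CL_MAX))

base = stall_speed(mass_kg=150.0, wing_area_m2=2.0)
bumped = stall_speed(mass_kg=155.0, wing_area_m2=2.0)  # ~3.3% heavier

print(f"stall speed: {base:.2f} -> {bumped:.2f} m/s "
      f"({100 * (bumped / base - 1):.2f}% change)")
```

One extra kilogram of payload shifts stall speed, which in turn shifts takeoff distance, control margins and mission profiles; tracing such chains across millions of variants is what the computational platform has to do.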

Even in disciplines that are relatively well understood, such as computational fluid dynamics applied to lifting bodies, models don’t always tell the full story. This is especially true of physics-based models, which attempt to explain phenomena from first principles. That being the case, there will always be a need to validate what the models are telling us by building and then testing actual hardware.
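A hedged sketch of what that validation step might look like in code: compare model predictions against measured hardware data and flag where they diverge. The lift-coefficient arrays here are invented placeholders standing in for real wind-tunnel results.

```python
# Compare model predictions against measured test data.
# Both arrays are illustrative placeholders, not real results.
predicted_cl = [0.21, 0.43, 0.65, 0.86, 1.05]   # model output
measured_cl  = [0.20, 0.41, 0.66, 0.83, 0.98]   # test article

# Root-mean-square error as a simple agreement metric; a real
# program would also track uncertainty bounds on both sides.
rmse = (sum((p - m) ** 2 for p, m in zip(predicted_cl, measured_cl))
        / len(measured_cl)) ** 0.5
print(f"model-vs-test RMSE on lift coefficient: {rmse:.3f}")

# Flag test points where the model drifts outside a tolerance,
# signaling where the first-principles assumptions break down.
for i, (p, m) in enumerate(zip(predicted_cl, measured_cl)):
    if abs(p - m) > 0.05:
        print(f"re-examine model at test point {i}: "
              f"predicted {p:.2f}, measured {m:.2f}")
```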

In the traditional acquisition process, building hardware from a conceptual design can literally take years. While still in its infancy, a U.S. Army-led effort is contemplating a different approach: using high-performance computing, physics-based modeling, advanced manufacturing processes and embedded sensors to move from concept to design to build seamlessly and efficiently. The idea is revolutionary and has the potential to bring unheard-of time, cost and technical efficiencies to acquisition.

Of course, there are complexities, and this solution certainly will not solve every problem inherent in research and development processes, but merging design, build and test into a seamless orchestration can accelerate development. We have all complained about how long it takes to field new equipment, yet few organizations are working on pragmatic solutions. Those that are have melded the best of computational science with a page from our past: rapid experimentation that inspires learning and innovation.

John Walker is managing director at Navigant, a consulting firm, where he focuses on defense and national security.
