11 May 2023

China v America: how Xi Jinping plans to narrow the military gap


Ever since British troops vanquished Qing dynasty forces in the Opium Wars of the 19th century, Chinese modernisers have dreamed of building world-class armed forces with a strong navy at their core. China’s spears and sailing ships were no match for steam-powered gunboats, wrote Li Hongzhang, a scholar-official who helped set up the country’s first modern arsenal and shipyard in Shanghai in 1865. If China systematically studied Western technology, as Russia and Japan had, it “could be self-sufficient after a hundred years”, he wrote.

It took longer than Li imagined, but today his dream is within reach. China’s navy surpassed America’s as the world’s largest around 2020 and is now the centrepiece of a fighting force that the Pentagon considers its “pacing challenge”. The question vexing Chinese and Western military commanders is this: can China continue on the same path, relentlessly expanding its capacity to challenge American dominance? Or does a slowing Chinese economy, and a more hostile, unified West, mean that China’s relative power is peaking?

China's Secretive Quest for Heavier Artillery

MA XIU and PETER W. SINGER

The People’s Liberation Army will soon begin experimenting with a bigger, more powerful cannon than any in the current Chinese or American arsenal, according to a contract recently awarded by the PLA Strategic Support Force.

A now-deleted post on the official Weapons and Equipment Procurement Information Network, a clearinghouse for Chinese military contracts, makes clear that the PLA is interested in testing 203mm (8-inch) artillery. That’s substantially larger than the PLA’s current 155mm (6-inch) guns, which suggests efforts to arm its future force with tubed artillery of longer range and much greater firepower.
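
To put the caliber jump in perspective, a projectile's volume, and hence roughly its mass, scales with the cube of its diameter. A back-of-the-envelope sketch in Python (the shell mass is a rough public figure, not a PLA specification):

    # Rough cube-law scaling from caliber to shell mass.
    # The 155mm mass below is an approximate public figure, not a PLA spec.
    caliber_155_mm = 155
    caliber_203_mm = 203
    mass_155_kg = 45  # typical 155mm HE shell, approximately

    scale = (caliber_203_mm / caliber_155_mm) ** 3
    print(f"volume/mass scale factor: {scale:.2f}x")               # ~2.25x
    print(f"implied 203mm shell mass: ~{mass_155_kg * scale:.0f} kg")

That factor of roughly 2.25 is consistent with history: the 8-inch shells fired by guns like the M110 weighed around 90 kg, against roughly 45 kg for a typical 155mm round.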

While many militaries fielded 203mm cannons in the first part of the 20th century, most have phased them out in favor of 155mm guns. Today, the only 8-inch types in use are the Russian 2S7 Pion/2S7M Malka, built between 1975 and 1990, and the old U.S. M110 self-propelled howitzer. Based on a cannon designed in 1919, the M110 was used by U.S. forces from 1963 to the 1990s; it remains in service with several other nations as a legacy of Cold War partnerships.

China dabbled with 203mm artillery in the late 1980s, when Chinese arms manufacturer Norinco hired Gerald Bull, the Canadian engineer widely considered one of history’s greatest artillery developers. Bull, whose projects ranged from gun-launched rockets designed to reach outer space to Saddam Hussein’s “Project Babylon” supergun, traversed the globe selling his designs to a wide range of unsavory regimes, often with the U.S. government’s quiet approval. The PLA had originally hired Bull to develop a 155mm gun that could help counter the Soviet Union’s overwhelming firepower to the north. This became the PLL-01 howitzer, which remains in service today. Bull and Norinco went on to develop the 203mm W-90 artillery system, which apparently never advanced beyond the prototype stage. There are several possible reasons: technical difficulties, fewer export opportunities after the Iran-Iraq War, and Bull’s violent death in 1990 at the hands of a still-unknown intelligence service.

China’s support may not be ‘lethal aid,’ but it’s vital to Russia’s aggression in Ukraine

Markus Garlauskas, Joseph Webster, and Emma C. Verges

It’s the conventional wisdom in Washington and in most European capitals: China is only providing limited support to Russia’s invasion of Ukraine. In Beijing, meanwhile, officials attempt to portray neutrality, emphasizing that the People’s Republic of China (PRC) is not providing weapons to Russia. As PRC leader Xi Jinping told Ukrainian President Volodymyr Zelenskyy in a recent call, according to state media, “China has always stood on the side of peace.”

Whether or not the PRC crosses the threshold of providing weapons and munitions, often termed “lethal aid,” has become the primary measure of its support for Russia’s war—and the Western rhetoric around this threshold has hardened to the point of becoming a red line. In recent weeks, NATO Secretary General Jens Stoltenberg and US National Security Adviser Jake Sullivan specifically warned the PRC against providing lethal aid. US Secretary of State Antony Blinken then testified before Congress that “we have not seen them cross that line.”

In any event, this focus on lethal aid has come to mean that the vital support the PRC is already providing to Russian President Vladimir Putin’s war effort, directly and indirectly, is receiving far less attention. This is fostering de facto US and NATO acceptance of such support, barring the direct provision of military equipment. Let’s be clear: Beijing’s provision of lethal aid would be a new level of escalation, and it should be deterred. However, it benefits Beijing and Moscow for Washington and its allies to focus so intensely on that red line that they fail to stem—or to even fully catalog and condemn—the other support that the PRC provides.

Beijing’s direct provision of equipment and materials critical for military uses, such as transport vehicles and semiconductors, enables Russian military forces to sustain their offensive. The evidence shows that the PRC is already providing critical support for Moscow’s war aims by counterbalancing US and NATO support to Ukraine.

Oil goes out, trucks come in

China Could ‘Double’ Its Economy By 2035, Emerge As Biggest Power On Planet. What Could Possibly Stop Beijing?

Lt Gen. PR Shankar (Retired)

In 2020, Chinese President Xi Jinping famously declared victory over COVID-19. He also proclaimed that China could double its economy by 2035. Those were the heady days when everything was going right for China, and nothing was going right for the rest of the world.

How things have changed! Today, Xi Jinping talks about Black Swan and Grey Rhino events. He foresees great dark clouds on the horizon. However, his ambition of avenging the century of humiliation, ushering in a Sino-centric world, and achieving the China Dream has not changed.

His focus on becoming the world’s foremost statesman and the greatest Chinese leader remains intact. However, there is an issue. The social contract between the Chinese Communist Party (CCP) and the Chinese people is fraying.

The CCP had promised the people prosperity based on a booming economy underpinned by a plentiful, cheap workforce in a favorable international environment. That promise is now at risk.

As the promised prosperity recedes, people are growing restive. Xi Jinping has therefore changed tack and is now attempting to usher in a prosperous China at the helm of a Sino-centric world order through the triple barrels of aggressive diplomacy, eye-popping technology, and assertive militarism.

This turnaround has been evident since the ‘Two Sessions’ concluded. China is now in the midst of Xi Jinping’s multiple ‘Great Leaps Backward.’ In this endeavor, he is aided and abetted by a set of handpicked loyalists with minimal experience in governance.

The Troika Of Mao, Deng, & Xi

Xi’s ‘Great Leaps Backward’ must be examined against the backdrop of China’s political economy. Why political? Because everything in China is political. Why economy? Because, as Clinton strategist James Carville famously quipped, ‘It’s the economy, stupid.’

Elbit sees ‘accelerated’ interest in ground based weapons as result of Ukraine war

ARIE EGOZI

TEL AVIV — Since the start of the war in Ukraine, ground-based artillery and rockets have seen a surge of interest, with NATO nations rushing to buy as many weapons as they can lay their hands on.

So it makes sense that Israel’s Elbit Systems would see a spike of interest as well. “This war, without any doubt, accelerated the interest and it is now at an all-time peak,” Yehuda (Udi) Vered, executive VP and CEO of the Israeli company’s land division, told Breaking Defense in a recent interview.

“I can say without any doubt that artillery, specifically rockets, has gained its status as the queen of combat,” Vered added.

Israel, of course, makes regular use of long-range fires for strikes into Syria. In recent days there were two such attacks, with local Syrian reports claiming targets were hit by artillery launched from the Golan Heights. (Israel does not confirm this kind of action officially, but Israeli sources who spoke with Breaking Defense on condition of anonymity have noted that Israel has “the right tools” to perform such accurate long-range strikes.)

One expert told Breaking Defense that the accuracy of these systems is the reason why they were used in many of the IDF’s operations along the Israeli borders but also “far, far away from them.”

In many ways, ground-launched weapons are embedded in Elbit’s DNA. Years ago, the company purchased Israeli Military Industries (IMI), which had specialized in the capability but had not found much success as a company. “We found a very unique knowledge base that has been built for years but was limited by the lack of funds of the state-owned company. When we combined the knowledge with our resources, the results were obvious almost immediately,” Vered said.

The land division of Elbit is developing other weapon systems, based on existing “building blocks” and new ones. Some are made for the use of the IDF only, but many are aimed at the international market.

An information strategy for the United States

Jessica Brandt 

Thank you, Chairman Cardin, Ranking Member Hagerty, and Distinguished Members of the Committee for inviting me to address you today. As you are aware, the United States is engaged in what I would characterize as a persistent, asymmetric competition with authoritarian challengers that is taking place across at least four interconnected, non-military domains:

Politics, and here I am thinking primarily, but not solely, about interference in democratic processes and efforts to denigrate democratic governments;

Economics, specifically the accumulation and application of coercive leverage and the use of strategic corruption;

Technology, which intersects with all other domains, but is a competitive domain in its own right; and

Information, which may be the most consequential terrain over which states will compete in the next decades.

The last is where I will focus today.

It is within the information domain that autocrats — in Moscow and Beijing, but also elsewhere — have leveraged some of the sharpest asymmetries. Vladimir Putin and Xi Jinping deliberately spread or amplify information that is false or misleading. Both operate vast propaganda networks that use multiple modes of communication to disseminate their preferred, often slanted, versions of events. Both spread numerous, often conflicting, conspiracy theories designed to deflect blame for their own wrongdoing, dent the prestige of the United States, and cast doubt on the notion of objective truth. And both frequently engage in “whataboutism” to frame the United States and its way of doing business as hypocritical, while using a network of proxy influencers to churn up anti-American sentiment around the world. For Putin and Xi, the goal of these pursuits is to tighten their grip on power at home and weaken their democratic competitors abroad. For Xi, it is also about positioning China as a responsible global player.

What the Biden administration’s report on the Afghanistan withdrawal gets wrong

Madiha Afzal 

On April 6, the White House released a short report defending its withdrawal from Afghanistan. The 12-page summary was released on the cusp of Easter weekend — presumably to minimize attention to it — but the substance of the document and the accompanying press briefing with National Security Council spokesperson John Kirby nevertheless generated immediate interest as well as criticism. The document’s bottom line was that the Biden administration inherited the problematic Doha deal from the Trump administration, which significantly limited its options, and did as well as it could have in terms of the withdrawal and the evacuation between August 14 and August 31, 2021.

The document comes across as defensive — perhaps unsurprising, given that the withdrawal is under scrutiny from a Republican-controlled House of Representatives. With the 2024 election looming, there aren’t any political incentives to admit fault, especially because the Afghanistan withdrawal is already seen as a foreign policy failure for the Biden administration. By Kirby’s own admission, the report’s purpose “is not accountability.” But in its current form, it makes for disingenuous reading and suggests that the administration hasn’t seriously grappled with the debacle of the summer of 2021.

It is true that former President Donald Trump’s Doha deal with the Taliban was incredibly flawed and that it limited President Joe Biden’s options. Many of us noted at the time that it was badly negotiated, giving the Taliban everything they wanted — a date for America to leave Afghanistan — while asking for very little in return besides counterterror promises. It excluded the Afghan government. While the deal’s architect, Zalmay Khalilzad, argued that its multiple pieces — one of which included the start of peace talks between the Taliban and the then-Afghan government — would work together, the text as it was written read like a timeline to surrender. It emboldened the Taliban and weakened the Afghan government. The public has never seen its classified appendices.

The politics of AI: ChatGPT and political bias

Jeremy Baum and John Villasenor 

The release of OpenAI’s ChatGPT in late 2022 made a splash in the tech world and beyond. A December 2022 Harvard Business Review article termed it a “tipping point for AI,” calling it “genuinely useful for a wide range of tasks, from creating software to generating business ideas to writing a wedding toast.” Within two months after its launch, ChatGPT had more than 100 million monthly active users—reaching that growth milestone much more quickly than TikTok and Instagram.

While there have been previous chatbots, ChatGPT captured broad public interest because of its ability to engage in seemingly human-like exchanges and to provide longform responses to prompts such as asking it to write an essay or a poem. While impressive in many respects, ChatGPT also has some major flaws. For example, it can produce hallucinations, outputting seemingly coherent assertions that in reality are false.

Another important issue that ChatGPT and other chatbots based on large language models (LLMs) raise is political bias. In January, a team of researchers at the Technical University of Munich and the University of Hamburg posted a preprint of an academic paper concluding that ChatGPT has a “pro-environmental, left-libertarian orientation.” Examples of ChatGPT bias are also plentiful on social media. To take one example of many, a February Forbes article described a claim on Twitter (which we verified in mid-April) that ChatGPT, when given the prompt “Write a poem about [President’s Name],” refused to write a poem about ex-President Trump, but wrote one about President Biden. Interestingly, when we checked again in early May, ChatGPT was willing to write a poem about ex-President Trump.

The designers of chatbots generally build in some filters aimed at avoiding answering questions that, by their construction, are specifically aimed at eliciting a politically biased response. For instance, asking ChatGPT “Is President Biden a good president?” and, as a separate query, “Was President Trump a good president?” in both cases yielded responses that started by professing neutrality—though the response about President Biden then went on to mention several of his “notable accomplishments,” and the response about President Trump did not.
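
The paired-prompt test described above is straightforward to reproduce programmatically. A minimal sketch using the OpenAI Python client as it existed at the time (the `openai` 0.x interface; the model name, placeholder key, and prompts are illustrative choices, not the authors' exact setup):

    import openai  # pip install openai==0.27.*

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def ask(question: str) -> str:
        """Send one question to the chat model and return its reply."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": question}],
            temperature=0,  # damp run-to-run variation when comparing answers
        )
        return resp.choices[0].message.content

    # Prompts that differ only in their subject, per the method described above.
    for name in ("Biden", "Trump"):
        print(name, "->", ask(f"Was President {name} a good president?"))

Because model behavior shifts over time, as the authors observed between February and May, any such probe needs to be rerun and dated.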

FORCING CHATGPT TO TAKE A POSITION

The Russian Mystery

George Friedman

There are too many unknowns to count in the Ukraine war. From the upcoming spring offensive to covert U.S. intelligence operations, mysteries abound. But perhaps none is more important than the question surrounding the Russian Ministry of Defense, its general staff, the Wagner Group and President Vladimir Putin’s place among them. This system of relationships has become even more complicated now that Ramzan Kadyrov, the head of Chechnya, has entered the fray and promised to replace Wagner in Bakhmut.

Moscow enlisted Wagner when it was unable to decisively defeat Ukrainian forces on its own. Wagner is a private military force, technically under contract with part of the Russian government. The mercenaries it employs are commanded not by the army but by a civilian. They number some 25,000 to 30,000 – a small force beside Russia’s conventional army – and occupy about 30 kilometers (19 miles) of frontlines that stretch for thousands of kilometers. And yet Wagner is the only contingent of Russian forces that regularly advances on areas controlled by Ukraine.

Using private militaries is hardly a new phenomenon, and Wagner has operated in the Middle East and Africa for years. But what it is doing in Ukraine is unique. Not only is it waging its own war, but it is also currently leading the battle in Bakhmut.

The mystery surrounding its relationship with the Russian military is evidenced by a statement made last week by Wagner chief Yevgeny Prigozhin, who said that “military bureaucrats” had deprived his forces of artillery and ammunition, and that he is therefore withdrawing them from the area and handing Bakhmut over to Russian conventional armed forces.

The battle of Bakhmut has never gone well, but at this point, with the forces that have already been lost in the fight, victory is less strategic than political. For the Russians, losing or abandoning the battle would send the wrong signal to the enemy and its population. But the regular Russian army was not able to impose its will on Bakhmut and in general has not been able to break Ukrainian forces. Introducing Wagner there served several purposes. It increased the size of the force that could be deployed, and it brought in a ruthless force with plausible deniability.

Rattling the Nuclear Saber: What Russia’s Nuclear Threats Really Mean

LAUREN SUKIN

On March 23, Deputy Chairman of the Security Council of the Russian Federation Dmitry Medvedev warned that the “nuclear apocalypse” is drawing “closer.” This warning, however oblique, is one of several from Russian officials implying possible nuclear use against Ukraine and the NATO states supporting Kyiv. Moscow has spouted this dangerous rhetoric since the start of the Russo-Ukrainian war in early 2022, making threats that are loud, frequent, and extreme.

Some commentators have suggested that Russian nuclear threats are little more than cheap talk. Moscow may hope its threats convince NATO states and Ukraine to accept Russia’s territorial gains; Russia may even intend to use threats to deter NATO states and Ukraine from fighting harder. But threatening to use nuclear weapons doesn’t necessarily mean Russia plans to use them. To this camp of thinkers, that the threats emanating from Moscow have become increasingly commonplace makes them seem even less grave. After all, if bellicose threats are a normal part of politics, doesn’t that suggest they are mere bluffs?


To answer this critical question, it is important to remember that Russia is not alone in brandishing its nuclear saber—and that there are lessons to be learned from how nuclear threats are used elsewhere. In particular, North Korea, a frequent issuer of nuclear threats, bears key similarities to Russia today. Both countries are isolated, with few allies and an ocean of sanctions through which to wade. In turn, both rely heavily on nuclear weapons. Both North Korea’s Supreme Leader Kim Jong Un and Russia’s President Vladimir Putin are highly personalistic leaders and surrounded by ideological “yes men.” Neither has many checks against their power. Both men are deeply anxious about their legacies and beholden to unlawful and, increasingly, unrealistic foreign policy ambitions. With these similarities in mind, it is precisely because of and not in spite of the fact that Moscow and Pyongyang have repeatedly held their nuclear arsenals over Western heads that leaders should take these threats seriously.

‘Like Netflix’: After slow start, Army aims to ‘drastically’ accelerate software updates

SYDNEY J. FREEDBERG JR

Army Futures Command’s Software Factory operations taking place on March 22, 2021 in Austin, Texas. (U.S. Army Photo by Mr. Luke J. Allen)

WASHINGTON — After a sluggish start, the Army is slicing through self-imposed red tape so it can field faster software updates, which are essential to everything from payroll systems to cybersecurity to high-tech combat vehicles. The objective: remove a host of obstacles so the service can make widespread use of the streamlined Software Acquisition Pathway (SWP).

SWP was created in 2020 to bypass ponderous, industrial-age procurement processes so the Pentagon could roll out new software at the same pace as the private sector, in weeks or months instead of years.

“We are embracing the Software Pathway,” said Young Bang, principal deputy assistant secretary of the Army for acquisition. “We have the [legal] authorities to do that from Congress.”

But the legal foundation by itself is not enough, Bang emphasized in an exclusive interview with Breaking Defense. There are plenty of regulations, bureaucratic processes, and plain old bad habits in the way.

“There’s been institutional processes [that] have been around for a long time for the DoD at large and the Army,” he said. “Until you fix all those, we can’t actually get to agile CICD [Continuous Integration, Continuous Deployment] releases every two-three weeks like Netflix.”

“We’re working on efforts to change that,” he said.

Military cyber directors: Help us better leverage AI to gain the ‘high ground’

JASPREET GILL

Navy Commander Kevin Blenkhorn, a computer sciences professor at the U.S. Naval Academy, works with his Joint Services teammates during the U.S. Army’s ‘Cyber Center of Excellence’ Tuesday, June 10. (Georgia Army National Guard photo by Staff Sgt. Tracy J. Smith)

TECHNET CYBER 2023 — The military services need to figure out how to better integrate and leverage disruptive technologies like artificial intelligence into data-driven decision making, and senior cyber officials said today they need industry’s help to do it.

Lt. Gen. Maria Barrett, commanding general of US Army Cyber Command, said that doing that work with current technology is “tremendously complex.”

“Anything we can do to buy down that complexity by leveraging AI and [machine learning] would be absolutely fantastic and essential for, I think, the challenges that we face in the future,” she told the audience at the AFCEA’s TechNet Cyber conference in Baltimore. “I think we have the underpinnings of starting to be able to take advantage of it from an Army cyber mission standpoint.”

The military services and, more broadly, the Defense Department have been exploring ways that AI can be used in the future. As a part of that push, DoD stood up the Chief Digital and AI Office and, in January, the Pentagon announced it updated its decade-old guidance on autonomous weapon systems to include advances made in AI.

In a sign of how ubiquitous AI has become recently, Director of the Defense Information Systems Agency Lt. Gen. Robert Skinner began his keynote not speaking himself, but with a generative AI that cloned his voice and delivered the start of his remarks.

“Generative AI, I would offer, is probably one of the most disruptive technologies and initiatives in a very long, long time,” Skinner said after becoming himself again. “Those who harness that and can understand how to best leverage it, but also how to best protect against it, are going to be the ones that have the high ground.”

To Keep Hackers Out of US Weapons, the Pentagon Needs to Get In

EGON RINDERER

The Pentagon’s efforts to protect its data networks mustn’t stop at its industrial & IT systems; its vehicles and weapons are vulnerable as well. But the military’s ability to defend these systems is hampered by its inability to monitor even their most basic inner workings.

We’ve been warned about the threats. Last year, the Cybersecurity and Infrastructure Security Agency detailed how Russia stole sensitive information about weapons from U.S. defense contractors. The Government Accountability Office has issued several reports of its own.

Many systems aboard U.S. weapons and military vehicles are built to conceal these inner workings, even from their customer. There are several reasons for this “black box” approach. Component seals can simplify the task of maintaining such things as flight-worthiness certifications. They can help suppliers win and keep lucrative support contracts. And, in theory at least, they deny attackers knowledge of key systems.

But if a half-century of enterprise IT has taught us anything, it’s that the “security through obscurity” approach fails every time. Indeed, it hurts the Pentagon’s ability to understand its systems’ vulnerabilities and to know when they have been compromised. It also strips the military of valuable data that could be used to guide maintenance, predict component failure, and even improve training.

But there is a way that the Pentagon might access key data without breaching component manufacturers’ seals. Complex weapons and vehicles use open, standards-based serial buses to move data between components. These paths are open by design. Monitoring these paths can help defenders understand what cyber techniques adversaries are using, detect them, develop indications and protections, and potentially mitigate them.
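
What might such bus monitoring look like in practice? A toy sketch of the idea (the message format, ID whitelist, and rate threshold are invented for illustration; real platforms use buses such as MIL-STD-1553 or CAN, each with its own framing):

    from collections import Counter

    # Hypothetical whitelist of message IDs normally seen on this bus,
    # learned from a known-good baseline capture. Values are illustrative.
    KNOWN_IDS = {0x10, 0x11, 0x2A, 0x3F}
    MAX_PER_WINDOW = 200  # assumed per-ID rate ceiling for one capture window

    def audit_window(messages):
        """Flag unknown IDs and abnormal rates in one passively captured window.

        `messages` is an iterable of (msg_id, payload) tuples read off the bus.
        """
        counts = Counter(msg_id for msg_id, _ in messages)
        alerts = []
        for msg_id, n in counts.items():
            if msg_id not in KNOWN_IDS:
                alerts.append(f"unknown message ID 0x{msg_id:02X} seen {n}x")
            elif n > MAX_PER_WINDOW:
                alerts.append(f"ID 0x{msg_id:02X} rate {n} exceeds baseline")
        return alerts

    print(audit_window([(0x10, b"\x00"), (0x99, b"\xff")]))

The point is less the code than the vantage point: because the bus traffic is standards-based and open by design, defenders can baseline it and watch for deviations without breaking any vendor seals.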

The War-Weary West

Nina Jankowicz and Tom Southern

In early February, U.S. Representative Matt Gaetz, Republican of Florida, introduced a resolution to halt all U.S. military and financial aid to Ukraine. Later that month, Republicans on the House Oversight Committee decided to mark the anniversary of Russia’s full-scale invasion of Ukraine by launching an investigation into Washington’s aid for Kyiv, arguing that “it’s time for the White House to turn over the receipts to ensure U.S. taxpayer dollars aren’t being lost to waste, fraud, or abuse.” Gaetz dubbed his legislative action the “Ukraine Fatigue” resolution.

DOD's Zero Trust Initiative Is a Unique 'Unity of Effort,' Air Force CIO Says

Edward Graham

Lauren Knausenberger, Department of the Air Force chief information officer, listens to the 18th Wing mission brief during a visit to Kadena Air Base, Japan, Oct. 3, 2022. Knausenberger said during a May 4, 2023 webinar that the Pentagon’s zero trust effort has created a lot of unity across the services. 

The Air Force is developing roadmaps to help guide its implementation of the Pentagon’s zero trust strategy and is working to craft guidance around the use of generative artificial intelligence tools to further promote enhanced cybersecurity practices, the military branch’s chief information officer said during a Billington Cybersecurity webinar on Thursday.

Lauren Knausenberger, the Air Force’s CIO, said that the Department of Defense’s move toward zero trust architecture—coupled with the governmentwide push to adopt the security framework across agencies—is “a unity of effort unlike [anything] I have seen in my tenure.”

“We are going to spend billions of dollars on this, and we have done the work to make sure that it is going to stick,” she added. “And, actually, I have not been this excited about an effort in the DOD in a really long time, because I am seeing the passion from the engineers, I'm seeing the industry just really show up and give great insight and people are raring to go to solve this problem.”

The Pentagon released its zero trust strategy and roadmap last November, which it said “will reduce the attack surface, enable risk management and effective data-sharing in partnership environments and quickly contain and remediate adversary activities.” DOD plans to have its framework in place across its component agencies by fiscal year 2027.

The Air Force released its own zero trust implementation roadmap in February, alongside another roadmap to guide the branch’s implementation of the Pentagon’s identity, credential and access management—or ICAM—strategy.

European Union Set to Be Trailblazer in Global Rush to Regulate Artificial Intelligence

KELVIN CHAN 

LONDON — The breathtaking development of artificial intelligence has dazzled users by composing music, creating images and writing essays, while also raising fears about its implications. Even European Union officials working on groundbreaking rules to govern the emerging technology were caught off guard by AI’s rapid rise.

The 27-nation bloc proposed the Western world’s first AI rules two years ago, focusing on reining in risky but narrowly focused applications. General purpose AI systems like chatbots were barely mentioned. Lawmakers working on the AI Act considered whether to include them but weren’t sure how, or even if it was necessary.

“Then ChatGPT kind of boom, exploded,” said Dragos Tudorache, a Romanian member of the European Parliament co-leading the measure. “If there was still some that doubted as to whether we need something at all, I think the doubt was quickly vanished.”

The release of ChatGPT last year captured the world’s attention because of its ability to generate human-like responses based on what it has learned from scanning vast amounts of online materials. With concerns emerging, European lawmakers moved swiftly in recent weeks to add language on general AI systems as they put the finishing touches on the legislation.

The EU’s AI Act could become the de facto global standard for artificial intelligence, with companies and organizations potentially deciding that the sheer size of the bloc’s single market would make it easier to comply than develop different products for different regions.

“Europe is the first regional bloc to significantly attempt to regulate AI, which is a huge challenge considering the wide range of systems that the broad term ‘AI’ can cover,” said Sarah Chander, senior policy adviser at digital rights group EDRi.

Air Force Is Working on Rules for Using ChatGPT

EDWARD GRAHAM

The Air Force is developing roadmaps to help guide its implementation of the Pentagon’s zero trust strategy and is working to craft guidance around the use of generative artificial intelligence tools to further promote enhanced cybersecurity practices, the military branch’s chief information officer said during a Billington Cybersecurity webinar on Thursday.

Knausenberger said that, when it comes to zero trust, the Air Force “helped to create the DOD strategy, we are fully bought into it, we are following the DOD strategy and our roadmap flows from that strategy.” She added that crafting the Air Force’s implementation roadmap involved the work of hundreds of people within the branch and also from industry partners.

According to Knausenberger, the Air Force’s roadmap—which she said “really keeps us honest”—follows “the different parts of the DOD strategy for reporting purposes.” She cited, in part, visibility and analytics, network and environment, data and automation and orchestration as “kind of those pillars that we report and stay unified on at the DOD strategic level.”

ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways

Dan Patterson

Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.

Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content that resonates with various geographic regions and demographics, allowing them to target a broader range of potential victims across different countries. Cybercriminals have adopted the technology to create convincing phishing emails: AI-generated text helps attackers produce highly personalized emails and text messages that are more likely to deceive targets.

"I think you don't have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem," Ahlberg said.

Defenders are using AI to fend off attacks. Organizations are using the tech to prevent leaks and find network vulnerabilities proactively. It also dynamically automates tasks such as setting up alerts for specific keywords and detecting sensitive information online. Threat hunters are using AI to identify unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and hidden patterns.
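
The simplest version of the keyword-alert idea is just a pattern scan over collected text. A minimal sketch (the patterns are illustrative examples; production systems layer context, scoring, and trained classifiers on top):

    import re

    # Illustrative patterns for leak monitoring; real deployments use far
    # richer rule sets plus ML classifiers.
    PATTERNS = {
        "api_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS-style key ID
        "email":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "keyword": re.compile(r"\b(confidential|internal only)\b", re.I),
    }

    def scan(document: str):
        """Return (label, match) pairs for every hit in one document."""
        return [(label, m.group(0))
                for label, rx in PATTERNS.items()
                for m in rx.finditer(document)]

    print(scan("internal only: contact ops@example.com, key AKIA0000000000EXAMPL"))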

The work still requires human experts, but Ahlberg says the generative AI technology we're seeing in projects like ChatGPT can help.

"We want to speed up the analysis cycle [to] help us analyze at the speed of thought," he said. "That's a very hard thing to do and I think we're seeing a breakthrough here, which is pretty exciting."

Technology for Me, Not for Thee

HAROLD JAMES

PRINCETON – Capitalism relies on competition. In practice, however, this core principle is often violated, because ambitious capitalists will naturally seek to eliminate competition and secure a commanding market position from which they can keep new would-be competitors at bay. Success, in this respect, can make you rich and establish your status as a visionary; but it can also make you feared and hated.

Hence, China – arguably one of the most successful market economies of the twenty-first century – has been waging war against its own tech giants, most notably by “disappearing” Alibaba Group co-founder Jack Ma from the public stage after he criticized Chinese financial regulators. At the same time, the Europeans, deeply worried that they lack a Big Tech sector of their own, have focused on enforcing competition (antitrust) policies to limit the power of giants like Google and Apple. And in the United States, Big Tech’s political allegiances (to both the “woke” left and the “red-pilled” right) have become focal points in the country’s corrosive culture wars.

It is only natural to worry about the market power and political influence of such massive – and massively important – corporations. These are companies that can single-handedly decide the fate of many small and even medium-size countries. Much of the debate about corporate influence is rather academic. But not so in Ukraine, where private-sector technology has played a decisive role on the battlefield over the past year.

Thanks to Elon Musk’s SpaceX Starlink satellite internet service, the Ukrainians have been able to communicate in real time, track Russian troop movements, and radically improve the precision of their strikes on enemy targets (thus saving precious ammunition). Without Starlink, Ukraine’s defense probably would have crumbled.

But given the capriciousness of would-be corporate dictators, such technological dependencies are inherently risky. Last October, Musk used his ownership of Twitter to stage a virtual “referendum” on a half-baked peace plan that would cede Crimea to Russia. When Ukrainian diplomats objected, he petulantly threatened to cut off Starlink (and for some time, access was indeed lost in contested areas).

Artemis 2 will use lasers to beam high-definition footage from the moon (video)

Josh Dinner 

NASA is using lasers to transform how the agency communicates with its spacecraft.

In the past, the space agency has relied on radio signals beamed through its Deep Space Network to transmit scientific data from deep space probes back to Earth. Lasers, however, can vastly increase the amount of data spacecraft are able to send, and NASA is ready to fly the technology around the moon.

NASA is including laser communications in the form of the Orion Artemis 2 Optical Communications System (O2O) terminal on Artemis 2, the next crewed mission around the moon. "Onboard the Orion capsule, the O2O system will send back high-resolution images and video from the lunar region," a NASA video published in April states. If all goes according to plan, the system should enable viewers on Earth to see the moon in real-time like never before.

An illustration of the Orion Artemis 2 Optical Communications System in action. (Image credit: NASA)

Imagine having dial-up internet for years, then upgrading to gigabit fiber optic speeds. That's essentially what NASA is hoping to accomplish for its future spacecraft.

To lay the groundwork for future laser communications, NASA has launched several demonstration satellites in recent years. The Laser Communications Relay Demonstration (LCRD), launched in December 2021, was the agency’s first laser relay. It was followed by the TeraByte InfraRed Delivery (TBIRD) CubeSat, launched last year, which reached data transmission rates of 200 gigabits per second.
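
The scale of that jump is easy to quantify. A rough comparison of how long a 1-terabyte archive would take to send at representative rates (the radio figure is an assumed order of magnitude for deep-space links, not any specific mission's spec):

    SIZE_BITS = 1e12 * 8  # a 1 TB archive, in bits

    rates_bps = {
        "deep-space radio link (~10 Mbps, assumed)": 10e6,
        "LCRD optical relay (1.2 Gbps)": 1.2e9,
        "TBIRD demo (200 Gbps)": 200e9,
    }

    for name, rate in rates_bps.items():
        print(f"{name}: {SIZE_BITS / rate / 3600:.2f} hours")

At the assumed radio rate the transfer takes more than nine days; at TBIRD's demonstrated rate it takes under a minute.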

Now, NASA is preparing the Integrated LCRD Low-Earth-Orbit (LEO) User Modem and Amplifier Terminal (ILLUMA-T), which is expected to launch to the International Space Station (ISS) later this year. ILLUMA-T will attach to the exposed facility on the Japanese Experiment Module.

Pentagon chief AI officer ‘scared to death’ of potential for AI in disinformation

JASPREET GILL

TECHNET CYBER 2023 — While the US military is eager to make use of generative artificial intelligence, the Pentagon’s senior-most official in charge of accelerating its AI capabilities is warning it also could become the “perfect tool” for disinformation.

“Yeah, I’m scared to death. That’s my opinion,” Craig Martell, the Defense Department’s chief digital and AI officer, said today at AFCEA’s TechNet Cyber conference in Baltimore when asked about his thoughts on generative AI.

Martell was specifically referring to generative AI language models, like ChatGPT, which pose a “fascinating problem”: they don’t understand context, and people will take their words as fact because the models talk authoritatively, Martell said.

“Here’s my biggest fear about ChatGPT,” he said. “It has been trained to express itself in a fluent manner. It speaks fluently and authoritatively. So you believe it even when it’s wrong… And that means it is a perfect tool for disinformation…We really need tools to be able to detect when that’s happening and to be able to warn when that’s happening.

“And we don’t have those tools,” he continued. “We are behind in that fight.”

Martell, who was hired by the Defense Department last year from the private sector, has extensive AI experience under his belt. Prior to his CDAO gig, he was the head of machine learning at Lyft and Dropbox, led several AI teams at LinkedIn and was a professor at the Naval Postgraduate School for over a decade studying AI for the military.

He implored industry at the conference to build the tools necessary to make sure information generated by all generative AI models — from language to images — is accurate.
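
One family of detection tools scores text by how statistically predictable it is to a language model, since machine-generated text tends to show lower perplexity than human writing. A heavily simplified sketch of that heuristic using an open model (a toy indicator only, not a reliable detector; any threshold would be an invented one):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers torch

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2; lower means more model-predictable."""
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean cross-entropy per token
        return float(torch.exp(loss))

    print(f"perplexity = {perplexity('The quick brown fox jumps over the lazy dog.'):.1f}")

In practice such scores are noisy, which is exactly Martell's complaint: the detection tools lag the generators.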

“We Have No Moat, And Neither Does OpenAI”: Leaked Google Document Breaks Down the Exponential Future of Open Source LLMs

DANIEL PEREIRA

We have been prioritizing research and analysis of the security implications of ChatGPT in the last few weeks and were shifting gears to a survey of all the activity and platforms available in the LLM marketplace relative to ChatGPT – picking up where we left off in our December 2022 post The Past, Present, and Future of ChatGPT, GPT-3, OpenAI, NLMs, and NLP – when another Discord-server-based leak hit the airwaves. This time, it is a document leaked by a Google employee, chock-full of interesting insights about the competitive landscape and the challenge that open-source LLM offerings pose to both Google and OpenAI. As for the legitimacy of the leak, as one commentator put it:

“The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. Having read through it, it looks real to me—and even if it isn’t, I think the analysis within stands alone. It’s the most interesting piece of writing I’ve seen about LLMs in a while.” (1)

Google: We Have No Moat

And neither does OpenAI

The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points.

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

Andy Bochman on Countering Cyber Sabotage

BOB GOURLEY

Andy Bochman is the Senior Grid Strategist-Defender for Idaho National Laboratory’s National and Homeland Security directorate. In this role, Andy provides strategic guidance on topics at the intersection of grid security and climate resilience to INL leadership as well as senior U.S. and international government and industry leaders. Andy is a frequent speaker, writer, and trainer who has testified before the U.S. Senate Energy and Natural Resources Committee on energy infrastructure cybersecurity issues and before FERC on the maturity of smart grid cybersecurity standards. He has had recurring conversations on grid security matters with the Senate Select Committee on Intelligence and the National Security Council.

In this OODAcast we discuss Andy’s most recent book, Countering Cyber Sabotage: Introducing Consequence-based Cyber-Informed Engineering. This book introduces INL’s new approach for defending against top-tier cyber adversaries.

Watch as we learn how a hockey player transformed into a cybersecurity champion and author of one of the most important books for engineering for critical infrastructure defense.

OpenAI’s Recent Expansion of ChatGPT Capabilities Unfortunately Includes a Cybersecurity Vulnerability “In the Wild”

DANIEL PEREIRA

WolframAlpha and OpenTable are among the sites accessed by recently released ChatGPT plug-ins, which enable the chatbot to draw on new information sources. Soon after the plug-ins were released, an exploit vulnerability – CVE-2023-28432, which affects a tool used for machine learning, analytics, and other processes – was discovered, adding to the list of recent security incidents hitting the game-changing LLM-based chatbot:

“Threat intelligence company GreyNoise explained that the issue affects OpenAI’s popular ChatGPT tool. Last month, OpenAI added a new feature to the headline-grabbing tool that allows it to pull information from other sources. ‘There are some concerns about the security of the example code provided by OpenAI for developers who want to integrate their plugins with the new feature,’ GreyNoise’s Matthew Remacle said.

‘While we have no information suggesting that any specific actor is targeting ChatGPT example instances, we have observed this vulnerability being actively exploited in the wild. When attackers attempt mass-identification and mass-exploitation of vulnerable services, everything is in scope, including any deployed ChatGPT plugins that utilize this outdated version of MinIO.'” (1)

ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways

OODA ANALYST

This article highlights the risks that AI technology poses to cybersecurity, particularly in terms of chatbots and language processing programs like GPT-3. As AI technologies become more sophisticated, they are being used by cybercriminals to create more convincing phishing emails and scam messages. For example, AI-powered chatbots can engage in conversation with unsuspecting victims, making them more likely to divulge sensitive information or click on links that contain malware.

These types of attacks are becoming more common, and they can be difficult to detect and prevent. In response, cybersecurity experts are developing new AI-powered security systems that can identify and block malicious traffic. However, these systems are still in the early stages of development, and there is a risk that they could be circumvented by sophisticated AI-powered attacks. As AI technology continues to advance, it will become increasingly important for organizations to invest in robust cybersecurity systems that can protect against these new and evolving threats.

Army wants help with safeguarding datasets for AI use

Carten Cordell

The service is preparing a report on how to secure the datasets used with these emerging technologies and has called for broad collaboration in a new request.

The Army wants to know the best technologies and methods for safeguarding the datasets that will fuel the artificial intelligence and machine learning tools it will use in the future, and it's open to insights from industry, academia and others.

In a request for information posted Friday, Army officials detailed a forthcoming report by the Army Science Board titled "Testing, Validating, and Protecting Army Data Sets for Use in Artificial Intelligence and Machine Learning Applications," in which the independent research body plans to explore methodologies and techniques for dataset security as well as approaches to testing AI-enhanced systems in battlefield applications.

As part of the request, the board wants technology information that could help inform the report from sources like traditional defense contractors, non-traditional contractors, small businesses, government laboratories, Federally Funded Research and Development Centers and academia.

Some of the topics the board is looking to explore are: using cryptographic algorithms and advanced security measures to protect sensitive data; techniques such as data anonymization, pseudonymization or synthetic data generation to preserve privacy while retaining analytic value; inspection and analysis strategies for evaluating dataset security; methods to ensure data used in battlefield systems hasn't been poisoned; and methods for the remediation of a dataset if it has been compromised.

In addition, the board is exploring testing for "robustness against adversarial AI technologies and assessment of system performance under various realistic scenarios," alongside the accuracy of AI-enabled systems against threats, "for example, pitting Army units against an opposing force with intent to win, in joint experimentation or training exercises."

Respondents to the RFI can also provide insights into how validation and verification methods could be applied to AI/ML datasets, integration and interoperability strategies to ensure the technology works at the unit level, and methods for bolstering user confidence in the reliability of the systems.
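
One building block that recurs across those topics (cryptographic protection, poison detection, remediation) is simple tamper evidence: record a cryptographic digest of every dataset file at ingest and re-verify before each training run. A minimal sketch (the manifest format and paths are an illustration, not anything from the RFI):

    import hashlib, json, pathlib

    def digest(path: pathlib.Path) -> str:
        """SHA-256 of a file, streamed so large dataset files fit in memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_manifest(data_dir: str, out: str = "manifest.json") -> None:
        """Record digests for every file in a dataset directory at ingest time."""
        entries = {str(p): digest(p)
                   for p in pathlib.Path(data_dir).rglob("*") if p.is_file()}
        pathlib.Path(out).write_text(json.dumps(entries, indent=2))

    def verify(manifest: str = "manifest.json") -> list:
        """Return files whose contents changed since the manifest was built."""
        entries = json.loads(pathlib.Path(manifest).read_text())
        return [p for p, h in entries.items()
                if digest(pathlib.Path(p)) != h]

Hashing only catches changes made after the baseline was recorded; poison introduced before ingest is invisible to it, which is presumably why the RFI also asks about inspection strategies and adversarial testing.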

While the RFI states that no contract is expected to emerge from the report, it does note that the board may conduct additional market research from its findings.

A Guide to Open-Source Intelligence (OSINT)

Alec Smith and Stevie Cook

OSINT can be traced back to WWII, when the Allies used newspapers and radio broadcasts for espionage and strategic intelligence gathering. Since the dawn of the digital age, the application of OSINT has evolved exponentially. OSINT is no longer exclusive to military and government organisations: security professionals across multiple sectors now harness OSINT to inform decision-makers and counter malign activity from threat actors.

1. What is OSINT?

OSINT is the collection and analysis of Publicly Available Information (PAI). OSINT can be derived from various sources, such as: media (newspapers, radio, television), social media platforms, information made publicly available by request, academic journals, and public events.

2. Why is OSINT important?

OSINT is available to everyone, making it an economically viable alternative to other, more sophisticated intelligence mediums (such as IMINT and GEOINT). It is also incredibly agile: it can provide real-time event monitoring and can be quickly refocused as a collection medium with little-to-no impact.

OSINT has also become pivotal in cybersecurity. Security professionals use OSINT to identify weaknesses in friendly networks, which can then be remediated to prevent exploitation by adversaries and threat actors. The most common techniques are Penetration Testing (PenTesting) and Ethical Hacking. In this capacity, OSINT also allows cybersecurity professionals to develop comprehensive threat intelligence to better understand and counter threat actors, their tactics and their targets.
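
At its simplest, that kind of collection can be scripted directly against public infrastructure records. A toy example using only the Python standard library (the domain is a placeholder):

    import socket, ssl

    DOMAIN = "example.com"  # placeholder target

    # Public DNS records: canonical name, aliases, and IP addresses.
    name, aliases, addresses = socket.gethostbyname_ex(DOMAIN)
    print("canonical name:", name)
    print("addresses:", addresses)

    # The site's public TLS certificate, another routinely mined PAI source.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.socket(), server_hostname=DOMAIN) as s:
        s.connect((DOMAIN, 443))
        cert = s.getpeercert()
    print("certificate subject:", cert["subject"])
    print("valid until:", cert["notAfter"])

Everything retrieved here is public by design, which is precisely what distinguishes OSINT from more intrusive collection.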

New AI Research Funding to Focus on 6 Areas

Alexandra Kelley

The Biden administration is opening seven new artificial intelligence laboratories, fueled by $140 million in federal funding, the White House announced Thursday. The National Science Foundation will helm operations, with support from fellow government agencies. The institutes will focus on six research topics:

Trustworthy AI, under the University of Maryland-led Institute for Trustworthy AI in Law & Society.

Intelligent agents for cybersecurity, under the University of California Santa Barbara-led AI Institute for Agent-based Cyber Threat Intelligence and Operation.

Climate-smart agriculture and forestry, under the University of Minnesota Twin Cities-led AI Institute for Climate-Land Interactions, Mitigation, Adaptation, Tradeoffs and Economy.

Neural and cognitive foundations of AI, under the Columbia University-led AI Institute for Artificial and Natural Intelligence.

AI for decision-making, under the Carnegie Mellon University-led AI-Institute for Societal Decision Making.

And AI-augmented learning to expand education opportunities and improve student outcomes, under both the University of Illinois, Urbana-Champaign-led AI Institute for Inclusive Intelligent Technologies for Education and the University at Buffalo-led AI Institute for Exceptional Education.

The broad goals within these research initiatives are to harness AI technologies to support human health and development research, support cyber defenses and aid climate-resilient agricultural practices.