10 November 2023

Alternate Suez Canal (The Israeli Ben Gurion Canal)

Patial RC

Heard of the Alternate Suez Canal? Here is the West’s dream project: the proposed ‘Ben Gurion Canal’, or the Israeli Canal project. It is no new revelation; in July 1963, the Lawrence Livermore National Laboratory (now a US Department of Energy laboratory) produced a classified document outlining a plan to use 520 buried nuclear explosions to excavate a route through the hills of the Negev Desert. The document was declassified in 1993.

The project would connect the Gulf of Aqaba to the Mediterranean Sea. David Ben-Gurion, after whom it would be named, is considered the founding father of Israel and was the country’s first prime minister. The canal would rival the Suez Canal, which runs through Egypt and has suffered repeated disruptions, including closures in 1956–57 and 1967–75 and the blockage caused by a large container ship running aground in 2021. At almost 300 km, it would be roughly 100 km longer than the Suez Canal (194 km). The cost of the Israeli canal is estimated at between $16 billion and $55 billion.

The real news was not the grounding of the cargo ship that blocked the Suez Canal for more than six days, the struggle to refloat it, or the reasons behind the accident; it was the renewed push to revive the “Ben-Gurion Canal” project. It is now clear who stands to benefit from this calamity, which hit one of the world’s most important navigation points, the Suez Canal. The Suez Canal is an essential maritime chokepoint, and the longer passage is suspended, the greater the impact on civilian and military transits.

C.I.A. Director Visits Israel and the Middle East Amid Israel-Hamas War

Julian E. Barnes

William J. Burns, the C.I.A. director, arrived in Israel on Sunday for discussions with leaders and intelligence officials, the first stop in a multicountry trip in the region, according to U.S. officials.

The visit comes as the United States is trying to prod Israel to pursue a more targeted approach to attacking Hamas, allow pauses in the fighting for aid to enter Gaza and do more to avoid civilian casualties.

The United States is also looking to expand its intelligence sharing with Israel, providing information that could be useful about hostage locations or any follow-on attacks by Hamas. A U.S. official briefed on Mr. Burns’s trip said he planned to reinforce the American commitment to intelligence cooperation with partners in the region.

Mr. Burns will travel to several Middle Eastern countries for discussions about the situation in Gaza, ongoing hostage negotiations and the importance of deterring the war with Hamas from widening to a broader context, the U.S. official said.

U.S. officials have been visiting Israel at a regular cadence since war broke out after Hamas fighters attacked Israeli towns on Oct. 7 and killed more than 1,400 people, mostly civilians. Israel has retaliated with a punishing air campaign and ground invasion into Gaza, where Hamas is in control. More than 9,000 Palestinians have been killed in airstrikes since Israel began retaliating, according to Gaza’s health ministry. U.S. officials said their estimates of the number of Palestinians killed were similar.

Iran Might Have Miscalculated in Gaza

Walter Russell Mead

Most news and commentary describes the war in Gaza as the latest brutal episode in the conflict between Israelis and Arabs. That is one dimension, but from the perspective of world-power politics, it isn’t the most important. What really matters in the Middle East is the battle between Iran, increasingly backed by Russia and China, and the loose and uneasy group of anti-Iranian powers that includes Israel and the American-backed Arab states.

There is much about the Gaza war that we still don’t know: how long it will last, what the death toll will be, how many hostages can be rescued or returned, and how successful Israel will be in its declared objective of destroying Hamas.

Theories on the Gaza War

George Friedman

Many of our readers have written in to ask why we haven’t had much to say on the war between Hamas and Israel. The reason, lamentably, was that we had nothing to say that wasn’t being said by the mainstream media, and that what we were hearing from other sources was not absolutely reliable. This includes claims that North Korea, of all places, was behind the attack. Now that the dust has settled somewhat, I’d recommend reading Kamran Bokhari’s piece published on Thursday that lays out some of the strategic elements of what Israel must consider. But today, I’d like to attempt to define what is known and unknown.

The first question everyone asked was who was behind the attack. The most common answer is Iran. The problem with this is the sheer number of rockets possessed by Hamas, thought to number some 1,500. It would be extremely difficult to send that many rockets overland into Gaza without being detected.

Because it was so hard to imagine who gave Hamas the missiles, we considered that the munitions might have been delivered by ships through the Mediterranean. The question became who might ship the rockets to the region. We considered Russia, which we hypothesized might have wanted to force the U.S. to send weapons and resources to Israel and thus away from Ukraine. We quickly dismissed this theory, however, because it was all but logistically impossible.

This left us with the theory that Hamas had manufactured the rockets in its tunnels. It is not impossible that Hamas is skilled enough to do so, but manufacturing them underground could introduce defects in the munitions. The key question, then, was still unanswered: Where did such a load of rockets come from, and why weren’t they detected?

Resolving India’s AI Regulation Dilemma

Shaoshan Liu and Jyotika Athavale

In a previous article, we introduced India’s AI regulation dilemma. The Indian government has weighed a non-regulatory approach, which emphasizes the need to innovate, promote, and adapt to the rapid advancement of AI technologies, against a more cautious one, which focuses on the risks associated with AI, notably job displacement, misuse of data, and other unintended consequences.

We argue that this dilemma is a result of the lack of a cohesive national AI strategy in India. In this article, we examine existing AI regulation approaches from the United Kingdom, the United States, the European Union, and China, and analyze India’s current economic and geopolitical situations to develop a proposal to resolve India’s AI regulation dilemma.

With a robust university system and a deep talent pool, the United Kingdom has the potential to become a leading AI powerhouse. To boost domestic AI technology development, the U.K. has recently adopted a “pro-innovation” strategy toward AI regulation. This strategy offers non-legally binding guidance, assigning regulatory responsibilities to existing entities, such as the Competition and Markets Authority. It serves as a mechanism for collecting feedback and insights from various stakeholders.

U.S. technology conglomerates already dominate the global AI market. To consolidate its advantages, the United States has adopted an “industry-specific” strategy, under which the government solicited proposals from these global AI conglomerates for AI regulation. This strategy was reflected in the White House’s request for voluntary commitments from leading AI companies to manage AI risks.

India Once Was a Strong Ally of Palestine. What Changed?

Jannatul Naym Pieal

People hold placards and a banner in solidarity with Israel in Ahmedabad, India, Oct. 16, 2023.

In 1947, India voted against the partition of Palestine at the United Nations General Assembly. In 1974, India became the first non-Arab state to recognize the Palestine Liberation Organization (PLO) as the sole and legitimate representative of the Palestinian people. India was also one of the first countries to recognize the State of Palestine in 1988.

All of these historical records attest to India’s long-standing and warm diplomatic ties with Palestine. By contrast, while India recognized the creation of Israel in 1950, it did not establish diplomatic relations until 1992, and previous Indian governments mostly kept their dealings with Israel quiet.

Fast forward to October 27, 2023. This same India was among the countries that did not back a U.N. resolution calling for a “humanitarian truce” in Gaza, instead choosing to abstain.

Just like that, India is clearly taking a side in the ongoing Gaza war. That’s the side of Israel, from whom India now buys about $2 billion worth of arms every year, making up over 30 percent of Israel’s total exports of armaments.

Only a few hours after Hamas launched its assault on Israel on October 7, India’s Prime Minister Narendra Modi was one of the first world leaders to respond. He vehemently denounced the “terrorist attacks” and declared that India “stands in solidarity with Israel at this difficult hour” in a statement posted on X, formerly known as Twitter.

The Curious Case Of Pakistan’s Mianwali Air Base Terror Attack

Nilesh Kunwar

In its statement on last Saturday’s terrorist attack on the Mianwali Pakistan Air Force [PAF] training base, Pakistan army’s media wing Inter Services Public Relations [ISPR] mentioned that “3 terrorists were neutralised while entering the base, while remaining 3 terrorists have been cornered/isolated due to timely and effective response by the troops.” It also went on to add that “during the attack, some damage to three already grounded aircraft and a fuel bowser [tanker] also occurred.”

Skeptics may contend that since air force bases house aircraft and extremely costly aviation equipment, comprehensive measures are instituted at such facilities to ensure the safety of these valuable assets, through both all-weather surveillance devices and boots on the ground, and hence there is nothing unusual about a terror attack on this air force base being foiled.

Nevertheless, foiling a terrorist attack is no mean achievement and those who thwarted it rightly deserve due appreciation for [to borrow ISPR’s words], “demonstrating exceptional courage, while delivering a timely and effective response.” What’s even more commendable is that as per ISPR, “only some damage was done to three already phased out non-operational aircraft during the attack,” and that “No damage has been done to any of the PAF’s functional operational assets.”

A dispassionate examination of ISPR’s narrative of this incident, however, throws up several questions. By stating that the terrorists were able to inflict “some damage to three already grounded aircraft and a fuel bowser [tanker],” ISPR has acknowledged that some terrorists not only breached multiple surveillance and security layers to gain access to the most sensitive aircraft hangar area, but also managed to damage three aircraft, and this is where doubts about ISPR’s narrative of the incident begin to arise.

The Taliban’s Campaign Against the Islamic State: Explaining Initial Successes

Dr Antonio Giustozzi

Despite a recent decline, the Islamic State (IS), and its South Asian branch IS-K, remains one of the most resilient terrorist organisations on the planet – as recent reports of it planning attacks in Turkey and Europe show. Research carried out in late 2021 to mid-2022 with Taliban and IS members shows that IS-K represented a serious challenge for the Taliban in Afghanistan in this period. While they initially dismissed the threat from IS-K, the Taliban soon developed capabilities to confront it – these capabilities, and IS-K’s responses to them, are the subject of this paper.

The paper outlines five key counter-IS techniques that the Taliban adopted after August 2021: indiscriminate repression; selective repression; choking-off tactics; reconciliation deals; and elite bargaining.

While their initial response was to indulge in indiscriminate repression, the Taliban gradually moved towards an approach focused on selective repression, with the aim of leaving the local communities in areas of IS-K activity relatively untouched. They also considerably improved their intelligence capabilities in this period. By the second half of 2022, the Taliban had succeeded in destroying enough IS-K cells and blocking enough of the group’s funding to drive down its activities and contain the threat. The Taliban also experimented with reconciliation and reintegration, and managed to persuade a few hundred IS-K members in Afghanistan’s Nangarhar province to surrender, contributing decisively to the dismantling of most of IS-K’s organisation there.

Putin Heads to Kazakhstan, With Strategic Ties and Trade on the Agenda

Catherine Putz

Russian President Vladimir Putin is expected to visit Kazakhstan on November 9 for an official visit. Putin’s trip marks just his third international foray in 2023, after visits last month to Kyrgyzstan and China, and comes on the heels of French President Emmanuel Macron’s trip through Central Asia, including a stop in Astana.

The Kazakh presidential press service announced on November 6 that Putin would visit this week at the invitation of President Kassym-Jomart Tokayev. The announcement noted that the two leaders planned to hold talks on bilateral issues and their strategic partnership. The two presidents will also participate, virtually, in the 19th Russia-Kazakhstan Interregional Cooperation Forum that will be held in the city of Kostanay in northern Kazakhstan.

In comments to the media on November 7, after the Kremlin confirmed the meeting, spokesman Dmitry Peskov pushed back on the suggestion that the visit was related to recent European overtures toward Kazakhstan. During Macron’s visit to Astana last week, the French and Kazakh sides signed a declaration of intent on cooperation on critical minerals. The French are also interested in Kazakh uranium, given that the Central Asian country is the world’s largest producer.

Putin’s visit, Peskov said, “is not associated with any other contacts [made by Kazakhstan]. We will further develop our good neighborliness and cooperation with Kazakhstan.”

A Perfect Storm: Managing Conflict in the Taiwan Strait

Randall G. Schriver

The United States’ approach to security in the Taiwan Strait has long been defined by its “One China Policy” while maintaining “strategic ambiguity.” However, without coordinated, whole-of-government policies, the People’s Republic of China will continue to act aggressively in the Taiwan Strait and cross-Strait tensions will continue to rise. The United States must create policies that support U.S. security interests in the Indo-Pacific and develop and execute a strategy for deterring an attack on Taiwan and defeating aggression should deterrence fail. The objective of this report is to offer recommendations for shaping near- and long-term U.S. policy regarding Taiwan in six areas: political, science and technology, economic, military, socio-cultural, and non-traditional security.

The Fallacy of Finite Deterrence

Kyle Balzer

What’s old is new again. The onset of intensive great-power nuclear competition has revived calls for finite deterrence and its associated industrial (countervalue) targeting doctrine. Its advocates contend that a relatively modest strategic posture, one designed to ride out a massive attack and inflict societal devastation in return, constitutes a readily measurable force-sizing standard. The fixed threat of industrial ruin is deemed enough, irrespective of political context and adversarial disposition.

Finite deterrence has a long history stretching back to the 1950s, notwithstanding its repeated failure to steer U.S. nuclear strategy away from broader attack options. In the Cold War, American proponents deemed a relatively small nuclear stockpile, uploaded predominantly on survivable ballistic missile submarines, sufficient. The purported advantage of this posture was its fixed requirements—in terms of numbers and types of weapons, delivery systems, and target sets. After all, targeting a set number of “soft” industrial assets—as opposed to “hardened” military targets—required a smaller and comparatively unsophisticated arsenal. This deterrence standard would reputedly freeze or reduce strategic force levels, extinguish arms racing, and therefore staunch a simmering nuclear “volcano.”

Today, advocates of finite deterrence have been energized by a bipartisan report sensibly urging a strategic nuclear overhaul and expansion. Their prescriptions vary. For instance, some analysts allow for a degree of counter-military targeting but still emphasize countervalue missions. The assorted proposals nonetheless coalesce around three interrelated precepts: 1) reduced targeting requirements provide a natural ceiling on force levels; 2) targeting should prioritize soft infrastructure assets; and 3) a failure to adopt these measures will likely ignite an action-reaction arms competition. Nuclear restraint, it is hoped, will encourage China and Russia to reciprocate and forestall a mindless “arms race.” In this sense, tranquilizing nuclear competition and sealing the volcano constitutes a vital objective—if not the objective.

North Korea’s Nuclear Buildup Means Mutually Assured Destruction, Not Coercion

Denny Roy

In 2017, North Korea conducted a very large, possibly thermonuclear, test explosion and proved it could build long-range ballistic missiles. Unfortunately, Kim Jong Un did not stop there. Since then his government has demonstrated an interest in submarine-launched ballistic missiles, hypersonic glide vehicles, solid-fuel rocket motors, and tactical nuclear weapons. Many observers worry that Kim intends to use this expanding and versatile nuclear arsenal not just to deter South Korea or its treaty ally, the United States, from attacking North Korea, but for coercion or “nuclear blackmail.”

This blackmail might take one of two possible forms. First, Pyongyang might use its nuclear weapons to keep the United States from intervening to help South Korea fight off a North Korean conventional military attack. North Korean nukes could negate the U.S. “nuclear umbrella” over South Korea by holding American cities at risk of incineration, or warn that the participation of U.S. conventional forces in the defense of South Korea would trigger nuclear retaliation against the United States.

Second, Pyongyang might threaten to use nuclear weapons against South Korea unless Seoul accommodates specific North Korean demands. These demands might include severing the South Korea-U.S. alliance and expelling U.S. forces based in South Korea, supplying North Korea with economic aid, or agreeing to some form of political unification of the Korean Peninsula that gives Pyongyang the upper hand over Seoul.

Using Game Theory To Understand Our Dangerous Times

Todd Royal

“If you want to kick the tiger in his ass, you’d better have a plan for dealing with his teeth.” – Tom Clancy, The Teeth of the Tiger

According to the Stanford Encyclopedia of Philosophy: “Game theory is the study of the ways in which interacting choices of economic agents produce outcomes with respect to the preferences (or utilities) of those agents, where the outcomes in question might have been intended by none of the agents.”
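To make that definition concrete, here is a small, purely illustrative Python sketch (not from the article) of the classic prisoner’s dilemma: each agent’s individually rational choice produces an outcome that neither player intended or prefers. The payoff numbers are standard textbook values, chosen only for illustration.

    # Illustrative prisoner's dilemma: brute-force search for pure-strategy Nash equilibria.
    # Payoffs are (row player, column player); strategies are "cooperate" or "defect".
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }
    STRATEGIES = ["cooperate", "defect"]

    def is_nash(row, col):
        """Neither player can gain by unilaterally switching strategies."""
        row_payoff, col_payoff = PAYOFFS[(row, col)]
        best_row = all(PAYOFFS[(r, col)][0] <= row_payoff for r in STRATEGIES)
        best_col = all(PAYOFFS[(row, c)][1] <= col_payoff for c in STRATEGIES)
        return best_row and best_col

    equilibria = [(r, c) for r in STRATEGIES for c in STRATEGIES if is_nash(r, c)]
    print(equilibria)  # [('defect', 'defect')] -- mutual defection, even though (3, 3) was available

The only equilibrium is mutual defection, the kind of unintended collective outcome the definition describes.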

Two quotes that could not be more different, yet they point to the same conclusion. Here’s why: we live in a dangerous world, created by the naïve belief that the U.S. can run up more than $600 billion in debt in a single month with zero consequences; negotiate away realist balancing with Iran; have the U.S. national security advisor, Jake Sullivan, state publicly on September 29, two weeks before the slaughter in Israel, that “The Middle East region is quieter today than it has been in two decades”; and allow over 8 million people illegally into America since Joe Biden dubiously took office, for the dual purposes of cheap labor and votes for the Democratic Party.

Millions of these economic agents hate America, American citizens, and American values rooted in the Judeo-Christian understanding of the separation of government and human rights for all. Our elite universities have embraced this fatalistic nihilism, exhibited by their worship of Hamas and rabid antisemitism, while demonstrations in most Democratic-run cities glorify violence against Jews, including babies being ripped from their mothers’ wombs.

To make my point, the Wall Street Journal reported, “Hamas operatives behind the Oct. 7 terrorist attack were given specialized training in Iran, and it involved up to 500 militants.”

What Does Talent Management Mean for the U.S. Military?

Steve Deal

If there is one overused and misunderstood phrase throughout the modern military—and there are thousands competing for that title, no doubt—it might be “talent management.”

At worst, it might be the leftovers of extant human resources mechanisms warmed over and replated—at best, it could encompass the strenuous testing, training, and narrow career tracks leading to the movable feast of our flag and general officer corps, for whom extant talent management worked fairly well.

Despite the great political courage of a few, and precious momentum earned on the shoulders of many, we have missed some grand opportunities over the past decade to define talent management and make it into the most transformative weapons system the nation has ever seen. Like the term itself, the goals have been too ambiguous, the costs indefensible, and the return on investment incalculable. It is literally the programmatic challenge of the century.

With current technology, the military has the capacity to learn about its people, provide feedback to those people as well as to those entrusted to lead them, and thereby create a new vision of what service means for the nation. The data created from that learning must be used to build a deeper bench of measured, differentiated skills for our national security. This is far too involved for “HR” to do alone; to have a chance, talent management must be one of the highest personal priorities of any appointed leader, civilian or uniformed. We are failing our future, which includes American sovereignty and our entire way of life, if we do not reinvent our approach to talent management now.

China’s Position on Russia’s Invasion of Ukraine


Key Events and Statements from February 21, 2022 through October 27, 2023

Below is a list of key actions and statements summarizing China’s official position on Russia’s unprovoked invasion of Ukraine, which began on February 24, 2022. Items highlighted include China’s official government statements, press conferences, messages to the international community, media publications, and where available, leaked internal Chinese Communist Party (CCP) guidance for media and propaganda outlets.

Entries are organized into the following categories:

[Action]: Chinese government activity which impacts the conflict in Ukraine

[Sanctions]: Chinese efforts to undermine international sanctions on Russia

[Statement]: Statements by Chinese government officials on the Ukraine conflict

[Media]: Chinese media stories reflecting official positions and censorship guidelines

October 18, 2023

[Action] General Secretary Xi meets with Russian President Vladimir Putin in Beijing on the sidelines of the Third Belt and Road Forum for International Cooperation, where the two sides discuss the conflict between Israel and Hamas and the pursuit of further cooperation in emerging industries of strategic importance. Xi also requests greater progress on the China-Mongolia-Russia natural gas pipeline. While the official readout does not mention Ukraine, CNN reports that during a press conference Putin said he provided Xi with an update on the “situation that is developing on the Ukrainian track.” This is the second known trip outside of Russian-controlled territory Putin has made since the International Criminal Court issued a warrant for his arrest; the first was to Kyrgyzstan on October 12.

Moving Enough People Enough

Erik Van Hegewald

Disinformation, capturing headlines and permeating the spheres of research and politics, has evolved into an issue that potentially influences the daily lives of nearly anyone connected to the internet. The COVID-19 pandemic was, by all measures, a novel event, one accompanied by an information vacuum and a subsequent surge of disinformation. The actual effectiveness of this disinformation, however, remains a lingering question.

This dissertation assesses the effects of COVID-19 disinformation narratives on behavioral outcomes, particularly vaccine uptake and ivermectin consumption. By analyzing the Twitter rhetoric of users from Kansas and Nebraska, I quantified the "dosage" of disinformation and assessed its impact on changes in discourse, vaccine uptake, and ivermectin consumption. This research provides initial empirical evidence that exposure to disinformation can prompt an observable change in individual rhetoric, subsequently leading to changes in behavior. Additional key findings indicate that COVID-19 disinformation had limited effectiveness in changing vaccination outcomes but potentially more pronounced impacts in areas where individuals lack strongly held beliefs, such as ivermectin consumption. Importantly, this research also suggests that even a minor event can trigger small shifts in behavior and sentiment which cross thresholds and ultimately generate significant changes in outcome — a phenomenon I term "effects on the margin". However, for an event to spur a major shift in rhetoric and subsequent behavior, three critical conditions had to be present: volume, language resonance, and the capacity of the event to permeate topic-adjacent conversations.
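As a purely illustrative sketch (not the dissertation's own model), the following Python snippet shows the threshold intuition behind "effects on the margin": a small, uniform shift in sentiment moves only the people already sitting near a decision threshold, yet the aggregate change in behavior can still be noticeable. The threshold, dose, and population values below are hypothetical.

    # A minimal sketch of "effects on the margin" (hypothetical values, not the study's data):
    # a small uniform shift in sentiment pushes only individuals already near a decision
    # threshold across it, producing a visible aggregate change in behavior.
    import random

    random.seed(0)
    THRESHOLD = 0.5        # hypothetical point at which sentiment turns into action
    DOSE = 0.05            # small shift attributed to disinformation exposure
    population = [random.random() for _ in range(100_000)]  # baseline sentiment in [0, 1)

    before = sum(s > THRESHOLD for s in population)
    after = sum(s + DOSE > THRESHOLD for s in population)

    print(f"acting before exposure: {before}")
    print(f"acting after the shift: {after}")
    print(f"marginal change: {after - before} ({(after - before) / len(population):.1%})")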

The study recommends that organizations and governments establish frameworks to assess their own threat thresholds, maintain "informational firebreaks" using some of the already existing methods to counter disinformation, and continue implementing policies and programs such as lotteries to encourage desired behavior. The study concludes that while disinformation poses a societal threat, its impact is not as severe as headlines convey. However, efforts must continue to counter disinformation and mitigate future risks.

Troops' data is for sale. That puts national security at risk

ALEXANDRA KELLEY

A new report found that it is easy to buy personal data on U.S. servicemembers online, where it can cost as little as one cent to obtain records through data brokers.

Conducted by researchers at Duke University, the study, “Data Brokers and the Sale of Data on U.S. Military Personnel,” examined the availability of sensitive data on U.S. military personnel, including names, home addresses, emails and specific branch information, being sold on third-party data-broker platforms.

After scraping or buying data from hundreds of data broker sites, researchers explored how freely available military service information could pose national security threats.

Researchers found only minimal identity verification protocols when buying potentially sensitive data online.

“We found a lack of robust controls when asking some data brokers about buying data on the U.S. military and when actually purchasing data from some data brokers, such as identity verification, background checks or detective controls to ascertain our intended uses for the purchased data,” the report reads.

Researchers were able to purchase demographic characteristics including religious practices, health information and financial data of thousands of both active-duty members and veterans. Some datasets available for sale were so specific that they could list the office of service, such as the U.S. Marine Corps and Pentagon Force Protection Agency.

DISA wants to use AI as a ‘digital concierge’ for its workforce

JASPREET GILL

As the Pentagon assesses ways artificial intelligence can be applied to military operations, one defense agency wants the technology to serve as a “digital concierge” for its future operations.

Speaking to reporters at the Defense Information Systems Agency (DISA) Forecast to Industry, Steve Wallace, chief technology officer and director of DISA’s emerging technology directorate, said large language models, especially, could be a “digital concierge” that could help the workforce “in all aspects of their job.”

“Whether that be the back office work, or whether that be, say, the analyst sitting on the floor and that ability to quickly diagnose and deal with things,” he said.

AI can also specifically help DISA’s defensive cyber operator analysts by automating nearly 80 percent of the data they review, Brian Hermann, program executive officer for cyber, told reporters. “And then their brains can be applied to those really high-end problems,” he said. “And I would argue that that’s really the only way we could react with speed to appear competitive.”

While DoD is seeing adoption of AI across the department, Wallace added that DISA is trying to better understand the ethical use of the technology. His comments come after DoD released an AI strategy just last week to accelerate the department’s adoption of AI capabilities that took into account generative AI tools like large language models.

Industry and Government Collaboration on Security Guardrails for AI Systems

Gregory Smith, Sydney Kessler, Jeff Alstott, Jim Mitre

The rapid evolution of artificial intelligence (AI) technology offers immense opportunity to advance human welfare. However, this evolution also poses novel threats to humanity. Foundation models (FMs) are AI models trained on large datasets that show generalized competence across a wide variety of domains and tasks, such as answering questions, generating images or essays, and writing code. The generalized competence of FMs is the root of their great potential, both positive and negative. With proper training, FMs could be quickly deployed to enable the creation and use of chemical and biological weapons, exacerbate the synthetic drugs crisis, amplify disinformation campaigns that undermine democratic elections, and disrupt the financial system through stock market manipulation.

Reflecting these concerns, the RAND Corporation and the Carnegie Endowment for International Peace hosted a series of workshops in July 2023 with government and AI industry leaders to discuss developing security guardrails for FMs. Participants identified concerns about AI's impact on national security, potential policies to mitigate such risks, and key questions to inform future research and analysis.

Exploring red teaming to identify new and emerging risks from AI foundation models

Marie-Laure Hicks, Ella Guest, Jess Whittlestone, Jacob Ohrvik-Stott, Sana Zakaria, Cecilia Ang, Chryssa Politi, Imogen Wade, Salil Gunashekar

On 12 September 2023, RAND Europe and the Centre for Long-Term Resilience organised a virtual workshop to inform UK government thinking on policy levers to identify risks from artificial intelligence foundation models in the lead up to the AI Safety Summit in November 2023. The workshop focused on the use of red teaming for risk identification, and any opportunities, challenges and trade-offs that may arise in using this method.

The workshop brought together a range of participants from across academia and public sector research organisations, non-governmental organisations and charities, the private sector, the legal profession and government. The workshop consisted of interactive discussions among the participants in plenary and in smaller breakout groups. The views and ideas discussed at the workshop have been summarised in this short report to stimulate further debate and thinking as policy around this topical issue develops in the coming months.

DoD releases new AI adoption strategy building on industry advancements

JASPREET GILL

The Pentagon today released a new strategy to accelerate the department’s adoption of artificial intelligence capabilities, one which accounts for industry advancements in federated environments, decentralized data management and generative AI tools like large language models.

But relying on commercial capabilities means the technologies available may not yet be compatible with the department’s own ethical AI principles, Deputy Secretary of Defense Kathleen Hicks acknowledged to reporters at the Pentagon.

“Unlike some of our strategic competitors, we don’t use AI to censor, constrain, repress or disempower people,” Hicks said. “By putting our values first and playing to our strengths, the greatest of which is our people, we’ve taken a responsible approach to AI that will ensure America continues to come out ahead.

“Meanwhile, as commercial tech companies and others continue to push forward the frontiers of AI, we’re making sure we stay at the cutting edge with foresight, responsibility and a deep understanding of the broader implications for our nation,” she added.

The “2023 Data, Analytics, and AI Adoption Strategy” [PDF] is the first update to DoD’s AI Strategy since the 2018 edition. That older strategy designated the now-defunct Joint AI Center as the “focal point” for carrying out its vision. The JAIC was subsumed into the Chief Digital and AI Office (CDAO) — which was stood up last year as the Pentagon’s central hub for all things AI — along with the Defense Digital Service and ADVANA teams.

Big Tech Ditched Trust and Safety. Now Startups Are Selling It Back As a Service

VITTORIA ELLIOTT

Massive layoffs across the tech sector have hit trust and safety teams hard over the past year. But with wars raging in Ukraine and the Middle East and more than 50 elections taking place in the next 12 months, experts worry that a nascent industry of startups created to keep people safe online won’t be able to cope.

The cuts made headlines a year ago, when X (then Twitter) fired 3,700 people—including hundreds in trust and safety roles. Since then, Meta, Alphabet, and Amazon have made similar cuts. The layoffs at X inspired other platforms to do the same, argues Sabhanaz Rashid Diya, founding director at tech policy think tank the Tech Global Institute and a former member of Meta’s policy team. “In many ways, Twitter got away with it,” she says. “That’s given the other companies the confidence to say, ‘You know what? It’s OK. You can survive and not face a terrible consequence.’”

Still, the cost of these cuts is arguably already evident in the way major platforms have scrambled to respond to the war between Israel and Hamas. And the shift away from in-house trust and safety teams has created an opening for consultancies and startups to offer something new: trust and safety as a service.

These companies, many of them founded and staffed by people with Big Tech pedigrees, let platforms “buy rather than build” trust and safety services, says Talha Baig, a former Meta engineer whose startup, Sero AI, recently received backing from accelerator Y Combinator. “There is a lot more labor out on the marketplace, and there’s also a lot more customers willing to buy that labor.”

OpenAI Wants Everyone to Build Their Own Version of ChatGPT

WILL KNIGHT

OpenAI’s ChatGPT became a phenomenon thanks to its wide-ranging abilities, such as drafting college essays, writing working computer programs, and digging up information from across the web.

Now the company aims to further widen the range of tricks up ChatGPT’s sleeve by making it possible for anyone to build a custom chatbot powered by the technology—without any coding skills. OpenAI suggests people might want to build custom bots to help with specific problems or interests in their life, such as learning the rules of a board game, teaching their kids math, or designing stickers using AI-generated art.

To create one of these custom bots or AI agents, which OpenAI calls “GPTs,” a user need only specify, by talking with ChatGPT, what they would like the bot to do. Behind the scenes, ChatGPT will write the code needed to create and run the new bot. The bots can plug into other sites and services to do things like access databases, search emails, and automate ecommerce orders, OpenAI says.

“GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others,” OpenAI said in a blog post today. It will launch an online chatbot store later this month where users will be able to find GPTs from “verified builders” chosen by the company.

Elon Musk Announces Grok, a ‘Rebellious’ AI With Few Guardrails

WILL KNIGHT

Last week, Elon Musk flew to the UK to hype up the existential risk posed by artificial intelligence. A couple of days later, he announced that his latest company, xAI, had developed a powerful AI—one with fewer guardrails than the competition.

The AI model, called Grok (a name that means “to understand” in tech circles), “is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!” reads an announcement on the company’s website. “It will also answer spicy questions that are rejected by most other AI systems.”

The announcement does not explain what “spicy” or “rebellious” means, but most commercial AI models will refuse to generate sexually explicit, violent, or illegal content, and they are designed to avoid expressing biases picked up from their training data. Without such guardrails, the worry is that an AI model could help terrorists develop a bomb or could result in products that discriminate against users based on characteristics such as race, gender, or age.

xAI does not list any contact information on its website, and emails sent to common addresses bounced back. An email sent to the press address for X received an automated response reading, “Busy now, please check back later.”

‘Goodbye Mr Chips?’ Modernising Defence Training for the 21st Century

Paul O’Neill and Major Patrick Hinton

Better practices are needed to improve the effectiveness of defence training.

Training is crucial for enabling UK Defence to deliver operational success, and broadens the potential talent pool by allowing Defence to recruit people who can develop the necessary skills, rather than simply competing for pre-trained talent (which often is in short supply). The breadth and scale of military training is significant, with a clear management process – the Defence Systems Approach to Training (DSAT) – in which requirement-setters identify training needs that are passed to delivery authorities, who design and deliver the training; the requirement-setters then review the training to ensure that it provides what is needed. While this sets a structured framework for training, there are challenges Defence must overcome to improve the efficiency and effectiveness of its training system. These challenges exist across several areas: culture; system governance; processes; training delivery; the wider learning environment; and workforce capacity.

Pockets of good practice exist in Defence, and much could be gained from sharing these more widely, but lessons should also be learned from training practice outside Defence. This paper identifies improvements in four key areas to help modernise Defence training and prepare the armed forces for the challenges to come:
  • Upskilling the whole training workforce by improving the training given to any personnel engaged in training others (‘train the trainer’).
  • Improving training delivery through more personalised ‘learning journeys’, active learning and greater use of technology.
  • A better understanding of Defence training as a system and as a crucial component of military capability via clearer lines of accountability, better use of data, and mechanisms allowing training to be more responsive to changing individual and organisational needs.
  • Partnering with external organisations that can complement Defence’s skillset by supplying adult education (andragogical) expertise.