6 September 2023

How Democrats’ climate change agenda is blocking real change for America

MANDY GUNASEKARA

In the face of Maui’s devastating wildfires that have claimed more than 100 lives, with many still unaccounted for, climate activists are exploiting the tragedy to advance their agenda. But declaring a climate emergency at the behest of the climate lobby would do nothing except make life more expensive for everyday Americans.

Extremist climate groups made their goal clear when President Biden briefly stopped on the island to assess the damage: The president must declare a national climate emergency. The chorus includes versions of “if not now, then when?” alongside demands for action “now” and even some claims that it’s now normal for people to choose between burning in fires or jumping into the ocean.

A Biden-declared climate “emergency” won’t address the root cause of the Maui tragedy, which was mostly due to poor planning, incompetent leadership and distracted priorities. The leaders of Lahaina were well aware of an “extremely high risk of burning” since a 2014 report both defined the problem and proposed a number of mitigation measures.

The report’s author recently attributed the failure to implement some of the most important measures, including a call to ramp up emergency management response, to a “lack of funding, [and] logistical hurdles in rugged terrain and competing priorities.” One official, Maui’s emergency chief, resigned after the tragedy exposed the breadth of ineptitude.

Facing mounting challenges, schools embrace the 4-day week

LEXI LONAS

Hundreds of U.S. school districts have sought to combat the teacher shortage and raise the quality of life for their students and faculty by making a big change: a four-day week.

The trend of a four-day week has been rising among American companies and schools since the beginning of the COVID-19 pandemic, with many finding benefits to having an extra day off. In schools, most students and teachers are getting Friday or Monday off while having slightly longer school days the rest of the week to make up for the missing day.

“The number of school districts with a four-day school week has increased to about 850 districts nationally. Two years ago, it was around 650, so it’s going up,” said Aaron Pallas, professor of sociology and education at Teachers College, Columbia University.

The trend towards four-day school weeks is in part a response to multiple educational issues that flared up during the pandemic, including teacher retention and absenteeism.

Governors in multiple states have turned to creative solutions to tackle teacher shortages, including bringing educators from other states or even veterans to take over classrooms.

The four-day week has largely been implemented in smaller communities grappling with the problem, according to Pallas.

US adds 187K jobs in August, jobless rate rises to 3.8 percent

TAYLOR GIORNO

The U.S. added 187,000 jobs and the unemployment rate rose to 3.8 percent in August, according to data released Friday by the Labor Department.

The jobs report showed the labor market plateaued in August as the Federal Reserve considers another interest rate hike. Economists had expected the U.S. to add 170,000 jobs and hold the jobless rate at July’s 3.5 percent, according to consensus estimates.

While the jobless rate rose 0.3 percentage points in August, the labor force participation rate rose 0.2 percentage points after being largely flat since March.

The Fed has hiked interest rates to their highest level in more than two decades as part of its crusade to cool off an economy overrun by inflation a year ago.

Inflation ticked up slightly in July, according to the Commerce Department’s personal consumption expenditures price index released Thursday, although it has fallen from its peak of 9.1 percent last summer.

As Fed officials consider another rate hike, they are looking for signs that the job market is slowing under the weight of past increases.

Job openings fell below 9 million for the first time in more than two years, and the rate of Americans quitting their jobs was the lowest it’s been since January 2021, according to the Job Openings and Labor Turnover Survey released Tuesday.

Alaska Board of Education votes to ban transgender girls from high school sports

BROOKE MIGDON

Alaska’s state Board of Education voted Thursday to approve a proposal to ban transgender girls from competing on girls’ high school sports teams, advancing one of Republican Gov. Mike Dunleavy’s policy priorities that was thwarted by the state Legislature earlier this year.

The proposal, which board members argued during a special meeting Thursday is necessary to ensure fairness for cisgender female athletes, states that “if a separate high school athletics team is established for female students, participation shall be limited to females who were assigned female at birth.”

The new regulation, if given final approval by Alaska Attorney General Treg Taylor (R), would apply to schools and districts that join the Alaska School Activities Association (ASAA), the state’s regulating body for high school sports.

Current ASAA guidelines allow member schools to decide for themselves whether transgender athletes should be permitted to play on sports teams that do not match their sex assigned at birth. However, if a school determines that a transgender student is eligible to compete, that determination “shall remain in effect for the duration of the student’s high school eligibility.”

Transgender students attending member schools that do not have written policies in place “may only participate based upon [their] gender assigned at birth,” according to ASAA guidelines.

Philippines Suspends New Travel Rules Amid Public Outcry

Mong Palatino

The Philippine government suspended its revised travel guidelines for Filipinos going abroad after legislators, business groups, travel agencies, and migrants described the new requirements as “coercive, restrictive, and redundant.”

The Inter-Agency Council Against Trafficking (IACAT) released the draft in August as part of its intensified campaign against human trafficking. Aside from standard travel documents, some travelers could be required to submit notarized documents from those who sponsored their trips. Critics pointed out that it would entail additional costs for relatives abroad who need to notarize documents in Philippine consulates.

The Bureau of Immigration (BI) emphasized that the revised rules would streamline the process by identifying categories of travelers who may be required to undergo secondary inspection in airports. The bureau also seeks to avoid complaints from passengers who miss their flights because they cannot present the numerous documents required under the current guidelines.

“The new guidelines issued by the IACAT would ensure that immigration officers look at specific requirements and not require frivolous documents which could later be a cause for complaints,” said BI Commissioner Norman Tansingco. He was referring to the viral video of a tourist who missed her flight because she was unable to show her college yearbook and graduation photo.

American Power Just Took a Big Hit

Sarang Shidore

For more than a decade, the United States mostly ignored BRICS. The grouping, formed by Brazil, Russia, India, China and South Africa, rarely registered on Washington’s radar. When it did, the impulse — as shown by Jake Sullivan, the national security adviser, recently stressing that the coalition is not “some kind of geopolitical rival” — was to downplay the group’s significance. Western commentators, for their part, largely painted BRICS as either a sign of Chinese attempts to dominate the global south or little more than a talking shop. Some even called for its dissolution.

Such complacency looks less tenable now. At a summit in Johannesburg last week, the group invited six global south states — Argentina, Egypt, Ethiopia, Iran, Saudi Arabia and the United Arab Emirates — to join its ranks. In the aftermath of the announcement, indifference gave way to surprise, even anxiety. Yet there’s no need for alarm. BRICS will never run the world or replace the U.S.-led international system.

It would be a mistake, though, to dismiss its importance. After all, any club with such a long waiting list — in this case, nearly 20 nations — is probably doing something right. BRICS’s expansion is an unmistakable marker of many countries’ dissatisfaction with the global order and of their ambition to improve their place within it. For America, whose grip on global dominance is weakening, it amounts to a subtly significant challenge — and an opportunity.

The Dutch are leading the way on military aid to Ukraine. Here’s why.

Timo S. Koster

With its recent decision to supply F-16 fighter jets to Ukraine, the Netherlands once again showed its leadership on the critical European security issue of our time and went a step further than any nation has so far. Of the forty-two F-16s in the Dutch fleet, “part will be for the training, and the rest is for Ukraine,” Minister of Defense Kajsa Ollongren explained last week. This follows early Dutch support for sending Leopard 2 tanks to Ukraine in January.

The Netherlands is punching above its weight in terms of overall military aid to Ukraine, too. So far, it has pledged $2.7 billion, the fourth most of any European nation, according to the most recent data from the Kiel Institute for the World Economy. The Netherlands is only surpassed in this amount in Europe by three countries several times its size by population: Germany, the United Kingdom, and Poland.

Although not all contributions have been made public, it’s clear that the Netherlands is doing more than many other nations of its size, and Dutch Prime Minister Mark Rutte is clearly taking the lead in showing other nations the way on how to support Ukraine in its defense against the brutal onslaught of Russian President Vladimir Putin’s Russia.

It’s worthwhile to take a closer look at the what, why, and how of this effort, and the remarkable change in posture of the Dutch. And it’s important to note that Dutch aid to Ukraine goes well beyond defense-related assistance, including humanitarian aid, reconstruction, protection of cultural heritage, and more.

Soon after the Russian invasion, Rutte was one of the first European leaders to recognize the war in Ukraine for what it was: a European war, a war that jeopardized peace and freedom on the continent. His speech at the Sorbonne University in Paris on March 9, 2022, clearly marked the awakening of the Dutch government and general public. After decades of declining Dutch investments in our common security, Rutte spoke of raising the defense budget (finally!), aid to Ukraine, and European and transatlantic unity. “War has returned to Europe,” he said.

There were a few reasons for this approach. First, the Netherlands is unique among sovereign nations in that its constitution tasks the government with promoting and preserving the rules-based international order. This has led to a strong track record on protection of human rights, development aid, and multilateral cooperation on security, underscored by the status of The Hague as the international capital of peace and justice. The Dutch are simply compelled to act in a situation like this.

Norms plus counter space weapons: RAND recommends holistic strategy to deter space attack

THERESA HITCHENS

WASHINGTON — While it likely will be impossible to completely deter adversary attacks on US space systems, a strategy that takes a mixed approach — including diplomacy at one end and offensive counterspace weapons at the other — is most likely to be successful, a new RAND Corporation study finds.

“A comprehensive approach to space deterrence—one that seeks to regulate the use of force in space in the interest of stability; ostracizes states that violate agreed-on norms; and allows states to retain some capacity to punish space aggressors in multiple domains and to develop measures to enhance the defenses, resilience, and redundance of space systems—may have the greatest probability of success,” finds the study, “A Framework of Deterrence in Space Operations,” released today.

Stephen Flanagan, the lead author of the study, explained in an email to Breaking Defense that the concept is one of “mixed deterrence,” which works through “a mix of resilience and defensive measures, combined with robust active defenses of space assets and more-substantial capabilities to degrade the space systems of other countries.”

This approach includes US Space Command and the Space Force not just highlighting “continued investments in space mission assurance and resilience,” but also cooperation with allies and partners, he added.

Noting that “there is no broadly agreed-on framework in the U.S. or allied governments or the wider analytic community on the nature and requirements of deterrence in space operations,” the study explains that its goal is to present suggested guidelines that help policymakers understand how other nations think about space deterrence and inform their decisions.

The study further notes that there is no agreed-upon international definition of what constitutes space deterrence, or of how to achieve stability among potential adversaries in either peacetime or crisis.

“Various countries have quite different conceptions of what the term means and how it can best be achieved,” the study says. For example, Chinese and Russian writings reveal an aggressive approach based on “intimidation” and threats; whereas those of France and Japan show a more “passive” one. India, where strategic thinking seems to still be in play, falls somewhere in between, it adds.

While under US sanctions, where did Huawei get the advanced chips for its latest Mate 60 Pro smartphone?

Che Pan

Huawei Technologies’ silence over details of the advanced semiconductor that powers its new Mate 60 Pro flagship smartphone has become the subject of intense speculation. Here are some possible explanations for where Huawei got the chip.

1. China’s top chip maker SMIC made the chip for Huawei

This is the most plausible explanation, although both Huawei and Semiconductor Manufacturing International Corp (SMIC) declined to provide details. Based on tests conducted on the smartphone, Chinese benchmarking website AnTuTu identified the central processing unit (CPU) in the Mate 60 Pro as the Kirin 9000s from Huawei’s chip design unit HiSilicon.

Research company TechInsights said in a note on its WeChat account that SMIC has used existing equipment and applied its second-generation 7-nanometre process, known as the N+2 node, to manufacture the 5G-capable Kirin 9000s for Huawei. The firm said it would provide more details on the phone’s connectivity next week.

If that is the case, it would mark a “breakthrough” for China’s semiconductor industry and a major win for Huawei’s smartphone business.

However, under the US sanctions, SMIC should not have been able to make advanced chips for Huawei.

Information Warfare in Russia’s War in Ukraine

Christian Perez

In the lead-up to Russia’s invasion of Ukraine, and throughout the ongoing conflict, social media has served as a battleground for states and non-state actors to spread competing narratives about the war and portray it on their own terms. As the war drags on, these digital ecosystems have become inundated with disinformation. Strategic propaganda campaigns, including those peddling disinformation, are by no means new during warfare, but the shift toward social media as the primary distribution channel is transforming how information warfare is waged, as well as who can participate in ongoing conversations to shape emerging narratives.

Examining the underlying dynamics of how information and disinformation are impacting the war in Ukraine is crucial to making sense of, and working toward, solutions to the current conflict. To that end, this FP Analytics brief uncovers three critical components:
  • How social media platforms are being leveraged to spread competing national narratives and disinformation;
  • The role of artificial intelligence (AI) in promoting, and potentially combating, disinformation; and,
  • The role of social media companies and government policies on limiting disinformation.

The Role of Social Media and National Disinformation Campaigns

Russia and Ukraine both use social media extensively to portray their versions of the events unfolding and to amplify contrasting narratives about the war, including its causes, consequences, and continuation. Government officials, individual citizens, and state agencies have all turned to an array of platforms, including Facebook, Twitter, TikTok, YouTube, and Telegram, to upload information. It is difficult to pinpoint the exact amount of content uploaded by these various actors, but the scale of information being uploaded on social media about the war is immense. For instance, in just the first week of the war, videos from a range of sources on TikTok with the tags #Russia and #Ukraine had amassed 37.2 billion and 8.5 billion views, respectively.

At their core, the narratives presented by Russia and Ukraine are diametrically opposed. Russia frames the war in Ukraine, which Putin insists is a “special military operation,” as a necessary defensive measure in response to NATO expansion into Eastern Europe. Putin also frames the military campaign as necessary to “de-nazify” Ukraine and end a purported genocide being conducted by the Ukrainian government against Russian speakers. In contrast, Ukraine’s narrative insists the war is one of aggression, emphasizes its history as a sovereign nation distinct from Russia, and portrays its citizens and armed forces as heroes defending themselves from an unjustified invasion.

Ukraine and Russia are not the only state actors engaged in portraying the war on their own terms. Countries such as China and Belarus have launched coordinated disinformation campaigns on social media platforms that broadly downplay Russia’s responsibility for the war and promote anti-U.S. and anti-NATO posts. The mix of narratives, both true and false, originating from different state actors as well as millions of individual users on social media has enlarged tech platforms’ roles in shaping the dynamics of the war and could influence its outcomes.

The scale of information uploaded to social media and the speed with which it proliferates create novel and complex challenges to combating disinformation campaigns. It is often hard to identify the origin of a campaign or its reach, complicating efforts to remove false content in bulk or identify false posts before they reach mass audiences. For example, the active “Ghostwriter” disinformation campaign, attributed to the Belarusian government, uses a sophisticated network of proxy servers and virtual private networks (VPNs), which enabled it to avoid detection for years. Before the operation was uncovered in July 2021, it effectively hacked the social media accounts of European political figures and news outlets and spread fabricated content critical of the North Atlantic Treaty Organization (NATO) across Eastern Europe. The level of sophistication that these types of modern state-backed disinformation campaigns possess makes them exceedingly difficult to detect early and counter effectively. Russia, in particular, has spent decades developing a propaganda ecosystem of official and proxy communication channels, which it uses to launch wide-reaching disinformation campaigns. For instance, “Operation Secondary Infektion,” one of Russia’s longest ongoing campaigns, has spread disinformation about issues such as the COVID-19 pandemic across over 300 social media platforms since 2014.

The range of social media platforms in use, and the variation in their availability across different countries, hinders the ability to coordinate efforts to combat disinformation, while creating different information ecosystems across geographies. The narratives about the war emerging on social media take different forms, depending on the platform and the region, including within Russia and Ukraine. Facebook and Twitter are both banned within Russia’s borders, but Russian propaganda and disinformation aimed at external audiences still flourishes on these platforms. Within Russia, YouTube and TikTok are still accessible to everyday citizens, but with heavy censorship. The most popular social media platform used within Russia is VKontakte (VK), which hosts 90 percent of internet users in Russia, according to the company’s self-reported statistics. It was previously available and widely used in Ukraine until 2017, but the Ukrainian government blocked access to VK and other Russian social media such as Yandex in an effort to combat online Russian propaganda. In 2020, Ukrainian president Volodymyr Zelenskyy extended the ban on VK until 2023, so it has not facilitated communications between Russians and Ukrainians throughout the war this year.

The government-imposed restrictions placed on these major social media platforms leave Telegram as the main social media platform currently accessible to both Russians and Ukrainians. Telegram is an encrypted messaging service created and owned by Russian tech billionaire Pavel Durov, which is being used in the war for everything from connecting Ukrainian refugees to opportunities for safe passage to providing near-real-time videos of events on the battlefield. Critically, in the fight against disinformation, Telegram has no official policies in place to censor or remove content of any nature. While some channels on Telegram have been shut down, the company does not release official statements on why, and it generally allows the majority of content posted by users to continue circulating, regardless of its nature. This allows Telegram to serve as a mostly unfiltered source of disinformation within Russia and Ukraine, and to reach audiences that Western social media platforms have been cut off from. While Telegram does not filter content like many other platforms, it also does not use an algorithm to boost certain posts, and it relies on direct messaging between users. This design makes it difficult for AI tools to effectively boost disinformation. In contrast, on other platforms such as Twitter and Facebook, AI is further enabling the rapid spread of disinformation about the war.

The Impact of Artificial Intelligence in Online Disinformation Campaigns

AI and its subcomponents, such as algorithms and machine learning, are serving as powerful tools for generating and amplifying disinformation about the Russia-Ukraine war, particularly on social media channels. The underlying algorithms that social media platforms use to determine what content is allowed, and what posts become the most viewed, are driving differences in users’ perception of the events unfolding. Before the war, there was significant controversy over how social media platforms prioritized and policed content on all kinds of political and social issues. In recent years, both Facebook and YouTube have come under scrutiny from regulators in the U.S. and EU, concerned that their algorithms prioritize extremist content, and for failing to adequately remove disinformation despite some improvements to automated and human-led procedures.

Throughout the Russia-Ukraine war, similar concerns have arisen across a range of platforms. For example, researchers found that TikTok directed users to false information about the war within 40 minutes of signing up. New users on TikTok were shown videos claiming that a press conference given by Vladimir Putin in March 2020 was “Photoshopped” and that clips from a video game were real footage of the war. Likewise, Facebook’s algorithm routinely promoted disinformation about the war, including the conspiracy theory that the U.S. is funding bioweapons in Ukraine. A study by the Center for Countering Digital Hate (CCDH) found that Facebook failed to label 80 percent of the posts spreading this conspiracy theory as disinformation.

Social media platforms also host popular AI-driven tools for spreading disinformation such as chatbots and deepfakes. Bots—AI-enabled computer programs that mimic user accounts on social media networks—are one of the most effective ways that disinformation about the war spreads. Russia has extensive experience effectively using bots to spread disinformation. For example, Russian government agencies and their affiliates previously used them to spread disinformation during the 2016 U.S. election as well as throughout the COVID-19 pandemic. Russia is continuing to use bots, and since the start of the war in Ukraine earlier this year, Twitter has reported removing at least 75,000 suspected fake accounts linked to online Russian bots for spreading disinformation about Ukraine. However, the scale and speed at which disinformation can be produced and spread using bots make it nearly impossible to monitor or remove all false accounts and posts.

In addition to bots, deepfakes—videos that use AI to create false images and audio of real people—have circulated online throughout the conflict. Beginning in March 2022, deepfakes portraying both Vladimir Putin and Volodymyr Zelenskyy giving fabricated statements about the war have repeatedly appeared on social media. A deepfake of Vladimir Putin declaring peace circulated widely on Twitter, before being removed, while a deepfake portraying Volodymyr Zelenskyy circulated on YouTube and Facebook. Beyond deepfakes, experts have expressed concern that AI could be leveraged for more sophisticated disinformation techniques. These include using AI to better identify targets for disinformation campaigns, as well as using techniques such as natural language processing (NLP), which allows AI to produce fake social media posts, articles, and documents that are nearly indistinguishable from those written by humans.

While AI is contributing to the spread of disinformation across social media, AI tools also show promise for combating it. The sheer volume of information uploaded to social media daily makes developing AI tools that can accurately identify and remove disinformation essential. For example, Twitter users upload over 500,000 posts per minute, well beyond what human censors can monitor. Social media platforms are beginning to combine human censors with AI to monitor false information more effectively. Facebook developed an AI tool called SimSearchNet at the start of the COVID-19 pandemic to identify and remove false posts. SimSearchNet relies on human monitors to first identify false posts, and then uses AI to identify similar posts across the platform. AI tools are significantly more effective than human content moderators alone. According to Facebook, 99.5 percent of terrorist-related content removals and 98.5 percent of fake accounts are identified and removed primarily using AI trained with data from its content-moderation teams. Currently, AI aimed at combating disinformation on social media still relies on both human and computer elements. This limits AI’s ability to detect novel pieces of mis- and disinformation, and means that false posts routinely reach large audiences before they are identified and removed using AI. The current technical limitations on proactively identifying and removing false information, combined with the scale of information uploaded online, pose a continuing challenge for limiting disinformation on social media in the Russia-Ukraine war and beyond.

Government and Social Media Disinformation Policies

Social media companies and governments have enacted a range of policies to limit the spread of disinformation, but their application has been fragmented, depending on the platform and geography, with varying effect. The different policies that social media platforms apply, the extent of their efforts to combat disinformation, and their availability within countries, all help shape the way the public understands the Russia-Ukraine war. Critically, social media companies are privately controlled, and their interests may or may not be aligned with varying state interests, including the states where these companies are registered and headquartered as well as others.

In the Russia-Ukraine war, social media companies have taken a range of different measures. Facebook is deploying a network of fact checkers in Ukraine in attempts to eliminate disinformation, and YouTube has blocked channels associated with Russian state media globally. Both of these platforms enacted restrictions beyond the legal requirements under U.S. and EU sanctions on Russia. In contrast, Telegram and TikTok have not taken steps as significant to limit disinformation on their platforms, beyond complying with EU sanctions on Russian state media within the EU. The differences in responses among the platforms reflect the government and public pressures that the various platforms are subject to. In general, platforms based in the U.S. have taken stricter stances on limiting Russian disinformation than their international counterparts, such as Telegram and TikTok. The differences in social media platforms’ policies, their efforts to limit disinformation, and their geographic access are all becoming powerful drivers of not only how individuals consume news about the Russia-Ukraine war globally, but also the narratives—including information, misinformation, and disinformation—that they are exposed to and thereby the views that they may adopt.

The growing role of social media channels in shaping narratives on geopolitical issues, including conflicts, is generating pushback from governments, both democratic and autocratic. This, in turn, has contributed to the trend of some governments placing restrictions on the public’s use of social media and the internet more generally. For example, Russia has restricted its internet activity since 2012, but increased the intensity of its crackdown on dissidents, online dissent, and independent media coverage leading up to, and since, the invasion of Ukraine. Russia recently passed new laws targeting foreign internet companies, such as the 2019 Sovereign Internet Law and the federal “Landing Law” signed in June 2021. These laws grant the Russian state extensive online surveillance powers and require foreign internet companies operating in Russia to open offices within the country. Additionally, Russia has completely banned Facebook, Twitter, and Instagram within its borders. Ukraine cracked down on online expression in late 2021, in response to fears that Russia was sponsoring Ukrainian media outlets and preparing to invade. However, since the beginning of the invasion, Ukraine has turned toward openly embracing social media as a means to broadcast messages outside its borders and garner public support for its resistance efforts.

Globally, this follows a number of existing trends, including numerous countries’ regulatory efforts to enforce digital and data sovereignty. A range of countries are now attempting to regulate social media outlets and restrict online speech domestically, while using these same platforms to shape narratives internationally. For instance, China, Iran, and India have all enacted restrictive legislation on internet and social media use domestically while simultaneously using social media channels to spread targeted disinformation campaigns globally.

The effectiveness of governments’ efforts globally to curb access to social media and prevent disinformation, both in the Russia-Ukraine war and overall, has been limited thus far. Regulatory efforts have neither curbed disinformation in robust and systematic ways, nor reined in the role of social media platforms as domains of political polarization and vitriolic social interactions. While governments have been more effective at curbing access to information within their domestic jurisdictions, many individuals can still circumvent restrictions through virtual private networks (VPNs). These networks allow users to hide the origin of their internet connection and offer access to websites that may be blocked within a specific country’s borders. After Russia’s invasion of Ukraine, VPN downloads within Russia spiked to over 400,000 per day, illustrating the difficulty of completely blocking access to online spaces.

Looking Ahead

Technical and regulatory strategies for combating disinformation are evolving rapidly but are still in their early stages. In modern conflicts, social media platforms control some of the main channels of information, and their policies can have an outsized effect on public sentiment. In the Russia-Ukraine conflict, the largest global social media platforms have broadly agreed to attempt to limit Russian propaganda messages, but they have placed far fewer restrictions on official content from the Ukrainian government. This type of broad power that social media companies exercise by choosing which voices are amplified during conflicts is driving governments to push for increased control over these channels of information. Among others, China, Russia, and Iran all have onerous restrictions on what content can be posted online and have banned most U.S.-based social media companies. Further, both Russia and China are taking measures to move their populations onto domestic social media channels, such as WeChat in China or VKontakte in Russia, which can be heavily censored and subjected to intensive government oversight and interference. The EU and India have also placed regulatory restrictions on U.S.-based social media platforms, with the intent of developing their own domestic platforms. These developments create challenges for existing international social media platforms and continue to complicate efforts to fight disinformation. As social media channels become more fragmented, and users are subject to differing policies restricting content and disinformation, coherently coordinating efforts to fight disinformation across platforms will become increasingly difficult.

Marines to experiment with Lockheed’s new 5G testbed in the second phase of OSIRIS program

JASPREET GILL

WASHINGTON — The Marine Corps over the next 15 months will be experimenting with a new prototype 5G communications network infrastructure testbed built by Lockheed Martin under the Pentagon’s Open Systems Interoperable and Reconfigurable Infrastructure Solution (OSIRIS) program, the company announced today.

According to Lockheed, Marines at Camp Pendleton, Calif., will begin “mobile network experimentation” now that the company has delivered the final phase 1 OSIRIS prototype. The company was awarded a $19.3 million contract to build the testbed in 2022 for OSIRIS, a program born out of the Pentagon’s FutureG and 5G Office meant to enable rapid experimentation and integration of commercial tech. Lockheed is working with subcontractors Intel Corporation, Radisys Corporation and Rampart Communications to evaluate the technology and demonstrate it as part of a Fleet Marine Force event.

The testbed will ultimately support the Marines’ “Expeditionary Advanced Base Operations (EABO) goals, which involve Marines operating in contested environments with increased bandwidth requirements,” the release says. “Additionally, the OSIRIS system aims to reduce overall set-up time.”

After 2 years of experimenting, Pentagon to evaluate RDER tech

JASPREET GILL

WASHINGTON — The Pentagon will soon begin weighing which of the first capabilities born out of its two-year-old Rapid Defense Experimentation Reserve (RDER) will be funded and fielded to the military services, the department’s chief technology officer spearheading the effort said today.

The Defense Management Acquisition Group (DMAG), a four-star board that includes representation from all military services, combatant commands and the Joint Staff, will meet later this fall to go over “a list of projects” that are mature and ready to go into fielding, Heidi Shyu, under secretary of defense for research and engineering, told reporters, though she didn’t say which projects would be under consideration.

Parts of RDER experiments were done at Camp Atterbury, Ind., in May this year and during this summer’s Northern Edge experiments. Those capabilities that proved to be useful will be presented at the DMAG meeting, Shyu said.

She added that two military services — the Army and Air Force — asked to be the lead agents on one specific RDER capability. And although she didn’t divulge details about what RDER project it would be, Shyu said the move “shows a demand signal that there’s interest” in the outcomes of the effort.

Russia spikes UN effort on norms to reduce space threats

THERESA HITCHENS

WASHINGTON — The UN working group attempting to develop norms to constrain threatening military activities in space today ended with a bang — as Russia threw firebomb after firebomb into the process, blocking forward motion against the clear wishes of a majority of participating countries.

UN working groups function on the basis of consensus, meaning that any one nation can veto the proceedings.

Moscow, which voted against the original establishment of the working group — officially, the UN Open Ended Working Group (OEWG) on Reducing Space Threats Through Norms, Rules and Principles of Responsible Behavior — on Thursday made it clear it would not allow the group to issue a formal report to the UN General Assembly detailing the proposals discussed and areas of budding accord.

Today, the Russian delegation went even further — pulling out the diplomatic nuclear option by quashing even a procedural report to mark the group’s two years of work, to the dismay of many delegates who expressed fears that the move sets a negative precedent for future proceedings.

“The fact that even a procedural report describing the technical unfolding of the OEWG was blocked speaks volumes about a desire to see this process fail. Yet the statements from the vast majority of delegates show that despite disagreement on documentation, the process itself was a success,” said Jessica West, who has been documenting the meeting for Canada’s Project Ploughshares.

AI-powered Genomics

DANIEL PEREIRA

The convergence of machine learning, deep learning and genomics, especially in the area of AI-powered genomic health prediction, while remarkably promising, will also present remarkably challenging unintended consequences. A recent report suggests areas that need to be explored – starting now – as “the issues posed by the…technologies become harder to predict, more complex and more numerous.”

DNA.I.: Early findings and emerging questions on the use of AI in genomics

AI and Genomics Futures is a joint project between the Ada Lovelace Institute and the Nuffield Council on Bioethics that investigates the ethical and political-economy issues arising from the application of AI to genomics – which [the authors] refer to throughout [the] report as “AI-powered genomics”.

From the Report
  • AI-powered genomics has seen significant growth in the past decade, driven principally by advances in machine learning and deep learning, and has developed into a distinctive, specialised field.
  • Private-sector investment in companies working on AI-powered genomics has been substantial – and has mainly gone to companies working on data collection, drug discovery and precision medicine.
  • The most prominent current and emerging themes in research on AI-powered genomics relate to proteins and drug development, and the prediction of phenotypic traits from genomic data.
  • According to P&S Intelligence, the market for AI and genomics technologies could reach more than £19.5 billion by 2030, up from around £0.5 billion in 2021.
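Taken at face value, that forecast implies a very steep growth curve. A minimal arithmetic sketch, using only the two figures quoted above (£0.5 billion in 2021, £19.5 billion by 2030), puts the implied compound annual growth rate at roughly 50 percent per year:

```python
# Implied compound annual growth rate (CAGR) from the quoted forecast:
# a market of ~GBP 0.5B in 2021 growing to ~GBP 19.5B by 2030 (9 years).
start_value = 0.5    # GBP billions, 2021
end_value = 19.5     # GBP billions, 2030 forecast
years = 2030 - 2021

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 50% per year
```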

UK awards BAE Systems $113M Trinity tactical network contract

TIM MARTIN

BELFAST — The UK Ministry of Defence has issued a five-year contract, valued at £89 million ($113 million), to BAE Systems for the design and manufacture of a deployable Wide Area Network (WAN) dubbed Trinity, set to deliver “enhanced connectivity” to frontline troops.

Described as a “highly secure and state-of-the-art battlefield internet capability,” Trinity will be acquired to replace Britain’s existing Falcon network, due to be retired in 2026. The full contract amount of $113 million will be “dedicated” to the research and development phase of the program, set to be delivered from December 2025, according to an MoD statement Tuesday.

The new system features a series of nodes, meant to be capable of adding, accessing and moving data across a digital network. Even if some nodes suffer damage from combat operations, the others automatically re-route to maintain “optimum” network speed and keep information flowing, added the MoD.
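The MoD release does not describe Trinity’s routing internals, but the self-healing behaviour it claims is conceptually similar to recomputing shortest paths over the surviving nodes. A minimal, purely illustrative sketch (the node names and topology below are invented, not from the MoD statement):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Breadth-first search: return the fewest-hop path from src to dst, or None."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def remove_node(adj, node):
    """Simulate battle damage: drop a node and every link to it."""
    return {n: [m for m in links if m != node]
            for n, links in adj.items() if n != node}

# Invented five-node topology with two routes between HQ and the forward element.
network = {
    "HQ":      ["relay1", "relay2"],
    "relay1":  ["HQ", "forward"],
    "relay2":  ["HQ", "relay3"],
    "relay3":  ["relay2", "forward"],
    "forward": ["relay1", "relay3"],
}

print(shortest_path(network, "HQ", "forward"))  # direct route via relay1
# If relay1 is knocked out, traffic re-routes over the surviving nodes:
print(shortest_path(remove_node(network, "relay1"), "HQ", "forward"))
```

The real system would weigh link quality and bandwidth rather than hop count, but the principle is the same: connectivity survives as long as any path through undamaged nodes remains.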

It also noted that BAE Systems will be supported at an industrial level by US firms L3Harris and KBR, an engineering company headquartered in Texas. Northern Ireland-based business management agency PR Consulting will also assist on the program.

OSINT: Revolution or Renaissance?

BEN CONKLIN

The explosion of publicly available and commercially available information that is geotagged, time-boxed and capable of being rapidly aggregated has brought attributes to open-source intelligence (OSINT) long associated with more traditional intelligence disciplines.

Conventionally used as a gap filler and to provide valuable context, OSINT’s power as a mainstream tool for analysts and decision-makers has been enhanced and expanded. The most notable recent instance is the OSINT insight into the Russian invasion of Ukraine. Examples abound of “adtech” data accurately tracking troop movements and even providing targeting-level intelligence. This OSINT revolution is causing people to question how the intelligence community should organize itself to harness this evolving capability.

However, in a recent podcast interview, “Optimizing OSINT for the Intelligence Community,” Randy Nixon, director of the Open Source Enterprise at the CIA, challenged the assertion that there is an ongoing OSINT revolution. Instead, Nixon described it as an OSINT renaissance: in his view, the public is increasingly aware of the value of OSINT and is learning for the first time about a capability that has existed for decades. Whether it is a revolution or a renaissance, many recognize and applaud OSINT’s elevated role and its growing impact. Many important trends in OSINT are expected to profoundly impact the discipline for decades.

No service can fight on its own: JADC2 demands move from self-sufficiency to interdependency

BARRY ROSENBERG

In this Q&A with Lt. Gen. David A. Deptula, USAF (Ret.), dean of the Mitchell Institute for Aerospace Studies, and former deputy chief of staff for Intelligence, Surveillance and Reconnaissance for the US Air Force, we discuss the future of JADC2 under a new chairman of the Joint Chiefs of Staff; the inadequacy of the current defense industrial base; why business standards imposed on the Department of Defense are hindering production; and why jointness is about using the right force in the right place at the right time — not every force, every place, all the time.

DEPTULA: Gen. Brown’s very aware of the critical importance of JADC2 and all-domain operations — perhaps more so than any other service chief because he’s actually been a functional joint-force air-component commander in his assignment in the Persian Gulf.

What that means from a mission perspective is it doesn’t matter what service a particular aircraft comes from. It’s the capability that the Air Force aircraft contributes. So it is with all-domain operations and JADC2. His commitment to see these concepts of operation come to fruition is likely what you’ll see him bring to the position as the chairman.

One of the reasons that JADC2 has been languishing, or at least not accelerating as many of us would like to see it develop, is that there has not been a real champion for the concept in the seniormost levels of the Department of Defense. I think Gen. Brown has the opportunity to become that champion.

Beware of Pentagon techno-enthusiasm

WILLIAM D. HARTUNG

As Deputy Defense Secretary Kathleen Hicks announced to the world at this week’s meeting of the National Defense Industrial Association, the Pentagon is bullish on a new approach to the challenge posed by China, as embodied in the “Replicator” initiative, which involves “helping us to overcome the PRC’s biggest advantage, which is mass…More ships. More missiles. More people.”

Hicks went on to outline the key thrust of the new approach: “To stay ahead, we’re going to create a new state of the art—just as America has before—leveraging attritable, autonomous systems in all domains—which are less expensive, put fewer people in the line of fire and can be changed, updated or improved with substantially shorter lead times.” On the face of it, this seems like a better bet than building ever-more-complex systems that take decades to develop and deploy, are extremely difficult to maintain, and are in some cases far more expensive than the systems adversaries can use to counter them.

But in the quest to change how the Pentagon does business it will be important to be humble about the likely effectiveness of swarms of drones; unpiloted vehicles in the air, on land, and at sea; and AI-driven decision-making systems that can dramatically shorten the “kill chain” from a decision to attack to the arrival of a weapon on its intended target. And it is also necessary to acknowledge that implementing this effort to change how America arms itself for potential conflicts will involve a sharp political clash with Congress over the fates of aircraft carriers, piloted aircraft, and traditional armored vehicles—weapons that provide jobs and incomes in the districts and states of members with the most sway over the size and shape of the Pentagon budget.

Even what may seem like the seminal example of “revolutionary” technologies in warfare—Operation Desert Storm, the 1991 U.S. response to Iraq’s invasion of neighboring Kuwait—involved exaggerated claims of battlefield precision that were only corrected after detailed after-action analyses. As longtime Pentagon critic and defense expert Winslow Wheeler has noted, a post-war Government Accountability Office analysis found that it took many more munitions to destroy key targets than the Pentagon and arms makers had originally asserted. Success rates of key systems like the F-117 stealth aircraft, the Tomahawk land attack missile, and laser-guided bombs were found to be considerably lower than claimed—in some cases, stunningly lower. For example, the GAO’s analysis of Tomahawk use in Desert Storm found that only about half of those missiles launched in that war arrived at their targets. And the agency went on to note that “[o]thers arrived at the designated target area, but impacted so far away from the aimpoint as to only create a crater.”

To cite another telling example, “the claim by DOD and contractors of a one-target, one-bomb capability for laser-guided munitions was not demonstrated in the air campaign where, on average, 11 tons of guided and 44 tons of unguided munitions were delivered on each successfully destroyed target.” That was good enough to defeat a relatively poorly armed adversary, but it was not the performance of the miracle weapons originally advertised. Equally importantly, these capabilities and later refinements were not sufficient to win wars against adversaries with no air forces or significant air defense systems in either Iraq or Afghanistan. War is about far more than having the best bombs and communications systems. If this kind of advantage can be developed vis-a-vis China, it is unlikely to be decisive.

Part of the push for a new generation of weapons and control mechanisms is based on the purported success of drones in the war in Ukraine. But it’s too early to fully evaluate the performance of these systems, or to assess their relevance to a potential conflict with China – a war between nuclear-armed powers that could be an unprecedented disaster for all concerned, drones or no drones.

All of the above suggests that it doesn’t make sense to rush towards the sort of new techno-revolution advocated by Hicks in her NDIA speech without adequate assessment and testing. Most importantly of all, plans to win a war with China must take second place to political and diplomatic initiatives to set rules of the road that make a conflict between Washington and Beijing less likely.

Technological enthusiasm is not a strategy. And without the proper political and diplomatic context, a surge towards capabilities like swarms of drones that can destroy thousands of targets in China on short notice is more likely to accelerate a dangerous arms race than deter a potentially catastrophic conflict.

5 September 2023

India’s Use of Buddhism: Soft Power, Soft Balancing

David Scott

Strategic utilization of Buddhism in Indian foreign policy is a feature that has become particularly noticeable in the last decade, under Narendra Modi’s Bharatiya Janata Party (BJP), which took power in 2014. Modi has pushed an “Indian vision of Buddhism” which “appeals to ancient history while rooted in contemporary geopolitical concerns” (Lam 2022). The geopolitical concerns reflect the deterioration in India’s relations with China, seen in confrontation, casualties and conflict along India’s Buddhist Himalayan frontier – at Doklam in 2017, Galwan in 2020, and Yangtse in December 2022.

Leading analysts were already noting by the end of 2014 that “the PM has put Buddhism at the heart of India’s vigorous new diplomacy.” Two examples suffice from September 2015. At the Mahabodhi Temple in Bodh Gaya, Modi proclaimed:

That same month, Modi also initiated a “…‘Samvad’ – Global Hindu-Buddhist Initiative”, in which “India is taking the lead in boosting the Buddhist heritage across Asia.” This Samvad initiative was enthusiastically embraced by Japan’s leader Shinzo Abe. A Samvad framework was then pushed jointly by India and Japan in subsequent years, a geocultural use of Buddhism to offset the geoeconomic allure of China’s Silk Road project.

India is a rising power, with rising hopes for influence in and around its immediate and extended neighborhood. Its power projection is multi-faceted, with geocultural linkages sought for geopolitical purposes. Kishwar noted the “rising role of Buddhism in India’s soft power strategy” in 2018. This soft power Buddhism-facilitated diplomacy is also evident with regard to India’s strategic partnerships with Japan and Mongolia (Sarmah 2022), as well as with Vietnam and South Korea, and with regard to Indian outreach to Thailand, Myanmar and Sri Lanka. Tibetan Buddhism has been “a source and strength of Indian soft power diplomacy” (Tsultrim 2020), along the Himalayas with regard to bilateral relations with Nepal and Bhutan, and further afield with regard to Mongolia.

The China Threat


When Secretary of the Air Force Frank Kendall said in 2021 that his top concerns were “China, China, and China,” it wasn’t exactly a surprise. Kendall came back into government specifically because of his growing concern over competition with China, which he saw in terms of the Cold War competition between the U.S. and the Soviet Union. The 2022 U.S. National Defense Strategy (NDS) pinpointed China as the nation’s pacing threat. Indeed, in May of 2022, U.S. Secretary of State Antony Blinken described China as “the only country with both the intent to reshape the international order, and, increasingly, the economic, diplomatic, military, and technological power to do it.”

Beijing’s multifaceted approach to changing the global world order is increasingly underpinned by military might. But the People’s Liberation Army (PLA) did not just rise to near-peer status overnight. Unless you are a China analyst or policy wonk—spending your waking hours digging through the annual DOD China Military Power Report—you might not know very much at all about the People’s Liberation Army, the People’s Liberation Army Air Force, and the other components of China’s military.

The PLA began a period of accelerated growth around 2015, with the goal of reaching near-parity with the U.S. military. From fielding the world’s largest surface fleet to developing fifth-generation fighter aircraft and hypersonic missiles, the PLA rapidly transformed itself. An army that used human wave tactics to “defeat” Vietnam in 1979 looks radically different in 2023. China’s leader, Chinese Communist Party (CCP) General Secretary Xi Jinping, has repeatedly set 2049 as the target by which he intends China to surpass the United States in comprehensive power.

Russian cyber group unleashes new malware campaign on Ukrainian military targets

Chris Riotta

A Russian cyber threat actor launched a novel malware campaign against Ukrainian military personnel, targeting Android devices to steal sensitive information from the battlefield, according to an international report published Wednesday.

Sandworm, a Russian state-sponsored threat actor linked to the Kremlin's military intelligence service, leveraged mobile malware known as "Infamous Chisel" to infect Android devices and periodically scan files and network information for exfiltration, the report said.

The new malware consists of a collection of components that gave the Russian threat actor backdoor access to infected devices to conduct network monitoring, traffic collection and file transfer operations.

The report, which provides technical details into the new kind of malware, was published by the Cybersecurity and Infrastructure Security Agency, the FBI, the National Security Agency and several international partners, including the U.K. National Cyber Security Centre, the New Zealand National Cyber Security Centre and the Canadian Centre for Cyber Security.

Ukraine's security agency first uncovered the Russian-linked cyberattack earlier this month when it announced that it "exposed and blocked" attempts by Sandworm to gain unauthorized access to a combat data exchange system maintained by the country's armed forces.

"Since the first days of the full-scale war, we have been fending off cyberattacks of Russian intelligence services aiming to break our military command system and more," Illia Vitiuk, head of the Ukrainian security agency's cybersecurity department, said at the time.

The new report assesses how Sandworm leveraged Infamous Chisel in an attempt to establish a persistent presence on impacted networks and includes indicators of compromise for affected devices.
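Indicators of compromise in advisories like this typically include file hashes that defenders can match against files on a device. A minimal, generic sketch of that matching step (the hash set below is a placeholder for illustration, not taken from the actual advisory; it contains the SHA-256 of an empty file only so the example is checkable):

```python
import hashlib
from pathlib import Path

# Placeholder IoC set -- a real advisory publishes the actual SHA-256 values.
# The entry below is the SHA-256 of an empty file, used purely as an example.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Hash a file incrementally so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Return the subset of paths whose SHA-256 appears in the IoC set."""
    return [p for p in paths if sha256_of(p) in KNOWN_BAD_SHA256]
```

Hash matching only catches known samples; advisories usually pair it with network indicators and behavioural signatures for that reason.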