6 September 2023

India’s embrace of ‘right to repair’ can transform the electronics sector

Trisha Ray

Electronic waste (e-waste) has emerged as a persistent environmental challenge, driven by the exponential growth in consumer electronics consumption and improper disposal practices. E-waste refers to all types of electronic products and their parts that have been discarded as waste with no intention of reuse. In 2022, the global consumer electronics market was valued at $762 billion and is expected to reach $1.1 trillion by 2030. For developing nations such as India, e-waste is a quandary on two fronts: managing the dumping of e-waste by developed countries alongside a rapidly growing domestic e-waste stream.

In India, better living standards, changing lifestyles, and higher disposable incomes have translated into higher demand for consumer electronics, including “luxury” electronics such as smartphones. Electronics consumption in 2021 was valued at $64.5 billion and is growing at a compounded annual growth rate of 15.77 percent. Smartphones account for more than half of this market. This official market is accompanied by a shadow market of products sold outside manufacturer-approved distribution channels. In 2022, 1.2 million handsets were imported through this shadow route, valued at roughly $13 billion.

If unchecked, the world is projected to generate 111 million tons of e-waste annually by 2050, according to the United Nations University. This growth is, and will continue to be, exacerbated by developed nations' exports of e-waste to developing and least developed countries to avoid national reporting requirements. Eighty percent of e-waste is informally recycled, often under hazardous conditions, in developing nations such as India, Indonesia, Ghana, and Nigeria. If not properly disposed of, e-waste leaks harmful chemicals into the soil and atmosphere, posing severe health risks for the workers, often children, who scavenge and process this waste. The reported figures do not, however, fully capture the true scope of e-waste dumping, as the United Nations Institute for Training and Research noted in a 2022 report:

“Quantifying these shipments is difficult . . . due to a grey-zone in business when nonfunctional used electronics are shipped for reuse (with individuals claiming that the electronics can still be repaired) or even in illegal situations when non-repairable and non-reusable equipment is shipped, only to prevent recycling costs in countries with strict e-waste legislations.”

India and other developing nations are therefore experiencing the exponential growth of e-waste from both imported and domestic sources.

Indonesia’s Anies Baswedan Picks Head of Islamic Party as Presidential Running Mate

Sebastian Strangio


On Saturday, Anies Baswedan, the former governor of Indonesia’s capital Jakarta, named his running mate for next year’s presidential election: the chairman of Indonesia’s largest Islamic party.

Muhaimin Iskandar leads the National Awakening Party (Partai Kebangkitan Bangsa, PKB), which, as Reuters notes, has strong ties with Indonesia’s largest Islamic organization, Nahdlatul Ulama (NU), which boasts some 40 million members.

Anies is one of three candidates vying to succeed President Joko “Jokowi” Widodo, whose second and final term will come to an end with the February 14 election. However, opinion polls have shown Anies sitting consistently in third place behind his rivals, Ganjar Pranowo, the governor of Central Java, and Jokowi’s Defense Minister Prabowo Subianto.

During his speech at a ceremony to launch his partnership with Anies in the city of Surabaya on Saturday, Muhaimin described his move to join the Anies camp as a “blessing from God.”

Anies’ announcement has prompted some surprise, given that Muhaimin’s PKB was not part of the three-party Coalition of Change for Unity that endorsed Anies as a candidate for the election. Indeed, officials from the Democratic Party, one of the members of the coalition, have since expressed indignation that Anies made the decision unilaterally, without their input.

When Dragons Watch Bears: Information Warfare Trends and Implications for the Joint Force

Christopher H. Chin, Nicholas P. Schaeffer 

Over the past decade, the People’s Republic of China (PRC) has watched Russia’s employment of information warfare (IW) with great interest. With the recent conflict in Ukraine and the 2014 Russian annexation of Crimea, the PRC is actively gauging Western nations’ response and associated global implications should it choose to attempt forcible reunification with Taiwan. As the current pacing threat, the PRC seeks to rewrite global norms with the intent to assert supreme influence over Taiwan and the Asia-Pacific region. The parallels between these two Great Powers and their associated aggression toward breakaway republics present an opportunity for the United States and the joint force to map the contours of an evolving Chinese information warfare strategy to build a more comprehensive U.S. response prior to a future conflict in the region. Given the scope, sophistication, and scale of modern information warfare activities, thwarting Chinese information confrontation tactics during crisis and conflict will require a comprehensive approach, one that boldly marshals increased unity of effort from across the whole of government. To compete and win in the 21st-century information environment, the Department of Defense (DOD), in partnership with the interagency community, should endeavor to lead three initiatives across upcoming joint force time horizons:
  • increase the scope and scale of irregular and information warfare to better fit within the modern competition continuum below the threshold of armed conflict (next 1 to 3 years)
  • advocate to establish a central organization responsible for synchronizing U.S. whole-of-government information-related activities to counter foreign malign influence (next 3 to 5 years)
  • revive service to the Nation in the digital age with the establishment of a Civilian Cyber Corps as a precursor to a seventh military branch, U.S. Cyber Force, to build the force capacity necessary to execute cyber effects operations at a scale necessary to defend the Nation, its networks, and its traditional military operations (next 5 to 7 years).

Three Years After Chinese Communist Crackdown, Hong Kong Continues To Suffer

Acton Institute

Despite a push to draw young talent back to the city, Hong Kong is suffering grievously as the Chinese Communist Party crushes civil rights, pursuing dissidents even beyond its borders.

At the end of August, the Hong Kong government charged a Cantonese language group with “threatening national security.” The group had posted online an essay, cast in the form of fiction, that emphasized the city’s loss of liberty.

Andrew (Lok-hang) Chan, who headed Societas Linguistica HongKongensis, explained that the group, which published the essay, was only related “to arts and literature” but nevertheless was “targeted by the national security police.” He closed the association in response.

Hong Kong’s brutal assault on human rights has disappeared from newspaper front pages, which is a victory for Chinese president Xi Jinping and Hong Kong chief executive John (Ka-chui) Lee, Beijing’s local gauleiter. The Chinese Communist Party (CCP) has effectively extinguished Hong Kong’s inherited British liberties.

After the territory’s return to the People’s Republic of China (PRC) in 1997, the Special Administrative Region enjoyed political autonomy that was supposed to last a half century. However, in June 2020, after years of increasing popular unrest, China imposed the expansive National Security Law (NSL), effectively outlawing criticism of the PRC. Since then, the Hong Kong government, now headed by Lee, has arrested more than 260 people under the NSL and prosecuted more than 3,000 people on charges under other statutes, most long after the targeted conduct.

The Beginning of the End of Putin in Crimea

Mark Toth, Jonathan Sweet

Months in the making, Ukraine is again boldly taking the fight to Russian President Vladimir Putin: inside Russia itself; in and around Bakhmut; and, most notably, strategically southward in Zaporizhzhia Oblast toward the “decisive terrain” of the Crimean Peninsula.

We have called for strategic patience in these pages and elsewhere, cautioning that it would become clear when the counteroffensive of Ukrainian President Volodymyr Zelensky and his generals took shape. We are now starting to see it develop in small villages and towns such as Robotyne and Verbove.

It was never going to be easy, especially since the Kremlin had months to prepare its defensive lines in Zaporizhzhia because of the Biden administration’s dithering in getting Kyiv the full suite of ammunition and weapons that would best position Ukraine’s armed forces to achieve an outright victory over Putin.

Wars are managed in a variety of ways, leveraging the instruments of national power collectively known by the acronym DIME: Diplomacy, Information, Military, and Economic. Up until now, given relatively static battlefronts, the war in Ukraine has been predominantly one of military forces engaged in close combat.

The results have played out in the information ecosphere, which has itself been weaponized, creating a “Nebula of War” of competing narratives.

Washington is urging Zelensky to press his counteroffensive. Ukraine’s Defense Ministry is pushing back on social media, insisting it knows best. And Russia is trying to win with propaganda a war that Putin’s soldiers cannot win on the ground.


Riley Bailey

Ukrainian Main Military Intelligence (GUR) Head Kyrylo Budanov reported that the Russian military deployed elements of a newly created “reserve army” (the 25th CAA) to enable units currently on the frontline in Luhansk Oblast to laterally redeploy to defend against the Ukrainian counteroffensive in southern Ukraine. Budanov stated on August 31 that the Russian military deployed elements of the newly formed 25th Combined Arms Army (reportedly formed under the Eastern Military District) to replace elements of the 41st Combined Arms Army (Central Military District) in the Kupyansk direction, and that these elements of the 41st Combined Arms Army (CAA) began a “slow” redeployment to an unspecified area in southern Ukraine.[1] Elements of the 41st CAA’s 35th Separate Guards Motorized Rifle Brigade and 90th Tank Division participated in the failed Russian winter 2023 offensive operation in Luhansk Oblast and have continued limited offensive activity along the Svatove-Kreminna line through now.[2] These units are likely degraded and have been operating without brigade and regiment level rotations like many frontline Russian units throughout the theater. ISW previously assessed that a lack of operational reserves would force the Russian command to conduct further lateral redeployments and make tough decisions about what sectors of the front to prioritize.[3] The Russian military command appears to have deployed elements of the newly formed and likely low quality or understrength 25th CAA to Luhansk Oblast to free up the relatively more effective 41st CAA elements for southern Ukraine. Budanov added that elements of the 25th CAA are already participating in hostilities in Luhansk Oblast.[4]

The 25th Combined Arms Army is unlikely to be combat effective at scale given its rushed deployment, ahead of a previously reported intended deployment date of December 2023. Russian Defense Minister Sergei Shoigu announced that the Russian Ministry of Defense (MoD) formed a “reserve army” at the end of June, likely referencing the 25th CAA, which began recruiting personnel from the Russian Far East in mid-May.[5] The 25th CAA will reportedly consist of 30,000 contract personnel in two motorized rifle divisions as well as an unspecified number of tank and artillery battalions, although it is unclear what elements have actually formed to date.[6] Budanov stated that Russian forces formed the 25th CAA as a “strategic” reserve and did not intend for the formation to be combat ready before October or November 2023.[7] A Russian administrator in Dalnegorsk, Primorsky Krai posted a recruitment ad for the 25th CAA on June 5 that claimed that the 25th CAA would train personnel from September 1 to December 1 and then deploy to either Zaporizhzhia or Kherson Oblast; ISW has not independently observed reporting of the October or November date Budanov cited but has no reason to question this statement.[8] Ukrainian Deputy Chief of the Main Operational Department Oleksii Hromov stated on July 5 that the 25th CAA would not be combat ready until at least 2024.[9] Budanov noted that the 25th CAA elements that have arrived in Luhansk Oblast are understaffed and lack training, unsurprising given their accelerated deployment.[10] ISW cannot yet independently verify that elements of the 25th CAA are operating in Luhansk Oblast, and the scale of the 25th CAA’s commitment is unclear from Budanov’s comments. The current size and capabilities of the elements of the 25th CAA deployed to Ukraine five months prematurely are unclear.
The formation is likely either severely understaffed, far short of its paper strength of two divisions, or poorly trained, much like the initial Russian mobilized units in fall 2022, or both.

Our clean energy transition requires hydrogen — we must treat it fairly


While driving my Toyota Mirai on the Long Beach International Gateway Bridge, I admired the impressive logistics of the Los Angeles and Long Beach ports. However, I was dismayed by the diesel emissions despite the region’s wealth and regulations. We urgently need technology to eliminate fossil fuel combustion greenhouse gas and pollutant emissions.

Congress came together to pass the bipartisan Inflation Reduction Act (IRA), which includes substantial investments in clean energy technologies. Notably, clean hydrogen received a production tax credit to empower its competition against polluting fossil fuels.

Why? Because people worldwide demand cleaner air and rapid reductions in greenhouse gas emissions. Hydrogen is a proven clean energy carrier capable of enabling deep emissions cuts in the toughest-to-decarbonize sectors, where direct electrification is ineffective. It can replace fossil fuels in petrochemical production and in energy-intensive industries such as heavy-duty shipping, freight operations, and steel, ammonia, and cement manufacturing.

Scaling up clean energy technologies, including clean hydrogen, is vital for achieving net-zero emissions. Clean hydrogen offers crucial energy conversion, storage and transport features essential for our economy. The U.S. Department of Energy predicts a rising demand for clean hydrogen, exceeding 2 million tons annually by 2030 and 20 million tons by 2050, to achieve zero emissions in all sectors. Rapidly scaling up clean hydrogen production is necessary to unlock deep decarbonization and de-pollution. Congress responded by enacting the clean hydrogen production tax credit.

How Democrats’ climate change agenda is blocking real change for America


In the face of Maui’s devastating wildfires that have claimed more than 100 lives, with many still unaccounted for, climate activists are exploiting the tragedy to advance their agenda. But declaring a climate emergency at the behest of the climate lobby would do nothing except make life more expensive for everyday Americans.

Extremist climate groups made their goal clear when President Biden briefly stopped on the island to assess the damage: The president must declare a national climate emergency. The chorus includes versions of “if not now, then when?” alongside demands for action “now” and even some claims that it’s now normal for people to choose between burning in fires or jumping into the ocean.

A Biden-declared climate “emergency” won’t address the root cause of the Maui tragedy, which was mostly due to poor planning, incompetent leadership and distracted priorities. The leaders of Lahaina were well aware of an “extremely high risk of burning” since a 2014 report both defined the problem and proposed a number of mitigation measures.

The report’s author recently stated that a “lack of funding, [and] logistical hurdles in rugged terrain and competing priorities” explain why some of the most important measures, including a call to ramp up emergency management response, were never implemented. One official, Maui’s emergency chief, resigned after the tragedy exposed the breadth of ineptitude.

Facing mounting challenges, schools embrace the 4-day week


Hundreds of U.S. school districts have sought to combat the teacher shortage and raise the quality of life for their students and faculty by making a big change: a four-day week.

The trend of a four-day week has been rising among American companies and schools since the beginning of the COVID-19 pandemic, with many finding benefits to having an extra day off. In schools, most students and teachers are getting Friday or Monday off while having slightly longer school days the rest of the week to make up for the missing day.

“The number of school districts with a four-day school week has increased to about 850 districts nationally. Two years ago, it was around 650, so it’s going up,” said Aaron Pallas, professor of sociology and education at Teachers College, Columbia University.

The trend towards four-day school weeks is in part a response to multiple educational issues that flared up during the pandemic, including teacher retention and absenteeism.

Governors in multiple states have turned to creative solutions to tackle teacher shortages, including bringing educators from other states or even veterans to take over classrooms.

The four-day week has largely been implemented in smaller communities grappling with the problem, according to Pallas.

US adds 187K jobs in August, jobless rate rises to 3.8 percent


The U.S. added 187,000 jobs and the unemployment rate rose to 3.8 percent in August, according to data released Friday by the Labor Department.

The jobs report showed the labor market plateaued in August as the Federal Reserve considers another interest rate hike. Economists had expected the U.S. to gain 170,000 jobs and the jobless rate to hold at July’s 3.5 percent, according to consensus estimates.

While the jobless rate rose 0.3 percentage points in August, the labor force participation rate rose 0.2 percentage points after being largely flat since March.

The Fed has hiked interest rates to their highest level in more than two decades as part of its crusade to cool off an economy overrun by inflation a year ago.

Inflation ticked up slightly in July, according to the Commerce Department’s personal consumption expenditures price index released Thursday, although it has fallen from its peak of 9.1 percent last summer.

As Fed officials consider another rate hike, they are looking for signs that the job market is slowing under the weight of past increases.

Job openings fell below 9 million for the first time in more than two years, and the rate of Americans quitting their jobs was the lowest it’s been since January 2021, according to the Job Openings and Labor Turnover Survey released Tuesday.

Alaska Board of Education votes to ban transgender girls from high school sports


Alaska’s state Board of Education voted Thursday to approve a proposal to ban transgender girls from competing on girls’ high school sports teams, advancing one of Republican Gov. Mike Dunleavy’s policy priorities that was thwarted by the state Legislature earlier this year.

The proposal, which board members argued during a special meeting Thursday is necessary to ensure fairness for cisgender female athletes, states that “if a separate high school athletics team is established for female students, participation shall be limited to females who were assigned female at birth.”

The new regulation, if given final approval by Alaska Attorney General Treg Taylor (R), would apply to schools and districts that join the Alaska School Activities Association (ASAA), the state’s regulating body for high school sports.

Current ASAA guidelines allow member schools to decide for themselves whether transgender athletes should be permitted to play on sports teams that do not match their sex assigned at birth. However, if a school determines that a transgender student is eligible to compete, that determination “shall remain in effect for the duration of the student’s high school eligibility.”

Transgender students attending member schools that do not have written policies in place “may only participate based upon [their] gender assigned at birth,” according to ASAA guidelines.

Philippines Suspends New Travel Rules Amid Public Outcry

Mong Palatino

The Philippine government suspended its revised travel guidelines for Filipinos going abroad after legislators, business groups, travel agencies, and migrants described the new requirements as “coercive, restrictive, and redundant.”

The Inter-Agency Council Against Trafficking (IACAT) released the draft in August as part of its intensified campaign against human trafficking. Aside from standard travel documents, some travelers could be required to submit notarized documents from those who sponsored their trips. Critics pointed out that it would entail additional costs for relatives abroad who need to notarize documents in Philippine consulates.

The Bureau of Immigration (BI) emphasized that the revised rules would streamline the process by identifying categories of travelers who may be required to undergo secondary inspection at airports. The rules also seek to avoid complaints from passengers who miss their flights because they cannot present the numerous documents required under the current guidelines.

“The new guidelines issued by the IACAT would ensure that immigration officers look at specific requirements and not require frivolous documents which could later be a cause for complaints,” said BI Commissioner Norman Tansingco. He was referring to the viral video of a tourist who missed her flight because she was unable to show her college yearbook and graduation photo.

American Power Just Took a Big Hit

Sarang Shidore

For more than a decade, the United States mostly ignored BRICS. The grouping, formed by Brazil, Russia, India, China and South Africa, rarely registered on Washington’s radar. When it did, the impulse — as shown by Jake Sullivan, the national security adviser, recently stressing that the coalition is not “some kind of geopolitical rival” — was to downplay the group’s significance. Western commentators, for their part, largely painted BRICS as either a sign of Chinese attempts to dominate the global south or little more than a talking shop. Some even called for its dissolution.

Such complacency looks less tenable now. At a summit in Johannesburg last week, the group invited six global south states — Argentina, Egypt, Ethiopia, Iran, Saudi Arabia and the United Arab Emirates — to join its ranks. In the aftermath of the announcement, indifference gave way to surprise, even anxiety. Yet there’s no need for alarm. BRICS will never run the world or replace the U.S.-led international system.

It would be a mistake, though, to dismiss its importance. After all, any club with such a long waiting list — in this case, nearly 20 nations — is probably doing something right. BRICS’s expansion is an unmistakable marker of many countries’ dissatisfaction with the global order and of their ambition to improve their place within it. For America, whose grip on global dominance is weakening, it amounts to a subtly significant challenge — and an opportunity.

The Dutch are leading the way on military aid to Ukraine. Here’s why.

Timo S. Koster

With its recent decision to supply F-16 fighter jets to Ukraine, the Netherlands once again showed its leadership on the critical European security issue of our time and went a step further than any nation has so far. Of the forty-two F-16s in the Dutch fleet, “part will be for the training, and the rest is for Ukraine,” Minister of Defense Kajsa Ollongren explained last week. This follows early Dutch support for sending Leopard 2 tanks to Ukraine in January.

The Netherlands is punching above its weight in terms of overall military aid to Ukraine, too. So far, it has pledged $2.7 billion, the fourth most of any European nation, according to the most recent data from the Kiel Institute for the World Economy. The Netherlands is only surpassed in this amount in Europe by three countries several times its size by population: Germany, the United Kingdom, and Poland.

Although not all contributions have been made public, the Netherlands is clearly doing more than many other nations of its size, and Dutch Prime Minister Mark Rutte is taking the lead in showing other nations how to support Ukraine in its defense against the brutal onslaught of Russian President Vladimir Putin’s Russia.

It’s worthwhile to take a closer look at the what, why, and how of this effort, and the remarkable change in posture of the Dutch. And it’s important to note that Dutch aid to Ukraine goes well beyond defense-related assistance, including humanitarian aid, reconstruction, protection of cultural heritage, and more.

Soon after the Russian invasion, Rutte was one of the first European leaders to recognize the war in Ukraine for what it was: a European war, a war that jeopardized peace and freedom on the continent. His speech at the Sorbonne University in Paris on March 9, 2022, clearly marked the awakening of the Dutch government and general public. After decades of declining Dutch investments in our common security, Rutte spoke of raising the defense budget (finally!), aid to Ukraine, and European and transatlantic unity. “War has returned to Europe,” he said.

There were a few reasons for this approach. First, the Netherlands is unique among sovereign nations in that the promotion and preservation of the rules-based international order is inscribed in its constitution as a task for the government. This has led to a strong track record on protection of human rights, development aid, and multilateral cooperation on security, underscored by the status of The Hague as the international capital of peace and justice. The Dutch are simply compelled to act in a situation like this.

Norms plus counter space weapons: RAND recommends holistic strategy to deter space attack


WASHINGTON — While it likely will be impossible to completely deter adversary attacks on US space systems, a strategy that takes a mixed approach — including diplomacy at one end and offensive counterspace weapons at the other — is most likely to be successful, a new RAND Corporation study finds.

“A comprehensive approach to space deterrence—one that seeks to regulate the use of force in space in the interest of stability; ostracizes states that violate agreed-on norms; and allows states to retain some capacity to punish space aggressors in multiple domains and to develop measures to enhance the defenses, resilience, and redundance of space systems—may have the greatest probability of success,” the study, “A Framework of Deterrence in Space Operations” released today, finds.

Stephen Flanagan, the lead author of the study, explained in an email to Breaking Defense that the concept is one of “mixed deterrence” that works through “a mix of resilience and defensive measures, combined with robust active defenses of space assets and more-substantial capabilities to degrade the space systems of other countries.”

This approach includes US Space Command and the Space Force not just highlighting “continued investments in space mission assurance and resilience,” but also cooperation with allies and partners, he added.

Noting that “there is no broadly agreed-on framework in the U.S. or allied governments or the wider analytic community on the nature and requirements of deterrence in space operations,” the study explains that its goal is to present suggested guidelines to help policymakers understand how other nations think about space deterrence and to inform their decisions.

The study further notes that there is no agreed-upon international definition of what constitutes space deterrence, or of how to achieve stability among potential adversaries in either peacetime or crisis.

“Various countries have quite different conceptions of what the term means and how it can best be achieved,” the study says. For example, Chinese and Russian writings reveal an aggressive approach based on “intimidation” and threats; whereas those of France and Japan show a more “passive” one. India, where strategic thinking seems to still be in play, falls somewhere in between, it adds.

While under US sanctions, where did Huawei get the advanced chips for its latest Mate 60 Pro smartphone?

Che Pan

Huawei Technologies’ silence over details of the advanced semiconductor that powers its new Mate 60 Pro flagship smartphone has become the subject of intense speculation. Here are some possible explanations for where Huawei got the chip.

1. China’s top chip maker SMIC made the chip for Huawei

This is the most plausible explanation, although both Huawei and Semiconductor Manufacturing International Corporation (SMIC) declined to provide details. Based on tests conducted on the smartphone, Chinese benchmarking website AnTuTu identified the central processing unit (CPU) in the Mate 60 Pro as the Kirin 9000s from Huawei’s chip design unit HiSilicon.

Research company TechInsights said in a note on its WeChat account that SMIC has used existing equipment and applied its second-generation 7-nanometre process, known as the N+2 node, to manufacture the 5G-capable Kirin 9000s for Huawei. The California-based research firm said it would provide more details on the phone’s connectivity next week.

If that is the case, it would mark a “breakthrough” for China’s semiconductor industry and a major win for Huawei’s smartphone business.

However, under the US sanctions, SMIC should not have been able to make advanced chips for Huawei.

Information Warfare in Russia’s War in Ukraine

Christian Perez

In the lead-up to Russia’s invasion of Ukraine, and throughout the ongoing conflict, social media has served as a battleground for states and non-state actors to spread competing narratives about the war and portray the ongoing conflict in their own terms. As the war drags on, these digital ecosystems have become inundated with disinformation. Strategic propaganda campaigns, including those peddling disinformation, are by no means new during warfare, but the shift toward social media as the primary distribution channel is transforming how information warfare is waged, as well as who can participate in ongoing conversations to shape emerging narratives.

Examining the underlying dynamics of how information and disinformation are impacting the war in Ukraine is crucial to making sense of, and working toward, solutions to the current conflict. To that end, this FP Analytics brief uncovers three critical components:
  • How social media platforms are being leveraged to spread competing national narratives and disinformation;
  • The role of artificial intelligence (AI) in promoting, and potentially combating, disinformation; and,
  • The role of social media companies and government policies on limiting disinformation.
The Role of Social Media and National Disinformation Campaigns

Russia and Ukraine both use social media extensively to portray their versions of the events unfolding and to amplify contrasting narratives about the war, including its causes, consequences, and continuation. Government officials, individual citizens, and state agencies have all turned to an array of platforms, including Facebook, Twitter, TikTok, YouTube, and Telegram, to upload information. It is difficult to pinpoint the exact amount of content uploaded by these various actors, but the scale of information being uploaded on social media about the war is immense. For instance, in just the first week of the war, videos from a range of sources on TikTok with the tags #Russia and #Ukraine had amassed 37.2 billion and 8.5 billion views, respectively.

At their core, the narratives presented by Russia and Ukraine are diametrically opposed. Russia frames the war in Ukraine, which Putin insists is a “special military operation,” as a necessary defensive measure in response to NATO expansion into Eastern Europe. Putin also frames the military campaign as necessary to “de-nazify” Ukraine and end a purported genocide being conducted by the Ukrainian government against Russian speakers. In contrast, Ukraine’s narrative insists the war is one of aggression, emphasizes its history as a sovereign nation distinct from Russia, and portrays its citizens and armed forces as heroes defending themselves from an unjustified invasion.

Ukraine and Russia are not the only state actors engaged in framing the war for their own purposes. Countries such as China and Belarus have launched coordinated disinformation campaigns on social media platforms that broadly downplay Russia’s responsibility for the war and promote anti-U.S. and anti-NATO posts. The mix of narratives, both true and false, originating from different state actors as well as millions of individual users on social media has enlarged tech platforms’ roles in shaping the dynamics of the war and could influence its outcomes.

The scale of information uploaded to social media and the speed with which it proliferates create novel and complex challenges to combating disinformation campaigns. It is often hard to identify the origin of a campaign or its reach, complicating efforts to remove false content in bulk or identify false posts before they reach mass audiences. For example, the active “Ghostwriter” disinformation campaign, attributed to the Belarusian government, used a sophisticated network of proxy servers and virtual private networks (VPNs) that enabled it to avoid detection for years. Before the operation was uncovered in July 2021, it effectively hacked the social media accounts of European political figures and news outlets and spread fabricated content critical of NATO across Eastern Europe. The level of sophistication that these types of modern state-backed disinformation campaigns possess makes them exceedingly difficult to detect early and counter effectively. Russia, in particular, has spent decades developing a propaganda ecosystem of official and proxy communication channels, which it uses to launch wide-reaching disinformation campaigns. For instance, “Operation Secondary Infektion,” one of Russia’s longest ongoing campaigns, has spread disinformation about issues such as the COVID-19 pandemic across over 300 social media platforms since 2014.

The range of social media platforms in use, and the variation in their availability across different countries, hinders coordinated efforts to combat disinformation while creating different information ecosystems across geographies. The narratives about the war emerging on social media take different forms, depending on the platform and the region, including within Russia and Ukraine. Facebook and Twitter are both banned within Russia’s borders, but Russian propaganda and disinformation aimed at external audiences still flourish on these platforms. Within Russia, YouTube and TikTok are still accessible to everyday citizens, but with heavy censorship. The most popular social media platform within Russia is VKontakte (VK), which reaches 90 percent of Russian internet users, according to the company’s self-reported statistics. VK was widely used in Ukraine until 2017, when the Ukrainian government blocked access to it and other Russian services such as Yandex in an effort to combat online Russian propaganda. In 2020, Ukrainian President Volodymyr Zelenskyy extended the ban on VK until 2023, so the platform has not facilitated communications between Russians and Ukrainians during the war.

The government-imposed restrictions on these major social media platforms leave Telegram as the main platform currently accessible to both Russians and Ukrainians. Telegram is an encrypted messaging service created and owned by Russian tech billionaire Pavel Durov, and it is being used in the war for everything from connecting Ukrainian refugees with opportunities for safe passage to providing near-real-time videos of events on the battlefield. Critically for the fight against disinformation, Telegram has no official policies in place to censor or remove content of any nature. While some channels on Telegram have been shut down, the company does not release official statements on why, and it generally allows the majority of content posted by users to continue circulating, regardless of its nature. This allows Telegram to serve as a mostly unfiltered source of disinformation within Russia and Ukraine and to reach audiences from which Western social media platforms have been cut off. While Telegram does not filter content like many other platforms, it also does not use an algorithm to boost certain posts, relying instead on direct messaging between users. This design makes it difficult for AI tools to effectively boost disinformation. In contrast, on other platforms such as Twitter and Facebook, AI is further enabling the rapid spread of disinformation about the war.

The Impact of Artificial Intelligence in Online Disinformation Campaigns

AI and its subcomponents, such as algorithms and machine learning, are serving as powerful tools for generating and amplifying disinformation about the Russia-Ukraine war, particularly on social media channels. The underlying algorithms that social media platforms use to determine what content is allowed, and what posts become the most viewed, are driving differences in users’ perception of the events unfolding. Before the war, there was significant controversy over how social media platforms prioritized and policed content on all kinds of political and social issues. In recent years, both Facebook and YouTube have come under scrutiny from regulators in the U.S. and EU, both for algorithms that prioritize extremist content and for failing to adequately remove disinformation despite some improvements to automated and human-led procedures.

Throughout the Russia-Ukraine war, similar concerns have arisen across a range of platforms. For example, researchers found that TikTok directed users to false information about the war within 40 minutes of signing up. New users on TikTok were shown videos claiming that a press conference given by Vladimir Putin in March 2020 was “Photoshopped” and that clips from a videogame were real footage of the war. Likewise, Facebook’s algorithm routinely promoted disinformation about the war, including the conspiracy theory that the U.S. is funding bioweapons in Ukraine. A study by the Center for Countering Digital Hate (CCDH) found that Facebook failed to label 80 percent of posts spreading this bioweapons conspiracy theory as disinformation.

Social media platforms also host popular AI-driven tools for spreading disinformation, such as chatbots and deepfakes. Bots—AI-enabled computer programs that mimic user accounts on social media networks—are one of the most effective vehicles for spreading disinformation about the war. Russia has extensive experience using bots effectively: Russian government agencies and their affiliates previously deployed them to spread disinformation during the 2016 U.S. election as well as throughout the COVID-19 pandemic. Russia is continuing to use bots, and since the start of the war in Ukraine earlier this year, Twitter has reported removing at least 75,000 suspected fake accounts linked to Russian bot networks for spreading disinformation about Ukraine. However, the scale and speed at which disinformation can be produced and spread using bots make it nearly impossible to monitor or remove all false accounts and posts.
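Platforms do not publish their bot-detection pipelines, which reportedly combine many learned signals across account behavior. Purely as an illustration of why simple defenses fall short, the sketch below scores an account on two of the most basic signals, posting rate and repetitiveness; the function name and thresholds are invented for this example and are not platform values:

```python
from collections import Counter


def looks_automated(timestamps, posts, rate_cutoff=50.0, dup_cutoff=0.5):
    """Flag an account that posts implausibly fast or repetitively.

    timestamps: sorted post times in seconds; posts: the post texts.
    Thresholds here are illustrative assumptions, not real platform values.
    """
    if len(posts) < 2:
        return False
    hours = (timestamps[-1] - timestamps[0]) / 3600.0
    rate = len(posts) / hours if hours > 0 else float("inf")
    # Share of posts that are exact duplicates of an earlier post.
    dup_ratio = 1.0 - len(Counter(posts)) / len(posts)
    return rate > rate_cutoff or dup_ratio > dup_cutoff
```

A heuristic like this catches only the crudest automation; state-backed operations of the sophistication described above randomize timing and vary wording, which is why platforms instead rely on machine learning over network, timing, and content features, and why removal remains incomplete.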

In addition to bots, deepfakes—videos that use AI to create false images and audio of real people—have circulated online throughout the conflict. Beginning in March 2022, deepfakes portraying both Vladimir Putin and Volodymyr Zelenskyy giving fabricated statements about the war have repeatedly appeared on social media. A deepfake of Putin declaring peace circulated widely on Twitter before being removed, while a deepfake of Zelenskyy circulated on YouTube and Facebook. Beyond deepfakes, experts have expressed concern that AI could be leveraged for more sophisticated disinformation techniques. These include using AI to better identify targets for disinformation campaigns, as well as using techniques such as natural language processing (NLP), which allows AI to produce fake social media posts, articles, and documents that are nearly indistinguishable from those written by humans.

While AI is contributing to the spread of disinformation across social media, AI tools also show promise for combating it. The sheer volume of information uploaded to social media daily makes developing AI tools that can accurately identify and remove disinformation essential. For example, Twitter users upload over 500,000 posts per minute, well beyond what human moderators can monitor. Social media platforms are beginning to combine human moderators with AI to monitor false information more effectively. Facebook developed an AI tool called SimSearchNet at the start of the COVID-19 pandemic to identify and remove false posts. SimSearchNet relies on human monitors to first identify false posts, and then uses AI to identify similar posts across the platform. AI tools are significantly more effective than human content moderators alone. According to Facebook, 99.5 percent of terrorist-related content removals and 98.5 percent of fake accounts are identified and removed primarily using AI trained with data from its content-moderation teams. Currently, AI aimed at combating disinformation on social media still relies on both human and computer elements. This limits AI’s ability to detect novel pieces of mis- and disinformation, and means that false posts routinely reach large audiences before they are identified and removed. These technical limitations on proactively identifying and removing false information, combined with the scale of information uploaded online, pose a continuing challenge for limiting disinformation on social media in the Russia-Ukraine war and beyond.
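Facebook has not published SimSearchNet's internals, so the sketch below is only a minimal illustration of the human-plus-AI pattern described above: human reviewers label seed posts as false, and software then flags near-duplicates at scale. It uses bag-of-words cosine similarity; the function names and threshold are assumptions for this example, and a production system would use learned embeddings rather than word counts:

```python
import math
from collections import Counter


def vectorize(text):
    """Bag-of-words term counts over lowercased whitespace tokens."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two Counter vectors (0.0 to 1.0)."""
    dot = sum(n * b[t] for t, n in a.items())
    norm_a = math.sqrt(sum(n * n for n in a.values()))
    norm_b = math.sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def flag_near_duplicates(known_false, candidates, threshold=0.6):
    """Return candidate posts similar to any human-labeled false post."""
    seeds = [vectorize(p) for p in known_false]
    return [p for p in candidates
            if any(cosine(vectorize(p), s) >= threshold for s in seeds)]
```

For instance, once reviewers label one bioweapons-conspiracy post as false, `flag_near_duplicates` would catch lightly reworded copies while leaving unrelated posts alone; genuinely novel false claims, having no labeled seed to match against, would pass through, which is the limitation the paragraph above describes.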

Government and Social Media Disinformation Policies

Social media companies and governments have enacted a range of policies to limit the spread of disinformation, but their application has been fragmented, depending on the platform and geography, with varying effect. The different policies that social media platforms apply, the extent of their efforts to combat disinformation, and their availability within countries all help shape the way the public understands the Russia-Ukraine war. Critically, social media companies are privately controlled, and their interests may or may not align with the interests of various states, including those where the companies are registered and headquartered.

In the Russia-Ukraine war, social media companies have taken a range of different measures. Facebook is deploying a network of fact checkers in Ukraine in attempts to eliminate disinformation, and YouTube has blocked channels associated with Russian state media globally. Both of these platforms enacted restrictions beyond the legal requirements under U.S. and EU sanctions on Russia. In contrast, Telegram and TikTok have taken less significant steps to limit disinformation on their platforms, beyond complying with EU sanctions on Russian state media within the EU. The differences in responses among the platforms reflect the government and public pressures that varying platforms are subject to. In general, platforms based in the U.S. have taken stricter stances on limiting Russian disinformation than their international counterparts, such as Telegram and TikTok. The differences in social media platforms’ policies, their efforts to limit disinformation, and their geographic access are all becoming powerful drivers of not only how individuals consume news about the Russia-Ukraine war globally, but also the narratives—including information, misinformation, and disinformation—that they are exposed to and thereby the views that they may adopt.

The growing role of social media channels in shaping narratives on geopolitical issues, including conflicts, is generating pushback from governments, both democratic and autocratic. This, in turn, has contributed to the trend of some governments placing restrictions on the public’s use of social media and the internet more generally. For example, Russia has restricted its internet activity since 2012, but increased the intensity of its crackdown on dissidents, online dissent, and independent media coverage leading up to, and since, the invasion of Ukraine. Russia recently passed new laws targeting foreign internet companies, such as the 2019 Sovereign Internet Law and the federal “Landing Law” signed in June 2021. These laws grant the Russian state extensive online surveillance powers and require foreign internet companies operating in Russia to open offices within the country. Additionally, Russia has completely banned Facebook, Twitter, and Instagram within its borders. Ukraine cracked down on online expression in late 2021, in response to fears that Russia was sponsoring Ukrainian media outlets and preparing to invade. However, since the beginning of the invasion, Ukraine has turned toward openly embracing social media as a means to broadcast messages outside its borders and garner public support for its resistance efforts.

Globally, this follows a number of existing trends, including numerous countries’ regulatory efforts to enforce digital and data sovereignty. A range of countries are now attempting to regulate social media outlets and restrict online speech domestically, while using these same platforms to shape narratives internationally. For instance, China, Iran, and India have all enacted restrictive legislation on internet and social media use domestically while simultaneously using social media channels to spread targeted disinformation campaigns globally.

The effectiveness of governments’ efforts globally to curb access to social media and prevent disinformation, both in the Russia-Ukraine war and overall, has been limited thus far. Regulatory efforts have neither curbed disinformation in robust and systematic ways, nor reined in the role of social media platforms as domains of political polarization and vitriolic social interactions. While governments have been more effective at curbing access to information within their domestic jurisdictions, many individuals can still circumvent restrictions through VPNs, which hide the origin of an internet connection and offer access to websites that may be blocked within a specific country’s borders. After Russia’s invasion of Ukraine, VPN downloads within Russia spiked to over 400,000 per day, illustrating the extent of the challenge of completely blocking access to online spaces.

Looking Ahead

Technical and regulatory strategies for combating disinformation are evolving rapidly but are still in their early stages. In modern conflicts, social media platforms control some of the main channels of information, and their policies can have an outsized effect on public sentiment. In the Russia-Ukraine conflict, the largest global social media platforms have broadly agreed to attempt to limit Russian propaganda messages, but they have placed far fewer restrictions on official content from the Ukrainian government. This broad power that social media companies exercise by choosing which voices are amplified during conflicts is driving governments to push for increased control over these channels of information. Among others, China, Russia, and Iran all have onerous restrictions on what content can be posted online and have banned most U.S.-based social media companies. Further, both Russia and China are taking measures to move their populations onto domestic social media channels, such as WeChat in China or VKontakte in Russia, which can be heavily censored and are subject to intensive government oversight and interference. The EU and India have also placed regulatory restrictions on U.S.-based social media platforms, with the intent of developing their own domestic platforms. These developments create challenges for existing international social media platforms and continue to complicate efforts to fight disinformation. As social media channels become more fragmented, and users are subject to differing policies restricting content and disinformation, coherently coordinating efforts to fight disinformation across platforms will become increasingly difficult.

Marines to experiment with Lockheed’s new 5G testbed in the second phase of OSIRIS program


WASHINGTON — The Marine Corps over the next 15 months will be experimenting with a new prototype 5G communications network infrastructure testbed built by Lockheed Martin under the Pentagon’s Open Systems Interoperable and Reconfigurable Infrastructure Solution (OSIRIS) program, the company announced today.

According to Lockheed, Marines at Camp Pendleton, Calif., will begin “mobile network experimentation” now that the company has delivered the final phase 1 OSIRIS prototype. The company was awarded a $19.3 million contract in 2022 to build the testbed for OSIRIS, a program born out of the Pentagon’s FutureG and 5G Office meant to enable rapid experimentation with and integration of commercial tech. Lockheed is working with subcontractors Intel Corporation, Radisys Corporation and Rampart Communications to evaluate the technology and demonstrate it as part of a Fleet Marine Force event.

The testbed will ultimately support the Marines’ “Expeditionary Advanced Base Operations (EABO) goals, which involve Marines operating in contested environments with increased bandwidth requirements,” the release says. “Additionally, the OSIRIS system aims to reduce overall set-up time.”

After 2 years of experimenting, Pentagon to evaluate RDER tech


WASHINGTON — The Pentagon will soon begin weighing which of the first capabilities born out of its two-year-old Rapid Defense Experimentation Reserve (RDER) will be funded and fielded to the military services, the department’s chief technology officer spearheading the effort said today.

The Defense Management Acquisition Group (DMAG), a four-star board that includes representation from all military services, combatant commands and the Joint Staff, will meet later this fall to go over “a list of projects” that are mature and ready to go into fielding, Heidi Shyu, under secretary of defense for research and engineering, told reporters, though she didn’t say which projects would be under consideration.

Parts of the RDER experiments were conducted at Camp Atterbury, Ind., in May this year and during this summer’s Northern Edge experiments. Those capabilities that proved useful will be presented at the DMAG meeting, Shyu said.

She added that two military services — the Army and Air Force — asked to be the lead agents on one specific RDER capability. And although she didn’t divulge details about what RDER project it would be, Shyu said the move “shows a demand signal that there’s interest” in the outcomes of the effort.

Russia spikes UN effort on norms to reduce space threats


WASHINGTON — The UN working group attempting to develop norms to constrain threatening military activities in space today ended with a bang — as Russia threw firebomb after firebomb into the process, blocking forward motion against the clear wishes of a majority of participating countries.

UN working groups function on the basis of consensus, meaning that any one nation can veto the proceedings.

Moscow, which voted against the original establishment of the working group — officially, the UN Open Ended Working Group (OEWG) on Reducing Space Threats Through Norms, Rules and Principles of Responsible Behavior — on Thursday made it clear it would not allow the group to issue a formal report to the UN General Assembly detailing the proposals discussed and areas of budding accord.

Today, the Russian delegation went even further — pulling out the diplomatic nuclear option by quashing even a procedural report to mark the group’s two years of work, to the dismay of many delegates who expressed fears that the move sets a negative precedent for future proceedings.

“The fact that even a procedural report describing the technical unfolding of the OEWG was blocked speaks volumes about a desire to see this process fail. Yet the statements from the vast majority of delegates show that despite disagreement on documentation, the process itself was a success,” said Jessica West, who has been documenting the meeting for Canada’s Project Ploughshares.

AI-powered Genomics


The convergence of machine learning, deep learning, and genomics, especially in the area of AI-powered genomic health prediction, is remarkably promising but will also present remarkably challenging unintended consequences. A recent report suggests areas that need to be explored – starting now – as “the issues posed by the…technologies become harder to predict, more complex and more numerous.”

DNA.I.: Early findings and emerging questions on the use of AI in genomics

AI and genomics futures is a joint project between the Ada Lovelace Institute and the Nuffield Council on Bioethics that investigates the ethical and political-economy issues arising from the application of AI to genomics – which [the authors] refer to throughout [the] report as “AI-powered genomics”.

From the Report
  • AI-powered genomics has seen significant growth in the past decade, driven principally by advances in machine learning and deep learning, and has developed into a distinctive, specialised field.
  • Private-sector investment in companies working on AI-powered genomics has been substantial – and has mainly gone to companies working on data collection, drug discovery and precision medicine.
  • The most prominent current and emerging themes in research on AI-powered genomics relate to proteins and drug development, and the prediction of phenotypic traits from genomic data.
  • Economic forecasts from P&S Intelligence suggest the market for AI and genomics technologies could reach more than £19.5 billion by 2030, up from half a billion in 2021.