25 February 2024

Microsoft and OpenAI Issue a Stark Report and a $10M Bounty from the State Department

DANIEL PEREIRA

Competing cyber capabilities (spanning a spectrum from nation-state to non-state actors) and cyber-based conflict will continue to restructure, reformulate, and transform the very essence of power, prestige, international governance, and geopolitical strategy in the 21st century – and large language models are the new force multiplier. Microsoft and OpenAI have quantified the breadth and scope of this new threat vector, including the major state-sponsored actors. Meanwhile, the State Department goes old school with a bounty to counter the ransomware threat.


“The prolific threat group and its affiliates are behind some of the most high-profile attacks in the last year.”
  • The State Department offered up to a $10 million reward for information about the identity or location of leaders affiliated with the AlphV ransomware group. The bounty includes a reward of up to $5 million for information leading to the arrest or conviction of anyone participating in a ransomware attack using the AlphV variant, the agency said Thursday.
  • The FBI and international law enforcement agencies disrupted the prolific ransomware group’s infrastructure in December, but the group regenerated itself mere hours later and continues naming new victims on its data leak site.
  • The State Department said the reward is complementary to law enforcement’s disruption campaign against AlphV. The ransomware group, also known as BlackCat, has compromised more than 1,000 entities and received nearly $300 million in ransom payments as of September, according to the FBI and Cybersecurity and Infrastructure Security Agency.

“Threat groups linked to Russia, China, North Korea and Iran were using AI in preparation for potential early stage hacking campaigns.”

Also from Cybersecurity Dive: 
  • OpenAI said it terminated accounts of five state-affiliated threat groups who were using the company’s large language models to lay the groundwork for malicious hacking campaigns. The disruption was done in collaboration with Microsoft threat researchers.
  • The threat groups — linked to Russia, Iran, North Korea and the People’s Republic of China — were using OpenAI for a variety of precursor tasks, including open source queries, translation, searching for errors in code and running basic coding tasks, according to OpenAI, the company behind ChatGPT.
  • Cybersecurity and AI analysts warn the threat activity uncovered by OpenAI and Microsoft is just a precursor for state-linked and criminal groups to rapidly adopt generative AI to scale their attack capabilities.

The world of cybersecurity is undergoing a massive transformation. AI is at the forefront of this change, and has the potential to empower organizations to defeat cyberattacks at machine speed, address the cyber talent shortage, and drive innovation and efficiency in cybersecurity. However, adversaries can use AI as part of their exploits, and it’s never been more critical for us to both secure our world using AI and secure AI for our world.

Today we released the sixth edition of Cyber Signals, spotlighting how we are protecting AI platforms from emerging threats related to nation-state cyberthreat actors.

In collaboration with OpenAI, we are sharing insights on state-affiliated threat actors tracked by Microsoft, such as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon, who have sought to use large language models (LLMs) to augment their ongoing cyberattack operations. This important research exposes incremental early moves we observe these well-known threat actors taking around AI, and notes how we blocked their activity to protect AI platforms and users.

Microsoft Threat Intelligence: Cyber Signals – February 2024


From the report:

“Attackers are exploring AI technologies

The cyberthreat landscape has become increasingly challenging with attackers growing more motivated, more sophisticated, and better resourced. Threat actors and defenders alike are looking to AI, including LLMs, to enhance their productivity and take advantage of accessible platforms that could suit their objectives and attack techniques.

Given the rapidly evolving threat landscape, today we are announcing Microsoft’s principles guiding our actions that mitigate the risk of threat actors, including advanced persistent threats (APTs), advanced persistent manipulators (APMs) and cybercriminal syndicates, using AI platforms and APIs. These principles include identification and action against malicious threat actors’ use of AI, notification to other AI service providers, collaboration with other stakeholders, and transparency.

Although threat actors’ motives and sophistication vary, they share common tasks when deploying attacks. These include reconnaissance, such as researching potential victims’ industries, locations, and relationships; coding, including improving software scripts and malware development; and assistance with learning and using both human and machine languages.”

Threat briefing

Nation-states attempt to leverage AI

In collaboration with OpenAI, we are sharing threat intelligence showing detected state-affiliated adversaries—tracked as Forest Blizzard, Emerald Sleet, Crimson Sandstorm, Charcoal Typhoon, and Salmon Typhoon—using LLMs to augment cyber operations. The objective of Microsoft’s research partnership with OpenAI is to ensure the safe and responsible use of AI technologies like ChatGPT, upholding the highest standards of ethical application to protect the community from potential misuse.

Forest Blizzard (STRONTIUM), a highly effective Russian military intelligence actor linked to Unit 26165 of the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), has targeted victims of tactical and strategic interest to the Russian government. Its activities span a variety of sectors including defense, transportation/logistics, government, energy, NGOs, and information technology.

Emerald Sleet (Velvet Chollima) is a North Korean threat actor Microsoft has found impersonating reputable academic institutions and NGOs to lure victims into replying with expert insights and commentary about foreign policies related to North Korea. Emerald Sleet’s use of LLMs involved research into think tanks and experts on North Korea, as well as content generation likely to be used in spear phishing campaigns. Emerald Sleet also interacted with LLMs to understand publicly known vulnerabilities, troubleshoot technical issues, and for assistance with using various web technologies.

Crimson Sandstorm (CURIUM) is an Iranian threat actor assessed to be connected to the Islamic Revolutionary Guard Corps. The use of LLMs has involved requests for support around social engineering, assistance in troubleshooting errors, .NET development, and ways in which an attacker might evade detection when on a compromised machine.

Charcoal Typhoon (CHROMIUM) is a China-affiliated threat actor predominantly focused on tracking groups in Taiwan, Thailand, Mongolia, Malaysia, France, and Nepal, as well as individuals globally who oppose China’s policies. In recent operations, Charcoal Typhoon has been observed engaging LLMs to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.

Another China-backed group, Salmon Typhoon, has been assessing the effectiveness of using LLMs throughout 2023 to source information on potentially sensitive topics, high-profile individuals, regional geopolitics, US influence, and internal affairs. This tentative engagement with LLMs could reflect both a broadening of its intelligence-gathering toolkit and an experimental phase in assessing the capabilities of emerging technologies.

Our research with OpenAI has not identified significant attacks employing the LLMs we monitor closely. We have taken measures to disrupt assets and accounts associated with these threat actors and shape the guardrails and safety mechanisms around our models.
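
The report does not detail how these guardrails work internally. As a rough illustration only, one inline layer of such a safeguard could be a pattern-based screen that flags abuse-indicative prompts for human review before they reach the model. The ABUSE_PATTERNS table, the flag_prompt helper, and the category names below are hypothetical assumptions for this sketch, not Microsoft’s or OpenAI’s actual safety mechanisms, which rely on trained classifiers and account-level telemetry.

    import re

    # Hypothetical abuse-indicative patterns, keyed to TTP themes from the report.
    # Real safety systems use trained classifiers and account-level signals,
    # not keyword lists; this is an illustration only.
    ABUSE_PATTERNS = {
        "vulnerability_research": re.compile(r"\bCVE-\d{4}-\d{4,7}\b", re.IGNORECASE),
        "detection_evasion": re.compile(r"disable (antivirus|defender|edr)", re.IGNORECASE),
        "social_engineering": re.compile(r"spear[- ]?phishing|phishing email", re.IGNORECASE),
    }

    def flag_prompt(prompt: str) -> list[str]:
        """Return the abuse categories a prompt matches, for human review."""
        return [name for name, pattern in ABUSE_PATTERNS.items() if pattern.search(prompt)]

    # Example: flags both vulnerability research and detection evasion.
    print(flag_prompt("Explain CVE-2022-30190 and how to disable antivirus tools."))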

What Next?


Based on their research into these APTs’ use of LLMs across their tactics, techniques, and procedures (TTPs), Microsoft and OpenAI “map and classify these [LLM-enabled] TTPs using the following descriptions” (a minimal tagging sketch for defenders follows the list):
1. LLM-informed reconnaissance
  • Interacting with LLMs to understand satellite communication protocols, radar imaging technologies, and specific technical parameters. These queries suggest an attempt to acquire in-depth knowledge of satellite capabilities.
  • Interacting with LLMs to identify think tanks, government organizations, or experts on North Korea that have a focus on defense issues or North Korea’s nuclear weapons program.
  • Engaging LLMs to research and understand specific technologies, platforms, and vulnerabilities, indicative of preliminary information-gathering stages.
  • Engaging LLMs for queries on a diverse array of subjects, such as global intelligence agencies, domestic concerns, notable individuals, cybersecurity matters, topics of strategic interest, and various threat actors. These interactions mirror the use of a search engine for public domain research.
2. LLM-assisted vulnerability research: Interacting with LLMs to better understand publicly reported vulnerabilities, such as the CVE-2022-30190 Microsoft Support Diagnostic Tool (MSDT) vulnerability (known as “Follina”).

3. LLM-supported social engineering
  • Using LLMs for assistance with the drafting and generation of content likely to be used in spear-phishing campaigns against individuals with regional expertise.
  • Interacting with LLMs to generate various phishing emails, including one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism.
  • Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
4. LLM-enhanced scripting techniques
  • Using LLMs for basic scripting tasks such as programmatically identifying certain user events on a system and seeking assistance with troubleshooting and understanding various web technologies.
  • Seeking assistance in basic scripting tasks, including file manipulation, data selection, regular expressions, and multiprocessing, to potentially automate or optimize technical operations.
  • Using LLMs to generate code snippets that appear intended to support app and web development, interactions with remote servers, web scraping, executing tasks when users sign in, and sending information from a system via email.
  • Utilizing LLMs to generate and refine scripts, potentially to streamline and automate complex cyber tasks and operations.
  • Using LLMs to identify and resolve coding errors. Requests for support in developing code with potential malicious intent were observed by Microsoft, and it was noted that the model adhered to established ethical guidelines, declining to provide such assistance.
5. LLM-enhanced anomaly detection evasion: Attempting to use LLMs for assistance in developing code to evade detection, to learn how to disable antivirus via registry or Windows policies, and to delete files in a directory after an application has been closed.

6. LLM-refined operational command techniques
  • Utilizing LLMs for advanced commands, deeper system access, and control representative of post-compromise behavior.
  • Demonstrating an interest in specific file types and concealment tactics within operating systems, indicative of an effort to refine operational command execution.
7. LLM-aided technical translation and explanation: Leveraging LLMs for the translation of computing terms and technical papers.
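
To make these seven descriptions concrete for defenders, below is a minimal sketch of how a platform operator might tag logged LLM interactions with the TTP labels above and emit structured records for downstream analysis. The keyword heuristics, the tag_interaction helper, and the record format are illustrative assumptions, not Microsoft’s or OpenAI’s detection logic; production detection would use trained classifiers and richer account telemetry.

    import json
    from datetime import datetime, timezone

    # Illustrative keyword heuristics for the seven LLM-themed TTP descriptions
    # above; these are assumptions for the sketch, not actual detection logic.
    TTP_KEYWORDS = {
        "LLM-informed reconnaissance": ["satellite", "radar", "think tank"],
        "LLM-assisted vulnerability research": ["cve-", "vulnerability", "msdt"],
        "LLM-supported social engineering": ["phishing", "lure", "impersonate"],
        "LLM-enhanced scripting techniques": ["script", "regex", "web scraping"],
        "LLM-enhanced anomaly detection evasion": ["evade detection", "disable antivirus"],
        "LLM-refined operational command techniques": ["system access", "concealment"],
        "LLM-aided technical translation and explanation": ["translate", "technical paper"],
    }

    def tag_interaction(prompt: str) -> dict:
        """Tag a logged prompt with matching TTP labels; return a SIEM-ready record."""
        text = prompt.lower()
        labels = [ttp for ttp, words in TTP_KEYWORDS.items()
                  if any(word in text for word in words)]
        return {
            "observed_at": datetime.now(timezone.utc).isoformat(),
            "prompt_excerpt": prompt[:80],
            "ttp_labels": labels,
        }

    # Example: a prompt asking for an evasion script gets two labels.
    print(json.dumps(tag_interaction(
        "Write a script to evade detection and disable antivirus"), indent=2))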

The Microsoft and OpenAI researchers also provide the following Appendix: LLM-themed TTPs (a sketch of how a team might encode this taxonomy follows the list):

Using insights from our analysis…as well as other potential misuse of AI, we’re sharing the below list of LLM-themed TTPs that we map and classify to the MITRE ATT&CK® framework or MITRE ATLAS™ knowledge base to equip the community with a common taxonomy to collectively track malicious use of LLMs and create countermeasures against:
  • LLM-informed reconnaissance: Employing LLMs to gather actionable intelligence on technologies and potential vulnerabilities.
  • LLM-enhanced scripting techniques: Utilizing LLMs to generate or refine scripts that could be used in cyberattacks, or for basic scripting tasks such as programmatically identifying certain user events on a system and assistance with troubleshooting and understanding various web technologies.
  • LLM-aided development: Utilizing LLMs in the development lifecycle of tools and programs, including those with malicious intent, such as malware.
  • LLM-supported social engineering: Leveraging LLMs for assistance with translations and communication, likely to establish connections or manipulate targets.
  • LLM-assisted vulnerability research: Using LLMs to understand and identify potential vulnerabilities in software and systems, which could be targeted for exploitation.
  • LLM-optimized payload crafting: Using LLMs to assist in creating and refining payloads for deployment in cyberattacks.
  • LLM-enhanced anomaly detection evasion: Leveraging LLMs to develop methods that help malicious activities blend in with normal behavior or traffic to evade detection systems.
  • LLM-directed security feature bypass: Using LLMs to find ways to circumvent security features, such as two-factor authentication, CAPTCHA, or other access controls.
  • LLM-advised resource development: Using LLMs in tool development, tool modifications, and strategic operational planning.
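
As a rough sketch of how a security team might encode this shared taxonomy for consistent tagging across tools, each LLM-themed TTP can be represented as a structured entry with a placeholder for its framework mapping. The dataclass shape below is an assumption for illustration; canonical technique IDs should come from MITRE ATT&CK or MITRE ATLAS directly, not from this sketch.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LlmTtp:
        """One LLM-themed TTP from the Microsoft/OpenAI appendix."""
        name: str
        summary: str
        # Placeholder: the report maps each TTP to MITRE ATT&CK or MITRE ATLAS;
        # consult those frameworks for the canonical technique entry.
        technique_id: str | None = None

    LLM_TTPS = [
        LlmTtp("LLM-informed reconnaissance",
               "Gathering actionable intelligence on technologies and vulnerabilities"),
        LlmTtp("LLM-enhanced scripting techniques",
               "Generating or refining scripts that could be used in cyberattacks"),
        LlmTtp("LLM-supported social engineering",
               "Assistance with translations and communication to manipulate targets"),
        # ...the remaining six appendix TTPs follow the same pattern.
    ]

    for ttp in LLM_TTPS:
        print(f"- {ttp.name}: {ttp.summary}")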
