14 January 2023

Cybercriminals Using ChatGPT to Build Hacking Tools, Write Code

Marco Marcelline

Expert and novice cybercriminals have already started to use OpenAI’s chatbot ChatGPT in a bid to build hacking tools, security analysts have said.

In one documented example, the Israeli security company Check Point spotted a thread on a popular underground hacking forum in which a hacker said he was experimenting with the AI chatbot to “recreate malware strains.”

The hacker went on to compress the ChatGPT-written Android malware and share it across the web. The malware could steal files of interest, Forbes reports.

The same hacker showed off a further tool that installed a backdoor on a computer and could push additional malware onto the infected machine.

Check Point noted in its assessment of the situation that some hackers were using ChatGPT to create their first scripts. In the same forum, another user shared Python code he said could encrypt files and had been written using ChatGPT; it was, he said, the first script he had ever written.

While such code could be used for harmless purposes, Check Point said it could “easily be modified to encrypt someone’s machine completely without any user interaction.”
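
For a sense of scale, a harmless script of the sort described, one that encrypts and then decrypts a single named file with a key the user controls, runs to only a dozen or so lines of Python. The sketch below is purely illustrative, is not the forum poster’s code, and assumes the widely used third-party cryptography package:

# Illustrative sketch only -- not the forum code. Encrypts and decrypts a
# single, explicitly named file with a key the user keeps. Requires the
# third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    # Read the file, then overwrite it with the encrypted bytes.
    with open(path, "rb") as f:
        plaintext = f.read()
    with open(path, "wb") as f:
        f.write(Fernet(key).encrypt(plaintext))

def decrypt_file(path: str, key: bytes) -> None:
    # Reverse of encrypt_file: restore the original contents.
    with open(path, "rb") as f:
        ciphertext = f.read()
    with open(path, "wb") as f:
        f.write(Fernet(key).decrypt(ciphertext))

if __name__ == "__main__":
    key = Fernet.generate_key()  # keep this key safe; without it the file cannot be recovered
    encrypt_file("notes.txt", key)
    decrypt_file("notes.txt", key)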

The security company stressed that while ChatGPT-coded hacking tools appeared “pretty basic,” it is “only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”

A third case of ChatGPT being used for fraudulent activity flagged by Check Point involved a cybercriminal who showed it was possible to create a Dark Web marketplace using the AI chatbot. The hacker posted in the underground forum that he had used ChatGPT to write code that calls a third-party API to retrieve up-to-date cryptocurrency prices for the marketplace’s payment system.
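
Check Point’s write-up does not name the API involved, but fetching live cryptocurrency prices from a public service is a routine task. A minimal sketch in Python, using CoinGecko’s free simple-price endpoint as a stand-in and the third-party requests library, might look like this:

# Illustrative sketch only: pull current coin prices from a public API.
# CoinGecko's free "simple price" endpoint stands in for whatever service
# the forum poster actually used. Requires the third-party "requests" package.
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def get_prices(coins=("bitcoin", "monero"), currency="usd"):
    # Returns a dict mapping each coin ID to its latest price in `currency`.
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coins), "vs_currencies": currency},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()  # e.g. {"bitcoin": {"usd": 43000.0}, "monero": {"usd": 160.0}}
    return {coin: info[currency] for coin, info in data.items()}

if __name__ == "__main__":
    print(get_prices())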

ChatGPT’s developer, OpenAI, has implemented some controls that block obvious requests for the AI to build spyware. However, the chatbot has come under further scrutiny after security analysts and journalists found it could write grammatically correct phishing emails without typos.

OpenAI did not immediately respond to a request for comment.
