AI hype: ChatGPT is also popular among cybercriminals

The hype surrounding ChatGPT has not gone unnoticed by cybercriminals. Researchers at IT security firm Check Point have spotted the first examples of malicious tools developed with the help of ChatGPT and warn of applications found on the dark web.

ChatGPT, OpenAI's prototype AI chatbot, is attracting growing interest, and many parties are experimenting with the tool. Specialized in conducting dialogues with a user, ChatGPT can generate entire texts from just a few keywords. Cybercriminals are exploring the AI hype as well. IT security firm Check Point shares three recent examples in which criminal hackers developed malicious tools with the help of ChatGPT.

#1 Info-stealing malware based on malware research

On December 29, a thread titled ChatGPT – Benefits of Malware appeared on a popular cybercriminal forum. Its author revealed that he had experimented with ChatGPT to recreate malware families and techniques described in common malware research publications.

According to Check Point, these posts are aimed at less tech-savvy cybercriminals and show them how ChatGPT can be used for malicious purposes, with real examples they can apply immediately.

#2 Multi-layer encryption

The IT security firm also sees examples of would-be cybercriminals who barely have any development skills yet manage to get started with the popular AI tool. “They can use ChatGPT to develop malicious tools and become full-fledged cybercriminals with technical capabilities.”

On December 21, a threat actor going by the name USDoD published a Python script for a multi-layer encryption tool, stating that it was the first script he had ever created. According to Check Point, when another cybercriminal noted that the style of the code resembled OpenAI code, USDoD confirmed that OpenAI had given him a “good helping hand to finish the script with a nice scope.”

The IT security firm emphasizes that all of the above code can be used in a benign way, but that the script can also easily be adapted to encrypt someone’s machine without any user interaction. “For example, the code could be turned into ransomware once the script and syntax issues are fixed.”
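
To illustrate what “multi-layer encryption” means in practice, here is a minimal, benign Python sketch that nests one encryption layer inside another using the cryptography package’s Fernet primitive. It is not the script Check Point describes; the function names and the number of layers are assumptions chosen for illustration, and the example only encrypts an in-memory byte string.

```python
# Benign illustration only: nested ("multi-layer") encryption of a byte string
# using the `cryptography` package (pip install cryptography). This is NOT the
# script Check Point found; function names and layer count are assumptions.
from cryptography.fernet import Fernet


def multilayer_encrypt(data: bytes, layers: int = 3):
    """Encrypt `data` repeatedly; return the ciphertext and the keys used."""
    keys = []
    for _ in range(layers):
        key = Fernet.generate_key()
        data = Fernet(key).encrypt(data)  # each pass wraps the previous ciphertext
        keys.append(key)
    return data, keys


def multilayer_decrypt(token: bytes, keys):
    """Undo the layers in reverse order (outermost layer corresponds to the last key)."""
    for key in reversed(keys):
        token = Fernet(key).decrypt(token)
    return token


if __name__ == "__main__":
    ciphertext, keys = multilayer_encrypt(b"example payload")
    assert multilayer_decrypt(ciphertext, keys) == b"example payload"
    print(f"{len(keys)} layers applied, ciphertext is {len(ciphertext)} bytes")
```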

#3 ChatGPT scripts for a dark web marketplace

In a third example found by the IT security firm, a cybercriminal shows how ChatGPT can be used to create scripts for a dark web marketplace. “The role of the marketplace in the illegal economy is to provide a platform for the automated trading of illegal goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments made in cryptocurrency.”
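
As a benign illustration of the kind of supporting code such a crypto-payment flow needs, the sketch below fetches current coin prices from the public CoinGecko API and converts a dollar amount into coins. The choice of price source and the helper functions are assumptions made for illustration; this is not the script Check Point found.

```python
# Benign illustration only: look up cryptocurrency spot prices via the public
# CoinGecko API (an assumed choice of price source) and convert a USD amount.
# Requires the `requests` package (pip install requests).
import requests

COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"


def get_prices_usd(coins=("bitcoin", "ethereum", "monero")) -> dict:
    """Return a mapping like {'bitcoin': 43000.0, ...} of USD spot prices."""
    resp = requests.get(
        COINGECKO_URL,
        params={"ids": ",".join(coins), "vs_currencies": "usd"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {coin: data[coin]["usd"] for coin in coins}


def usd_to_coin(amount_usd: float, coin: str, prices: dict) -> float:
    """Convert a USD amount into units of `coin` at the fetched spot price."""
    return amount_usd / prices[coin]


if __name__ == "__main__":
    prices = get_prices_usd()
    print(prices)
    print(f"$100 is about {usd_to_coin(100, 'bitcoin', prices):.6f} BTC")
```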

A matter of time

Security engineering expert Zahier Madhar from Check Point Research: “Cybercriminals find ChatGPT attractive. We have evidence that hackers are using it to write malicious code. ChatGPT has the potential to speed up the process for hackers by giving them a good starting point. Just as ChatGPT can help developers write code, it can also be used for malicious purposes.”

Although the tools Check Point sees now are simple, it is only a matter of time before more sophisticated threat actors improve the way they use AI-based tools. The company is continuing to investigate ChatGPT-related cybercrime.
