Blog | August 16, 2024

INSIGHTS FROM THE SOC

Theofanis Dimakis, SOC Officer, & Nikolaos Tsompanidis, Threat Detection & Response Expert

A review of AI-generated malware, and how a SOC might deal with the ever-increasing threat… 

Speaking at the recent CRESTCon Europe event, Theofanis Dimakis, SOC Officer, and Nikolaos Tsompanidis, Threat Detection & Response Expert at Obrela, shared their perspective on detecting malware, including the rising tide of AI-generated variants.

As key technical team members at global cybersecurity company Obrela (Gold Sponsor of the event and a specialist in managed detection and response services), the duo is well placed to describe the current landscape of AI-generated malware.

They revealed that since GenAI’s mainstream adoption in 2023, the use of deepfakes has doubled every six months, while ChatGPT usage led to an enormous 1,265% phishing surge in Q4 2023 compared to Q4 2022. 

Tsompanidis told delegates that almost half (46%) of security professionals surveyed fear generative AI will heighten organisational vulnerability to attacks. 

And the backdrop to this growing fear is a rising number of malicious AI tools, thanks to AI jailbreaking.  

These malicious AI tools include WormGPT, MalwareGPT, EvilGPT and, most recently, DarkGemini.

As an example of how popular these malicious AI tools are, Tsompanidis revealed that WormGPT’s Telegram channel gained over 5,000 followers within days of its launch.

Dimakis suggested we are now in an era where easy malware authoring is available to everyone, including non-technical amateurs, curious individuals, script kiddies, dissatisfied employees, and, of course, hackers. 

He further suggested that cyber security experts now encounter a daily barrage of unknown and unconventional malware variants, spawned by this larger pool of authors, making it harder than ever to identify patterns, at least by conventional means.

We do, however, know of several state-sponsored hacker groups already using malicious AI for nefarious ends. These include China-based Aquatic Panda, which exploited Log4Shell to attack universities, highlighting both the risks posed by zero-day vulnerabilities and the increasingly sophisticated tactics such groups now use.

Fancy Bear, a Russian cyber espionage group probably associated with the GRU (Russian military intelligence), has also incorporated malicious AI into its operations. This enhances the group’s ability to carry out sophisticated cyberattacks, such as automatically generating phishing emails, developing evasive malware, and analysing large volumes of data to identify high-value targets. The Obrela team believes the group has gained sensitive information on the likes of satellite communication protocols and radar imaging.

The pair then described Iran’s state-sponsored adversary, Imperial Kitten, which likewise uses AI to enhance its cyber espionage activities. The group generates sophisticated code snippets and phishing emails, using AI to create more convincing and effective social engineering attacks. Imperial Kitten has also used evasive code that can bypass traditional security measures, making its malware harder to detect and mitigate.

Finally, Kimsuky is a North Korean state-backed group that targets South Korea for espionage purposes, using malicious AI to generate spearphishing campaigns and gleaning information on the likes of think tank operations and nuclear power operators.

Polymorphic BlackMamba 

Further demonstrating how large language models (LLMs) such as ChatGPT can be exploited, the presenters described the development of an AI-synthesised, polymorphic keylogger: BlackMamba.

Perhaps the key feature of BlackMamba, the Obrela team revealed, is its ability to modify its own code on the fly, changing its appearance and behaviour dynamically to evade the signature-based detection methods used by antivirus software. This adaptability makes it particularly challenging for conventional security measures to identify and neutralise, further highlighting the potential dangers of AI-enhanced malware.
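
To see why this defeats signature matching, consider a deliberately benign sketch in Python: two snippets that behave identically, yet whose bytes, and therefore hashes, are entirely different. (The snippets are illustrative stand-ins, not anything taken from BlackMamba itself.)

```python
import hashlib

# Two functionally identical snippets: only names and formatting differ,
# a trivial stand-in for real polymorphic mutation.
variant_a = b"def run():\n    total = 1 + 1\n    return total\n"
variant_b = b"def run():\n    t = 1 + 1\n    return t\n"

# A hash- or signature-based scanner sees two unrelated artefacts,
# even though the behaviour is identical.
print(hashlib.sha256(variant_a).hexdigest())
print(hashlib.sha256(variant_b).hexdigest())
```

Behaviour-based detection sidesteps this problem by keying on what code does at runtime rather than on what its bytes look like.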

The proof-of-concept not only demonstrates that LLMs can be exploited, but also underscores the importance of continuous research into the capabilities and limitations of current detection and prevention tools, while alerting the community to the potential misuse of such technologies.

The Malware Paradox 

In answer to the burning question, “Can AI-powered malware defeat our (current) security capabilities?”, Dimakis highlighted the malware paradox: no matter how stealthy or sophisticated a piece of malware may be, if it is never executed, it remains ineffective.

So, he said, malware must run to fulfil its purpose, and the moment it runs it produces observable behaviour, presenting cyber security teams and SOCs with numerous opportunities for detection. That is the paradox: the very execution that makes malware dangerous is also what exposes it.
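
As a toy illustration of that opportunity, a behavioural rule keys on what a process does rather than on what its binary looks like. The sketch below uses an invented event schema; it is illustrative only, not Obrela’s actual detection logic:

```python
# Hypothetical endpoint telemetry for a single process; the event
# fields are invented for this sketch.
events = [
    {"action": "file_read", "path": "C:/Users/alice/AppData/Chrome/Login Data"},
    {"action": "file_read", "path": "C:/Users/alice/AppData/Chrome/Cookies"},
    {"action": "net_connect", "dest": "203.0.113.7", "port": 443},
]

def looks_like_infostealer(events):
    """Flag the classic infostealer sequence: credential-store reads
    followed by an outbound connection. However much the binary
    mutates, it must still perform these actions to fulfil its purpose."""
    reads_secrets = any(
        e["action"] == "file_read"
        and ("Login Data" in e["path"] or "Cookies" in e["path"])
        for e in events
    )
    connects_out = any(e["action"] == "net_connect" for e in events)
    return reads_secrets and connects_out

print(looks_like_infostealer(events))  # True -> raise an alert
```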

The duo then revealed how easy it is to use malicious AI like ‘MalwareGPT’ to create basic infostealer code in seconds, then make it more sophisticated (adding encryption, anti-analysis techniques, code obfuscation, polymorphic behaviour and anti-forensic measures), again in seconds.

They then explained that while debugging or modification might be required, this too can be performed by AI in seconds, along with the final step of creating a phishing email or other malicious tool to gain access to the target.

Detection and triage 

Demonstrating Obrela’s expertise, the presenters showed how, upon running the malware they had created using MalwareGPT, its abnormal behaviour was immediately flagged. This triggered an alert and drove SOC analysts to investigate and verify that the suspect file was indeed infostealer malware, especially given that an exfiltration request was also detected.
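
A crude sketch of the kind of heuristic that might surface such an exfiltration request follows; the field names, allow-list and threshold are all invented for illustration:

```python
# Hypothetical outbound-flow records from a network sensor.
flows = [
    {"dest": "updates.vendor.example", "bytes_out": 12_000},
    {"dest": "paste-site.example", "bytes_out": 4_800_000},
]

ALLOWED_DESTINATIONS = {"updates.vendor.example"}
EXFIL_THRESHOLD_BYTES = 1_000_000  # arbitrary illustrative cut-off

def exfiltration_alerts(flows):
    """Flag large uploads to destinations outside the allow-list."""
    return [
        f for f in flows
        if f["dest"] not in ALLOWED_DESTINATIONS
        and f["bytes_out"] > EXFIL_THRESHOLD_BYTES
    ]

for alert in exfiltration_alerts(flows):
    print("Possible exfiltration:", alert["dest"], alert["bytes_out"], "bytes")
```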

The SOC would then proceed with immediate remediation actions to prevent further spread or damage, such as banning the malicious file hash and isolating the infected device.
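
In practice, these remediation steps are typically automated through an EDR platform’s API. The sketch below shows the general shape of such a playbook; the endpoints and identifiers are entirely hypothetical and do not correspond to any specific vendor’s product:

```python
import requests

# Entirely hypothetical EDR management API; real platforms expose
# similar, but differently shaped, endpoints.
EDR_API = "https://edr.example.internal/api/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}  # placeholder credential

def ban_hash(sha256: str) -> requests.Response:
    """Add the malicious file hash to a global block list."""
    return requests.post(
        f"{EDR_API}/blocklist/hashes",
        json={"sha256": sha256},
        headers=HEADERS,
    )

def isolate_device(device_id: str) -> requests.Response:
    """Cut the infected host off from the network, keeping only the
    EDR management channel open."""
    return requests.post(f"{EDR_API}/devices/{device_id}/isolate", headers=HEADERS)

# Example playbook steps (not executed here, since the API is fictional):
# ban_hash("<sha256-of-the-suspect-file>")
# isolate_device("WKS-0042")
```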

After informing the customer of the situation, the SOC team would normally continue investigating, hunting for further instances of the malware in the environment, trying to discover its origins and searching for any persistence mechanisms or other suspicious findings.

The presenters highlighted five ways in which AI can enhance a SOC – namely, via threat detection and response capabilities, improved threat intelligence, reducing alert fatigue, performing advanced behavioural analysis, and increased overall efficiency and scalability. These benefits collectively help build a more resilient and proactive cybersecurity posture for any organisation. 
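
As one small, concrete example of the noise-reduction and prioritisation points, a SOC might score incoming alerts with a model trained on analysts’ past triage verdicts. A minimal sketch using scikit-learn, with invented features and data:

```python
from sklearn.linear_model import LogisticRegression

# Invented training data: one row per historical alert, with features
# [severity 1-5, asset criticality 1-5, threat-intel match 0/1] and the
# analyst's verdict (0 = noise, 1 = true positive) as the label.
X = [[1, 1, 0], [2, 1, 0], [3, 2, 0], [4, 5, 1], [5, 4, 1], [5, 5, 1]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Score a new alert; the probability can be used to order the queue so
# that analysts see the likeliest true positives first.
new_alert = [[4, 4, 1]]
print(model.predict_proba(new_alert)[0][1])
```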

Obrela offers a full solution suite of Managed Detection and Response Services which leverage AI and ML, including threat detection, alert noise reduction and alert prioritisation. 

The expert pair left the audience to ponder some important questions: “Will AI be capable of generating state-of-the-art malware in the future?” and “Will Artificial Intelligence ever be capable of beating Human Intelligence?”. They concluded that while the answer to those questions is uncertain, the only certainty is that the combination of human intelligence with AI will be an unstoppable force.