OpenAI launched ChatGPT a year ago on November 30, 2022. The public release of the large language model (LLM) chatbot quickly sparked discussion about the societal impact generative AI will have – both good and bad. Numerous other AI chatbot tools were released soon after, including Google Bard and Microsoft’s Bing AI.
LLMs have had a huge impact on the world of cybersecurity. Of particular concern has been their use by threat actors in areas like social engineering campaigns and malware creation. Generative AI also offers potential opportunities for defenders, as these tools can be used to augment their capabilities.
Has ChatGPT Enhanced the Capabilities of Threat Actors?
The main way LLMs like ChatGPT have been utilized by cybercriminals so far is in social engineering campaigns. James McQuiggan, security awareness advocate at KnowBe4, explained that the ability to use such tools to write phishing emails in any language with good spelling and grammar means that traditional guidance and training in this area “pretty much goes out of the window now.”
He added that this has broken down barriers for people who do not have experience of the cybercriminal underground to be able to launch social engineering attacks. The only real learning required is knowing how to type the right prompts into LLM tools to generate the right messages.
Outside of phishing, the impact of ChatGPT on the cybercrime landscape has been limited. Analyses of cybercriminal forums suggest that cybercriminals are so far reluctant to use generative AI in areas like malware development, even expressing concerns that creators of ChatGPT imitators were trying to scam them.
Etay Maor, senior director of security strategy at Cato Networks, highlighted a number of factors behind this initial reticence. One is the practical issues in code created by LLM tools like ChatGPT. These include hallucinations – outputs that are factually incorrect or unrelated to the given context – and the inability of some LLMs to properly understand questions in certain languages, such as Russian.
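These hallucinations extend to the code itself: an LLM can confidently recommend importing a package that does not exist. A minimal defensive sketch in Python, checking suggested dependencies against PyPI's public JSON endpoint before installation (the package names are hypothetical examples, not taken from the article):

```python
# Minimal sketch: verify that dependencies suggested by an LLM actually
# exist on PyPI before installing them. Hallucinated package names are one
# practical failure mode of LLM-generated code. The suggested names below
# are hypothetical examples.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI's JSON API reports the package exists."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means no such package

llm_suggested = ["requests", "totally-made-up-helper"]  # hypothetical LLM output
for pkg in llm_suggested:
    flag = "ok" if package_exists_on_pypi(pkg) else "HALLUCINATED?"
    print(f"{pkg}: {flag}")
```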
Additionally, he praised restrictions that have been put in place by generative AI creators such as OpenAI to protect their models from being abused in this way.
McQuiggan noted that cybercrime-specific versions of LLM tools, such as WormGPT, were based on earlier versions of ChatGPT that did not have these safeguards.
In fact, using these technologies to create malware is a bad idea because AI chatbots are trained on past data – code that already exists – according to Borja Rodriguez, manager of threat intelligence operations at Outpost24.
“The most infectious malware are the ones that are developed with innovative ideas in the way they can infect machines or inject processes,” Rodriguez said.
Nevertheless, Maor explained that AI technology is on the radar of cybercriminal groups. He said much of the chatter on dark web forums suggests the tools are 3-5 years away from being put to widespread use in malware campaigns.
However, David Atkinson, founder and CEO of SenseOn, argued that the use of generative AI tech will not be a priority for cybercriminal gangs at the moment. He noted that tools that can bypass MFA are more valuable than anything ChatGPT can produce.
“In terms of its alignment to the main intent [of cybercriminals], which is making money, there are far easier ways to do it,” he outlined.
Organizations Adopt Generative AI at Speed
Organizations have already used AI tools in cybersecurity and general operations for a number of years, for example applying machine learning to anomaly detection and pattern recognition in data.
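For illustration, a minimal sketch of that longer-standing approach, using scikit-learn's IsolationForest to flag anomalous activity in log-derived features (the feature values here are invented for the example):

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features,
# the kind of established ML use described above. The feature values are
# invented for illustration; a real pipeline would extract them from logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, bytes_out_mb, distinct_destinations]
normal_activity = np.array([
    [4, 12.0, 3], [5, 10.5, 4], [3, 9.8, 2], [6, 14.1, 5], [4, 11.2, 3],
])
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_activity)

# A burst of logins and outbound data should be scored as anomalous (-1).
new_events = np.array([[5, 11.0, 4], [120, 900.0, 60]])
print(model.predict(new_events))  # e.g. [ 1 -1 ]
```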
LLMs have added to such capabilities, from assisting the creation of secure code to rapidly generating security reports for CEOs in an easily readable format – saving time and costs.
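A minimal sketch of that report-generation use case, assuming the official OpenAI Python client and an API key in the environment; the alert data and model name are placeholders:

```python
# Minimal sketch: using an LLM to turn raw alert data into an executive
# summary, one of the defender use cases described above. Assumes the
# official OpenAI Python client (pip install openai) with OPENAI_API_KEY
# set in the environment; the alerts and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alerts = """\
- 14:02 UTC: 37 failed logins for admin@example.com from 203.0.113.9
- 14:05 UTC: successful login for admin@example.com from 203.0.113.9
- 14:11 UTC: 2.3 GB uploaded to an unrecognized external host
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Summarize these security alerts for a non-technical "
                    "CEO in three short bullet points, ending with one "
                    "recommended action."},
        {"role": "user", "content": alerts},
    ],
)
print(response.choices[0].message.content)
```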
The rise of generative AI and LLMs in 2023 has “democratized” the technology, making it a priority at the board level, according to McQuiggan.
“Everyone wants to have it and show they are at the forefront,” he commented.
This is largely driven by growing customer awareness of AI following the release of ChatGPT.
“Customers are looking at AI as a requirement, not as a differentiator,” noted Maor.
However, there have been some problems relating to the use of LLM tools like ChatGPT. These include data privacy issues arising from how the models are trained and how the data they generate is used. Concerns that ChatGPT violates GDPR rules led Italy to temporarily ban the service at the start of 2023.
The risk of accidental data leaks from organizations using these tools is another significant issue that has emerged this year.
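One common mitigation is to scrub obvious secrets from prompts before they leave the organization. A minimal sketch using regular expressions (the patterns are illustrative and far from exhaustive; production deployments typically rely on dedicated DLP tooling):

```python
# Minimal sketch: redact obvious secrets before sending text to an external
# LLM, one mitigation for the accidental-leak risk described above. The
# patterns are illustrative only and far from exhaustive.
import re

REDACTION_PATTERNS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),   # AWS access key IDs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),  # email addresses
    (re.compile(r"(?i)(password|secret)\s*[:=]\s*[^\s,]+"), r"\1=[REDACTED]"),
]

def scrub(text: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned text."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Debug this: password: hunter2, contact ops@example.com"
print(scrub(prompt))
# -> "Debug this: password=[REDACTED], contact [REDACTED_EMAIL]"
```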
As a result, McQuiggan said that companies must put in place transparency and accountability policies for the use of tools like ChatGPT. These should set out what the company is doing with the data and how it is stored – for example, clearly answering questions such as whether and how user data is used to train models.
Another vital way for businesses to ensure the safe and secure use of generative AI is to educate staff on how to use these models appropriately. Rodriguez argued that generative AI will change the way companies recruit, requiring them to hire people who know how to train AI models, as well as ensuring all new staff have general AI skills.
On the use of generative AI in cybersecurity, Atkinson said it is vital that security leaders have a clear vision for how these tools can be used effectively to enhance their team’s capabilities before adopting them.
“We’ve got to keep in mind how the adoption of this technology will reduce the likelihood of our customers getting hacked and describe that successfully at board level,” Atkinson commented.
Generative AI Trends in 2024 and Beyond
Attackers will not be able to use generative AI tools to create a revolutionary new attack vector in the foreseeable future, according to many of the experts. However, cybercriminals are expected to use AI to augment the speed and scale of existing attacks over time.
Maor noted that cybercriminal groups are using ChatGPT to speed up day-to-day tasks, such as getting quick responses to queries. His biggest concern for 2024 is the continued evolution of social engineering attacks, particularly the creation of sophisticated fake social media profiles that use LLMs to leverage all the information available online about an individual.
“It’s going to be the criminal’s assistant for now,” said Maor.
On the defensive side, Atkinson believes generative AI’s biggest potential for the coming years is to hyper-automate investigation and response. “That’s really important because that’s going to bring the overall risk of the organization down,” he stated.
McQuiggan set out his hope that we will see more AI regulation adopted next year to facilitate the safe use of such tools.
“I’d like to see more regulation in the sense of transparency,” he said.
A key component of the regulatory approach will be the EU’s AI Act, which the European Parliament approved in June 2023. New measures to control “foundation models” were introduced into the legislation in May 2023, following the release of ChatGPT. It remains to be seen whether other regions follow suit with their own versions in the coming year and beyond.
Source: https://www.infosecurity-magazine.com/news-features/chatgpt-generative-ai-cybersecurity/