
Is Rogue AI Destined to Become an Unstoppable Security Threat?


Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Dr. Jason Zhang of Anomali explores the world of rogue AI, presenting the possibility of a dystopian future where it can’t be stopped.

Whenever we gauge the likely impact of a new technology, count on no small number of pessimists eager to predict a technological Armageddon right around the bend.

Happens all the time. One of the more famous episodes took place at the turn of the millennium, when Sun Microsystems co-founder Bill Joy published a piece in Wired titled "Why the Future Doesn't Need Us." Joy warned that the accelerated development of digital, biological, and materials science technologies could cause 'something like extinction' for human beings within a couple of generations.

While Joy's assertions may seem extreme, there are legitimate concerns as humans square off against newly powerful, seemingly self-directed algorithms amid the sudden rise of generative artificial intelligence. It's an important debate, but one where we need to keep our focus on the science, not the science fiction, of one of the more interesting technologies to come down the pike in the last several years.

Why the Hoo-Ha Over Generative AI?

Ever since the release of ChatGPT and other natural language processing (NLP) AI models, security experts have grown increasingly concerned that bad actors will use generative AI to mass-produce phishing emails, social engineering lures, and other kinds of malicious content.

Their concerns are valid. There is always a risk that a new technology will be exploited to build more effective cyber-attacks, and AI is no exception. However, let's not forget that the AI models trained so far are not sentient and lack human-like reasoning. No doubt it will become more challenging to tackle AI-based threats in the future, but they remain largely under human control, at least for now.

The Threat of Rogue AI

But since we’re spitballing the future, let’s consider what a rogue AI might look like.

The most common threat from rogue AI will be in the realm of social engineering attacks. Attackers can use AI to generate deepfake text, images, video, and voices to trick victims into believing they are writing or talking to a real person. Typical examples include phishing attacks such as business email compromise (BEC), romance scams (from celebrity impersonation to long-con 'pig butchering' schemes), and call and chatbot scams (pretending to be customer service).

One reason security wonks are exercised over the threat of rogue AI is that it is very hard to distinguish real humans from AI bots. Traditional social engineering attacks like phishing emails are typically riddled with telltale flaws, poor spelling and grammar among them, because the attackers often don't speak the same language as their victims. A human reader or an anti-virus (AV) scanner can use those flaws as indicators of a phishing attack. But with the help of AI tools like ChatGPT, attackers can easily generate fluent content in almost any language, free of the spelling and grammar errors that give traditional attacks away. Such attacks will be more convincing to victims, even careful ones, and harder for AV scanners to stop, because the flaws the scanners rely on will no longer exist.
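
To see why those flaws matter to a scanner, here is a minimal, purely illustrative sketch in Python of the kind of spelling-based heuristic described above; the vocabulary and the threshold are assumptions made for this example, not drawn from any real AV product. Fluent, AI-generated text would pass this sort of check untouched, which is exactly the problem.

```python
# A purely illustrative spelling-based heuristic of the kind a scanner
# might use; the vocabulary and threshold are assumptions for this
# example, not taken from any real product.
import re

# Tiny stand-in for a real reference dictionary.
REFERENCE_VOCAB = {
    "your", "account", "has", "been", "suspended", "please", "verify",
    "the", "to", "a", "and", "you", "click", "link", "below", "within",
    "hours", "or", "it", "will", "be", "closed",
}

def misspelling_ratio(text: str) -> float:
    """Fraction of words not found in the reference vocabulary."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    unknown = sum(1 for word in words if word not in REFERENCE_VOCAB)
    return unknown / len(words)

def looks_like_crude_phish(text: str, threshold: float = 0.3) -> bool:
    # Flag messages with many out-of-vocabulary words. Fluent,
    # LLM-generated phishing text sails under this check entirely,
    # which is the point being made above.
    return misspelling_ratio(text) > threshold
```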

The scalability of rogue AI-based attacks presents a further challenge. As early as 2010, human-operated phone scams targeting Microsoft Windows users were already being reported. With modern AI technologies, attackers could conceivably automate thousands of hardly distinguishable scam calls (aka robocalls) at industrial scale. The AI-based phishing attacks discussed above already pose great challenges to human beings and AV scanners, but scam calls automated by AI will be more dangerous still, as there is no efficient way to stop them before they reach their targets.

To Arms

So, what steps should AI developers take to prevent such a calamity?

In general, it will be hard for AI developers alone to prevent AI from being weaponized. To be sure, a number of best practices can help mitigate (if not prevent) the negative impacts of AI, including:

  • Implementing built-in safety measures, such as kill switches, to stop an AI system that begins causing harm
  • Conducting comprehensive testing and evaluation across a wide range of scenarios to ensure AI tools behave safely
  • Collaborating with policymakers and regulators to help shape regulation and promote ethical AI
  • Developing anti-AI technology to detect AI-generated content or activity (one such approach is sketched below)
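
On that last point, one widely discussed detection idea is statistical: text produced by a language model tends to look unusually predictable, i.e., to have low perplexity, when scored by a language model. Below is a minimal sketch of that idea in Python using the open GPT-2 model from the Hugging Face transformers library; the model choice and the threshold are assumptions made for illustration, not a production detector.

```python
# A minimal sketch of perplexity-based detection of AI-generated text.
# The model ("gpt2") and threshold are illustrative assumptions, not a
# production-grade detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average perplexity of the text under GPT-2."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(input_ids, labels=input_ids).loss
    return float(torch.exp(loss))

def maybe_ai_generated(text: str, threshold: float = 40.0) -> bool:
    # Low perplexity means the model finds the text highly predictable,
    # one weak signal of machine generation. Real detectors combine many
    # such signals; this single cutoff is for illustration only.
    return perplexity(text) < threshold
```

In practice, detectors of this kind are noisy, which is why real systems combine many signals rather than relying on a single perplexity cutoff.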

On the flip side, enterprises can also take measures to prepare for this potential threat. As they should. AI tools, which offer equal opportunity to good and evil alike, will greatly help attackers evolve their tactics, techniques, and procedures (TTPs), and enterprises will face more challenges from the ever-increasing volume and complexity of AI-powered attacks.

The guiding principle for improving security posture and defending against these threats remains largely what it has always been: optimize the people, process, and technology (PPT) framework:

  • People: The so-called human firewall is the most important layer of security protection in any organization. It is fundamentally important to provide security awareness training to employees, helping them distinguish legitimate activity from malicious activity. This may be the most effective defense against the social engineering attacks discussed earlier.
  • Process: Implementing appropriate processes and procedures in workflows, such as regular vulnerability assessments, penetration testing, incident response plans, and security audits, can greatly strengthen security posture.
  • Technology: Leveraging effective tools and techniques to prevent, detect, and respond to cyber threats is essential for securing businesses of any size. Enterprises should understand their security weaknesses and invest in the right tools and technologies to shore them up.

Put these measures in place sooner rather than later. Remote work will remain with us for quite some time, and businesses will increasingly rely on cloud services, a shift that opens a much larger attack surface than traditional on-premises environments. Enterprises should optimize their PPT framework accordingly to tackle new challenges from the ever-changing threat landscape, including the threats posed by rogue AI.

Final Thoughts on Rogue AI

For the record, I am a great believer in automation and AI. We are lucky to be living in an exciting age, witnessing revolutionary breakthroughs in AI. But as we've seen, all technologies offer opportunities for both good and evil, and AI is no different.

As we celebrate the achievements in AI, we should remain cautious and skeptical, just in case it does go rogue sometime in the future.

Jason Zhang, Ph.D.