The AI race: Dark AI is in the lead, but good AI is catching up
It’s no secret that cyberattackers have significantly upped their game by using artificial intelligence to write phishing emails that look more legitimate and contain fewer errors. But that’s not all they’re doing with AI.
Google recently revealed that state-sponsored advanced persistent threat (APT) groups are using the company’s Gemini AI assistant to get coding help for developing tools and scripts, research publicly disclosed vulnerabilities, look up explanations of technologies, gather details on target organizations, and find ways to move deeper into compromised networks.
Cyberattackers have the lead in using AI effectively … for now
Of course, cyberattackers are still using generative AI to write better phishing emails more quickly. That could explain why the Acronis Threat Research Unit (TRU) found that the number of email-based attacks rose by almost 200% from the second half of 2023 to the second half of 2024. The most common attack vector? Phishing, which accounts for three out of four attacks.
Combine those attacks with threats based on deepfakes and even the potential poisoning of AI models, and it’s fair to say that generative AI has thus far been a mixed bag at best from a cybersecurity standpoint.
Whatever the positive effects of generative AI might be, the downsides of the rapidly evolving technology are also significant. In fact, cybercriminals have an advantage over many cybersecurity providers in the efficient and effective use of AI. But not for long.
Cybersecurity vendors are using AI to strike back
The cybersecurity community is rushing to catch up with malware gangs and other threat actors, and it is succeeding. Solution developers have used AI for almost a decade for tasks such as detecting previously unseen variants of malware samples. More recently, with the advent of generative AI, cybersecurity vendors have found new ways to use AI to battle AI.
One developing technology is AI-powered chatbots linked to cybersecurity applications such as endpoint detection and response (EDR). These evolving bots can provide concise, easily understandable explanations of security incidents that are free of technical jargon and other barriers to understanding.
With a clear picture of what caused an incident and what kind of impact the incident had, users who aren’t IT experts can better understand how to avoid an attempted breach in the future and take steps to make their systems safer.
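To make that concrete, here is a minimal sketch of how such an assistant could be wired up, assuming a generic LLM chat-completion API (OpenAI’s Python client is used purely as an illustration) and a hypothetical incident record; it is not any vendor’s actual integration.

```python
# Minimal sketch: summarize a structured EDR incident in plain language.
# Assumes the OpenAI Python client as a stand-in for any LLM chat API;
# the incident fields and prompt below are illustrative, not a vendor API.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

incident = {
    "id": "INC-4821",  # hypothetical incident record
    "detection": "ransomware_behavior",
    "host": "finance-ws-07",
    "process_chain": ["outlook.exe", "winword.exe", "powershell.exe"],
    "actions_taken": ["process_killed", "host_isolated"],
}

prompt = (
    "Explain the following security incident to a non-technical manager "
    "in three short sentences: what happened, what the impact was, and "
    "what they should do next. Avoid jargon.\n\n" + json.dumps(incident)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```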
And there’s more. Vendors are building capabilities into their solutions that assist with critical tasks such as threat hunting and remediation. Analysts can use that information to make better-informed decisions more rapidly and to devise effective strategies for mitigating and remediating security incidents.
AI can help automate security and management processes by performing a semantic search of past support tickets whose descriptions are similar to a new ticket. An AI assistant can then recommend resolutions based on tickets that have already been resolved.
AI can also group similar issues automatically and perform root cause analysis, helping vendors identify what drives time-consuming issues and recommend a fix.
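As a rough illustration of that ticket-matching idea, the sketch below uses the open-source sentence-transformers library as a stand-in for a production embedding model and ticket store; the ticket texts and resolutions are invented for the example.

```python
# Sketch: semantic search over resolved tickets to suggest a fix for a new one.
# Assumes the open-source sentence-transformers library; ticket texts and
# resolutions below are made up for illustration.
from sentence_transformers import SentenceTransformer, util

resolved_tickets = [
    ("Outlook keeps asking for password after update", "Re-created the mail profile"),
    ("VPN disconnects every few minutes on Wi-Fi", "Updated the VPN client"),
    ("Disk almost full on file server FS01", "Enabled log rotation and archived old shares"),
]

new_ticket = "User reports Outlook repeatedly prompting for credentials since yesterday"

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode([t[0] for t in resolved_tickets], convert_to_tensor=True)
query_emb = model.encode(new_ticket, convert_to_tensor=True)

# Rank past tickets by cosine similarity and surface the closest resolution.
scores = util.cos_sim(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(f"Most similar ticket: {resolved_tickets[best][0]}")
print(f"Suggested resolution: {resolved_tickets[best][1]} (score={scores[best].item():.2f})")
```

The same embeddings could feed a clustering step to group recurring issues, which is how the root cause analysis described above would typically be bootstrapped.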
AI-based script generation could level the cybersecurity playing field
And then there is AI-based script generation, an evolving capability that promises to reduce the need for manual input and for hard-to-find skilled engineers. It also has the potential to minimize the chance of human error and to accelerate the script development process.
Organizations will be able to use AI-based scripting to manage users and systems, automate software installation, and perform remediation steps.
AI can also enable organizations to standardize configuration across thousands of client workloads and automate security configuration. The result is cybersecurity capabilities that require less development time and less expensive expertise to put into action.
AI-based script generation is particularly useful because it has the potential to deliver powerful capabilities to users of all technical skill levels. As scripting tools evolve, even relative novices will be able to input requirements and receive ready-to-use scripts.
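A hedged sketch of that workflow might look like the following, again using a general-purpose LLM API (OpenAI’s client, as an assumption) with an illustrative prompt; the generated draft is saved for human review rather than executed automatically.

```python
# Sketch: turn a plain-language requirement into a draft remediation script.
# Assumes the OpenAI Python client as an example LLM API; the prompt, model
# name, and review step are illustrative, not a specific product's flow.
from openai import OpenAI

client = OpenAI()

requirement = (
    "Write a PowerShell script that disables SMBv1 on a Windows server "
    "and logs the change to C:\\Logs\\hardening.log."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You generate Windows administration "
                                      "scripts. Return only the script, with comments."},
        {"role": "user", "content": requirement},
    ],
)

draft = response.choices[0].message.content

# Keep a human in the loop: save the draft for review, never auto-execute it.
with open("disable_smbv1.ps1", "w") as f:
    f.write(draft)
print("Draft script written to disable_smbv1.ps1 for review before deployment.")
```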
AI can also enhance preexisting scripts with additional instructions and inline comments that improve readability. Integration with EDR can facilitate the instant creation of incident remediation scripts in response to security threats.
The AI cybersecurity battle will continue
The advent of generative AI has introduced new elements of risk to all users of technology. Even AI platforms themselves have come under attack.
But with rapid innovation, response to user feedback and a keen understanding of how AI can serve as a tool for protection, security vendors will continue seeking to gain the upper hand on cyberattackers who rely on AI for their criminal activities.
About TRU
The Acronis Threat Research Unit (TRU) is a team of cybersecurity experts specializing in threat intelligence, AI and risk management. The TRU team researches emerging threats, provides security insights, and supports IT teams with guidelines, incident response and educational workshops.
Sponsored and written by Acronis.