Cisco Previews AI Defenses to Cloud Security Platform
Cisco is expanding its cloud security platform with new technology that will let developers detect and mitigate vulnerabilities in AI applications and their underlying models.
The new Cisco AI Defense offering, introduced Jan. 15, is also designed to prevent data leakage by employees who use services like ChatGPT, Anthropic and Copilot. The networking giant already offers AI Defense to early access customers and plans to release it for general availability in March.
AI Defense is integrated with Cisco Secure Access, the revamped Secure Service Edge (SSE) cloud security portfolio that Cisco launched last year. The software-as-a-service offering includes zero trust network access (ZTNA), VPN-as-a-Service, Secure Web Gateway, cloud access security broker (CASB), firewall-as-a-service and digital experience monitoring.
Administrators can view the AI Defense dashboard in the Cisco Cloud Control interface, which hosts all of Cisco's cloud security offerings.
Gaps in AI Capabilities
AI Defense is intended to help organizations who are concerned by the security risks associated with artificial intelligence but are under pressure to implement artificial intelligence into their business processes, Cisco chief product officer and executive VP Jeetu Patel said at the launch event.
"You need to have the right level of speed and velocity to keep innovating in this world, but you also need to make sure that you have safety," Patel said. "These are not tradeoffs that you want to have. You want to make sure that you have both."
According to Cisco's 2024 AI Readiness Survey, 71% of respondents didn't believe they were fully equipped to prevent unauthorized tampering of AI within their organizations. Further, 67% claimed to have a limited understanding of the threats specific to machine learning. Patel said AI Defense addresses these issues.
"Cisco AI Defense is a product which is a common substrate of safety and security that can be applied across any model, that can be applied across any agent, any application, in any cloud," he said.
Model Validation at Scale
Cisco AI Defense is primarily targeted at enterprise AppSecOps organizations. It allows developers to validate AI models before applications and agents are deployed into production.
Patel noted that the challenge with AI models is that they are constantly changing with new data added to them, which changes the behavior of the applications and agents. "So, if models are changing continuously, your validation process also has to be continuous," he said.
Seeking a way to offer the equivalent of red teaming, Cisco last year acquired Robust Intelligence, a startup founded in 2019 by Harvard researchers Yaron Singer and Kojin Oshiba, and the core component of AI Defense. The Robust Intelligence Platform uses algorithmic red teaming to scan for vulnerabilities while using a mechanism Robust Intelligence created called Tree of Attacks with Pruning, an AI-based method of using automation to systematically jailbreak large language models (LLMs).
According to Patel, Cisco AI Defense uses detection models from GenAI platform provider Scale AI and threat intelligence telemetry from Cisco's Talos and its recently acquired Splunk to continuously validate the models and automatically recommend a guardrail for them. Further, he noted that Cisco designed AI Defense to distribute those guardrails through the network fabric.
Pen Testing Models in 30 Seconds
"This essentially allows us to deliver a purpose-built model and data for going out and allowing us to validate if a model is going to work as per expectations or if it's going to surprise us," Patel said. According to Patel, it typically takes most organizations seven to 10 weeks to validate a model. "We can do it within 30 seconds because this is completely automated," he said.
Analysts believe Cisco is the first major player to launch technology that can address automated model verification at that scale. "I don't know anyone else who's done anything close to this," says Frank Dickson, group VP for IDC's security and trust research practice.
"Now, I've heard people doing what we might call an LLM firewall, but it's not as intricate and complex as this," Dickson says. "The ability to do this kind of automated pen testing in 30 seconds looks pretty slick."
Scott Crawford, research director for the 451 Research Information Security channel with S&P Global Market Intelligence, agrees, noting that various large vendors are approaching security for generative AI in different ways.
"But in Cisco's case, it made the first acquisition of a startup with this focus with its pickup of Robust Intelligence, which is at the heart of this initiative," Crawford says. "There are a range of other startups in this space, any of which could be an acquisition target in this emerging field, but this was the first such acquisition by a major enterprise IT vendor."
Crawford says addressing AI security will be a major concern this year, given the rise in attacks against vulnerable models. "We have already seen examples of LLM exploits, and experts have considered the ways in which it can be manipulated and attacked," he says.
Such incidents, often described as LLMjacking, are waged by exploiting vulnerabilities with prompt injections, supply chain attacks, and data and model poisoning.
One notable LLMjacking attack last year was discovered by the Sysdig Threat Research Team, which observed stolen cloud credentials targeting 10 ten cloud-hosted LLMs. In that incident, the attackers accessed credentials from a system running a vulnerable version of Laravel (CVE-2021-3129).
source: DarkReading
Free online web security scanner
Top News:
Bitbucket services “hard down” due to major worldwide outage
January 22, 2025SonicWall Urges Immediate Patch for Critical CVE-2025-23006 Flaw Amid Likely Exploitation
January 23, 2025Hackers exploit 16 zero-days on first day of Pwn2Own Automotive 2025
January 22, 20255,000+ SonicWall firewalls still open to attack (CVE-2024-53704)
January 27, 2025