logo

AI security 2024: Key insights for staying ahead of threats

In this Help Net Security interview, Kojin Oshiba, co-founder of Robust Intelligence, discusses his journey from academic research to addressing AI security challenges in the industry.

Oshiba highlights vulnerabilities in technology systems and the proactive measures needed to mitigate risks, emphasizing robust testing and innovative solutions. He also provides insights into the current state and future security challenges, along with the evolving regulatory landscape.

AI systems security

What motivated you to specialize in the security aspects of AI systems?

While at Harvard, I learned that AI systems are both vulnerable and extremely sensitive to small changes in data. My research proved that even minute changes can have huge impacts on the output of AI models, but the industry treated AI as a “magic black box” that could do anything at the push of a button. I was faced with a choice: continue to write papers and tackle this problem as a researcher, or follow the compulsion to change the way organizations build and run AI models. I chose to pursue action.

I co-founded Robust Intelligence with Harvard professor Yaron Singer in 2019 to break the status quo by proactively mitigating AI risk. Central to our solution are innovations that we created to validate models—algorithmic AI red teaming to automatically identify vulnerabilities and an AI firewall that prevents models from responding with unsafe outputs in real time. This provides organizations the confidence to scale models in production across a variety of use cases.

How would you describe the current state of AI security?

AI development is outpacing security at a considerable rate. In many ways, it closely resembles the early years of cybersecurity. As AI adoption accelerates, so does the frequency and sophistication of attacks on AI systems. This new attack surface is a gap for most companies, as traditional security methods are not designed to sufficiently protect AI systems. The emergence of generative AI has added considerable risk. These are almost exclusively third-party models used to power end-user applications, so supply chain and runtime protection are critical.

Many companies are blocking the release of GenAI-powered applications due to security concerns. They recognize the unmanaged risks but haven’t taken the steps to protect AI systems. For many companies, leadership is directing teams to innovate using AI.

Can you discuss the importance of robust testing and validation in AI security?

AI security best practices share many principles with traditional cybersecurity. It’s essential to scan open-source software before use. For AI, this translates to scanning the files of open-source models in public repositories like Huggingface and Pytorch for malicious components such as pickle file exploits that can execute actions.

Vulnerability testing is the next critical step. This includes the “static” testing of models for binaries, datasets, and models to identify vulnerabilities like backdoors or poisoned data. It also includes “dynamic” testing to evaluate how a vulnerability a model responds across various scenarios. Algorithmic red-teaming can simulate a diverse and extensive set of adversarial techniques without requiring manual testing, exposing things like the model’s propensity to prompt injection attacks.

The vulnerabilities identified in this initial process may determine if a company chooses to build an application on the model. Testing should continue periodically throughout the life of the model, as even the mere act of fine tuning LLMs on a benign data set have been shown to break internal model alignment and introduce new vulnerabilities.

Lastly, companies need to validate the inputs and outputs of AI-powered applications that are running in production. Similar to a WAF, this requires an AI firewall (or guardrail) that detects safety and security threats. In order to maximize effectiveness, this should be a model-agnostic solution that is informed by the latest threat intelligence and AI security research. This will protect companies from attack, as well as prevent undesired responses that may reveal sensitive or toxic information.

What are your thoughts on the current regulatory landscape for AI?

The global AI regulatory landscape is evolving so rapidly that it can be hard to keep up with the latest standards and legislation. Many of the principles included across these global initiatives share common elements, but are tailored to the concerns of each region. In lieu of cohesive regulation, various standards bodies have issued guidelines and frameworks on AI security, including NIST, MITRE, OWASP, US AI Safety Institute, and UK AI Safety Institute. Robust Intelligence has contributed to the development of these standards. They serve as a guidepost for companies that are proactively adopting AI security best practices.

While there has been a flurry of proposed AI safety and security bills, only a few have been voted into law, most namely the EU AI Act. A handful of laws have been enacted at the state level, like in New York and Colorado for example.

All in all, this legislation does not yet comprehensively protect against existing and emerging AI risk. President Biden’s AI Executive Order was among the more impactful attempts to date to meaningfully protect people and society against AI risk. While limited to the federal government, the AI safety and security requirements are specific. The order also expresses President Biden’s desire to work with congress on passing legislation. It’s very possible that we’ll see Congress vote on such bills in the near future.

How do you anticipate AI security challenges will evolve in 5-10 years?

As companies develop increasingly usefully AI applications, the number of connected systems will undoubtedly grow. In turn, this will draw more attention from bad actors looking to exploit these systems. One promising application of AI is autonomous agents. While nascent, AI agents show great promise. These systems enable AI to take automated actions on behalf of users, such as planning a travel itinerary and booking your flights. AI security will need to evolve to identify and mitigate novel AI attacks that will target connected systems.


Free security scan for your website