The AI Hangover is Here – The End of the Beginning

After a good year of sustained exuberance, the hangover is finally here. It's a gentle one (for now), as the market corrects the share prices of the major players (Nvidia, Microsoft, and Google among them), while others reassess the market and adjust their priorities. Gartner calls this the trough of disillusionment: interest wanes as implementations fail to deliver the promised breakthroughs, producers of the technology shake out or fail, and investment continues only if the surviving providers improve their products to the satisfaction of early adopters.

Let's be clear: this was always going to be the case. The post-human revolution promised by the AI cheerleaders was never a realistic goal, and the incredible excitement triggered by the early LLMs was not based on market success.

AI is here to stay

What's next for AI, then? If it follows the Gartner hype cycle, the deep crash is followed by the slope of enlightenment, where the maturing technology regains its footing, benefits crystallize, and vendors bring second- and third-generation products to market. If all goes well, this leads to the hallowed plateau of productivity, where mainstream adoption takes off, driven by the technology's broad market appeal. Gartner cautions that there are a couple of big ifs: not every technology is destined to recover after the crash, and recovery depends on the product finding its market fit quickly enough.

Right now, it looks almost certain that AI is here to stay. Apple and Google are bringing consumer products to market that repackage the technology into smaller, digestible, easy-to-use chunks (photo editing, text editing, advanced search). While the quality is still very uneven, it appears that at least some players have found a way to productize generative AI meaningfully – both for consumers and for their own bottom line.

What did the LLM ever do for us?

OK, where does this leave enterprise customers – and cybersecurity applications in particular? The fact is that generative AI still has significant drawbacks that hinder its adoption at scale. One of these is its fundamentally non-deterministic nature. Since the technology is built on probabilistic models (a feature, not a bug!), there will always be variance in the output. This can unsettle industry veterans who expect deterministic, old-school software behavior. It also means that generative AI will not be a drop-in replacement for existing tools; rather, it enhances and augments them. Still, it has the potential to serve as one layer of a multi-layered defense – one that's difficult for attackers to predict as well.
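
To make that concrete, here's a toy sketch of the temperature-based sampling at the heart of every LLM. The tokens and logit values are made up for illustration, and the function stands in for what real models do internally rather than any actual API:

```python
import math
import random

def sample_token(logits, temperature=0.8):
    """Sample one token index from raw logits, as an LLM does at each step."""
    scaled = [l / temperature for l in logits]   # higher temp = flatter distribution
    m = max(scaled)
    weights = [math.exp(l - m) for l in scaled]  # softmax numerators (numerically stable)
    return random.choices(range(len(logits)), weights=weights)[0]

# Made-up next-token candidates for the prompt "The firewall blocked the ..."
tokens = ["attack", "packet", "request", "banana"]
logits = [3.1, 2.8, 2.5, -1.0]

# The same "query", five times: the outputs differ from run to run, by design.
for _ in range(5):
    print(tokens[sample_token(logits)])
```

Lowering the temperature toward zero makes the output more repeatable, but it never turns the model into a deterministic, rules-based engine.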

The other drawback causing adoption friction is cost. The models are very costly to train, and that cost is currently being passed on to their consumers. Consequently, there is a lot of focus on bringing down the per-query cost. Hardware advancements, coupled with breakthrough results in refining the models, promise significant decreases in the energy needed to run them, and there's a reasonable expectation that, at least for text-based output, running these models will turn into a profitable business.

Cheaper and more accurate models are great, but there is also a growing realization that integrating these models into organizational workflows will be a significant challenge. As a society, we don't yet have the experience to know how to efficiently integrate AI technologies into day-to-day work practices. There is also the question of how the existing human workforce will accept and work with the new technologies. For example, we have seen cases where human workers and customers prefer to interact with a model that favors explainability over accuracy. A March 2024 study by Harvard Medical School found that the effect of AI assistance was inconsistent across a test sample of radiologists: some radiologists' performance improved with AI, while others' worsened. The recommendation is that while AI tools should be introduced to clinical practice, a nuanced, personalized, and carefully calibrated approach must be taken to ensure optimal results for patients.

What about the market fit we mentioned earlier? While generative AI will (probably) never replace a programmer (no matter what some companies claim), AI-assisted code generation has become a useful prototyping tool in a variety of scenarios. This is already useful to cybersecurity specialists: generated code or configuration is a reasonable starting point for building something out quickly before refining it.

The huge caveat: the existing technology can speed up the work of a seasoned professional, who can quickly debug and fix the generated text (code or configuration). But it can be potentially disastrous for a user who is not a veteran of the field: there's always a chance that unsafe configuration or insecure code is generated, and if it makes its way to production, it will weaken the organization's security posture. So, like any other tool, it can be useful if you know what you're doing, and can lead to negative outcomes if you don't.
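
As a hypothetical illustration of that failure mode (the scenario and function names are invented for this example), here is the kind of "working" snippet an assistant might plausibly produce, next to the version a seasoned reviewer would actually ship:

```python
import subprocess

# The kind of snippet an assistant might generate: it "works" in a demo,
# but interpolating user input into a shell command invites injection
# (imagine hostname = "example.com; rm -rf /").
def ping_host_unsafe(hostname: str) -> str:
    return subprocess.run(f"ping -c 1 {hostname}", shell=True,
                          capture_output=True, text=True).stdout

# The fix a seasoned reviewer would apply: no shell, arguments passed as a
# list, so the hostname is treated as data rather than executable syntax.
def ping_host(hostname: str) -> str:
    return subprocess.run(["ping", "-c", "1", hostname],
                          capture_output=True, text=True).stdout
```

Both versions pass a casual test; only the second survives a hostile input.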

Here we need to warn about one special characteristic of the current generation of generative AI tools: they sound deceptively confident when presenting their results. Even if the text is blatantly wrong, every current tool offers it in a self-assured manner that easily misleads novice users. So, keep in mind: the computer is lying about how sure it is, and sometimes it's very wrong.

Another effective use case is customer support, more specifically level 1 support: the ability to help customers who don't bother reading the manual or the posted FAQs. A modern chatbot can reasonably answer simple questions and route more advanced queries to higher levels of support. While this is not exactly ideal from a customer-experience standpoint, the cost savings (especially for very large organizations with many untrained users) could be meaningful.
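
A deliberately simplified sketch of that routing logic follows. The keyword matching stands in for the intent classification a real LLM-backed bot would perform, and the FAQ entries and escalation helper are invented:

```python
# Invented FAQ entries; a real deployment would query a knowledge base.
FAQ = {
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
    "update billing": "Change billing details under Account > Billing.",
}

def escalate_to_level_2(query: str) -> str:
    # Placeholder for ticket creation and human hand-off.
    return f"Escalated to a support agent: {query!r}"

def handle_query(query: str) -> str:
    """Answer simple questions from the FAQ; route everything else upward."""
    q = query.lower()
    for topic, answer in FAQ.items():
        if topic in q:
            return answer
    return escalate_to_level_2(query)

print(handle_query("How do I reset password?"))               # answered by the bot
print(handle_query("My SSO integration throws SAML errors"))  # escalated to a human
```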

The uncertainty around how AI will integrate into businesses is a boon for the management consulting industry. For example, Boston Consulting Group now earns 20% of its revenue from AI-related projects, while McKinsey expects 40% of its revenue to come from AI projects this year. Other consultancies, like IBM and Accenture, are also on board. The business projects are quite varied: making it easy to translate ads from one language to another, enhanced search for procurement when evaluating suppliers, and hardened customer-service chatbots that avoid hallucination and include references to sources to enhance trustworthiness. Although only 200 of every 5,000 customer queries at ING currently go through the chatbot, this share can be expected to increase as the quality of the responses improves. Analogous to the evolution of internet search, one can imagine a tipping point where it becomes a knee-jerk reaction to "ask the bot" rather than grub about in the data mire oneself.

AI Governance must address cybersecurity concerns

Independent of specific use cases, the new AI tools bring a whole new set of cybersecurity headaches. Like RPA tools before them, customer-facing chatbots need machine identities with appropriate, sometimes privileged, access to corporate systems. For example, a chatbot might need to identify the customer and pull records from the CRM system – which should immediately raise alarms for IAM veterans. Setting accurate access controls around this experimental technology will be a key aspect of any implementation.
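
Very roughly, such a control could look like the sketch below. Every name in it (the session fields, the scope string, the crm_fetch callable) is illustrative rather than any real product's API; the point is the three checks that keep the bot's machine identity scoped to a single verified customer:

```python
# All names here (session fields, scopes, crm_fetch) are illustrative only.
REQUIRED_SCOPE = "crm:read:own_customer"  # note: read-only, no write scopes

def chatbot_crm_lookup(session: dict, record_id: str, crm_fetch):
    # 1. The bot may only act on behalf of an authenticated customer.
    if not session.get("customer_verified"):
        raise PermissionError("customer identity not verified")
    # 2. The bot's machine identity is checked for the narrow scope it needs.
    if REQUIRED_SCOPE not in session.get("scopes", set()):
        raise PermissionError("machine identity lacks required scope")
    record = crm_fetch(record_id)
    # 3. The record must belong to the customer in this session; the bot
    #    never gets a tenant-wide view, which limits the blast radius.
    if record["customer_id"] != session["customer_id"]:
        raise PermissionError("record does not belong to this customer")
    return record
```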

The same is true for code-generation tools used in Dev or DevOps processes: setting the correct access to the code repository will limit the blast radius if something goes wrong, and it reduces the impact of a potential breach should the AI tool itself become a cybersecurity liability.

And of course, there's always the third-party risk: by bringing in such a powerful but little-understood tool, organizations are opening themselves up to adversaries probing the limits of LLM technology. The relative lack of maturity here could be problematic: we don't yet have best practices for hardening LLMs, so we need to make sure they don't have write privileges in sensitive places.
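
One pragmatic interim control is a hard allow-list between the model and anything it can touch. A minimal sketch, assuming a tool-calling style of integration (the tool names and dispatcher are invented for illustration):

```python
# Read-only tools the model is allowed to invoke; everything else is refused.
READ_ONLY_TOOLS = {
    "search_docs": lambda q: f"(search results for {q!r})",
    "get_ticket_status": lambda tid: f"(status of ticket {tid})",
}

def dispatch_tool_call(name: str, argument: str) -> str:
    """Run a model-requested tool only if it is on the read-only allow-list."""
    tool = READ_ONLY_TOOLS.get(name)
    if tool is None:
        # Writes, deletes, and anything unknown fail closed by default,
        # however persuasively the model was coaxed into requesting them.
        return f"refused: {name!r} is not an approved read-only tool"
    return tool(argument)

print(dispatch_tool_call("search_docs", "password rotation policy"))
print(dispatch_tool_call("delete_user", "jsmith"))  # refused
```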

The opportunities for AI in IAM

At this point, use cases and opportunities for AI in access control and IAM are taking shape and being delivered to customers in products. Traditional areas of classical ML, like role mining and entitlement recommendations, are being revisited in light of modern methods, with role creation and evolution more tightly woven into out-of-the-box governance workflows and UIs. More recent AI-inspired innovations, such as peer group analysis, decision recommendations, and behavior-driven governance, are becoming par for the course in the world of Identity Governance. Customers now expect enforcement-point technologies, like SSO Access Management systems and Privileged Account Management systems, to offer AI-powered anomaly and threat detection based on user behavior and sessions.
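
The core idea behind peer group analysis is simple enough to sketch in a few lines. The data below is entirely made up, and real products use far richer statistics, but the principle holds: flag entitlements that few of a user's departmental peers hold:

```python
from collections import Counter

# Entirely made-up entitlement data: user -> (department, entitlements).
users = {
    "alice": ("finance", {"erp_read", "erp_post", "expenses"}),
    "bob":   ("finance", {"erp_read", "expenses"}),
    "carol": ("finance", {"erp_read", "expenses", "prod_db_admin"}),
    "dave":  ("finance", {"erp_read", "erp_post", "expenses"}),
}

def peer_group_outliers(users, threshold=0.5):
    """Flag entitlements held by fewer than `threshold` of a user's peers."""
    findings = []
    for user, (dept, ents) in users.items():
        peers = [e for u, (d, e) in users.items() if d == dept and u != user]
        counts = Counter(ent for peer_ents in peers for ent in peer_ents)
        for ent in ents:
            share = counts[ent] / len(peers) if peers else 0.0
            if share < threshold:
                findings.append((user, ent, share))
    return findings

for user, ent, share in peer_group_outliers(users):
    print(f"review: {user} holds '{ent}' (held by {share:.0%} of peers)")
```

Here, carol's lone "prod_db_admin" entitlement is flagged immediately; a real product would tune the threshold and feed such findings into a review workflow rather than act on them directly.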

Natural language interfaces are beginning to greatly improve UX across all these categories of IAM solutions by allowing interactive natural language exchanges with the system. We still need static reports and dashboards, but the ability for people with different responsibilities and needs to express themselves in natural language and refine the search results interactively lowers the skills and training needed for organizations to realize value from these systems.

This is the end of the beginning

One thing is certain: whatever the state of AI technology in mid-2024, it's not going to be the end of this field. Generative AI and LLMs are just one sub-field of AI, with multiple other AI-related fields making rapid progress thanks to advances in hardware and generous government and private research funding.

Whatever shape mature, enterprise-ready AI takes, security veterans already need to consider the potential benefits generative AI can bring to their defensive posture, what these tools can do to punch holes through existing defenses, and how to contain the blast radius if an experiment goes wrong.

