
Top 5 AI-Powered Social Engineering Attacks


Social engineering has long been an effective tactic because it targets human vulnerabilities. There's no brute-force 'spray and pray' password guessing. No scouring systems for unpatched software. Instead, it simply relies on manipulating emotions such as trust, fear, and respect for authority, usually with the goal of gaining access to sensitive information or protected systems.

Traditionally that meant researching and manually engaging individual targets, which took up time and resources. However, the advent of AI has now made it possible to launch social engineering attacks in different ways, at scale, and often without psychological expertise. This article will cover five ways that AI is powering a new wave of social engineering attacks.

The audio deepfake that may have influenced Slovakia's elections

Ahead of Slovakian parliamentary elections in 2023, a recording emerged that appeared to feature candidate Michal Simecka in conversation with a well-known journalist, Monika Todova. The two-minute piece of audio included discussions of buying votes and increasing beer prices.

After spreading online, the conversation was revealed to be fake, with words spoken by an AI that had been trained on the speakers' voices.

However, the deepfake was released just a few days before the election, leaving little time to debunk it. This led many to wonder whether it had influenced the outcome and contributed to Michal Simecka's Progressive Slovakia party finishing second.

The $25 million video call that wasn't

In February 2024, reports emerged of an AI-powered social engineering attack on a finance worker at the multinational Arup. The worker had joined an online meeting with people they believed to be the company's CFO and other colleagues.

During the video call, the finance worker was asked to make a $25 million transfer. Believing the request came from the actual CFO, the worker followed instructions and completed the transaction.

The worker had reportedly received the meeting invite by email and initially suspected a phishing attack. However, seeing what appeared to be the CFO and colleagues on screen restored their trust.

The only problem: the worker was the only genuine person present. Every other attendee had been digitally created using deepfake technology, and the money went straight to the fraudsters' account.

Mother's $1 million ransom demand for daughter

Plenty of us have received random SMSs that start with a variation of 'Hi mom/dad, this is my new number. Can you transfer some money to my new account please?' When received in text form, it's easier to take a step back and think, 'Is this message real?' However, what if you get a call and you hear the person and recognize their voice? And what if it sounds like they've been kidnapped?

That's what happened to a mother who testified before the US Senate in 2023 about the risks of AI-generated crime. She'd received a call that sounded like it came from her 15-year-old daughter. After answering, she heard the words, 'Mom, these bad men have me', followed by a male voice making a series of terrible threats unless a $1 million ransom was paid.

Overwhelmed by panic, shock, and urgency, the mother believed what she was hearing, until it turned out that the call was made using an AI-cloned voice.

Fake Facebook chatbot that harvests usernames and passwords

Facebook says: 'If you get a suspicious email or message claiming to be from Facebook, don't click any links or attachments.' Yet social engineering attackers still get results using this tactic.

They may play on people's fears of losing access to their account, asking them to click a malicious link and appeal a fake ban. They may send a link with the question 'is this you in this video?', triggering a natural sense of curiosity, concern, and desire to click.

Attackers are now adding another layer to this type of social engineering attack in the form of AI-powered chatbots. Users get an email that pretends to be from Facebook, threatening to close their account. After clicking the 'appeal here' button, a chatbot opens and asks for username and password details. The support window is Facebook-branded, and the live interaction comes with a request to 'Act now', adding urgency to the attack.
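The credential-harvesting flow above hinges on a link whose domain merely resembles the genuine one. As a minimal illustration of the kind of check a mail filter or a cautious user could apply (the `TRUSTED_DOMAINS` allowlist and the example URLs are hypothetical, and this is a sketch rather than a complete defense), a link can be flagged when its hostname is neither the trusted domain nor a subdomain of it:

```python
from urllib.parse import urlparse

# Domains actually trusted for this service; everything else is suspect.
# (Illustrative allowlist -- a real deployment would use a vetted list.)
TRUSTED_DOMAINS = {"facebook.com"}

def is_suspicious_link(url: str) -> bool:
    """Flag links whose host is not a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(
        host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
    )

# A lookalike domain of the kind used in credential harvesting is flagged,
# while the genuine domain passes.
print(is_suspicious_link("https://facebook-appeals-support.com/appeal"))  # True
print(is_suspicious_link("https://www.facebook.com/help"))                # False
```

Note the `endswith("." + d)` comparison: matching on the bare substring `facebook.com` would wrongly accept lookalikes such as `evilfacebook.com`.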

'Put down your weapons' says deepfake President Zelensky

As the saying goes: The first casualty of war is the truth. It's just that with AI, the truth can now be digitally remade too. In 2022, a faked video appeared to show President Zelensky urging Ukrainians to surrender and stop fighting in the war against Russia. The recording went out on Ukraine24, a television station that was hacked, and was then shared online.

A still from the President Zelensky deepfake video, with differences in face and neck skin tone

Many media reports highlighted that the video contained too many errors to be widely believed. These included the President's head appearing too big for his body and sitting at an unnatural angle.

While we're still in relatively early days for AI in social engineering, these types of videos are often enough to at least make people stop and think, 'What if this was true?' Sometimes adding an element of doubt to an opponent's authenticity is all that's needed to win.

AI takes social engineering to the next level: How to respond

The big challenge for organizations is that social engineering attacks target the emotions and instincts that make us human. After all, we're used to trusting our eyes and ears, and we want to believe what we're being told. These are natural instincts that can't simply be deactivated, downgraded, or placed behind a firewall.

Add in the rise of AI, and it's clear these attacks will continue to emerge, evolve, and expand in volume, variety, and velocity.

That's why we need to educate employees to control and manage their reactions when they receive an unusual or unexpected request: encouraging people to stop and think before completing what they're being asked to do, and showing them what an AI-based social engineering attack looks like and, most importantly, feels like in practice. That way, no matter how fast AI develops, the workforce can become the first line of defense.

Here's a 3-point action plan you can use to get started:

  1. Talk about these cases to your employees and colleagues and train them specifically against deepfake threats – to raise their awareness, and explore how they would (and should) respond.
  2. Set up some social engineering simulations for your employees – so they can experience common emotional manipulation techniques, and recognize their natural instincts to respond, just like in a real attack.
  3. Review your organizational defenses, account permissions, and role privileges – to understand a potential threat actor's movements if they were to gain initial access.
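Point 2 above can be prototyped with nothing more than the standard library. The sketch below (the sender address, subject line, and tracking URL are all hypothetical, and a real program would go through the organization's training platform) builds a benign simulation email that mimics the urgency tactics described earlier, tagged with a header so mail filters and reviewers can recognize the exercise:

```python
from email.message import EmailMessage

def build_simulation_email(recipient: str, tracking_url: str) -> EmailMessage:
    """Build a benign phishing-simulation email that mimics the urgency
    tactics described above. The link points at the training platform,
    which records clicks instead of harvesting anything."""
    msg = EmailMessage()
    msg["From"] = "it-support@example.com"   # hypothetical internal sender
    msg["To"] = recipient
    msg["Subject"] = "Action required: your account will be closed in 24 hours"
    msg.set_content(
        "We detected unusual activity on your account.\n"
        f"Appeal here within 24 hours to keep access: {tracking_url}\n"
    )
    # This header lets mail filters and reviewers identify the exercise.
    msg["X-Phish-Simulation"] = "true"
    return msg

msg = build_simulation_email("employee@example.com",
                             "https://training.example.com/click?id=123")
print(msg["Subject"])
```

Who clicked, and how quickly, then becomes the input for the awareness conversations described in point 1.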
