In theory, the world’s population is more aware than ever of the risks of social engineering tactics used by cyber criminals. And yet in 2023, up to 98% of cyber attacks involved some form of social engineering – from impersonating someone the victim trusts, to concealing a malicious link in an email, to masquerading as a bank or government institution over the phone.
But how is it possible that even people who are aware of the prevalence of social engineering attacks can become victims – and why are these strategies on the rise?
Social engineering leverages human motivations
Social engineering attacks became more frequent and more expensive in 2023. As Amy Larsen DeCarlo (Principal Analyst at Enterprise Technology and Services, GlobalData) said in a report:
“Cybercriminals are exploiting the biggest vulnerability within any organisation: humans. As progress in artificial intelligence (AI) and analytics continues to advance, hackers will find more inventive and effective ways to capitalise on human weakness in areas of (mis)trust, the desire for expediency, and convenient rewards.”
And this is the crucial point: social engineering works because it taps into people’s emotions.
A phishing email with poor-quality writing and a malicious link, sent out to thousands of email addresses in the hope that one or two will stick, might not be particularly effective in 2024. More people know what to look out for, and know not to trust content from unknown sources.
But a phishing email that’s tailored to a target’s specific vulnerabilities or desires is far more likely to draw them in – and so is a social engineering attempt in which the victim genuinely believes the attacker is their colleague, friend, or relative. When it comes to decision-making, our emotions are hugely influential and can override knowledge or experience; so when an attack triggers an emotional response, we’re all capable of making unwise decisions.
To illustrate this, we spoke to the victim of a phishing phone call, whom we’ll call Thomas. Thomas is in his early 50s and works as a music teacher in England – he earns enough to live comfortably, but he doesn’t have any savings.
On the day of the attack, Thomas had been on the phone with the UK tax office, HMRC. They were asking him for contact details for his father, who owed the government a significant sum of money in unpaid taxes. Thomas hadn’t been in contact with his father for over a year and didn’t know where he was – and after that phone call, he felt worried that he would become liable for the debt if HMRC couldn’t locate his father.
Then, an hour later, the phone rang again. This time it was someone claiming to be from HMRC, demanding that Thomas pay £10,000 right then over the phone. If he didn’t pay, the threat actor told him, the police would arrive at his house that afternoon to arrest him.
Feeling panicked, and trusting in the legitimacy of the second call because he had been on the phone with a genuine HMRC officer a short time earlier, Thomas tried to pay. He gave his credit card details, but the payment was declined. He had a total of £1,200 in his current account, which he paid to the threat actor using his debit card. The threat actor then ended the call – and Thomas, convinced that he would now be arrested because he hadn’t been able to pay the demanded amount, put on a shirt and tie, said goodbye to his dog, and called his wife to let her know the police were coming for him.
It wasn’t until his wife got home that they realised the call hadn’t been genuine: “He was completely distressed, completely convinced he was going to prison, and all our money for the month was gone from our account,” she said.
Why did it work? Because Thomas was specifically vulnerable to an attacker posing as HMRC. Whether the attacker knew that in this case, Thomas can’t say – but it shows how emotionally powerful targeted attacks can be.
And AI is making social engineering easier, faster, and cheaper for cyber criminals
With advances in AI technology, targeted social engineering attacks are becoming easier to execute, at a larger scale and at lower cost to criminal groups.
AI can:
Generate plausible information and target profiles to help attackers understand their potential victims’ vulnerabilities and motivations.
Launch automated mass phishing attacks with email or audio messages that appear to be tailored to an individual target’s vulnerabilities, preferences, social network, or interests – increasing the chances that a large-scale attack will successfully deceive victims.
Generate deepfake video and audio that appears to come from individuals the target trusts, which attackers can use to emotionally manipulate and deceive them.
Identify key personnel within an organisation who hold particular privileges (for example, access to sensitive information), enabling more credible – and harder to detect – social engineering attacks.
AI models and data processing capabilities are giving threat actors the capacity to develop more complex, personalised, and efficient social engineering strategies. And that means that in spite of growing awareness, social engineering attacks will continue to rise in 2024 – because they work.
P.S. Mark your calendars for the return of Black Hat MEA in November 2024. Want to be a part of the action? Register now!