When we interviewed Suresh Sankaran Srinivasan (Group Head of Cyber Defence at Axiata) for the blog, he mentioned that deepfake tech is giving threat actors new scope to compromise victims.
Instead of writing a phishing email and hoping the victim falls for it, for example, a cyber criminal can use AI-powered face swapping technology to create a video that looks and sounds like a friend of the victim.
That’s exactly how a perpetrator in China earlier this year convinced a man to make a bank transfer of 4.3 million yuan (around USD 622,000).
The big problem here is that deepfakes can be really hard to spot – even for people who know about them. And for those who’ve never heard of deepfakes at all, it’s unrealistic to expect them to notice if a fake video or audio recording is sent their way.
As Suresh said, “Deepfake technology poses significant risks in various sectors, including politics, finance, and social engineering attacks, as it becomes increasingly sophisticated and difficult to detect.”
Deepfake stats 📊
- Globally, 71% of people surveyed by iProov said they do not know what a deepfake is
- 29% said they do know what a deepfake is – more than double the figure from the previous survey in 2019, but still not enough
- 57% of all global respondents said they think they’d be able to spot a deepfake video, while 43% said they wouldn’t
- Meanwhile, DeepMedia estimates that around 500,000 video and voice deepfakes will be shared on social media in 2023
The many nefarious uses of deepfake tech
It goes without saying that technology which can make a video show someone saying or doing something they never said or did has huge potential for harm.
🛑 Enable fraud: As deepfakes become more sophisticated and harder to spot, we’re likely to see more and more cases of fraud, with cyber criminals using the technology to clone individuals and persuade victims to hand over money or data in the belief that they’re giving it to someone they know and trust.
🛑 Influence the stock market: In May, a fake image of an explosion near the Pentagon went viral on Twitter. The explosion never happened – but it sent brief shockwaves through the US stock market, and the S&P 500 dropped by 0.3%. This was just a momentary hint of what’s possible – a deepfake of a CEO announcing major business restructuring, for example, could rapidly change that company’s stock price.
🛑 Artificially alter the reputations of individuals, brands, and entire organizations: Perpetrators could create a deepfake that made a presidential candidate appear to be having a psychotic episode or confessing to a crime. Deepfakes could tell lies about how employees are treated by company bosses, or frame innocent citizens for complex crimes. And misinformation like this – even when it’s been proven to be false – can hang around on the internet for years.
Will it take another disruptive technology to fight deepfakes?
With deepfakes increasingly widespread, and awareness still very low, it’s very difficult to build a system that can identify deepfakes and mitigate the risks they pose.
But when a disruptive technology threatens security, it might just take another disruptive technology to solve the problem.
We’re talking about blockchain. 🔗
Benjamin Gievis, Co-Founder of Parisian startup Block Expert, said to IBM: “What if we could create an ID and an ecosystem that could authenticate a news source and follow it wherever it’s cited or shared?”
Blockchain technology can provide that level of transparency – and newsrooms, corporations and non-profits are already working with blockchain to develop those transparent networks. The Safe.press consortium – open to anyone who distributes news – adds a stamp every time a member publishes a press release or article.
That stamp acts as a digital seal of approval, linked to a blockchain key that is instantly registered on a blockchain ledger. Then, whenever a stamped news item is cited or referenced in other stories, that usage is tracked on the blockchain.
Everything is traceable. And when that traceability is visible and validated, users can see when a news item has been altered or faked.
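The stamp-and-verify workflow above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the general idea – content is hashed, the hash is recorded in an append-only chain of entries, and any later copy can be checked against the ledger. The `ProvenanceLedger` class and its method names are our own invention, not Safe.press’s actual API:

```python
import hashlib


class ProvenanceLedger:
    """Toy append-only ledger: each entry chains to the previous one
    via its hash, so past records can't be silently rewritten."""

    def __init__(self):
        self.entries = []

    def register(self, content: bytes, publisher: str) -> str:
        """Stamp a piece of content: hash it and append a chained entry."""
        content_hash = hashlib.sha256(content).hexdigest()
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = f"{prev_hash}|{content_hash}|{publisher}"
        entry_hash = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append({
            "content_hash": content_hash,
            "publisher": publisher,
            "prev_hash": prev_hash,
            "entry_hash": entry_hash,
        })
        return content_hash

    def verify(self, content: bytes) -> bool:
        """True only if this exact content was registered, byte for byte.
        Any alteration changes the hash, so doctored copies fail."""
        h = hashlib.sha256(content).hexdigest()
        return any(e["content_hash"] == h for e in self.entries)


ledger = ProvenanceLedger()
ledger.register(b"Official press release: ...", publisher="TrustedNewsroom")
print(ledger.verify(b"Official press release: ..."))  # genuine copy
print(ledger.verify(b"Doctored press release: ..."))  # altered copy
```

Because each entry’s hash covers the previous entry, tampering with an old record invalidates every entry after it – that chaining is what makes the traceability in a real blockchain visible and verifiable.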
It would take widespread adoption of technology like this to counter the risks of deepfake hacks. But it does offer hope for the future.
Would you be able to spot a deepfake video? Comment your answers below!
Mark your calendars for Black Hat MEA from 📅 14 - 16 November 2023.