The entire industry talked about AI all the way through 2023 – and this year won’t be any different. New AI use cases are emerging all the time, but not all of them are positive.
The problem-solving power and hyper-convenience of AI tools are countered by their potential for harm. And as threat actors explore ways to leverage AI for exploitative purposes, it’s critical that the cybersecurity industry – and every organisation operating in the digital world – works to understand and mitigate AI threats.
Here are three significant threats we expect to see more of this year.
1. Deepfakes
Around the world, governments, organisations and individuals have been struggling to respond to the avalanche of deepfakes that are being released online. Already, deepfakes have been used as tools for abuse, exploitation, and misinformation. But their full potential for harm has yet to be reached.
According to research by London-based ID verification firm Onfido, deepfake fraud attempts rose by 3,000% in 2023.
And as we move through a new year, we’ll see more deepfakes used to manipulate everything from individuals to national elections and warfare.
As deepfake tech becomes more efficient and more accessible to threat actors without extensive AI knowledge, we’ll also see more low-level deepfake scams and financial cons. The scale of deepfake use is likely to grow quickly – putting more and more people at risk.
2. AI-powered zero-day attacks
Cybersecurity teams are already leaning on the ability of AI to discover zero-day vulnerabilities in their networks. AI can execute these discoveries far quicker than human beings – and as a result, patching operations can become more efficient.
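One common building block of automated vulnerability discovery is fuzzing – throwing large volumes of generated inputs at a program and watching for crashes – which AI-guided tools extend with smarter input generation. As a rough, hedged illustration only, here is a toy random fuzzer run against a deliberately buggy parser; the parser, its "bug", and all names here are invented for this sketch and don't represent any real tool:

```python
import random
import string
from typing import Optional

def toy_parser(data: str) -> int:
    # Deliberately buggy toy parser: crashes on any input containing
    # the sequence "ZZ" (a stand-in for a real, unknown flaw).
    if "ZZ" in data:
        raise ValueError("parser crash")
    return len(data)

def fuzz(target, iterations: int = 100_000, seed: int = 0) -> Optional[str]:
    # Minimal random fuzzer: feeds random inputs to the target and
    # returns the first input that triggers an unhandled exception.
    rng = random.Random(seed)
    for _ in range(iterations):
        candidate = "".join(rng.choice(string.ascii_uppercase) for _ in range(8))
        try:
            target(candidate)
        except Exception:
            return candidate  # crashing input found
    return None  # no crash found within the budget

crash = fuzz(toy_parser)
```

Real AI-assisted discovery replaces the purely random input generation with models that learn which inputs are most likely to reach untested code paths – which is exactly why the same capability cuts both ways for defenders and attackers.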
But AI can also create zero-day threats – because attackers can use AI models to find zero-days before you find them, and exploit them before you even know they exist.
The good news is that this isn’t a big problem – yet. Researchers who have begun to demonstrate such threats are keeping their findings to themselves, because publishing them could accelerate threat actors’ understanding of how to weaponise AI for zero-day exploits.
3. Increased automation in malware
Malware attacks are set to rise over the coming months and years. And with the advent of AI, fully automated malware could become the most critical security threat for most organisations and individuals globally.
Automation will enable threat groups to target a much higher volume of victims – cutting out the need for time-consuming manual operations, and vastly increasing both the number and the efficacy of attacks.
It’s already happening. But with increasingly accessible AI tools and hacking-as-a-service offerings, it’s going to become a bigger problem – giving attackers an edge over their targets.
We have to plan for the risks of AI, as well as the benefits
Don’t get us wrong – we’re excited about the positive potential of AI. But as AI tech steps up a gear, it’s important that we plan for the risks it exposes us to, as well as the benefits it could bring.
P.S. - Mark your calendars for the return of Black Hat MEA in November 2024. Want to be a part of the action? Register now!