87% of Organisations Experienced AI-Powered Cyberattacks in 2024
In the past year, 87% of organisations have experienced AI-powered cyberattacks, and the threat is only expected to grow. A staggering 91% anticipate a significant rise in AI-driven threats over the next three years. However, only 26% feel highly confident in their ability to detect these sophisticated attacks, highlighting a dangerous gap in cybersecurity preparedness.
AI is Expanding the Cyber Threat Landscape
Advancements in AI have enabled attackers to execute multichannel cyberattacks across email, text, social media, and other platforms. According to SoSafe’s latest survey, 95% of respondents acknowledge a noticeable increase in this type of attack over the past two years.
One alarming example involves the CEO of WPP, who was targeted through a combination of AI-powered deception techniques. Attackers built trust via WhatsApp, continued the engagement on Microsoft Teams, and ultimately used an AI-generated deepfake voice call in an attempt to extract sensitive information and funds. Such tactics demonstrate how attackers' use of AI is expanding the attack surface, while organisations' own AI adoption inadvertently exposes them to further threats such as data poisoning and AI hallucinations.
Security Controls Lag Behind AI-Driven Threats
Despite the growing risks, more than 55% of organisations have not fully implemented security measures to mitigate threats from their own internal AI solutions. Key concerns include:
- Obfuscation techniques (AI-generated methods to disguise attack origins) – flagged as a top concern by 51%
- The creation of entirely new attack methods – cited as the biggest worry by 45%
- The speed and scale of automated attacks – a major challenge for 38%
Andrew Rose, Chief Security Officer at SoSafe, warns that AI is rapidly increasing the sophistication and personalisation of cyberattacks. He explains that attackers now combine multiple communication platforms, mimicking normal interactions to appear more legitimate. Simple email-based scams have evolved into 3D phishing, blending voice, video, and text elements into AI-driven fraud schemes.
Even legitimate AI tools deployed by organisations can become a security risk. Many businesses use AI chatbots to assist employees, but few consider the possibility of their chatbot inadvertently aiding attackers by revealing sensitive data, identifying key personnel, or exposing corporate insights.
The Path Forward: AI Security & Awareness
Niklas Hellemann, CEO of SoSafe, acknowledges that while AI introduces new cybersecurity challenges, it also serves as a crucial tool for defence. However, AI-driven security solutions are only as effective as the people using them.
“Cybersecurity awareness is critical,” Hellemann emphasises. “Without informed employees who can recognise and respond to AI-driven threats, even the best technology will fail. By combining human expertise, security awareness, and AI-powered defense strategies, organisations can stay ahead of emerging cyber threats and build a more resilient security posture.”