Seventy-six percent of cybersecurity professionals believe the world is very close to encountering malicious artificial intelligence (AI) that can bypass most known cybersecurity measures, according to a new report from cybersecurity company Enea.
More than a quarter (26%) expect this to happen within the next year, and half (50%) within the next five years.
Phishing, social engineering, and malware attacks are the threats most likely to become more dangerous through the use of AI.
These are some of the sobering findings from a new global study of IT and cybersecurity professionals conducted by research firm Cybersecurity Insiders.
In addition to the concern about offensive AI outpacing defensive AI, a significant 77% of professionals express serious worries about rogue AI, where AI behavior veers away from its intended purpose or objectives and becomes unpredictable and dangerous.
While a majority (61%) of organizations have yet to deploy AI in any meaningful way as part of their cybersecurity strategy, 41% consider AI a high or top priority for their organization, and a hopeful 68% of respondents expect a budget increase for AI initiatives over the next two years.
Respondents are nonetheless optimistic about AI’s positive impact on cybersecurity. AI is anticipated to bolster threat detection and vulnerability assessments, with intrusion detection and prevention identified as the domain most likely to benefit. Deep learning for detecting malware in encrypted traffic holds the most promise, with 48% of respondents anticipating a positive impact from AI. Cost savings emerged as the top KPI for measuring the success of AI-enhanced defenses, and 72% of respondents believe AI automation will play a key role in alleviating cybersecurity talent shortages.
“Understanding the profound impact of AI on cybersecurity is crucial for navigating the evolving threat landscape,” said Laura Wilber, Sr. Industry Analyst at Enea. “That begins by listening closely to the concerns and hopes of cybersecurity leaders and their teams on the front lines.”
“This report confirms growing concerns around the malicious use of AI, but it also highlights some remarkable innovations in the use of AI to streamline and automate defenses.”