AI-Powered Cyberattacks Now Targeting Most Organizations [Research]

New research reveals that a growing number of organizations are experiencing cyberattacks that leverage artificial intelligence. The 2025 Bitdefender Cybersecurity Assessment found that more than six in ten (63%) IT and cybersecurity professionals say their organization has experienced an attack involving AI within the last 12 months.

The results are based on a survey of 1,200 IT and security professionals across six countries. In each country, a majority of respondents say they’ve experienced an attack involving AI. The percentages, by geography:

  • 74% United States
  • 71% Singapore
  • 67% Germany
  • 62% United Kingdom
  • 55% Italy
  • 52% France

Survey question: "In the past 12 months, has your organization experienced an attack that you believe involved AI?"

What Are the Biggest AI Concerns in IT & Cybersecurity?

The survey asked IT and security professionals to rank their top AI-related concerns. The results revealed a mix of external and internal AI-linked risks. External risks include AI-powered malware, AI-enhanced social engineering, and AI attacks that can evade traditional tools. The internal risks respondents are most concerned about include rushed AI implementations and data leaks from LLMs.

Download the 2025 Bitdefender Cybersecurity Assessment Report to see all results

Threat Actors and AI-Powered Attacks: What We’re Seeing

I recently hosted a webinar with Bitdefender experts to ask them what they are seeing in terms of AI on the cyber battlefield.

Sean Nikkel, Team Lead for the Bitdefender Cyber Intelligence Fusion Cell, says one AI fear that has not been realized is the idea of widespread attack automation. “The closest we've gotten to any kind of automated attack is something that's maybe scripted or an executable. But generally, we're seeing hands on keyboards, with attackers in there running their normal attacker commands.”

However, Nikkel adds there are other AI-related concerns to watch. “There's evidence of evil LLMs out there that are being advertised on the dark web and other closed areas that cybercriminals use. Plus, there are surely AI-powered kits that are helping with phishing attacks to make natural language more possible.”

Bitdefender Technical Solutions Director Martin Zugec agrees that AI is helping to increase the quality of social engineering-based attacks. “I speak and understand a couple of Slavic languages. One thing we do not have in Slavic languages are articles like a, an, the. For us, it's really hard in English to place articles and choose which one to use. Now, for anyone from Eastern Europe that's unfortunately involved in cybercrime, LLMs are amazing at just putting the articles in and suddenly, we don't struggle with this anymore.”

Threat actors are using generative AI to craft phishing messages indistinguishable from legitimate communications. Common red flags such as misspellings or awkward phrasing are disappearing, making it far harder for employees—and even seasoned analysts—to spot deception.

When it comes to potential AI risks (chart above), Zugec says he would order them differently from how they appeared in the results of the 2025 Cybersecurity Assessment. “At this moment, I would be least worried about AI-powered malware. And for me, the number one risk would be rushed AI implementation, with Agentic AI, MCP, and all the hidden risks of agentic AI.”

And Nick Jackson, Bitdefender Director of Cybersecurity Services, says he’s also concerned about a variety of internal risks that AI implementation creates. “A lot of companies are coming to us to ask about alignment with ISO 42001, which looks at the implementation of AI. I'd also say that data leaks from LLMs are still a very key risk.”

Adds Jackson, “Too many organizations aren't using enterprise versions of AI tools, and many employees of those organizations are putting sensitive material and data into those LLMs. At the end of the day, that data is no longer within the organization's confines—or control.”

The Ransomware Group That Grew Up on AI

Another interesting topic that came up during the discussion was the FunkSec ransomware group, which emerged with AI assistance. “What we actually found out is that it was like a case of wannabe cybercriminals. They were active on the forums asking for help to ransomware somebody, saying they didn’t know where to begin,” says Zugec. “In the end, they actually found the help from AI and launched successful attacks.”

For more on this discussion of AI and the key points of the 2025 Cybersecurity Assessment, watch the on-demand webinar From AI to Attack Surface: What's Shaping Cybersecurity Priorities in 2025. The webinar includes a full copy of the survey results.

The Double-Edged Sword of AI

One thing is clear from the survey results: many IT and cybersecurity professionals are worried about AI-powered cyberattacks. In fact, 51% of professionals surveyed say AI-generated threats are a top concern.

At the same time, there is some good news. AI is also revolutionizing cybersecurity operations. It enables automation that accelerates detection, streamlines response, and improves prioritization. And it’s powering innovations like Bitdefender GravityZone PHASR, which uses LLMs to create customized security settings for each device-user pair, preventing the abuse of legitimate tools and narrowing ransomware pathways. This is just one example of a true advancement that benefits cyber defenders.

Download: 2025 Bitdefender Cybersecurity Assessment Report