Balancing Act: Cybersecurity Industry Moves Quickly to Adopt AI for Defense While Speed of Attacks Escalates
The cybersecurity community is walking a tightrope with artificial intelligence: It’s balancing the desire to embrace AI as a useful tool for strengthening protection against attacks with the need to mitigate an emerging category of risk that widespread AI adoption will bring.
This clash of competing interests is playing out as security professionals race to make the right calls about how AI can best be used to prevent attacks, even as they respond to new threats and vulnerabilities created by hostile nation-states and malicious actors adopting the same tools. The robot-augmented workforce is coming, and it will be accompanied by a new paradigm for defending a suddenly unpredictable computing environment.
“AI is the hardest challenge that this industry has seen,” Jeetu Patel, executive vice president and chief product officer at Cisco Systems Inc., said during his keynote address at the RSAC 2025 Conference in San Francisco this week. “The AI architecture is going to be completely different. We’ve inserted the model layer. It’s nondeterministic, it’s unpredictable. This opens up a whole new class of risks that we haven’t seen before.”
Hackers Track Structural Weaknesses
The scope of risks associated with AI adoption is still being determined, but the RSAC gathering provided a few hints at what security researchers have discovered so far.
There is a growing body of evidence that threat actors are adopting AI, and they’re moving fast. On Wednesday, Rob Lee, chief of research and head of faculty at the SANS Institute, explained the trend to the RSAC audience.
“MIT research now shows that adversarial agent systems are executing attack sequences 47 times faster than human operators, with 93% success rates in privileged escalation paths,” Lee said. “These AI systems don’t just work faster. They are systematically identifying structural weaknesses in your own organization, not within weeks, not within months, but within seconds.”
One of the areas where weaknesses can be exploited is within the AI models themselves. The cybersecurity firm HiddenLayer Inc. published a report last week documenting a transferable prompt injection technique that bypassed safety guardrails across all major frontier AI models.
This followed earlier research from Cisco in which security analysts were able to “jailbreak” the DeepSeek AI model. Jailbreaking bypasses the controls designed to prevent a model from, for example, teaching a user how to build a bomb.
“One hundred percent of the time we were able to jailbreak the model,” said Cisco’s Patel. “A lot of these models start to get jailbroken because they can be tricked.”
There is also the prospect of users within organizations employing AI models to tap critical data sources without approval, a practice that has been dubbed “shadow AI.” A study released by Software AG found that at least half of employees were using unauthorized AI tools in their organizations.
“This issue of shared responsibility and who owns it is a big deal right now,” said John Dickson, chief executive of Bytewhisper Security Inc., during one RSAC session. “Why does shadow AI exist? It’s the CEO’s fear of missing out, that’s why. I don’t think we’ve had a major [shadow AI] breach yet. It’s going to happen.”
When that breach does occur, will it change organizational attitudes toward AI’s use in critical systems, from healthcare and government to the financial world and major services such as water and electrical power?
That’s unlikely, according to Bruce Schneier, author of “A Hacker’s Mind” and a Harvard University fellow, who spoke Tuesday. Schneier believes that generative AI’s conversational interface will lead users to form bonds of trust and familiarity that hackers are likely to exploit.
“We’re going to think of them as friends when they are not,” Schneier said. “An adversary is going to manipulate the AI output. There will be an incentive for someone to hack that AI. We are already seeing Russian attacks that deliberately manipulate AI training data. People will use and trust these systems even though they are not trustworthy.”
Leveraging Agentic AI Systems
A presumption of trust will force cybersecurity providers to deploy new solutions. These include AI agents, intelligent pieces of software that can perform a wide range of enterprise tasks.
On Monday, IBM Corp. announced the release of a new X-Force Predictive Threat Intelligence, or PTI, agent for ATOM, an agentic AI system that provides investigation and remediation capabilities. The agent will generate predictive threat insights on potential adversarial activity and minimize manual threat hunting efforts.
“Where the gaps are is what is attractive for the hackers to come in,” Suja Viswesan, vice president of software development for IBM, said in an interview at RSAC with SiliconANGLE. “With generative AI, it’s critical that security becomes front and center for every aspect of the business. I do believe that we have a strength in doing that.”
Earlier this week, Cisco announced the launch of its first open-source security model from the newly formed Foundation AI group. Foundation-sec-8b, designed to help teams build and deploy AI-native workflows across the security lifecycle, is an 8-billion-parameter large language model that will be accessible in the Hugging Face Inc. repository.
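For practitioners who want to experiment, a minimal sketch of pulling the model down with the Hugging Face transformers library might look like the following. The repository ID shown is an assumption based on Cisco’s announcement and the example prompt is purely illustrative; check Hugging Face for the official listing.

    # Minimal sketch: loading Foundation-sec-8b for local inference.
    # The repo ID "fdtn-ai/Foundation-Sec-8B" is an assumption and may
    # differ from the official Hugging Face listing.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "fdtn-ai/Foundation-Sec-8B"  # assumed repository name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # Ask a security-lifecycle question, the kind of workload the model targets.
    prompt = "Summarize the risk posed by CVE-2021-44228 (Log4Shell):"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))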
The security community has also been focused on providing tools for developers to reduce security debt, the accumulation of unresolved vulnerabilities and weaknesses in systems or software. Microsoft Corp.’s developer platform GitHub Inc. has introduced security campaigns with Copilot Autofix to reduce that backlog and prevent new vulnerabilities from being added to code.
“What the developer is getting is a fix,” Marcelo Oliveira, the new security product leader for GitHub, told SiliconANGLE. “We have an opportunity to help people get clean and stay clean. We believe this is a differentiator in why we are going to win this battle.”
This flurry of activity in recent weeks underscores the realization among security professionals that robust AI tools will be needed to counteract what’s coming from threat actors. The stage is set for a new level of robotic attacks, and the cybersecurity world is embracing AI to meet the challenge.
“We’re going to have autonomous hacking systems roaming on the Internet,” Menny Barzilay, co-founder and CEO of Milestone Inc., said during a panel discussion hosted by Cloudflare Inc. “We have to build autonomous security systems. I don’t think we have any other alternatives.”