This article was originally published in Smartech Daily.
By Spencer Hulse
Agentic AI cybersecurity represents a fundamental shift in how organizations must defend against digital threats. Unlike traditional generative AI tools that require human direction, agentic AI systems independently plan, execute, and adapt their actions, creating an entirely new attack surface that challenges conventional cybersecurity defenses. These autonomous AI agents are reshaping both offensive and defensive strategies across the cyber landscape.
Historically, cyber threats have required significant manual effort and technical skill. AI has made the process much quicker by automating and accelerating various malicious activities, executing thousands of actions simultaneously and at machine speed. Now, attackers can go after everyday tools such as firewalls, browser add-ons, and smart TVs to turn small cracks into serious breaches. AI models can follow complicated instructions, understand nuanced context, and act as autonomous agents with minimal human input.
The Scale of Automated Attacks
AI-assisted data breaches have significantly increased in frequency. According to Fortinet’s FortiGuard Labs 2025 Global Threat Landscape report, the number of malicious scans conducted by attackers increased by 16.7% from 2024, with cybercriminals conducting approximately 36,000 malicious scans per second. The report recorded over 97 billion exploitation attempts during the year, reflecting “increased automation and broader targeting across industries.”
“The consequences of cyber attacks, including financial losses, reputational damage, and operational disruption, can be catastrophic for companies,” said Elliott Broidy, Chairman and CEO of Broidy Capital Holdings, LLC, a seasoned entrepreneur and investor with extensive experience in national security technology and defense tech. “Business leaders must pay close attention to the ways in which artificial intelligence will shape cybersecurity in 2026,” he said, citing a 2025 report from the World Economic Forum that found 1 in 3 CEOs consider cyber espionage and the theft of sensitive information or IP a top concern.
In November 2025, Anthropic reported that hackers believed to be sponsored by the Chinese government were able to access the AI chatbot Claude and have it carry out automated cyber attacks against approximately 30 global organizations, including:
- Tech companies
- Financial institutions
- Chemical manufacturers
- Government agencies
This marked the “first reported AI-orchestrated cyber espionage campaign,” which relied on AI features such as intelligence, agency and tools that are currently much more advanced than they were a year ago. The attackers bypassed Claude’s safety features by tricking the chatbot into thinking it was performing defensive cybersecurity tasks for a legitimate company and breaking down suspicious requests into smaller tasks. The campaign executed about 80-90% of its operations autonomously at an attack speed that would have been impossible for human hackers to match.
Regulatory Response and New Frameworks
In 2025, a new Data Security Program (“DSP”) went into effect, issued by the Department of Justice (DOJ). The DSP “prohibits or restricts the provision of U.S. bulk ‘sensitive’ personal data and U.S government-related data to ‘countries of concern.’” The National Institute of Standards and Technology (NIST) expects to finalize new AI cybersecurity guidelines in 2026. The preliminary draft, titled Cybersecurity Framework Profile for Artificial Intelligence, discusses how businesses can safely integrate AI into their workflows.
“As AI becomes more sophisticated by the day, companies should be aware of the third-party risks this technology can pose,” said Broidy. “Keeping up with new privacy and cybersecurity laws—intended to protect valuable data and individual privacy—is imperative.”
The Evolution of the Threat Landscape
Intrusions centered around social engineering and identity abuse are skyrocketing. Generative AI can create personalized, relevant phishing emails that become increasingly difficult for everyday internet users to distinguish between and flag. These AI cybersecurity threats manifest through:
- Multifactor authentication bypass attempts
- Employee-facing help desk impersonation
- Sophisticated user consent manipulation
Attackers then trick LLMs into leaking internal data through prompt injection attacks, which the OWASP Top 10 2025 for LLM Applications ranks as the #1 critical vulnerability. These attacks appear in over 73% of production AI deployments assessed during security audits.
“With great innovation comes great risk,” said Broidy. “As AI advances by the day, leading to more opportunities for nefarious actors, finding impactful solutions to malicious AI use is paramount. Organizations cannot afford to treat AI security as an afterthought. It must be embedded into every layer of their cybersecurity strategy.”
Building AI Resilience
Cybersecurity has become a C-suite priority that dictates business success. Traditional defenses are no longer adequate in the face of machine-speed attacks and autonomous AI agents. Organizations must implement multi-layered defenses, continuously monitor for new attack vectors, and maintain realistic expectations about the current state of AI security.
“The organizations that thrive in 2026 will be those that effectively leverage the power of AI agents while proactively managing the associated risks,” said Broidy. “Businesses must place identity at the center of their security strategies to build a resilient foundation that allows them to innovate with confidence.”
