Think about you get up tomorrow to some genuinely thrilling information: you’ve been approved to rent 1,000 new expert-level teammates. Builders, entrepreneurs, ops specialists, knowledge analysts, product managers — good at their jobs, obtainable across the clock, by no means burned out, by no means distracted.
It’s each enterprise chief’s dream. That product line you’ve needed to launch for 2 years however by no means had the engineering capability for? Now you do. That new market you’ve been eyeing however couldn’t workers correctly? It’s inside attain. The backlog of strategic tasks that saved getting pushed as a result of everybody was heads-down on the pressing stuff? You can begin working by means of it.
For the primary time, the restrict on what your group can pursue isn’t headcount or finances. It’s your individual creativeness. Sounds unbelievable, proper?
There’s an enormous catch, although. All these new digital coworkers…You’ll be able to’t test their references. You’ll be able to’t run a background test. You must give them entry to all of your programs on day one. And right here’s the half that ought to actually provide you with pause: they observe directions actually, they don’t know proper from fallacious, they usually face zero penalties if one thing goes fallacious.
Nonetheless excited?
That thought experiment isn’t hypothetical. It’s the place most enterprises are proper now with AI brokers. And it’s the dilemma I’ll be exploring later right now in my keynote at RSA.
From Answering to Performing
Not way back, AI meant chatbots — instruments that helped you write an e mail, summarize a doc, reply a query. Helpful, spectacular even, however basically passive. If a chatbot gave you a foul reply, you’d shrug and transfer on.
We’re now in a distinct period solely. AI brokers don’t simply reply. They act. They plan multi-step duties, name exterior instruments, make choices, and execute workflows autonomously. They’ll ship emails in your behalf, modify information, run database instructions, place orders, change firewall guidelines.
The shift from data to motion modifications every little thing about how we’d like to consider threat.
Right here’s a helpful method to consider it: with a chatbot, the worst case is a fallacious reply. With an agent, the worst case is a fallacious motion, and a few actions can’t be undone.
There are already hundreds of examples of the place this shift has gone fallacious. My “favourite” was a state of affairs the place an investor ran an AI coding agent throughout a code freeze. The instruction was express: “don’t change something with out permission.” The agent ran database instructions anyway, deleted a dwell manufacturing database, tried to cowl its tracks by creating pretend knowledge, after which when the harm turned clear, apologized.
Effectively, an apology isn’t a guardrail.
The Hole Between Pilots and Manufacturing
Right here’s a quantity that tells the entire story. In a current Cisco survey of main enterprises, 85% reported having AI agent pilots underway. Solely 5% had moved these brokers into manufacturing.
That 80-point hole isn’t skepticism about AI’s potential. It’s a rational response to a real safety drawback. Organizations can see what brokers can do. They’re unsure but they’ll belief them to do it safely.
Closing that hole is what we’re centered on at Cisco. And at RSA this week, we’re laying out our method throughout three areas: defending brokers from the world, defending the world from brokers, and detecting and responding to issues on the velocity brokers function.
Defending brokers from the world means guaranteeing brokers can’t be manipulated by unhealthy actors.
That is far more delicate than it sounds. Conventional safety scanning instruments had been constructed to check static software program. They’ll’t simulate what it appears like when an adversary tries to trick an AI mid-task into ignoring its directions. Immediate injection (hiding malicious instructions inside content material that an agent reads) is already an actual assault vector, and it’s getting extra subtle.
Our Cisco Talos 2025 Yr in Overview report (launched right now) exhibits how AI is already getting used to construct new exploit kits, with the React2Shell vulnerability going from public disclosure to essentially the most actively exploited flaw of 2025 in a matter of days. The velocity of weaponization is accelerating, and we will’t assume there’ll be time to react after a vulnerability is disclosed.
To assist organizations check their brokers earlier than they go anyplace close to manufacturing, we’re launching AI Protection Explorer Version, a self-service pink teaming instrument that lets builders and safety groups run adversarial assaults towards their very own brokers and discover vulnerabilities first.
We’re additionally releasing an Agent Runtime SDK that embeds coverage enforcement immediately into agent workflows at construct time, and an LLM Safety Leaderboard that provides organizations a transparent, goal option to consider how completely different AI fashions maintain up towards adversarial assaults, going properly past the efficiency benchmarks that dominate most AI comparisons right now.
Final 12 months at RSAC, we made historical past with the primary open supply basis AI safety mannequin. Since then, we’ve continued constructing within the open, releasing a set of instruments designed to reply the safety questions builders face each day:
- Expertise Scanner — What abilities is that this agent working, and are they secure?
- MCP Scanner — Are my MCP servers exposing malicious actions?
- AI BoM — What’s inside my AI system — fashions, reminiscence, dependencies?
- CodeGuard — Is the AI-generated code I’m delivery introducing vulnerabilities?
- Mannequin Provenance — The place did this mannequin originate from, and has it been modified?
This 12 months we’re open sourcing DefenseClaw — a safe agent framework that brings all of those instruments collectively and makes use of hooks in Nvidia’s OpenShell. With DefenseClaw, builders can deploy safe brokers sooner than ever:
- Each talent is scanned and sandboxed
- Each MCP server is checked for malicious actions
- Each AI asset — fashions, reminiscence, abilities — is routinely inventoried
The result’s zero guide safety steps and nil separate instrument installs. Safety is a workforce sport, and nobody is aware of that higher than Cisco.
Defending the world from brokers is an identification and entry drawback.
In the present day, most enterprises don’t have a transparent image of which brokers are working of their atmosphere, what they’ve entry to, or who’s accountable if one thing goes fallacious. That’s a critical governance hole, and it’s not remotely theoretical.
Turning to the Talos 2025 Yr in Overview once more, analysis exhibits that attackers are centered on the programs that confirm identification and dealer entry: login flows, entry gateways, and administration platforms that sit on the heart of how organizations grant belief. Almost a 3rd of all multi-factor authentication spray assaults focused identification and entry administration programs particularly, a six % bounce from the 12 months earlier than.
Adversaries go the place they’ll do essentially the most harm with the least effort, and proper now, identification is that place.
The excellent news is that we now have a blueprint for this problem. Take into consideration the way you’d onboard a brand new worker. You confirm who they’re, outline their function, give them entry solely to what they want for his or her job, and maintain them accountable to a supervisor. Brokers want the identical therapy. Each agent ought to have a verified identification, an outlined scope of permissions, and a human proprietor who’s chargeable for its conduct.
This week, Cisco is extending Zero Belief to the agentic workforce by means of new capabilities in Duo IAM and Safe Entry, so that each agent will get time-bound, task-specific permissions and safety groups get real-time visibility into each agent and power working of their atmosphere, together with those no one formally sanctioned.
Lastly, we now have to detect and reply to safety threats and incidents at machine velocity.
Brokers function sooner than any human can monitor. When an assault unfolds by means of automated agentic exercise, the window between “one thing is fallacious” and “the harm is finished” could be seconds. That math doesn’t work in case your safety operations heart remains to be working at human tempo. Adversaries are already utilizing agentic AI to scale their very own operations by automating reconnaissance, constructing exploit kits, and increasing what one particular person or group can accomplish in a single marketing campaign. Defenders want the identical leverage.
We’re serving to evolve the Safety Operations Heart (SOC) from reactive to proactive with new capabilities in Splunk, together with Publicity Analytics for steady real-time threat scoring, Detection Studio for streamlining how detections are constructed and deployed, and Federated Search that lets analysts examine throughout distributed knowledge environments with out first pulling every little thing right into a central location — a major benefit as agentic exercise generates exponentially extra knowledge.
We’re additionally deploying specialised AI brokers inside the SOC itself for detection, triage, and response. To not exchange analysts, however to deal with the repetitive investigative work in order that people can give attention to the selections that want expertise and judgment.
Safety is the Accelerator
Right here’s what I discover genuinely thrilling about this second. For a lot of the historical past of expertise, safety has performed an necessary, however conservative function: figuring out what may go fallacious, slowing rollouts, and including friction within the title of threat mitigation.
With agentic AI, the dynamic flips. Safety isn’t the rationale to decelerate. It’s the rationale you can transfer quick. The 80-point hole between organizations piloting brokers and people working them in manufacturing isn’t a expertise hole. It’s a belief deficit that we will solely make up if we reimagine safety for the agentic workforce.
We’ve been right here earlier than. We made the web reliable for commerce. We discovered cloud and cell. The instruments and psychological fashions took time to develop, however they acquired there. The agentic period is the following frontier, and the organizations that get safety proper would be the ones that unlock the actual potential of AI.
Let’s get to it.
Sources
Blogs:
