There’s a popular narrative in security circles that attackers lead and defenders catch up. The problem is usually pinned on the idea that attackers need just one success, while defenders must stop everything. Today, that framing no longer explains what’s happening; it undersells the problem.
Attackers aren’t just slightly ahead anymore. They’re playing a different game altogether. Their strategies are optimized for speed, autonomy, and persistence, while defenders are still trying to perfect the rule book.
The $20 truth nobody likes to say out loud
Not long ago, malicious intent alone was not enough; you needed skills, infrastructure, and time. That technical friction filtered out many would-be attackers. Now the filter is gone.
The new reality:
- Today, for $20, an attacker can access a high-quality AI tool that does all the work for them.
- They just need intent and a credit card.
- Defenders, on the other hand, still need tools integrated correctly, visibility across systems, and cooperation from users.
AI didn’t simplify the defender's job. It exposed how hard that job already was.
Attackers use AI where it works today
Attackers are not picky about technology; they don’t care if a model is explainable, safe, or compliant. They simply care if it works well enough. Consequently, they use AI for exactly what modern models are already good at: writing phishing emails, summarizing leaked data, and iterating malware faster than ever.
The Defender's Dilemma:
- Aiming too high
Defenders often aim higher and miss. They try to use AI for perfect detection, zero false positives, and fully automated responses.
- Socio-technical problems
Perfect detection and automated response are not language problems; they are socio-technical problems involving ambiguity and organizational risk tolerance. AI struggles here, and defenders hurt their cause by holding AI to a standard of perfection humans have never met.
- Authority-poor teams
Security teams are tool-rich and authority-poor. They often cannot enforce budgets or roadmaps. AI compresses the attacker timeline to minutes, while defenders are stuck with weeklong processes.
The silver bullet fantasy
Defensive organizations often overestimate AI as a replacement for fundamentals. They add more AI to complex stacks, hoping predictions will compensate for brittle architecture.
Attackers don’t think this way. They are perfectly happy improving a success rate from 0.1% to 0.3%—a 3x gain that is more than enough. Defenders, by contrast, measure productivity in cognitive decisions like triage and approval, which are precisely the areas where AI helps the least.
Organizational Fragmentation:
- Security, IT, engineering, and legal teams are siloed.
- AI-related security decisions cut across all of them, yet no single owner controls the end-to-end risk.
- Attackers face one target, while defenders face internal coordination.
- AI-augmented attackers exploit policy gaps and response delays between teams rather than zero-day vulnerabilities.
The real problem: Playing the wrong game
The real problem isn’t that attackers are winning with AI; it’s that organizations are still playing the wrong game. Cybersecurity traditionally assumes the objective is preventing unauthorized access. However, AI-augmented attackers are playing something else entirely: exploiting organizational friction faster than the organization can react.
This includes phishing lures written in the organization's own internal language and MFA fatigue attacks.
Old SOC vs. New SOC Mindset
Security is no longer primarily a technical function. It’s an organizational reflex.
- The Old SOC Mindset
Goal: "We need better detection. We must avoid false positives."
Result: Phishing detection improves marginally, but one user still clicks. The attacker pivots, and the SOC team sees it only after damage starts.
- The New SOC Mindset
Goal: "We will lose some phishing battles. The win condition is stopping lateral movement."
Result: Users still get phished, but pre-approved containment actions are already in place (sketched below). The blast radius stays small, attackers fail to move laterally, and incidents downgrade from breaches to contained events.
This reframes the challenge from needing smarter tools to needing to close the speed, authority, and feedback gaps that attackers exploit.
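To make "pre-approved actions" concrete, here is a minimal sketch of what a time-boxed containment playbook could look like. Everything in it (the ContainmentAction type, the action names, the scopes, and the time limits) is an illustrative assumption, not a reference to any real product or process:

```python
from dataclasses import dataclass
from datetime import timedelta

# Hypothetical sketch: every action the SOC may take without asking for
# permission is declared before any incident, with an explicit scope and
# a time box after which the action expires and comes back for review.
@dataclass(frozen=True)
class ContainmentAction:
    name: str                # what the responder is allowed to do
    scope: str               # how far the action is allowed to reach
    time_box: timedelta      # action auto-expires; forces a human review
    requires_approval: bool  # False = pre-authorized, act first

# Example playbook: the win condition is stopping lateral movement, so
# host isolation and credential revocation are pre-authorized.
PLAYBOOK = [
    ContainmentAction("isolate_host", "single endpoint", timedelta(hours=4), False),
    ContainmentAction("revoke_session_tokens", "one user identity", timedelta(hours=2), False),
    ContainmentAction("block_egress_domain", "one external domain", timedelta(hours=24), False),
    # Broad, disruptive actions still require a human decision.
    ContainmentAction("suspend_business_unit_access", "whole department", timedelta(hours=1), True),
]

def immediate_actions() -> list[ContainmentAction]:
    """Return the actions a responder may execute without waiting."""
    return [a for a in PLAYBOOK if not a.requires_approval]
```

The detail worth noticing is the time box: pre-authorization becomes palatable to leadership precisely because every action expires on its own and returns for review, so acting fast never means acting without oversight.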
The ASPIRE framework: Playing the right game
The ASPIRE framework works precisely because it doesn’t fight the organizational reality—it designs around it. It assumes friction and imperfect tools as default conditions.
- A - Assume compromise
Breaches are routine operating conditions. The key question is: If this control fails today, what will happen next?
- S - Shrink the blast radius
If attackers get in, arrest their movement. Identity segmentation and data tiering matter more than perfect alerts.
- P - Pre-authorize actions
Speed beats precision early. Pre-approved, time-boxed containment actions, like the playbook sketched earlier, give SOC teams room to act without asking for permission.
- I - Increase the response velocity
Attackers move in minutes; organizations respond in days. Measure time to first containment, not time to report completion.
- R - Reinforce humans
People aren’t the weakest link; they’re the most targeted surface. Design around reality with role-based access control and targeted interventions.
- E - Evaluate adaptively
Replace compliance check marks with reality: time to adapt, mean blast radius, and recovery speed. AI shines here by revealing patterns leadership usually misses.
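As a rough illustration of the "evaluate adaptively" step, the sketch below computes outcome metrics in the spirit of those named above: time to first containment (from the I step), mean blast radius, and recovery speed. The Incident fields and the sample data are invented for the example; a real pipeline would pull these from case-management or SOAR tooling:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical incident record; the field names are assumptions for
# illustration, not a standard schema.
@dataclass
class Incident:
    detected_at: datetime
    first_containment_at: datetime  # first action that limited spread
    recovered_at: datetime          # normal operations restored
    hosts_affected: int             # crude proxy for blast radius

def hours(delta: timedelta) -> float:
    return delta.total_seconds() / 3600

def adaptive_metrics(incidents: list[Incident]) -> dict[str, float]:
    """Outcome metrics instead of compliance checkmarks."""
    return {
        "mean_time_to_first_containment_h": mean(
            hours(i.first_containment_at - i.detected_at) for i in incidents),
        "mean_blast_radius_hosts": mean(i.hosts_affected for i in incidents),
        "mean_recovery_time_h": mean(
            hours(i.recovered_at - i.first_containment_at) for i in incidents),
    }

# Invented sample data: two incidents that were contained, not breached.
incidents = [
    Incident(datetime(2025, 1, 5, 9, 0), datetime(2025, 1, 5, 9, 12),
             datetime(2025, 1, 5, 13, 0), hosts_affected=2),
    Incident(datetime(2025, 2, 1, 14, 0), datetime(2025, 2, 1, 14, 45),
             datetime(2025, 2, 2, 9, 0), hosts_affected=5),
]
print(adaptive_metrics(incidents))
```

Trend these numbers over time rather than reading them in isolation; the framework's point is whether containment is getting faster and the blast radius smaller, not whether a single quarter looks good.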
Final thoughts: Put authority back in SOCs
Defenders are often trapped in systems designed for certainty, consensus, and control—luxuries that don’t survive first contact with an AI-accelerated attacker. The result isn’t a tooling gap. It’s a reflex gap.
The ASPIRE framework is not about giving SOC teams more responsibility. It’s about finally giving them authority proportional to the threat. The future of cyberdefense will be defined by which organizations can act decisively under uncertainty and make breaches operationally boring.
In the end, the goal isn’t to stop every attack. It’s to make attacks stop mattering.