Command Zero · Research Paper · March 2026

The Recomposition of Security Work: Roles, Expertise, and the Agentic SOC

How the distributed, agent-powered Security Operations Center reshapes roles across the security profession, and what happens when that change is handled well or badly.

Author: Dean de Beer, CTO & Cofounder
Organization: Command Zero · cmdzero.io
Version: 1.0 | Working Draft
Date: March 2026
01

Executive Summary

Security Operations is in the middle of a structural change. AI agents that can conduct investigations, correlate evidence, and coordinate responses across organizational boundaries are pulling the centralized SOC apart into a distributed security fabric embedded throughout the enterprise. This is not an incremental upgrade. It reshapes every security role from entry-level analyst to CISO.

This paper argues two positions at once. The change carries real promise: security operations that scale past human throughput limits, analyst work concentrated on genuinely strategic problems, and for the first time, expert-level security coverage available to organizations that previously could not afford it. The same change also carries a structural risk the industry is moving toward without enough awareness: steady erosion of the expertise pipeline that produces senior security practitioners, made worse by a compliance and governance framework that is not ready for the autonomous systems it will be asked to certify.

The outcome is not determined by the technology. It is determined by the organizational and policy choices made in the next one to three years, while the pipeline is still intact and the change is still in its early phases. Organizations that treat AI deployment as a design problem, deliberately preserving the conditions that develop expertise while removing the work that should never have been a priority in the first place, will end up with security programs more capable than anything the centralized SOC could achieve. Organizations that treat AI deployment as a headcount reduction exercise will discover, five years from now, that they have a lot of automation and nobody who knows what to do when it fails.

  • 2 waves of analyst pipeline erosion before AI arrived
  • 4 phases of SOC evolution from augmented to autonomous mesh
  • 6+ net-new security roles with no direct predecessor

The paper starts with the historical context of talent pipeline damage that predates AI, works through the worst-case and positive-case scenarios, and gives detailed treatment to every significant role change, both evolutionary and entirely new. It closes with concrete design principles for security leaders running the transition now.

These are two positions on what could happen. The real outcome will almost certainly be some combination of both, and will depend on the organization, how AI evolves, adoption rates, and a long list of factors we have not yet accounted for.

02

The Pipeline Was Already Broken

Most security leaders frame AI deployment as a new disruption: a new technology arriving to change a stable profession. The frame is wrong. The analyst pipeline was already damaged before anyone typed a prompt into a language model. Understanding why means looking honestly at two decades of procurement decisions whose downstream costs the industry has never fully accounted for. That is a blunt framing, but it is the honest starting point.

The Three Waves

The first wave was MSSP. From approximately 2010 onward, Managed Security Service Providers offered organizations 24/7 coverage at a fraction of the cost of in-house staffing. What organizations received was pattern-matching against known signatures, ticket closure at SLA pace, and the transfer of their Tier 1 analyst function to a vendor whose economic model has always been optimized for throughput, not development.

The second wave was MDR. Managed Detection and Response matured the conversation, with better tooling and legitimate threat hunting capability. But the economics were structurally identical to MSSP. Organizations traded internal headcount for vendor coverage. The Tier 1 seat, the one where analysts learned to see, to intuit threats, to gain experience, disappeared from organizational charts at an accelerating pace from 2018 to 2022.

The third wave is AI. Unlike the first two, AI can actually do everything Tier 1 does, faster and more accurately than an overworked junior analyst, and it can do it continuously. That technical advantage is exactly what makes the design decision more consequential, not less.

What Tier 1 Was Actually Building

The Tier 1 analyst role was never primarily valuable for the work it produced. The alerts it triaged, the tickets it closed, the enrichment it performed, all that had value, but none of it was irreplaceable. What was irreplaceable was the developmental function the role served.

The formation mechanism is not complicated to describe, even if it is slow to produce. Security analysts learn by doing. They build exposure to real threat data at volume, make judgment calls, see which ones were right, and adjust. The intuition that eventually operates at a subconscious level is built from thousands of routine cases, a fraction of which are formative. There is no shortcut and no substitute.

Senior security analysts make cognitive leaps during an investigation. Pattern recognition and earned knowledge let them connect events and activities that are not obvious. That knowledge develops only through experience. It cannot be compressed into documentation, transferred through training programs, or inherited from a vendor's playbook.

When you remove the Tier 1 function, whether through MSSP, MDR, or AI, you do not simply move the work. You move the developmental stage. You take away the conditions under which expertise forms, and a few years later you discover that there is no internal pipeline producing the next generation of senior practitioners, because you took apart the conditions that produced them.

The MSSP Counter-Argument and Its Limits

A reasonable objection deserves direct engagement: if Tier 1 work moved to MSSPs, did the analysts doing that work not develop expertise there? Is the pipeline truly broken, or merely relocated?

The counter-argument is partially correct. MSSP analysts gain genuine exposure, in some respects more than their in-house counterparts, because they see broader attack surfaces across dozens of customer environments. Some MSSP and MDR alumni do develop real expertise and transition into enterprise senior roles. The talent did not vanish from the industry entirely.

But the counter-argument breaks down across four structural dimensions. First, the feedback loop is severed. In-house Tier 1 analysts escalate to Tier 2 colleagues working the same environment and often see the case close. At an MSSP, escalation crosses team and geographic boundaries, and the Tier 1 analyst rarely learns what happened to the case they handed off. The exposure exists. The corrective and educational feedback that turns exposure into expertise mostly does not.

Second, breadth is a poor substitute for depth. Senior security judgment is built from knowing one environment well enough that anomaly detection becomes intuitive. MSSP analysts see the surface of many environments and the interior of none. Third, much MSSP Tier 1 work was delivered from lower-cost geographic markets, so the analysts who did develop skills were feeding a different talent market, not the senior analyst pipeline of the enterprises that outsourced the function. Fourth, the MSSP and MDR economic model actively selected against the cases that teach the most. The hardest, most educational cases are exactly the ones SLA pressure pushes analysts to close or escalate fastest.

The pipeline fragmented rather than vanished. But from the perspective of the enterprise that outsourced, the developmental benefit moved to the vendor and the expertise gap remained with the organization.

03

Two Futures: Collapse or Productive Path

These are not two framings of the same outcome. They are incompatible trajectories, driven by different organizational choices, producing structurally different security postures a decade from now. What follows presents both with their full internal logic, making the strongest case for each before looking at where they actually diverge.

Position 1: The Worst Case, Expertise Extinction

Worst Case Thesis

The security industry is executing a structural bet it has not explicitly made: that the judgment required to govern autonomous security systems can exist without the experiential pipeline that produces it. That bet is wrong, and the consequences compound over a 10–20 year horizon in ways that will not be visible until they are irreversible.

The pipeline collapses first. Tier 1 automation is already underway. Within three to five years, the entry-level analyst role is functionally absorbed by agents across the industry. Organizations reduce headcount at the base of the pyramid because the economics demand it, a choice that is rational at the individual organization level and catastrophic at the industry level. The T1 role, already thinned by MSSP and MDR, disappears. The cohort that would have become T2 analysts in 2028–2030 does not exist. T3 specialists a decade from now are people who were never meaningfully T2. The governance tier, including the agent architects, oversight specialists, and strategic threat analysts the positive scenario requires, will be staffed by people who know how agents work but not what agents are supposed to find.

Competence becomes an illusion. Autonomous agents produce outputs that look like expert analysis. Confidence scores, reasoning chains, evidence summaries, recommended actions, all in language that is hard to tell apart from what a skilled analyst would write. The humans in the loop will not be equipped to spot failure when it happens. An agent that misclassifies a novel intrusion as benign network noise, at 78% confidence with a coherent-looking evidence chain, will be reviewed by an analyst whose main experience is accepting or rejecting agent summaries, not rebuilding investigations from raw log data. The failure passes through. The lost capability is not the task. It is the knowledge and the systems that catch agent error.

Compliance becomes theater. Regulatory frameworks like SOC 2, ISO 27001, PCI-DSS, and HIPAA were written for human-operated environments. They assume human attestation of human-verifiable controls. When autonomous agents generate the evidence that autonomous agents are audited against, compliance certification detaches from security reality. Auditors who cannot independently analyze the underlying security data, because they never developed that capacity, sign off on agent-generated audit trails they cannot independently verify. Organizations end up fully compliant on paper while running security programs whose autonomous components have systematic blind spots no one in the organization can identify.

Vendor Concentration Risk

Organizations that automate their expertise become dependent on the vendors whose agents replaced it. When those agents fail at scale against novel attack patterns, and they will, there is no internal capacity to recognize the failure, diagnose it, or remediate it independently. The vendor controls the security posture. This is not a speculative risk. It is where current procurement trajectories logically end, visible inside the next five to seven years.

Worst case indicators
  • T1 eliminated as a headcount line item, not redesigned
  • AI deployed for cost reduction with no formation replacement built
  • Agent outputs accepted without structured analyst challenge
  • Compliance achieved via agent-generated evidence, unverified by humans
  • Vendor dependency becomes structural for core security functions
  • Senior analyst bench thins as pipeline cohorts fail to materialize
  • Novel threat detection degrades while dashboard metrics improve
Positive indicators
  • T1 redesigned around agent collaboration, not eliminated
  • Efficiency savings fund formation investment in parallel
  • Agent reasoning paths exposed; analysts trained to challenge them
  • Continuous compliance evidence requires independent verification sampling
  • Internal capability maintained to evaluate vendor agent systems
  • New formation pathways (QA, ontology, scenario design) replace old ones
  • Human expertise concentrated at genuinely irreducible work
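One positive indicator above, independent verification sampling of continuous compliance evidence, can be made concrete with a small sketch. This is a hypothetical illustration in Python, not a prescribed control: the function name, sampling rate, and evidence identifiers are all assumptions.

```python
import random

def sample_for_review(evidence_ids: list[str], rate: float, seed: int = 7) -> list[str]:
    """Pull a deterministic random fraction of agent-generated evidence
    items for independent human re-verification."""
    rng = random.Random(seed)  # seeded so the audit sample is reproducible
    k = max(1, round(len(evidence_ids) * rate))  # always sample at least one item
    return sorted(rng.sample(evidence_ids, k))

# 100 hypothetical evidence items; 5% are routed to a human reviewer.
evidence = [f"EV-{i:03d}" for i in range(1, 101)]
picked = sample_for_review(evidence, rate=0.05)
print(len(picked))  # 5
```

The design point is only that some fixed, reproducible fraction of agent-generated evidence is always checked by a human, so compliance attestation never rests entirely on agent output.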

Position 2: The Productive Path

Positive Thesis

The transformation is not a threat to security expertise. It is the first technology in the industry's history with the potential to make security expertise the primary activity of security professionals, rather than a fraction of their time buried under volume work they were never able to handle at that scale.

Volume stops being the problem. The core failure of the current SOC is not human inadequacy. It is that humans were asked to operate at machine scale without machine assistance: tens of thousands of alerts a day, processed by individuals whose cognitive bandwidth tops out at a few hundred meaningful analyses per shift. Analyst burnout, alert fatigue, and the skills shortage are symptoms of the same structural mismatch. Agents solve this by removing the cognitive tax of volume work, not by replacing human judgment. An analyst who previously spent 70% of a shift on routine enrichment can invert that ratio in Phase 2. Same expertise, much more investigative output.

Education embeds into the work. Well-designed agents do not just do the work. They make the work visible in ways that teach. When an agent surfaces an enriched alert summary with its reasoning chain, evidence links, and confidence justification, the reviewing analyst is seeing a structured model of how the investigation should be conducted. The agent becomes a trainer, not a replacement. Investigation interfaces that require analysts to confirm or challenge agent findings with explanation, and that track where analysts override agents and learn from those overrides, are teaching tools built into the workflow. The formation that used to require two years of high-volume manual triage can be rebuilt around supervised agent collaboration, producing equivalent intuition through a different path.

New entry points replace old ones. The pessimistic position assumes the only path into security expertise runs through T1 alert triage. Agent QA and validation roles require exposure to security data, investigation logic, and adversarial thinking. Security ontology development requires deep work with attack taxonomy and detection logic. Adversarial scenario design requires the red team thinking that develops adversarial imagination. These are different formation paths, producing different but complementary competencies. A security professional who spent two years designing agent stress-test scenarios developed adversarial intuition through a different route than the analyst who triaged 50,000 alerts. Neither path is inherently better than the other.

Why the bet is worth making. The difference between the two outcomes is not the technology. The technology is identical in both scenarios. The difference is the organizational and policy choices made in the next one to three years, while the change is still in Phase 1 to 2 and the pipeline is still intact. Organizations that answer the design question deliberately, keeping what makes human expertise irreplaceable while removing what was always a poor use of it, will end up with security programs more capable and more sustainable than anything the centralized human-only SOC could achieve.

The following timeline visualizes both trajectories simultaneously, showing the divergence that compounds from Phase 1 onward.

Timeline 1: Dual Scenario, Where the Paths Diverge

The timeline plots the worst-case trajectory against the productive transformation trajectory across four phases, with outcome divergence compounding from Phase 1 onward:

  • Phase 1 — Augmented (2024–2026)
  • Phase 2 — Collaborative (2026–2027)
  • Phase 3 — Distributed mesh (2027–2030)
  • Phase 4 — Autonomous (2030+)
04

The Four Phases of SOC Evolution

The move from a traditional centralized SOC to a distributed autonomous security mesh happens in four overlapping phases. These phases are not clean cutovers. Most organizations will operate across multiple phases at once, with different functions and business units at different stages of maturity. The trajectory pushes analysts from operators to orchestrators, introduces roles like Agent Operations Specialists and Security Ontology Engineers, and eventually dissolves the SOC boundary into a security fabric embedded throughout the enterprise.

Phase Reference Summary

Phase 1 — Augmented (2024–2026)
  Operating model: Analysts assisted by task agents
  Analyst role: Direct investigations with agent support
  Agent role: Automate enrichment and basic analysis

Phase 2 — Collaborative (2026–2027)
  Operating model: Analysts supervise agent teams
  Analyst role: Orchestrators and adjudicators
  Agent role: Autonomous evidence collection, analysis, reporting

Phase 3 — Distributed (2027–2030)
  Operating model: Security mesh across enterprise
  Analyst role: Strategy, governance, edge-case resolution
  Agent role: Front-line detection, self-service security

Phase 4 — Autonomous (2030+)
  Operating model: Autonomous security mesh
  Analyst role: Strategic leadership and ethical governance
  Agent role: Predictive defense, collective cross-org response

The following timeline presents the positive trajectory in phase-by-phase detail, organized by outcome category.

Phase-by-phase events

Phase 1 — Augmented analysis (2024–2026)
  • Pipeline & formation: Reasoning chains as formation tools (training embedded inside every investigation)
  • Operational outcome: T1 augmented, not eliminated (entry roles redesigned around agent collaboration)
  • Skills & roles: New entry pathway investment begins (agent QA, scenario design, ontology as formation paths)

Phase 2 — Collaborative operations (2026–2027)
  • Skills & roles: New specialist roles reach maturity (AgentOps, Ontology Engineer, Investigation Coordinator established)
  • Operational outcome: Senior capacity freed for high-value work (30–40% of senior analyst time recovered from overhead)
  • Defense posture: Continuous compliance monitoring takes hold (real-time posture replaces the periodic certification cycle)

Phase 3 — Distributed security mesh (2027–2030)
  • Skills & roles: Expertise at the irreducible tier (analysts do only the work agents genuinely cannot)
  • Defense posture: SME organizations gain expert-equivalent coverage (security capability gap between large and small narrows)
  • Pipeline & formation: New pathway cohort reaches senior level (first agent-era analysts prove the formation model works)

Phase 4 — Autonomous security mesh (2030+)
  • Operational outcome: Security expertise becomes the primary activity (strategic work replaces throughput as the job definition)
  • Defense posture: Collective defense networks become operational (industry-wide detection faster than any single org)
  • Pipeline & formation: Autonomous systems governed by domain experts (formation investment from Phase 1–2 pays its return)
05

Existing Roles: What Changes and What Disappears

Every role in the security organization is affected by the agentic transition. The nature and degree of change varies by tier and function. Some roles are mostly preserved with expanded scope. Others change so much that the original title becomes misleading. A few functions disappear entirely as autonomous agents absorb them, though the organizational need they served, investigation, analysis, and judgment, does not disappear. It moves upward in complexity.

Tier 1 Analyst

Current function: Alert triage, initial enrichment, queue processing, escalation, basic correlation, documentation.

What disappears: Manual alert correlation, copy-paste enrichment, tier-based queue routing, template-based reporting, rule-based escalation decisions. These represent the majority of current T1 day-to-day work and are absorbed by agents in Phase 1–2.

What survives and transforms: The T1 role does not simply vanish. It gets redesigned. Instead of processing individual alerts, T1-equivalent analysts supervise and validate agent-generated investigation summaries. They evaluate confidence scores, challenge reasoning steps, flag anomalies the agent deprioritized, and develop the habit of critical agent evaluation. Override rate tracking feeds agent retraining. The volume is lower. The analytical depth per case is higher.
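The override-tracking loop described above can be sketched in a few lines. This is a minimal illustration, not a product API; every class and field name here is an assumption.

```python
# Hypothetical sketch of analyst review of agent verdicts with override tracking.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentVerdict:
    case_id: str
    classification: str   # e.g. "benign", "suspicious", "malicious"
    confidence: float     # agent-reported confidence, 0.0 to 1.0

@dataclass
class AnalystReview:
    verdict: AgentVerdict
    accepted: bool
    rationale: str        # required on override; this text feeds agent retraining
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def override_rate(reviews: list[AnalystReview]) -> float:
    """Share of agent verdicts the analyst overrode."""
    if not reviews:
        return 0.0
    return sum(1 for r in reviews if not r.accepted) / len(reviews)

# Example: two accepted summaries, one override with a documented rationale.
reviews = [
    AnalystReview(AgentVerdict("C-101", "benign", 0.91), accepted=True, rationale=""),
    AnalystReview(AgentVerdict("C-102", "benign", 0.78), accepted=False,
                  rationale="Beaconing interval matches known C2 pattern"),
    AnalystReview(AgentVerdict("C-103", "malicious", 0.95), accepted=True, rationale=""),
]
print(round(override_rate(reviews), 2))  # 0.33
```

The point is structural: overrides are first-class records with rationales, so disagreement between analyst and agent becomes training signal rather than lost context.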

Critical risk: In the worst-case trajectory, T1 is cut rather than redesigned. When that happens, the entry-level formation stage disappears. The T2 cohort that emerges five years later will have supervised agent outputs but never conducted a manual investigation. Their ability to recognize agent failure is structurally limited from day one.

Tier 2 Analyst

Current function: Detailed investigation, cross-source correlation, escalation decisions, incident documentation, analyst mentoring.

What changes: T2 transitions from conducting investigations to orchestrating them. In Phase 2, analysts direct agent teams rather than perform evidence collection themselves. Multi-agent investigation management becomes central: understanding which agents are working a case, what they have found, and where to redirect focus. Documentation shifts from manual creation to review and attestation of agent-generated reports.

New skills required: Multi-agent workflow management; evidence chain evaluation for legal and compliance sufficiency; hypothesis testing methodology, which involves formulating competing theories of attack then tasking agents to validate or invalidate each; confidence score interpretation; adversarial scenario intuition, specifically knowing when an agent is likely to miss something because it falls outside trained patterns.
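The hypothesis testing methodology above, formulating competing theories of attack and tasking agents to validate or invalidate each, can be sketched as a simple evidence ledger. The names and counts here are illustrative assumptions only.

```python
# Hypothetical sketch: rank competing attack theories by net agent-gathered evidence.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    theory: str
    supporting: int = 0   # evidence items agents found that support the theory
    refuting: int = 0     # evidence items agents found that contradict it

def rank(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    """Order theories by net evidence so the analyst knows where to dig next."""
    return sorted(hypotheses, key=lambda h: h.supporting - h.refuting, reverse=True)

case = [
    Hypothesis("Credential phishing via OAuth consent", supporting=4, refuting=1),
    Hypothesis("Insider misuse of service account", supporting=1, refuting=3),
    Hypothesis("Token theft from compromised endpoint", supporting=2, refuting=0),
]
print(rank(case)[0].theory)  # Credential phishing via OAuth consent
```

A real workflow would weight evidence by quality rather than counting items; the sketch only shows the shape of the loop the T2 analyst drives.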

What remains irreplaceable: Knowing when to override an agent becomes as important as the underlying security knowledge. That judgment requires the same foundational formation that manual investigation produced, which is why the T1 pipeline design problem directly determines T2 capability a generation later.

Tier 3 / Senior Analyst

Current function: Complex incident management, threat hunting, tool development, detection engineering, mentoring T1/T2.

What changes: This is the tier with the highest survival probability and the most ambitious change. Threat hunting partly becomes agent design, translating hunt hypotheses into persistent autonomous discovery workflows. Detection engineering grows to include agent-compatible logic, designing detection scenarios agents can execute autonomously while preserving evidence chain integrity.

Highest survival probability because: The skills are substantive and non-routine. Adversarial creativity, threat hypothesis generation, and the ability to externalize tactical investigation knowledge into explicit agent architecture are hard to replace. These capabilities develop only through the formation pipeline, which is why their preservation is load-bearing for the entire positive trajectory.

Detection Engineer

Current function: Writing detection rules, tuning SIEM queries, managing alert logic, reducing false positives.

What changes: Detection logic now has to be authored for autonomous execution. Agent-compatible detection scenarios carry additional requirements beyond SIEM queries: evidence preservation specifications, confidence thresholds, escalation trigger logic, and human-override conditions. The Detection Engineer becomes the bridge between threat knowledge and autonomous execution, translating security understanding into the structured reasoning frameworks agents operate inside.

New skills required: Agent workflow design; evidence chain specification; understanding of how agents fail and what detection logic fails gracefully versus catastrophically; MCP tool-chain awareness; knowledge of what data sources agents can and cannot reach.
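As a rough sketch, an agent-compatible detection scenario of the kind described above might bundle its query with evidence preservation, confidence, and escalation requirements. Field names, the query syntax, and the threshold values are hypothetical, not a real product schema.

```python
# Hypothetical shape of an agent-executable detection scenario.
from dataclasses import dataclass

@dataclass
class DetectionScenario:
    name: str
    query: str                      # detection logic the agent executes
    preserve_evidence: list[str]    # artifacts captured before any response action
    autonomy_threshold: float       # below this confidence, a human decides
    escalate_on: list[str]          # conditions that always force human escalation

scenario = DetectionScenario(
    name="suspicious-oauth-consent",
    query="event.category:iam AND action:consent_grant AND app.verified:false",
    preserve_evidence=["raw_audit_log", "app_manifest", "grantee_session"],
    autonomy_threshold=0.85,
    escalate_on=["privileged_account_involved", "novel_app_publisher"],
)

def requires_human(confidence: float, triggered: list[str], s: DetectionScenario) -> bool:
    """Graceful failure mode: low confidence or any escalation trigger
    routes the case to a human instead of autonomous closure."""
    return confidence < s.autonomy_threshold or any(t in s.escalate_on for t in triggered)

print(requires_human(0.92, ["novel_app_publisher"], scenario))  # True
```

The failure-mode logic is the design point: the scenario fails toward human review, never toward silent autonomous action, which is what "fails gracefully versus catastrophically" means in practice.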

SOC Manager

Current function: Queue management, team oversight, SLA accountability, escalation handling, resource allocation.

What disappears: Queue-based workload management, ticket routing, daily triage reviews. Agents handle routine case flow. The command-and-control instincts of traditional SOC management become actively counterproductive in an environment where the primary challenge is governance and orchestration, not throughput.

What transforms: Management shifts from running analyst productivity to governing agent ecosystems and human-agent team performance. SLA accountability stays but the metrics shift from mean-time-to-respond to detection precision, investigation throughput quality, and agent decision accuracy. Cross-functional liaison work expands as security embeds into business units.

New skills required: Agent governance framework design; performance measurement for AI systems including hallucination rate evaluation and confidence calibration assessment; workflow architecture for human-agent collaboration; risk tolerance calibration, specifically defining the boundaries of autonomous action versus required human authorization.
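Confidence calibration assessment, one of the measurement skills listed above, reduces to comparing an agent's stated confidence against how often it was actually right. A minimal sketch with hypothetical data; a production version would bin by confidence level rather than averaging.

```python
# Illustrative calibration check over adjudicated agent verdicts.
def calibration_gap(results: list[tuple[float, bool]]) -> float:
    """Mean stated confidence minus observed accuracy.
    Near zero is well calibrated; positive means the agent is overconfident."""
    if not results:
        return 0.0
    mean_conf = sum(c for c, _ in results) / len(results)
    accuracy = sum(1 for _, correct in results if correct) / len(results)
    return mean_conf - accuracy

# Four adjudicated verdicts: (agent-stated confidence, was the verdict correct?)
history = [(0.9, True), (0.8, False), (0.95, True), (0.85, False)]
print(round(calibration_gap(history), 3))  # 0.375
```

A gap this large (mean confidence 0.875 against 50% accuracy) is exactly the signal an Agent Operations Lead would use to tighten autonomy thresholds.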

SOC Director / CISO

What changes: Security strategy expands from the organization to the cross-organizational layer: participation in collective defense networks, inter-organizational agent collaboration, and industry-level security mesh governance. Vendor management shifts from tool procurement to agent infrastructure and protocol standards evaluation. Executive reporting moves from activity metrics to security posture and business enablement.

New responsibilities: Designing the organizational architecture for distributed security capabilities; establishing ethical and legal frameworks for autonomous security decision-making; governing the accountability mechanisms that keep autonomous systems auditable and compliant; running the organizational transition in a way that preserves expertise while capturing efficiency gains.

06

Evolved Roles: The Same Job, Changed

The roles below are significant transformations of existing security functions. The job title may persist, but the primary activity, required skills, and organizational context change enough that a practitioner who does not adapt will be structurally misaligned with what the role actually demands. A five-year role evolution timeline follows this section, showing where these evolved roles transition into the net-new roles described in Section 7.

T1 Analyst → Agent Validator (Evolved Role)

The T1 role shifts from manual alert triage to structured validation of agent-generated investigation summaries. The analyst evaluates confidence scores, challenges reasoning steps, flags anomalies the agent deprioritized, and surfaces edge cases that fall outside the agent's training distribution. Override rate is tracked systematically and feeds agent retraining cycles.

The formation value of this role depends entirely on interface design. If investigation platforms expose agent reasoning paths, not just outputs, analysts build intuition through structured reasoning review. If platforms present only conclusions, the developmental value evaporates and the role becomes a rubber stamp. That design choice is the single most important Phase 1 decision for pipeline preservation.

Core skills
  • Agent output critical evaluation
  • Confidence score interpretation
  • Reasoning chain review
  • Anomaly pattern recognition
  • MCP data source awareness
  • Override documentation

T1 analysts who develop strong agent evaluation instincts in Phase 1 are well positioned for Agent Operations Specialist roles in Phase 2. The key development investment is prompt literacy, confidence calibration, and the habit of structured override documentation.

T2 Analyst → Investigation Coordinator (Evolved Role)

Senior T2 analysts shift from conducting investigations to orchestrating them. The role directs agent teams, extends hypotheses, decides which findings need a human deep-dive versus autonomous closure, and synthesizes multi-agent outputs into a coherent incident picture. Knowing when to override an agent, and on what evidentiary basis, becomes as analytically demanding as the underlying investigation.

Evidence chain evaluation gets new importance. Agent-collected evidence has to be assessed not just for security relevance but for legal defensibility, compliance sufficiency, and chain-of-custody integrity. That requires analysts to understand the provenance of agent findings, not just their content.

Core skills
  • Multi-agent workflow management
  • Hypothesis testing methodology
  • Evidence chain evaluation
  • Incident synthesis
  • Agent team redirection
  • Legal evidence standards

Senior T2 analysts with strong investigation instincts and growing interest in agent system behavior. The transition is largely natural through Phase 2 deployment experience if the environment is well designed. T2 analysts who develop interest in the architectural layer have a longer but high-value path toward Agent Architect through deliberate cross-training in agent engineering fundamentals.

T3 / Senior Analyst → Agent Architect (Evolved Role)

Senior analysts with deep investigation expertise move to designing the agent systems that perform those investigations. The role demands a rare combination: real security expertise plus the ability to externalize that expertise into autonomous system design. Investigation knowledge has to be translated into explicit agent architectures, detection logic, memory system design, and reasoning frameworks that other analysts then supervise.

This is the load-bearing role in the positive scenario. The quality of agent systems across the industry through 2030 will be a direct function of how many T3-equivalent analysts made this transition successfully in Phase 1 to Phase 2. The role is how domain expertise is preserved in institutional form even as the operational workforce changes.

Core skills
  • Agent engineering and design
  • Prompt architecture
  • Memory system design
  • Tool orchestration (MCP)
  • Trust validation logic
  • Escalation framework design
  • Detection scenario authoring

T3 analysts with 5+ years of investigation experience who develop systematic interest in how agent reasoning systems work. This transition should be actively managed by organizations in Phase 1–2, not left to individual initiative.

Detection Engineer → Agent Detection Designer (Evolved Role)

Detection logic now has to be authored for autonomous execution by agents, not just for human review in a SIEM. Agent-compatible detection scenarios carry requirements beyond traditional rules: evidence preservation specifications, confidence calibration thresholds, escalation trigger conditions, and graceful failure modes. The Detection Designer becomes the bridge between threat knowledge and autonomous capability, translating security understanding into the structured frameworks agents operate inside.

Core skills
  • Agent workflow design
  • Evidence chain specification
  • Confidence calibration
  • Failure mode analysis
  • MCP tool-chain mapping
  • Detection coverage auditing

SOC Manager → Agent Operations Lead (Evolved Role)

Going from managing analyst queues to governing agent ecosystems requires a different operational model. The Agent Operations Lead oversees agent fleet health, deployment pipelines, drift detection, human-agent handoff integrity, and performance measurement across the autonomous layer. Success metrics shift from SLA closure rates to detection precision, investigation quality, and agent decision accuracy.

Cross-functional liaison work expands as security embeds into business units. The role increasingly sits at the intersection of security operations, technology governance, and business relationship management, a combination the traditional SOC manager role rarely demanded.

Core skills
  • Agent governance frameworks
  • AI performance measurement
  • Workflow architecture
  • Risk tolerance calibration
  • Cross-functional liaison
  • Escalation policy design

The following timeline maps specific role evolutions alongside operational capability shifts across the five-year transition window, showing where evolved roles appear and when net-new roles first emerge.

Five-year role view (2025–2030)

Legend: Operational shifts · Evolved roles · New roles (no predecessor)
P1 · 2025
- Operations: Agent-assisted enrichment. Agents pre-populate every investigation.
- Operations: Reasoning-visible interfaces. Training embedded in the workflow.
- Evolved role: T1 Analyst → Agent Validator. Alert review becomes reasoning audit.
- Evolved role: Detection Engineer → Agent Detection Designer. Rules become agent-executable scenarios.

→2 · 2026
- Operations: MCP-enabled cross-tool correlation. Silos dissolve at the agent layer.
- Operations: Autonomous preliminary investigations. Known threat categories handled end-to-end.
- New role: Agent Operations Specialist. No predecessor; first appearance at scale.
- Evolved role: T2 Analyst → Investigation Coordinator. Analyst becomes orchestrator of agent teams.

P2 · 2027
- Operations: Domain-specialist agent teams. Malware, phishing, identity, cloud agents.
- Operations: Continuous compliance monitoring. Real-time posture replaces periodic audit.
- New role: Security Ontology Engineer. Knowledge frameworks that agents reason with.
- Evolved role: T3 / Senior Analyst → Agent Architect. Investigation expertise becomes system design.

→3 · 2028
- Operations: Security mesh begins forming. Agents embedded in every business unit.
- Operations: Continuous autonomous threat hunting. Hunt hypotheses execute without analyst time.
- New role: Adversarial Scenario Designer. Red team thinking applied to agent systems.
- New role: Trust Engineer. Governance and auditability for autonomous systems.

P3 · 2027–30
- Operations: Cross-org agent collaboration. Collective defense networks emerge.
- Operations: Expertise at the irreducible tier. Human work is genuinely strategic.
- New role: Cross-Domain Interpreter. Security findings translated across the business.
- Evolved role: SOC Manager / Director → Ethical Oversight Specialist. Governance of autonomous operations.
07

New Roles: No Predecessor, New Demand

The roles in this section have no real predecessor in the current security organization. They come from the structural requirements of operating autonomous agent systems at scale, requirements that did not exist when security operations were entirely human-operated. The point at which each appears on org charts tracks the phase evolution: Agent Operations Specialist is needed the moment Phase 1 agents go into production; Security Ontology Engineer becomes critical at Phase 2, when agent reasoning quality directly determines detection effectiveness; Trust and Boundary Engineering becomes a regulatory necessity by Phase 3.

Agent Operations Specialist
New Role · Phase 1+

The operational reliability function for deployed agent systems. Monitors agent performance in production, manages deployment pipelines, detects agent drift and reasoning degradation, keeps human-agent handoff integrity, and maintains observability across the agent fleet. Sits between security operations and engineering, and demands both operational instinct and systems-level technical thinking.

This is the first new role to appear at scale, needed as soon as Phase 1 agents go to production, before the deeper knowledge engineering roles are required. Its absence in early deployments is one of the most common failure modes. Organizations deploy agents and then have no systematic visibility into whether those agents are performing as expected.

Core skills
Agent monitoring and observability · Deployment pipeline management · Drift detection · Handoff protocol design · Performance metrics for AI systems · Incident response for agent failures

Former T2 analysts with strong tool proficiency and systems thinking, or DevOps/SRE engineers who develop security domain knowledge. The role rewards practitioners who are comfortable operating at the boundary between technical systems and operational process.

Security Ontology Engineer
New Role · Phase 2+

Develops and maintains the knowledge frameworks agents use to understand security concepts, attack taxonomies, organizational context, and business semantics. The quality of agent reasoning is bounded by the quality of the knowledge representation it reasons over. Poor ontologies produce agents that misclassify threats, miss contextual signals, and fail to translate findings meaningfully across business functions. This role owns the semantic infrastructure of the entire agent ecosystem.

It bridges two fields that rarely intersect in current security organizations: deep security domain expertise and knowledge engineering, including graph databases, semantic representation, NLP, and ontological modeling. Real depth in both is required. That makes the formation challenge significant and the practitioner rare.

Core skills
Knowledge graph design · Semantic representation · Attack taxonomy development · Ontological modeling · NLP fundamentals · Security domain depth · Organizational context mapping

Former T3 analysts or threat intelligence specialists who develop deep interest in knowledge engineering and structured representation. Alternatively, knowledge engineers or ontologists who invest substantially in security domain expertise. No shortcut: the role genuinely requires both halves.

Adversarial Scenario Designer
New Role · Phase 2+

Applies red team thinking to agent stress-testing. Designs attack simulations and edge cases that challenge agent reasoning, surface detection blind spots, probe confidence calibration failures, and test the boundaries of autonomous decision-making under adversarial pressure. The role makes sure autonomous detection systems are tested adversarially, not just functionally, before they are trusted with production security decisions.

It is conceptually next to traditional red teaming but adds a technical dimension: understanding how agent reasoning systems fail. An attack that would challenge a human analyst may not challenge an agent the same way, and vice versa. The Adversarial Scenario Designer has to model both failure modes at once.

Core skills
Red team methodology · Agent reasoning failure analysis · Attack simulation design · Confidence calibration testing · Edge case generation · Agentic QA methodology · Detection coverage gap analysis

Red teamers and penetration testers who develop systematic interest in AI system evaluation, or Agentic QA specialists who develop offensive security expertise. Both require deliberate cross-domain investment; neither background alone is sufficient.

Cross-Domain Interpreter
New Role · Phase 3+

As security agents embed into HR, Finance, Legal, R&D, and Operations workflows, the gap between security reasoning and business reasoning becomes a critical failure point. The Cross-Domain Interpreter makes sure agent findings are understood and acted on by non-security stakeholders. This is not a communications role. It requires real security expertise plus the ability to translate threat context into business-relevant language without losing analytical precision.

It becomes structurally essential at Phase 3, when security is no longer centralized in a SOC but distributed across every business function. The people receiving security findings are HR managers, finance controllers, and legal counsel, not security analysts. The quality of their response to those findings depends on the quality of the translation.

Core skills
Security domain expertise · Business function literacy · Risk communication · Regulatory translation · Stakeholder management · Cross-functional workflow design

T2/T3 analysts who develop strong business communication skills and cross-functional exposure over their careers. Security awareness professionals who develop deep technical grounding. Business relationship managers who invest seriously in security domain expertise.

Agent Trust & Boundary Engineer
New Discipline · Phase 2+

This role manages the trust fabric of the agent ecosystem. Often referred to as Permission Engineering, it is more precisely scoped as Agent Trust and Boundary Engineering, reflecting that the discipline covers the full communication layer, trust model, and access boundary architecture, not just permission assignment.

The IAM analogy is the right starting point but undersells the complexity. Where IAM manages relatively static human-to-system permissions, Agent Trust and Boundary Engineering has to handle permissions that are contextual and dynamic. An agent's access profile should shift based on the task it is executing, not just its identity. It has to handle the communication layer: which agents can talk to which other agents, under what conditions, with what data in the payload. It has to solve the composition problem: an orchestrator agent directing sub-agents can aggregate the permissions of multiple downstream agents into an effective capability set that no single agent was explicitly granted. And it has to operate at operational tempo, with revocation in seconds, not quarterly access reviews.
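The composition problem in particular can be shown in a few lines. This is a deliberately simplified sketch; the agent names, grant strings, and flat grant table are illustrative assumptions, not a real permission model.

```python
# Hypothetical sketch of the composition problem: an orchestrator
# directing sub-agents aggregates their permissions into an effective
# capability set that no single agent was explicitly granted.

GRANTS = {  # per-agent explicit grants (illustrative names)
    "orchestrator":   {"read:alerts"},
    "email-agent":    {"read:mailboxes"},
    "endpoint-agent": {"read:endpoints", "isolate:host"},
}

def effective_capabilities(orchestrator: str, sub_agents: list[str]) -> set[str]:
    """Union of everything reachable by directing the sub-agents."""
    caps = set(GRANTS[orchestrator])
    for agent in sub_agents:
        caps |= GRANTS[agent]
    return caps

def composition_violations(orchestrator: str, sub_agents: list[str]) -> set[str]:
    """Capabilities the orchestrator reaches transitively but was never granted."""
    return effective_capabilities(orchestrator, sub_agents) - GRANTS[orchestrator]

print(composition_violations("orchestrator", ["email-agent", "endpoint-agent"]))
# the orchestrator can now read mailboxes and isolate hosts
# without holding either grant itself
```

Analyzing every orchestration path for exactly this kind of ungranted transitive capability is the "permission composition analysis" skill listed below.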

Why this role is distinct from the Trust Engineer: The Trust Engineer governs the quality and accountability of agent reasoning, covering auditability, confidence calibration, and compliance. The Agent Trust and Boundary Engineer governs the access surface and communication boundaries within which that reasoning operates. Both are necessary. Neither is the other.

Core skills
Zero-trust policy architecture · Agent identity management · Contextual permission design · A2A communication security · MCP boundary enforcement · Permission composition analysis · Real-time revocation systems · Trust fabric design · Graduated trust modeling

IAM engineers who develop deep agent architecture knowledge; security engineers specializing in API security and data boundary enforcement who extend to agent communication layers; or compliance-focused security engineers who develop AI governance expertise. The role has no clean predecessor because the problem space itself is new; it requires assembly from multiple adjacent disciplines.

Ethical Oversight Specialist
New Role · Phase 3+

Governs the ethical, legal, and accountability dimensions of autonomous security operations. Makes sure agent systems respect privacy, maintain regulatory compliance, operate within defined ethical boundaries, and stay accountable when their decisions produce bad outcomes. The role shows up first in heavily regulated sectors, including financial services, healthcare, and critical infrastructure, where the legal implications of autonomous security decision-making land first.

It is not a security role in the traditional sense. It requires deep familiarity with regulatory frameworks, privacy law, and the organizational accountability structures needed for compliant autonomous decision-making. It is the direct organizational response to the compliance theater risk. This is the person responsible for making sure agent-generated audit trails actually reflect security reality, and that human verification sampling is enough to catch systemic agent failure before it becomes a regulatory event.

Core skills
Regulatory framework expertise · Privacy law · AI governance frameworks · Audit design for autonomous systems · Accountability structure design · Ethical decision framework · Board-level risk communication

Senior security managers or directors with broad operational exposure who develop regulatory and governance depth. GRC professionals who develop technical understanding of autonomous systems. Legal and compliance counsel who develop sufficient security domain knowledge to evaluate agent system design. The role requires both technical credibility and institutional authority.

08

The Deskilling Problem: Compliance and Audit

Nowhere are the consequences of the worst-case trajectory more immediate, more concrete, and more consistently underestimated than in compliance and audit. The compliance industry has a persistent blind spot for what happens when the controls it certifies are operated by systems rather than humans, and the industry is moving toward that condition faster than the frameworks can adapt.

What Current Frameworks Assume

SOC 2, ISO 27001, PCI-DSS, HIPAA, and their equivalents share a foundational assumption: somewhere in the chain, there is a human who understands what the control is designed to prevent, can verify that the implementation actually prevents it, and can exercise judgment about whether a deviation is material. This assumption is load-bearing. The entire framework of periodic certification rests on human attestation of human-verifiable controls.

What Continuous Agent Monitoring Does

Agents can check controls at scale, continuously, and generate audit evidence automatically. That is operationally useful, and it creates a governance gap the current compliance model is not designed to handle. An agent can verify that a control is configured as specified. It cannot verify that the specified configuration actually achieves the security intent of the control in the organization's specific environment. That gap between configuration compliance and security effectiveness has always existed, and human auditors with domain expertise have always bridged it through judgment.
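The gap can be stated as two different checks. The sketch below, with an invented policy spec, shows the one an agent can perform mechanically; the second check is precisely the one it cannot.

```python
# Sketch of the configuration-compliance vs. security-effectiveness gap.
# The policy spec and field names are illustrative assumptions.

SPEC = {"mfa_enabled": True, "session_timeout_minutes": 480}

def config_compliant(observed: dict) -> bool:
    """What an agent can verify: the configuration matches the specification."""
    return observed == SPEC

observed = {"mfa_enabled": True, "session_timeout_minutes": 480}
print(config_compliant(observed))  # True: fully "compliant" per the spec
# ...yet an eight-hour session timeout may defeat the control's intent
# on shared workstations. Whether the spec itself achieves the intent
# in this environment is the judgment a human auditor bridges.
```

An agent answering `config_compliant` truthfully and an organization being secure are two different claims; only the first is machine-checkable against a spec.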

When auditors lose the technical depth to evaluate agent-generated evidence independently, whether because they never developed it or let it atrophy, audit becomes a process of attesting to agent outputs rather than evaluating security. Compliance certification detaches from security reality. Organizations can be fully compliant, per agent-generated audit trails, while being substantively insecure.

The Structural Conflict

Autonomous agents generate the evidence that autonomous agents are audited against. If an agent misclassifies a finding, suppresses an alert below a confidence threshold, or applies flawed reasoning to a compliance check, the audit trail reflects the agent's conclusion, not the underlying reality. An auditor who cannot independently analyze the underlying data cannot detect the discrepancy. This is not a hypothetical failure mode. It is the logical endpoint of removing independent human verification capacity while keeping a certification process that requires it.

Worst Case: Phase 4

A major breach occurs in a fully certified organization. Post-incident review reveals all compliance checkpoints were satisfied, agent-generated audit trails showed no anomalies, and human analysts had no independent verification capacity. The gap between certification and security posture becomes publicly undeniable. Regulatory frameworks written for human-operated controls are exposed as inadequate for autonomous operations.

Positive Case: Phase 2+

Continuous autonomous monitoring supplements rather than replaces periodic human verification. Regulators develop frameworks requiring human-verifiable sampling of agent-generated evidence, mandatory disclosure of agent confidence calibration methods, and independent audit of agent decision architecture. Compliance becomes more rigorous than the current model, not less, because the evidence surface expands dramatically while human verification requirements are explicitly designed into the governance framework.

09

The Adversarial Asymmetry Risk

The deskilling problem has a systemic dimension that goes beyond any single organization. Defenders across the industry are moving toward automating expertise. Attackers are not constrained to follow the same path.

Sophisticated threat actors, including nation-state groups, advanced criminal organizations, and state-sponsored mercenaries, are using AI to augment human expertise, not replace it. Their operators use LLMs to accelerate reconnaissance, generate new attack content, assist with code development, and identify attack paths. But the strategic creativity, choosing targets, understanding organizational context, and identifying the specific chain of weaknesses that defeats a particular organization's controls, remains human-directed and human-developed.

If the defender side of the industry systematically deskills through automation while the attacker side uses automation to amplify existing expert capacity, the net effect is a widening expertise gap in favor of attackers. Autonomous detection systems optimized for known-pattern detection face increasingly novel attacks from adversaries who still have deep human expertise driving the targeting and strategy. Detection coverage calcifies around known TTPs while sophisticated actors operate in the gaps that trained pattern-matchers cannot see.

This asymmetry is not solved by better agents. It is solved by preserving the human adversarial imagination: the ability to hypothesize attack chains that have not yet been enumerated, to recognize threat actor creativity, and to probe defenses from an attacker's perspective. These capabilities develop through the same formation pipeline the deskilling argument identifies as at risk. The Adversarial Scenario Designer role exists precisely to preserve this capacity in institutional form. But that role can only be filled by practitioners who developed adversarial intuition through enough prior exposure, which loops back, once more, to pipeline design.

The symmetry argument

The positive case argues that defender organizations operating in the agentic model are also augmenting experts with AI rather than replacing them, and that the organizations that navigate this well will have more T3-equivalent analysts, not fewer, because agent automation of the operational tier frees the economics for senior specialist capacity they previously could not afford. The adversarial asymmetry only materializes if defender organizations actually eliminate expertise rather than redirect it. That is the choice.

10

Design Principles for the Transition

The principles below are not prescriptive implementation steps. They are the minimum design considerations that separate organizations running the positive transformation from those running the worst case by accident. Each addresses a specific mechanism through which well-intentioned AI deployment can produce the outcome it was designed to avoid.

Principle 1: Measure Developmental Case Rate

Before deploying AI that automates any analyst-facing workflow, audit the developmental content of that workflow. What percentage of the cases currently handled by the role being automated require hypothesis formation, pattern recognition, or investigative judgment rather than lookup and disposition? If the answer is below 30%, a development problem already exists and automation will compound it. If above 30%, a deliberate replacement formation mechanism must be designed before automation deploys.
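The audit itself is simple to operationalize once cases are tagged. A minimal sketch, assuming each closed case carries tags for the kind of work it required (the tag names and sample queue are illustrative):

```python
# Principle 1 as a computation: what fraction of the workflow being
# automated actually exercised developmental skills?

DEVELOPMENTAL = {"hypothesis_formation", "pattern_recognition", "investigative_judgment"}

def developmental_case_rate(cases: list[set[str]]) -> float:
    """Fraction of cases that exercised at least one developmental skill."""
    hits = sum(1 for tags in cases if tags & DEVELOPMENTAL)
    return hits / len(cases)

def audit(cases: list[set[str]]) -> str:
    rate = developmental_case_rate(cases)
    if rate < 0.30:  # the 30% threshold from the text
        return "development problem already exists; automation will compound it"
    return "design a replacement formation mechanism before automating"

queue = [
    {"lookup", "disposition"},
    {"lookup"},
    {"pattern_recognition", "disposition"},
    {"lookup", "disposition"},
]
print(developmental_case_rate(queue))  # 0.25
print(audit(queue))
```

The hard part is not the arithmetic but honest tagging: a case counts as developmental only if it genuinely required judgment, not merely because it was slow.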

Principle 2: Design for Reasoning Visibility, Not Just Output Efficiency

Investigation interfaces have to expose agent reasoning paths as a first-class product requirement, not a nice-to-have. The platform question is not "does the agent return the right answer?" but "does reviewing the agent's reasoning teach the analyst something?" Those are different optimization targets and produce different interface designs. Platforms that present conclusions without reasoning chains are headcount reduction tools. Platforms that expose reasoning chains are formation infrastructure.

Principle 3: Preserve the Override Mechanism and Take It Seriously

Analyst override of agent findings must be tracked, analyzed, and fed back into agent retraining. An override rate of zero is not evidence of excellent agent performance; it is evidence of rubber-stamp review. Override rates and patterns are the primary signal for both agent improvement and analyst development quality. Organizations should establish baseline override rate expectations and investigate both excessive overrides and insufficient ones.
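A baseline check of this kind is easy to make explicit. In the sketch below the band limits are illustrative assumptions; each organization must calibrate its own.

```python
# Principle 3 as a monitor: flag both rubber-stamp review (near-zero
# overrides) and excessive overrides. Thresholds are assumed values.

def override_review(findings_reviewed: int, overrides: int,
                    low: float = 0.01, high: float = 0.20) -> str:
    """Classify an analyst-override rate against baseline expectations."""
    rate = overrides / findings_reviewed
    if rate < low:
        return "investigate: possible rubber-stamp review"
    if rate > high:
        return "investigate: agent quality or analyst calibration problem"
    return "within baseline; feed overrides into agent retraining"

print(override_review(1000, 0))   # zero overrides is a red flag, not a success metric
print(override_review(1000, 50))  # 5% sits inside the assumed band
```

Note that both tails of the distribution trigger investigation; only the middle band is treated as healthy, which is the inversion of how most SLA-style metrics are read.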

Principle 4: Fund Formation Investment from Efficiency Savings

The economics of AI deployment in security produce efficiency savings immediately and pipeline costs five to seven years later. Organizations have to explicitly budget formation investment, including new entry pathway development, Agent QA roles, scenario design programs, and ontology engineering, as a line item funded from the efficiency gains automation produces. This does not happen by default. It requires deliberate decision-making against the efficiency gradient.

Principle 5: Treat Agent Trust and Boundary Engineering as Day-One Infrastructure

Permission engineering for agent systems, which involves defining and governing the boundaries between agents, data sources, APIs, communication layers, and organizations, is not a Phase 2 or Phase 3 concern. Every autonomous agent deployed in Phase 1 has an access surface that needs governance. The Agent Trust and Boundary Engineer function, even if initially staffed by an existing security engineer doing double duty, must exist before the first production agent goes live.

Principle 6: Require Human Verification Sampling in Compliance Frameworks

Agent-generated audit evidence must not be accepted as self-certifying. Internal governance frameworks should establish mandatory human verification sampling for compliance-relevant agent decisions. The sampling rate, methodology, and independence requirements should be documented, auditable, and treated as a control in their own right.
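One way to make the sampling methodology itself auditable is to make selection deterministic. This is a sketch under assumed parameters (a 5% rate, SHA-256 bucketing); the point is the property, not the specific scheme.

```python
# Principle 6 sketch: deterministic, documented sampling of
# compliance-relevant agent decisions for human verification.

import hashlib

def selected_for_human_review(decision_id: str, sampling_rate: float = 0.05) -> bool:
    """Deterministically select ~sampling_rate of decisions by hashing their IDs.

    Deterministic selection keeps the sample itself auditable: an external
    reviewer can recompute exactly which decisions should have been sampled
    and verify none were quietly skipped.
    """
    digest = hashlib.sha256(decision_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < sampling_rate

decisions = [f"agent-decision-{i}" for i in range(10_000)]
sampled = [d for d in decisions if selected_for_human_review(d)]
print(f"{len(sampled)} of {len(decisions)} routed to human verification")
```

The design choice worth noting: a random sample can be redrawn until it looks clean, while a hash-derived sample cannot, which is what lets the sampling itself be treated as a control.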

Principle 7: Design the Governance Tier for the People Who Will Populate It

The Agent Architects, Ethical Oversight Specialists, and Security Ontology Engineers of 2028 onwards are today's T2 and T3 analysts. Career pathways for the new roles need to be visible and accessible now, not as aspirational job descriptions but as structured development programs with clear competency milestones. Organizations that wait until Phase 3 to think about who will fill these roles will find there is no internal pipeline to draw from.

11

Cross-Profession Implications

The agentic transformation of security operations does not happen in isolation from the rest of the enterprise. As autonomous security agents embed into every business function, and as the Agentic Web infrastructure layer lets agents discover, communicate, and collaborate across organizational boundaries, every profession that touches security, directly or indirectly, is affected. The analysis below covers the most materially impacted adjacent roles.

Software and Detection Engineering

Software engineers are going through a change parallel to the security analyst's. The shift from writing deterministic code to designing cognitive loops, from debugging functions to validating trust in agent reasoning, mirrors the analyst's shift from triage to orchestration. Detection engineers face the most direct overlap: their work is now explicitly about designing logic that agents execute, which means the software engineering and security domains converge in the Detection Designer role described in Section 6.

More broadly, software engineers building security products, including platforms, investigation tools, SOAR replacements, and agent orchestration layers, need fluency in agentic engineering. The platforms they build determine whether investigation interfaces expose reasoning chains (formation infrastructure) or present only conclusions (headcount reduction tools). That design choice, made by product engineers, has downstream consequences for the analyst pipeline that most engineering teams have not thought about.

The skills shift for this cohort is from writing deterministic code to designing cognitive systems: reasoning chains, memory architecture, tool orchestration, trust validation, and the economics of autonomous operation, meaning model costs, retry loops, memory management, and latency. The engineering discipline that emerges from this transition is what the Role Evolution framework calls Agentic Engineering: treating cognition not as a feature but as infrastructure.

Legal, Compliance, and HR

Legal counsel faces the convergence of two previously separate domains: security law and AI governance law. As autonomous agents make consequential security decisions, including containment actions, evidence preservation, and breach notification triggers, legal teams need to understand both the evidentiary standards those decisions produce and the liability exposure when autonomous systems act wrongly. The legal professional who can evaluate whether an agent-generated evidence chain meets admissibility standards, or whether an autonomous containment action creates exposure, becomes valuable in ways that have no current parallel.

Compliance professionals face the most immediate and concrete change. The compliance frameworks they administer were written for human-operated controls. As autonomous agents take over compliance monitoring, evidence generation, and control verification, compliance professionals have to develop the technical depth to evaluate agent-generated evidence independently. That means understanding what agent confidence scores mean, what systematic failure modes look like, and how to design human verification sampling that actually catches agent errors. Accepting agent audit trails uncritically is the compliance theater failure mode described in Section 8.

HR professionals are among the earliest non-security users of embedded security agents. The employee offboarding scenario, where an HR Security Agent discovers potential data exfiltration and escalates to security analysts, is a Phase 2 reality, not a distant possibility. HR professionals need to know how to interact with security agents, what their outputs mean, what escalation thresholds apply, and what their own responsibilities are when agents surface findings. The cross-domain literacy required is modest in technical depth but broad in scope: enough security understanding not to misinterpret agent findings, enough process understanding to know when to escalate and to whom.

IT Operations and Product Management

IT Operations increasingly operates at the interface between security agents and the infrastructure they monitor. Security-focused change management becomes agent-assisted. Change requests flow through agents that do real-time security impact assessment before human approval. IT operations professionals need to understand what security agents are evaluating when they assess a change, what findings are blocking versus advisory, and how to provide the contextual information agents need to make accurate assessments. The historical pattern of IT and security operating in separate silos is incompatible with a security mesh architecture.

Product Managers for security platforms face a new capability planning challenge. They need to understand agent economics, including token costs, retry loops, memory architecture, and latency tradeoffs, as first-class product constraints. They need to treat trust metrics and human-AI collaboration UX patterns as product requirements, not afterthoughts. And they need to make the formation versus efficiency tradeoff explicit in every platform decision. Does this feature design preserve analyst development value, or remove it? That question is probably not on most product management radars.

Executive Leadership

The executive layer faces a different kind of change, not of technical skills but of decision-making infrastructure. The security briefings executives receive will increasingly be synthesized by agents rather than prepared by analysts. The risk metrics they act on will increasingly be generated by autonomous monitoring rather than human assessment. The vendor decisions they make will determine whether their organizations have the independent capacity to verify that autonomous systems are functioning correctly, or whether they have ceded that capacity permanently to external providers.

Three capabilities become essential at the executive level that were rarely required before. First, the ability to ask the right questions of agent-generated security summaries: not "what does this say?" but "what is this agent capable of missing?" Second, the organizational governance to maintain human verification capacity as a deliberate investment against efficiency pressure. Third, the ethical framework to recognize when autonomous security operations are approaching boundaries, including privacy and regulatory limits, that require human authority rather than agent judgment.

Executives who lack these capabilities will make resource allocation decisions that look rational in the short term and catastrophic in hindsight, just as MSSP procurement decisions looked rational in 2012 and look very different from the vantage point of 2026.

12

Cross-Role Skills: Rising and Declining

The role-specific skills covered in Sections 5 to 7 describe what individual practitioners need for particular functions. Underneath those specifics, certain skills become table stakes across every role in the security organization, and certain skills that currently carry high market value are being systematically devalued by automation. Understanding both categories matters for individual practitioners planning development investments and for organizations designing workforce transition programs.

Skills Rising Across All Roles

Agent output critical evaluation is the foundational universal skill in an agentic future. Every security role, from T1 equivalent to CISO, requires the ability to evaluate agent-generated findings with skepticism. Not reflexive rejection or uncritical acceptance, but the informed judgment to distinguish high-confidence routine findings from low-confidence ones that need human attention. This skill requires both domain expertise (knowing what the agent should find) and systems understanding (knowing how agents fail and what their failure modes look like).

Prompt literacy, the ability to formulate precise investigative queries, structure agent directives, and design prompt templates that produce reliable outputs, is not a specialist skill. It is as fundamental to agentic-era security work as query syntax was to the SIEM. Practitioners who cannot construct effective prompts cannot extract the investigative value agents offer. Those who can construct high-quality prompts operate at far higher effectiveness than those who cannot. This is a trainable skill with high return on investment at every level of the security organization.

Human-AI collaboration patterns, meaning knowing when to delegate, when to intervene, how to structure handoffs, and how to maintain accountability when agents are acting autonomously, are a new category of judgment with no close equivalent in pre-agentic security work. The analyst who instinctively knows when an agent's 82% confidence finding deserves another look, and the manager who knows which decisions should never be delegated to agents regardless of confidence score, are both exercising this judgment. It develops through experience with agent systems, but organizations can accelerate its development through deliberate case review programs.

Data source and tool-chain awareness, meaning an understanding of which MCP servers are connected to which data, what agents can and cannot access, and where coverage gaps exist, determines whether practitioners can evaluate the completeness of agent findings, not just their accuracy. An agent that concludes "no anomalous activity detected" means something very different depending on whether it had access to email logs, network traffic, and endpoint telemetry or only to authentication events. A practitioner who does not understand the agent's data surface cannot properly interpret its conclusions.
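The interpretive move described above can be encoded directly: attach the coverage caveat to the verdict. In this sketch the source names and the required-coverage set are illustrative assumptions.

```python
# Data-surface awareness: the same "no anomalous activity" verdict
# means different things depending on what the agent could see.

REQUIRED_FOR_FULL_COVERAGE = {"email", "network", "endpoint", "auth"}

def qualify_verdict(verdict: str, connected_sources: set[str]) -> str:
    """Attach the coverage caveat a practitioner should read alongside the verdict."""
    gaps = REQUIRED_FOR_FULL_COVERAGE - connected_sources
    if not gaps:
        return f"{verdict} (full coverage)"
    return f"{verdict} (blind to: {', '.join(sorted(gaps))})"

print(qualify_verdict("no anomalous activity detected", {"auth"}))
# "no anomalous activity detected (blind to: email, endpoint, network)"
```

The caveat, not the verdict, is what the practitioner has to interpret; a platform that surfaces only the verdict string is hiding exactly the information this skill depends on.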

Ethics and bias recognition, including identifying when autonomous operations produce systematically biased outcomes and when automated decisions are approaching ethical or legal boundaries, becomes a required competency at every level as autonomous systems take on more consequential security decisions. This is not primarily a technical skill. It is a combination of domain judgment, ethical awareness, and the professional confidence to flag concerns about systems that others may trust.

Skills Declining in Value

Identifying skills in decline is not a commentary on the practitioners who hold them. It is a practical map of where retraining investment is most urgent. The skills below are functions agents handle with growing competence, reducing the market premium for human performance of the same tasks.

Manual alert triage and enrichment are the most immediate casualties. Correlation of known indicators, enrichment of IP addresses and domains against threat intelligence databases, basic pattern matching against known signatures. These are exactly the tasks Phase 1 agents execute reliably and at scale. Practitioners whose primary value proposition is speed and accuracy on these tasks face the most urgent need for transition.

Template-based report writing, meaning the structured documentation of investigation steps, findings, and recommendations in standardized formats, is a Phase 1 automation target. Agents produce structured investigation reports as a native output. The human value-add shifts from writing the report to reviewing and attesting to its accuracy, identifying what the agent missed, and adding the contextual judgment that makes a technically accurate report operationally useful.

Rule-based escalation decisions, meaning determining whether an alert meets defined thresholds for escalation to a higher tier, are straightforward automation targets. The agent's confidence scoring and escalation logic replaces the lookup-based decision that makes up much of current T1 escalation work. What does not automate is the judgment call on edge cases. The alert that does not meet escalation criteria by the numbers but has characteristics an experienced analyst recognizes as worth a second look. That judgment requires the deep, tactical expertise the formation pipeline is designed to produce.
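The split between the automatable lookup and the edge-case judgment can be sketched as a disposition function. The thresholds, field names, and width of the "edge band" are illustrative assumptions, not any product's logic; the design point is that the ambiguous band is routed to a human rather than silently auto-closed.

```python
# Minimal sketch: rule-based escalation with an explicit edge band
# reserved for human judgment. All thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    severity: int            # e.g. 1 (low) .. 5 (critical)
    agent_confidence: float  # agent's confidence the alert is benign, 0..1

def disposition(alert: Alert, escalate_sev: int = 4,
                benign_conf: float = 0.95, edge_band: float = 0.15) -> str:
    """Automate the clear cases; route the ambiguous band to a human."""
    if alert.severity >= escalate_sev:
        return "escalate"        # rule-based: meets threshold by the numbers
    if alert.agent_confidence >= benign_conf:
        return "auto-close"      # high-confidence benign
    if alert.agent_confidence >= benign_conf - edge_band:
        return "human-review"    # the edge cases worth a second look
    return "escalate"            # low confidence: treat as suspicious

disposition(Alert(severity=2, agent_confidence=0.85))  # -> "human-review"
```

The "human-review" branch is the formation surface: shrinking it to zero is what converts an efficiency gain into a pipeline problem.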

Basic log querying without analytical depth, including running predefined searches, extracting known log fields, and populating spreadsheets with filtered data, is absorbed by agent tool-use through MCP server connections. The practitioner who knows how to write a SIEM query keeps their value. The practitioner whose primary contribution is executing predefined queries without analytical interpretation does not.

The 30% Diagnostic

One operational metric collapses much of the skills-in-decline analysis into a single, measurable organizational indicator. Before any AI deployment decision, security leaders should audit the function being automated against a single question: what percentage of the cases currently handled by this role require real hypothesis formation, meaning genuine investigative judgment rather than lookup and disposition? If the answer is below 30%, the organization already has a development problem and automation will compound it. If above 30%, a deliberate replacement formation mechanism has to be designed before automation deploys.

The same metric works as a continuous operational monitor post-deployment. If fewer than 30% of the cases routed to junior analysts for review, the structured sample preserved for developmental purposes, require hypothesis formation or real judgment, the sample is not serving its formation purpose. It is producing the appearance of analyst development without any real substance.
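The diagnostic reduces to a few lines of arithmetic. The case schema below is hypothetical; the only element taken from the text is the 30% threshold.

```python
# Sketch of the 30% diagnostic as a running monitor.
# Case schema ("requires_hypothesis") is an assumption for illustration.
def development_rate(cases: list) -> float:
    """Fraction of cases requiring genuine hypothesis formation
    rather than lookup-and-disposition."""
    if not cases:
        return 0.0
    return sum(c["requires_hypothesis"] for c in cases) / len(cases)

def formation_sample_healthy(cases: list, threshold: float = 0.30) -> bool:
    """True if the developmental sample still serves its purpose."""
    return development_rate(cases) >= threshold

sample = ([{"requires_hypothesis": True}] * 2
          + [{"requires_hypothesis": False}] * 8)
formation_sample_healthy(sample)  # 20% judgment cases -> False
```

Pre-deployment, the same function answers the audit question: run it over the role's current caseload before deciding what to automate.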

Skills declining fastest
  • Manual alert triage and enrichment
  • Template-based investigation reporting
  • Rule-based escalation decisions
  • Predefined log query execution
  • Basic data correlation without interpretation
  • Signature-based pattern matching
  • SLA-driven ticket closure
Skills rising in all roles
  • Agent output critical evaluation
  • Prompt literacy and query design
  • Human-AI collaboration judgment
  • Data source and tool-chain awareness
  • Ethics and bias recognition in autonomous systems
  • Hypothesis formation and adversarial creativity
  • Cross-functional communication of agent findings

Formation Pathways: A Practical Map

The roles described in Sections 5 to 7 do not populate themselves. For every Agent Architect, Security Ontology Engineer, and Trust and Boundary Engineer the industry needs from 2028 onwards, there has to be a practitioner who made specific development investments earlier. The map below covers those investment paths, for existing practitioners making the transition and for new entrants coming into the security profession through agent-era pathways that did not previously exist.

Existing Practitioners: Transition Paths

T1 Analysts face the most urgent and most tractable transition. The target roles, Agent Validator in the near term and Agent Operations Specialist in the medium term, build on existing alert pattern knowledge while adding the agent-specific layer: understanding of agent reasoning systems, confidence calibration, override tracking, and MCP data surface and scope mapping. The transition takes roughly 12 to 18 months of deliberate development, most of which can happen in role if the organization designs the Phase 1 deployment to expose reasoning chains rather than just outputs. T1 analysts who develop strong agent evaluation instincts in Phase 1 are well positioned for Agent Operations roles in Phase 2.

T2 Analysts have a clear path to Investigation Coordinator through Phase 2 deployment experience. The transition is largely natural if the Phase 2 environment is well designed. The investment required is in two areas. Multi-agent workflow management, which requires exposure to coordinating multiple simultaneous agent investigations. And evidence chain evaluation for legal and compliance, which requires working with legal counsel on what agent-collected evidence means in a formal setting. T2 analysts who develop interest in the architectural layer have a longer but high-value path to Agent Architect through cross-training in agent engineering fundamentals.

T3 / Senior Analysts have the clearest path to the highest-value new roles, and face the most significant identity transition. The Agent Architect role requires externalizing expertise, meaning translating investigation intuition into explicit system design. This is cognitively demanding in a specific way. Most investigators have never been asked to articulate their reasoning at the level of precision required for agent architectures. Development programs should include structured knowledge elicitation exercises, agent design workshops with immediate feedback loops, and mentored agent-building projects that test whether their investigative knowledge can be successfully encoded. Senior analysts who develop Security Ontology Engineering skills also need investment in knowledge graph foundations and semantic representation, which is roughly a 6 to 12 month development period alongside existing responsibilities.

Detection Engineers have a relatively direct path to Agent Detection Designer, with the primary development requirement being agent workflow design. Understanding of how agents execute detection logic, what evidence chain specifications look like in practice, and how to design graceful failure modes. Most detection engineers already have the security domain knowledge and analytical precision the role requires. The gap is mostly technical. MCP tool design, confidence threshold calibration, and the mechanics of autonomous evidence preservation.

SOC Managers transitioning to Agent Operations Lead face a split development path depending on their existing strength. Those with strong technical backgrounds should invest in AI governance frameworks and agent performance measurement methodologies. Those with stronger operational backgrounds should invest in the technical foundations of agent deployment and observability. Both need significant development in liaison and business relationship management skills, which have traditionally been peripheral to SOC management and will become central in the distributed mesh model.

Transition Timeline Reference: Development Guide
Each path reads current role → Phase 2 target → Phase 3 target, followed by the primary development investment.
  • T1 Analyst → Agent Validator → Agent Operations Specialist. Investment: agent reasoning evaluation; confidence calibration; MCP data surface awareness
  • T2 Analyst → Investigation Coordinator → Agent Architect (long path). Investment: multi-agent workflow management; legal evidence standards; hypothesis methodology
  • T3 / Senior Analyst → Agent Architect → Security Ontology Engineer. Investment: agent engineering fundamentals; knowledge elicitation; ontological modeling
  • Detection Engineer → Agent Detection Designer → Adversarial Scenario Designer. Investment: agent workflow design; evidence chain specification; failure mode analysis
  • Red Teamer / Pen Tester → Adversarial Scenario Designer → Adversarial Scenario Designer (senior). Investment: agent reasoning failure analysis; agentic QA methodology; confidence boundary testing
  • SOC Manager → Agent Operations Lead → Ethical Oversight Specialist. Investment: AI governance frameworks; agent performance measurement; cross-functional liaison
  • IAM / Security Engineer → Agent Trust & Boundary Engineer → Trust fabric architecture (senior). Investment: contextual permission systems; A2A communication security; composition analysis

New Entrants: Agent-Era Formation Paths

The security profession is creating entry pathways that did not exist five years ago. These are not shortcuts around foundational security expertise. They are different formation routes that build domain depth through different mechanisms. The paths below are viable agent-era entry points for practitioners coming into security for the first time.

Agent QA and Validation is the most accessible new entry pathway for practitioners with analytical backgrounds but limited prior security exposure. The work involves testing agent reasoning quality, identifying hallucination patterns, evaluating confidence calibration, and designing stress test scenarios for detection logic. Practitioners in this role build security domain knowledge through continuous exposure to what agents get right and wrong. The failure analysis requires understanding what a correct result would have looked like. A two-year cycle in this role, with structured mentorship from senior analysts, can produce practitioners with real investigation judgment and strong agent systems knowledge at the same time.

Security Ontology Engineering is accessible to practitioners from knowledge engineering, information architecture, and data modeling backgrounds who invest seriously in security domain development. The formal knowledge engineering skills are transferable. The security domain expertise has to be built. Organizations that hire from this background should design explicit cross-training programs, rotating ontology engineering candidates through investigation workflows, threat intelligence analysis, and detection engineering.

Trust and Boundary Engineering is accessible from identity and access management, API security, and network security engineering backgrounds. The contextual permission design and agent communication security aspects need development, but the foundational governance and access control knowledge will transfer well. This is likely the most immediate hiring opportunity for practitioners with strong IAM backgrounds who want to move into a higher-complexity, higher-value specialization.

Organizational Investment Requirements

Formation pathways exist only if organizations fund them. Three specific investment categories determine whether the positive transformation trajectory is achievable or remains aspirational.

Structured development case programs have to be designed and maintained as operational infrastructure, not as training add-ons. That means identifying which agent-resolved cases carry the most developmental value, routing them to appropriate analysts, building the structured review interfaces that expose agent reasoning, and tracking development outcomes. The cost is operational. Analyst time spent on development cases instead of production throughput. The payoff is a senior analyst bench that exists in 2030.
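The routing step can be sketched as a scoring and selection pass over agent-resolved cases. All fields, weights, and the 10% review quota below are assumptions for illustration; the design point is that developmental cases are selected deliberately rather than left to whatever happens to overflow the automation.

```python
# Hypothetical sketch of development-case routing: score resolved cases
# for developmental value and reserve the top slice for analyst review.
def developmental_value(case: dict) -> float:
    """Weight novelty, required judgment, and agent uncertainty.
    Weights are illustrative assumptions."""
    return (2.0 * case["novel_pattern"]
            + 1.5 * case["required_hypothesis"]
            + (1.0 - case["agent_confidence"]))

def select_development_cases(cases: list, quota: float = 0.10) -> list:
    """Reserve the top `quota` fraction of resolved cases for
    structured analyst review instead of auto-archiving."""
    ranked = sorted(cases, key=developmental_value, reverse=True)
    return ranked[: max(1, int(len(ranked) * quota))]
```

Tracking the hypothesis-formation rate of the selected slice over time is what ties this routing back to the 30% diagnostic.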

Cross-training investment for the new specialist roles requires dedicated time and budget. T3 analysts developing agent architecture skills. Detection engineers developing adversarial scenario design. IAM engineers developing agent boundary expertise. These transitions do not happen on the margins of existing responsibilities. They require protected development time, access to agent engineering tooling, mentorship from those who have already made the transition, and explicit permission to prioritize development over short-term operational productivity.

Career pathway visibility is the most underinvested and most immediately actionable requirement. The new roles described in this paper need to be visible as defined career destinations with explicit competency requirements, progression milestones, and compensation recognition. Not as job descriptions that will be written when needed. Practitioners make development investments based on the career landscape they can see. Organizations that make the new roles visible now will find candidates self-selecting into development paths that serve the organization. Organizations that wait will find themselves unable to populate the roles they will urgently need when Phase 2 and Phase 3 arrive.


Conclusion

The traditional SOC, centralized, human-bound, and reactive, was never designed to withstand the scale, speed, and sophistication of today's threat environment. Its replacement by a distributed, agent-powered security fabric is no longer a question of if but how. The technology is more than capable, the economics are aligned, and the organizational change is already underway. The Natural Language or Agentic Web may provide the blueprint for the infrastructure layer that makes cross-organizational agent collaboration practical rather than theoretical.

What is not yet decided is whether that change preserves or destroys the human expertise layer that makes autonomous systems trustworthy. The argument of this paper is not that the change should be resisted, but that it has to be intentional and designed. Every section points to the same conclusion from a different angle. The positive trajectory requires active organizational choices against the efficiency gradient. Those choices are not technologically complex. They are economically inconvenient. The industry has failed to make them when given the opportunity twice before, in 2012 with MSSP and in 2019 with MDR.

AI is the third wave. Unlike the first two, it has the capability to finish the pipeline damage the first two waves began. That capability is also what makes the design decision more consequential, not less. The same technology that can hollow out the analyst pipeline can, if deployed with formation intent, produce a security profession more capable and more sustainable than anything the centralized SOC ever achieved.

The roles described in Sections 5 through 7, from the Agent Validator to the Ethical Oversight Specialist and from the Security Ontology Engineer to the Agent Trust and Boundary Engineer, are the human architecture of a more capable profession. They are not speculative. They are the logical endpoints of decisions security leaders are making today about how to deploy AI, how to preserve developmental value, how to govern autonomous systems, and how to invest in the practitioners who will populate these roles when the roles become critical.

The practitioners who will fill the governance tier of the 2028 to 2030 autonomous SOC are working somewhere in security today. Whether they develop the expertise required depends on whether the organizations they work for invest in their formation or automate it away. Whether compliance frameworks meaningfully constrain autonomous security operations, or certify fictional security postures, depends on whether regulators and organizations develop the human verification capacity now. Whether the adversarial asymmetry narrows or widens depends on whether the industry preserves the adversarial creativity and threat intuition no agent can currently replicate.

These are tractable problems. They are not primarily technology problems. They are organizational design problems, workforce investment problems, and governance design problems of the kind security leadership is well equipped to solve, once they recognize that the threat is structural and self-inflicted rather than external and beyond their control.

The security profession is not facing a technology problem. It is facing a formation problem that a technology transition is about to make permanent. The practitioners who will govern the autonomous security systems of 2030 are somewhere in the industry today, working their way through the volume and repetition that builds the judgment those systems will require to stay trustworthy. Whether they complete that formation, or whether that formation gets automated away before it finishes, is a choice security leaders are making right now, in purchasing decisions and platform design choices and headcount models, without necessarily realizing they are making it.

The recomposition of security work is inevitable. Its outcome is not.

References and Context

This paper synthesizes original research from Command Zero, including "The Evolution of the SOC" and "The Natural Language Web: Designing for Agent-Human Coexistence," developed by Dean de Beer, CTO & Cofounder, Command Zero.

The four-phase SOC evolution model draws on the Command Zero distributed agentic SOC architecture, incorporating the Agent Communication and Discovery Protocol (ACDP), Model Context Protocol (MCP), and A2A agent communication standards.

The pipeline analysis in Section 2 incorporates arguments from "The Hollow Middle: How We Gutted the SOC Analyst Pipeline Before AI Ever Showed Up," referenced with attribution as an independent corroborating source for the MSSP/MDR pipeline damage argument. The 30% developmental case rate metric in Section 13 derives from the same source.

The Agent Trust and Boundary Engineering role definition in Section 7, and its treatment as a discipline distinct from the Trust Engineer role, represents original analytical work developed in the context of this research. It extends beyond traditional IAM frameworks to address the contextual permission, communication layer governance, and permission composition challenges specific to autonomous multi-agent security systems.

The cross-profession implications analysis in Section 11 extends the role framework beyond the security organization to address the full scope of enterprise transformation implied by distributed security mesh architecture.