Agentic Scanning
A methodology for continuously discovering, triaging, and coordinating security work across distributed infrastructure.
Traditional security scanners run on schedules - weekly, daily, or hourly at best. They execute fixed playbooks, produce static reports, and require manual interpretation. Agentic scanning breaks this pattern.
Agentic scanning is a continuous paradigm in which multiple role-scoped agents coordinate their behaviour based on findings and historical context. Instead of executing rigid scripts, agents coordinate through an orchestrator, persist patterns in vector memory, and refine targeting through feedback loops. The system throttles safely, avoids overloading targets, and prioritises the highest-risk surfaces.
In agentic scanning, agents are role-scoped and policy-constrained. They generate judgments, coordination signals, and candidate actions, but they do not execute enforcement themselves. Actions that affect live systems require separate governance, explicit human review, and bounded authority.
This page describes a methodology and architectural pattern used in modern security automation. Agentic scanning is not the same as autonomous enforcement - it addresses discovery and triage only. Execution of remediation or enforcement actions requires separate governance, such as an earned autonomy framework where authority is explicitly granted by operators.
Architecture
A typical agentic scanning system operates as a multi-agent orchestrator coordinating specialised agents. Each agent has a defined role, maintains its own state, and shares context through a central memory store.
Example Agent Roles
The following are examples of agent roles commonly found in agentic scanning systems:
Discovery Agent:
- Observes and seeds initial targets from known infrastructure lists, endpoints, and peer observations
- Enriches targets with ASN, geolocation, provider data
- Tracks network topology changes over time
Port Scanner Agents:
- Scan target ranges continuously for open ports
- Use backoff strategies and target-aware throttling to avoid rate limiting
- Classify new services as they appear
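The backoff behaviour described above can be sketched as a simple retry loop. This is a minimal illustration only: `scan_fn`, the delay parameters, and the use of `ConnectionRefusedError` as the throttle signal are assumptions for the sketch, not the interface of any specific scanner.

```python
import random
import time

def scan_with_backoff(scan_fn, target, max_attempts=5, base_delay=1.0, cap=60.0):
    """Retry a scan with exponential backoff and full jitter when the target throttles us."""
    for attempt in range(max_attempts):
        try:
            return scan_fn(target)
        except ConnectionRefusedError:
            # Full jitter: sleep a random amount up to the exponential ceiling
            delay = min(cap, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    return None  # give up quietly; the orchestrator can reschedule the target
```

Jittered backoff spreads retries out in time, which keeps a fleet of scanner agents from hammering the same target in lockstep.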
Protocol Fingerprinting Agent:
- Correlates open ports to detailed service fingerprints
- Uses banner grabs, protocol-specific heuristics, and model-assisted classification
- Produces confidence scores (0-1) for each identification
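One simple way to produce a 0-1 confidence score from several independent signals is a weighted vote. The signal names and weights below are illustrative assumptions, not a prescribed scheme:

```python
def fingerprint_confidence(signals):
    """Combine weighted identification signals into a confidence score in [0, 1].

    Each signal is a (matched, weight) pair; the score is the weight-share
    of the signals that matched.
    """
    total = sum(weight for _, weight in signals)
    if total == 0:
        return 0.0
    return sum(weight for matched, weight in signals if matched) / total

# Banner grab agrees, TLS heuristic agrees, model classifier disagrees
signals = [(True, 0.5), (True, 0.3), (False, 0.2)]
score = fingerprint_confidence(signals)
```

Multi-signal scoring like this is what lets downstream agents treat a banner match backed by protocol heuristics differently from a banner match alone.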
CVE Correlation Agent:
- Maps detected services and versions to known vulnerabilities
- Cross-references CVE databases, vendor advisories, and exploit availability
- Surfaces remediation recommendations (does not apply them)
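A toy version of the correlation step might look like the sketch below. The `ADVISORIES` index and the prefix-matching rule are hypothetical simplifications; real correlators parse structured version ranges from CVE/NVD and vendor feeds.

```python
# Hypothetical local advisory index: (product, affected-version prefix) -> CVE ids
ADVISORIES = {
    ("openssh", "8.9"): ["CVE-2023-38408"],
    ("nginx", "1.18"): ["CVE-2021-23017"],
}

def correlate(product, version):
    """Return CVE ids whose affected-version prefix matches the detected version."""
    return [
        cve
        for (prod, prefix), cves in ADVISORIES.items()
        for cve in cves
        if prod == product and version.startswith(prefix)
    ]
```

Note that the output is a list of identifiers for an operator to review, consistent with the rule that this agent surfaces recommendations rather than applying them.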
Web Probe Agent:
- Collects HTTP/HTTPS headers, TLS certificate data, and content signals
- Classifies common misconfigurations: default pages, exposed admin panels, weak ciphers
- Respects robots.txt and rate limits
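The classification step can be separated from collection: once headers have been gathered (politely, within rate limits), misconfiguration checks are pure functions over the header map. The checks and finding names below are illustrative, not an exhaustive ruleset:

```python
def classify_headers(headers):
    """Flag common misconfigurations in an already-collected HTTP header map."""
    findings = []
    if "Strict-Transport-Security" not in headers:
        findings.append("missing_hsts")
    if headers.get("Server", "").lower().startswith("apache/2.2"):
        findings.append("legacy_server_banner")  # end-of-life version advertised
    if "X-Frame-Options" not in headers and "Content-Security-Policy" not in headers:
        findings.append("clickjacking_risk")
    return findings
```

Keeping classification separate from collection also makes the checks trivially testable without touching the network.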
Compliance Mapping Agent:
- Maps findings to control frameworks (SOC2, ISO 27001, MiCA)
- Generates compliance gap reports for operator review
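At its core, compliance mapping is a lookup from finding types to control references. The finding names and control ids below are illustrative placeholders; a real mapping would come from an auditor-reviewed control matrix.

```python
# Hypothetical mapping from finding types to control references (illustrative only)
CONTROL_MAP = {
    "weak_tls_cipher": ["ISO27001:A.8.24", "SOC2:CC6.7"],
    "exposed_admin_panel": ["ISO27001:A.8.9", "SOC2:CC6.1"],
}

def gap_report(findings):
    """Group open findings under the controls they put at risk, for operator review."""
    gaps = {}
    for finding in findings:
        for control in CONTROL_MAP.get(finding, []):
            gaps.setdefault(control, []).append(finding)
    return gaps
```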
The Orchestrator
The orchestrator coordinates agent activity and enforces policy constraints. Common orchestrator responsibilities include:
- Prioritising targets: Selecting scan targets based on risk signals, network role, and time since last scan
- Balancing regional queues: Distributing work across geographic regions to avoid overload
- Tracking state: Monitoring hygiene streaks, flapping events, and persistent exposures
- Enforcing policy: Applying rate limits, connection concurrency caps, and retry backoff
The orchestrator ensures agents operate within operator-defined, safe, and non-intrusive boundaries.
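Target prioritisation can be sketched as a weighted score over risk, network role, and time since last scan. The dict keys and the 0.5/0.3/0.2 weights are assumptions chosen for illustration, not a recommended tuning:

```python
import time

def target_priority(target, now=None):
    """Score a target for the scan queue; higher scores are scanned sooner.

    `target` is a dict with hypothetical keys: risk (0-1), role_weight (0-1),
    and last_scanned (unix timestamp).
    """
    now = time.time() if now is None else now
    staleness = min(1.0, (now - target["last_scanned"]) / 86400)  # saturate after a day
    return 0.5 * target["risk"] + 0.3 * target["role_weight"] + 0.2 * staleness
```

The staleness term is what keeps low-risk targets from being starved indefinitely: even a quiet host eventually rises in the queue.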
Vector Memory
All fingerprints, banners, and service signatures can be embedded into a vector memory store. This enables:
- Fast similarity recall: Match new findings against historical patterns
- Deduplication: Suppress repeated alerts for known states
- Pattern refinement: Improve fingerprint accuracy over time via feedback
Vector memory transforms agentic scanning from reactive detection to context-aware triage.
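The novelty check behind deduplication is a nearest-neighbour comparison over embeddings. A minimal in-memory sketch, assuming a cosine-similarity threshold (real systems use an approximate-nearest-neighbour index instead of a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class VectorMemory:
    """A finding is 'novel' only if nothing already stored is close to it."""
    def __init__(self, threshold=0.95):
        self.embeddings = []
        self.threshold = threshold

    def is_novel(self, embedding):
        return all(cosine(embedding, e) < self.threshold for e in self.embeddings)

    def store(self, embedding):
        self.embeddings.append(embedding)
```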
Strengths and Risks
Comparison with Static Scanners
| Feature | Traditional Scanners | Agentic Scanning |
|---|---|---|
| Execution Model | Scheduled (weekly/daily) | Continuous, event-driven |
| Targeting | Fixed playbooks | Policy-adjusted based on findings |
| Context Awareness | None | Historical memory and network topology |
| Throttling | Global rate limits | Target-aware backoff and regional balancing |
| False Positives | High (single-signal fingerprints) | Lower (multi-signal, confidence-weighted) |
| Output | Manual interpretation | Candidate actions with context |
Strengths
- Coverage: Continuous operation catches transient exposures that scheduled scans miss
- Prioritisation: Risk-weighted triage focuses operator attention on what matters
- Repeatability: Deterministic agent behaviour produces consistent, auditable results
- Institutional memory: Vector memory accumulates context that improves accuracy over time
Risks
- Noisy signals: High-frequency scanning can generate alert fatigue without proper deduplication
- False positives: Model-assisted fingerprinting may misclassify services, especially novel ones
- Scope creep: Without bounded authority, scanning systems may probe beyond intended targets
- Automation overreach: Operators may assume findings are actionable without validation
Failure Modes
Agentic systems can fail in ways that static scanners cannot:
- Hallucinated recommendations: Model-based agents may propose fixes that don't apply to the actual vulnerability
- Compounding errors: Incorrect fingerprints propagate through the pipeline, producing cascading false positives
- Feedback loops: Policy-adjusted prioritisation may over-focus on certain targets while neglecting others
- Operator fatigue: Continuous alerting can desensitise teams, causing genuine issues to be ignored
These failure modes reinforce why agentic scanning should be separated from enforcement. Discovery and triage benefit from automation; execution requires human review and explicit authority grants.
Implementation Snapshot
Below is pseudocode showing how agents might coordinate. It is illustrative only; it does not imply automated remediation or production readiness.
```
# Orchestrator loop (illustrative pseudocode)
while True:
    targets = discovery_agent.get_high_priority_targets()
    for target in targets:
        # Port scan with backoff
        open_ports = port_scanner_agent.scan(target, throttle=True)
        for port in open_ports:
            # Fingerprint service
            fingerprint = fingerprint_agent.classify(target, port)
            # Check vector memory for similar patterns
            if vector_memory.is_novel(fingerprint.embedding):
                # New pattern detected
                cves = cve_agent.correlate(fingerprint)
                if cves:
                    # Calculate risk and surface recommendation
                    risk = scoring_engine.compute_risk(target, cves)
                    orchestrator.surface_candidate(target, risk, cves)
                # Store in vector memory for future recall
                vector_memory.store(fingerprint.embedding, fingerprint.metadata)
    # Adjust priorities based on findings
    orchestrator.update_priorities(targets)
```
This feedback loop - scan, classify, correlate, prioritise - represents the core pattern. Actual implementations vary significantly based on infrastructure constraints, compliance requirements, and operational context.
Safety and Governance
Agentic scanning systems should enforce strict boundaries:
Detection vs Enforcement Separation:
- Scanning agents observe, classify, correlate, and propose - they do not execute
- Remediation and enforcement require separate systems with explicit operator authority
- This separation prevents automated overreach
Operational Constraints:
- Bounded scope: Agents operate only on explicitly defined target sets
- Least privilege: Each agent has minimal permissions required for its function
- Audit logs: All agent actions are logged for review and forensics
- Human review: Irreversible actions (blocking, quarantine) require operator sign-off
Ethical Boundaries:
- Legal, non-intrusive probes only: banner grabs, TLS handshakes, metadata collection
- Respect for operator controls: honor robots.txt and allowlists where applicable
- Conservative defaults: rate limits and connection concurrency tuned to avoid impact
- Responsible disclosure: findings communicated privately before public reporting
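The "conservative defaults" constraint above is commonly implemented as a per-target token bucket. A minimal sketch, with illustrative rate and burst parameters:

```python
import time

class TokenBucket:
    """Per-target rate limiter: at most `rate` probes/sec, with bursts up to `burst`."""
    def __init__(self, rate=1.0, burst=5):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        """Consume one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Keeping one bucket per target (rather than one global limit) is what allows the orchestrator to balance regional queues without letting any single host absorb the whole scan budget.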
Automation does not mean aggression. Agentic systems should prioritise ecosystem health over exhaustive coverage.
For questions about agentic scanning methodology or implementation patterns, contact the research team.
Related Research
XDP Inline Defense for Validators: Kernel-Level Protection at Line Rate
Validator nodes face constant exposure. This deep dive explains how NullRabbit Guard uses eBPF and XDP to enforce security directly inside the NIC driver, dropping scans and abnormal traffic at line rate before they reach the kernel or your node.
Validator Slashing Incidents Are a Warning. Sui Could Be Next.
Recent Ethereum validator slashings showed how fragile infra can be. Our scan of Sui uncovered something worse: nearly 40% of validator voting power exposed.
Sui Validator Network Exposed: Nearly 40% at Risk
NullRabbit's August 2025 scan of the Sui validator set revealed nearly 40% of voting power exposed to SSH, CVEs, and misconfigurations - leaving the network one step away from consensus failure.
