NullRabbit Research Hub

Open technical reports and benchmarks from NullRabbit Labs on validator, DePIN, and agentic security intelligence.

By NullRabbit Labs

The NullRabbit Research Hub is a growing library of open studies, datasets, and benchmarks that advance decentralized infrastructure security. Our research covers validator security, agentic scanning, autonomous security intelligence, and DePIN security.

All research is conducted using non-intrusive, reproducible methodologies designed to provide transparency and insight into ecosystem health, not to name and shame operators or produce competitive rankings.

Our Mission

Decentralized networks operate in the open, but their security posture remains opaque. Validators, RPC operators, and DePIN edge nodes run critical infrastructure, yet there's no standardized way to measure external exposure, patch latency, or systemic concentration risks.

NullRabbit measures the outside-in security posture of decentralized infrastructure. We publish:

  • Monthly benchmarks: Aggregate security scores and trends for major networks
  • Interactive heatmaps: Geographic and provider distribution visualizations
  • Methodology papers: Detailed explanations of scoring models and scan techniques
  • Open datasets: Versioned, reproducible data for independent analysis

Our goal: provide the visibility and accountability that decentralized networks need to mature.

Featured Studies

Validator Security Benchmarks

Sui Validator Security Report - September 2025 →

Our comprehensive analysis of the Sui validator set found:

  • 39.6% of voting power exposed via SSH and CVE-affected services
  • 28% of validators running software with known vulnerabilities
  • Only 18.5% of validators met the "good hygiene" threshold (score >70)
  • Significant provider concentration (HCI: 0.21; see the sketch below)

This benchmark identified systemic risks approaching the 33% consensus failure threshold.
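
The concentration figure above is easy to reproduce if we read "HCI" as a Herfindahl-style index, i.e. the sum of squared provider stake shares. That reading is our assumption here, and the provider split below is hypothetical:

```python
# A Herfindahl-style concentration index: the sum of squared stake shares.
# Assumption: this is what "HCI" denotes; the report's exact definition may differ.

def concentration_index(stake_by_provider: dict[str, float]) -> float:
    """Ranges from 1/N (stake spread evenly over N providers) up to 1.0."""
    total = sum(stake_by_provider.values())
    if total <= 0:
        raise ValueError("no stake recorded")
    return sum((s / total) ** 2 for s in stake_by_provider.values())

# Hypothetical provider split (percent of total stake) that lands near 0.21:
example = {"cloud-a": 34, "cloud-b": 20, "cloud-c": 16,
           "cloud-d": 12, "cloud-e": 10, "bare-metal": 8}
print(round(concentration_index(example), 2))  # -> 0.21
```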

Sui Validator Network Exposed →

A deep dive into specific exposure patterns and security implications for the Sui network.

Sui Billion Dollar Liability →

Analysis of the economic impact and systemic risks from validator exposures.

Sui Validator Slashing Warning →

Critical security alerts and remediation guidance for Sui validator operators.

Research Methodology

Our research follows strict principles designed to balance transparency with responsible disclosure.

1. Non-Intrusive Scanning

All scans use low-interaction, non-intrusive reconnaissance techniques:

  • Port scans: TCP SYN scans at conservative rates
  • Service fingerprinting: Banner grabs and TLS handshakes
  • Version detection: Heuristic matching against known signatures
  • No exploitation: We never attempt to exploit vulnerabilities

Scans are designed to be indistinguishable from routine network monitoring.
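
As a rough illustration of the banner-grab and TLS-handshake steps, here is a minimal standard-library sketch; the production scanners are more involved, and the target host below is a placeholder:

```python
# Minimal sketch of banner grabbing and TLS fingerprinting with only the
# standard library. It reads what services volunteer and never sends payloads.
import hashlib
import socket
import ssl
import time

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Connect and read whatever the service announces (e.g. an SSH banner)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(256).decode("utf-8", errors="replace").strip()
        except socket.timeout:
            return ""  # silent protocols (e.g. HTTP) announce nothing unprompted

def tls_cert_fingerprint(host: str, port: int = 443, timeout: float = 3.0) -> str:
    """Complete a TLS handshake and hash the peer certificate (DER bytes)."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # fingerprinting, not validating trust
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            return hashlib.sha256(der).hexdigest()

for host in ["validator.example.org"]:   # placeholder target list
    print(grab_banner(host, 22))         # typically "SSH-2.0-OpenSSH_..."
    time.sleep(1.0)                      # conservative pacing between probes
```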

2. Reproducibility

Every report includes:

  • Scan timestamps: When data was collected
  • Tool versions: Which scanning agents and libraries were used
  • Configuration details: Rate limits, retry policies, target lists
  • Code snippets: Sample implementations for key algorithms

Independent researchers can verify our methodology and reproduce findings.
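
The kind of manifest this implies might look like the following sketch; the field names and versions are illustrative, not NullRabbit's actual schema:

```python
# Sketch of a scan manifest that pins the details a reader needs to reproduce
# a report. All values below are illustrative placeholders.
import json
from datetime import datetime, timezone

manifest = {
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "tools": {
        "scanner": "sentinel-agent 1.4.2",   # hypothetical agent version
        "fingerprint_db": "2025-09-01",      # hypothetical signature set
    },
    "config": {
        "syn_rate_pps": 100,                 # conservative packets per second
        "retries": 2,
        "timeout_s": 3.0,
        "target_list": "sui-validators-2025-09.txt",  # placeholder
    },
}
print(json.dumps(manifest, indent=2))
```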

3. Privacy & Responsible Disclosure

We balance public transparency with operator privacy:

Public Reports:

  • Aggregate network-wide scores and distributions
  • Anonymized case studies (no identifying details)
  • Systemic concentration metrics (provider/region clustering)

Private Communications:

  • Individual validator scores shared only with verified operators
  • Detailed remediation guidance provided privately
  • Coordination with network foundations for critical findings

Operators can request their own scan results at any time via our Discord bot.

4. Dataset Versioning

All datasets are versioned and archived:

  • Monthly snapshots: Capture network state at regular intervals
  • Version control: Track changes in scan coverage and methodology
  • Hash attestations: On-chain proofs of dataset integrity (when supported)

This enables longitudinal analysis: how has validator hygiene changed over 3, 6, or 12 months?
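
A minimal sketch of the hashing step, assuming a snapshot published as a flat file (the filename is a placeholder):

```python
# Pin a monthly snapshot by content hash so any later analysis can prove it
# ran against the exact published data.
import hashlib

def dataset_digest(path: str) -> str:
    """Stream the snapshot file through SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(dataset_digest("sui-validators-2025-09.csv"))  # hypothetical snapshot
```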

5. On-Chain Transparency

For networks that support it, we publish:

  • Aggregate score hashes: Cryptographic proof of benchmark timestamps
  • Trend summaries: Whether network hygiene is improving or regressing
  • Compliance attestations: Verification that scans followed stated methodology

On-chain publishing builds trust and prevents retroactive manipulation of results.
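
What gets committed can be as small as a digest plus a trend flag. The sketch below assumes a JSON report; the numbers are illustrative, and the publication mechanism (memo, event, etc.) depends on the target chain:

```python
# Build an attestation payload: a digest of the full report plus the
# human-readable trend. Report values here are illustrative only.
import hashlib
import json

report = {"network": "sui", "period": "2025-09",
          "mean_score": 61.4, "trend": "improving"}   # illustrative values
digest = hashlib.sha256(
    json.dumps(report, sort_keys=True).encode()
).hexdigest()
attestation = {"report_sha256": digest,
               "period": report["period"],
               "trend": report["trend"]}
print(json.dumps(attestation))
```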

Interactive Tools

Exposure Heatmaps

Geographic Heatmap →

Visualize where validators are located and where exposures concentrate:

  • Color-coded regions by exposure severity
  • Provider and ASN breakdowns
  • Clustering analysis (are validators geographically diverse or concentrated?)
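
Behind each heatmap cell sits a simple aggregation. A minimal sketch, assuming each validator record carries a region label and an exposure severity on a 0-10 scale (both hypothetical here):

```python
# Group validators by region and report count plus mean exposure severity,
# the raw material for one geographic heatmap cell.
from collections import defaultdict

validators = [  # hypothetical records
    {"region": "eu-central", "severity": 7.5},
    {"region": "eu-central", "severity": 2.0},
    {"region": "us-east",    "severity": 4.0},
]

by_region: dict[str, list[float]] = defaultdict(list)
for v in validators:
    by_region[v["region"]].append(v["severity"])

for region, sevs in sorted(by_region.items()):
    print(f"{region}: n={len(sevs)} mean_severity={sum(sevs)/len(sevs):.1f}")
```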

Validator Discord Bot

Sentinel Discord Bot →

Operators can:

  • Request on-demand rescans of their validators
  • Retrieve their current hygiene score
  • Get remediation guidance for specific findings
  • Track hygiene trends over time

All interactions are private and require validator ownership verification.
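
One plausible shape for that verification is a signed-nonce challenge; NullRabbit's actual flow may differ. A sketch of the cryptographic core (using the `cryptography` package):

```python
# The bot issues a one-time nonce, the operator signs it with the validator's
# Ed25519 key, and the bot checks the signature against the claimed key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

def verify_ownership(pubkey: bytes, nonce: bytes, signature: bytes) -> bool:
    """True iff `signature` over `nonce` matches the claimed validator key."""
    key = ed25519.Ed25519PublicKey.from_public_bytes(pubkey)
    try:
        key.verify(signature, nonce)
        return True
    except InvalidSignature:
        return False

# Self-test with a throwaway key pair:
sk = ed25519.Ed25519PrivateKey.generate()
nonce = b"nullrabbit-rescan-challenge"   # hypothetical one-time nonce
pub = sk.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)
assert verify_ownership(pub, nonce, sk.sign(nonce))
```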

Benchmarking Schedule

NullRabbit publishes monthly benchmarks for major networks:

Network     Coverage                     Cadence
Sui         Full validator set           Monthly (1st week)
Aptos       Full validator set           Monthly (2nd week)
Celestia    Full validator set           Quarterly
Solana      Sample (top 200 by stake)    Quarterly

Additional networks are added based on community interest and data availability.

How to Use This Research

For Validators & Operators

  • Benchmark your hygiene: Compare your score against network averages
  • Track improvements: Monitor whether your remediations are effective
  • Learn from peers: See anonymized examples of common exposures

For Network Foundations

  • Ecosystem health checks: Monitor validator set security trends
  • Incentive design: Use hygiene scores to inform delegation algorithms
  • Coordinated patching: Identify when network-wide CVE remediation is needed

For Researchers & Auditors

  • Methodology validation: Review our scan techniques and scoring models
  • Independent analysis: Download datasets and perform your own studies
  • Collaboration: Propose new metrics or research directions

For Delegators & Stakeholders

  • Informed delegation: Choose validators with strong security hygiene
  • Systemic risk awareness: Understand network-wide concentration risks
  • Advocate for improvements: Use data to push for baseline security standards

Contribute & Collaborate

We welcome collaboration from:

  • Network foundations: Partner on coordinated security initiatives
  • Security researchers: Propose new scan techniques or metrics
  • Validator operators: Provide feedback on scoring fairness and remediation guidance
  • Academic institutions: Co-author papers or validate methodologies

Contact us via Discord, Twitter, or email: [email protected]

Citation

If you use NullRabbit datasets or methodology in your research, please cite:

NullRabbit Labs. (2025). Validator Security Benchmark [Dataset].
Retrieved from https://nullrabbit.ai/research
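
For LaTeX users, an equivalent BibTeX entry might look like this (the entry key is illustrative):

```bibtex
@misc{nullrabbit2025validator,
  author       = {{NullRabbit Labs}},
  title        = {Validator Security Benchmark},
  year         = {2025},
  howpublished = {Dataset},
  url          = {https://nullrabbit.ai/research}
}
```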

All datasets are released under Creative Commons Attribution 4.0 (CC BY 4.0). You are free to share and adapt with proper attribution.

Roadmap

Upcoming research areas:

  • Cross-network comparative analysis: How does Sui hygiene compare to Aptos, Celestia, etc.?
  • Longitudinal studies: 6-month and 12-month trend analysis
  • Predictive models: Can we forecast which validators will experience drift?
  • Compliance mapping: SOC2, ISO 27001, and MiCA control coverage
  • On-chain reputation integration: Fusing security posture with validator performance

Stay Updated

The NullRabbit Research Hub exists to make decentralized infrastructure more secure, transparent, and trustworthy. We measure what matters and publish the results openly.
