
XDP Inline Defense for Validators: Kernel-Level Protection at Line Rate

NullRabbit Labs - 7 min read

Kernel-Level Enforcement for a Network That Never Sleeps

Your validator's IP gets added to a scan list. Within 24 hours, it's probed 40,000 times across every port. Your cloud firewall logs it. Your IDS sees it. iptables processes every single packet. But by then, your kernel has already allocated memory, parsed headers, and burned CPU cycles on traffic that shouldn't have made it past the NIC.

And most likely, you never saw any of it.

Traditional packet handling in Linux is a bit like security theatre at airports - lots of activity, but it happens after you've already let everyone through the door.

NullRabbit Guard does something different. Instead of filtering traffic in the kernel or userspace, we enforce security at the earliest possible point: directly inside the NIC driver, at the XDP layer.

That means malicious packets get dropped before the kernel ever sees them. No skb allocation. No conntrack. No CPU waste. Just instant DROP decisions at line rate.

Why Inline Defense Matters for Validators

Validator nodes are uniquely exposed:

  • They run predictable services (RPC, consensus, health endpoints).
  • Their IPs are often static and publicly known.
  • They cannot hide behind load balancers.
  • Their availability directly affects network safety and liveness.

Traditional setups protect after the fact:

  • iptables / nftables - after kernel allocation
  • agents, IDS, WAFs - after userspace handling
  • cloud firewalls - outside the node, slow to adapt

By the time these see a packet, you've already paid the performance tax. None of them can stop:

  • High-rate scans pushing 100k+ packets/second
  • Malformed consensus packets
  • RPC abuse traffic
  • Protocol-level fuzzing
  • Burst probes across 65,535 ports

Whatever the vector, the kernel has already spent memory and CPU cycles on the packet before any of these tools gets a say.

Inline defense solves this. With XDP, Guard drops traffic:

  • before skb allocation
  • before conntrack
  • before socket handling
  • before it touches your node

We're talking microseconds, not milliseconds. And it scales to 10Gbps+ without breaking a sweat.

What XDP Actually Is

XDP (eXpress Data Path) is a high-performance hook inside the Linux networking stack that lets you run eBPF programs at the earliest possible point - right inside the NIC driver, before packets even hit the kernel.

Think of it as a bouncer at the door instead of security checking tickets halfway through the venue.

XDP programs can:

  • Run before the kernel's networking stack
  • Bypass netfilter (and therefore iptables and nftables) entirely
  • DROP, PASS, or REDIRECT packets instantly
  • Process packets in microseconds at line rate
  • Scale to 10+ Gbps without eating CPU

For validator operators: better protection, lower latency, and effectively zero overhead on the hot path.
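
To make that concrete, here is a minimal, standalone XDP sketch - not Guard's actual program - that drops IPv4 traffic from one hardcoded source address and passes everything else. The program name and address are purely illustrative; a real deployment would drive the decision from BPF maps populated by userspace.

// Minimal standalone XDP sketch (illustrative only, not Guard's code):
// drop IPv4 packets from a single hardcoded source, pass everything else.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_example(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    // Bounds checks are mandatory: the verifier rejects any access past
    // data_end, which is a big part of why XDP programs are safe to run
    // inside the driver.
    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;

    // Illustrative hardcoded address (203.0.113.7). A real deployment
    // would look this up in a BPF map populated from userspace.
    if (ip->saddr == bpf_htonl(0xCB007107))
        return XDP_DROP;    // gone before any skb is allocated

    return XDP_PASS;        // hand everything else to the normal stack
}

char LICENSE[] SEC("license") = "GPL";

A program like this can be attached and detached in a single command with iproute2 (ip link set dev eth0 xdp obj prog.o sec xdp, then ip link set dev eth0 xdp off), which matters later when we get to fail-open deployment.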

How Guard Works

           Incoming Traffic
                  ↓
        [ NIC Driver / XDP Hook ]
        ┌─────────────────────────┐
        │ 1. Parse headers        │
        │ 2. Lookup flow state    │
        │ 3. Apply block/allow    │
        │ 4. Emit events (sample) │
        └─────────────────────────┘
                  ↓
        [ Regular Kernel Networking ]
                  ↓
             Validator Node

Guard has two coordinated components:

1. XDP Fast-Path Program (in-kernel)

  • Extracts minimal metadata (IP, ports, flags)
  • Detects new flows
  • Applies allow/block decisions instantly
  • Emits sample events for userspace correlation
  • Always fails open for safety

2. Userspace Control Plane

  • Enriches sampled flows with Sentinel intelligence
  • Builds behavioural fingerprints of scanners
  • Updates eBPF maps in real time
  • Pushes protocol profiles and port policies
  • Learns continuously without touching the fast path

The XDP program stays minimal and deterministic. All the complex analysis happens in userspace, where it can't impact packet processing.
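
As a rough illustration of that split, the sketch below shows the userspace side of the sampling path using libbpf's ring buffer API. The event layout, map pin path, and function names are assumptions made for the example, not Guard's actual interfaces.

// Hedged control-plane sketch: poll sampled flow events out of a BPF ring
// buffer and hand them to the analysis layer, entirely off the fast path.
#include <stdio.h>
#include <linux/types.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

struct sample_event {            // illustrative layout; must match the XDP side
    __u32 src_ip, dst_ip;
    __u16 src_port, dst_port;
    __u8  proto, action;
};

static int handle_event(void *ctx, void *data, size_t len)
{
    const struct sample_event *e = data;

    // Enrichment, fingerprinting, and Sentinel correlation would happen
    // here; nothing in this loop can slow down packet processing.
    printf("sampled flow %08x:%u -> %08x:%u proto=%u action=%u\n",
           e->src_ip, e->src_port, e->dst_ip, e->dst_port,
           e->proto, e->action);
    return 0;
}

int main(void)
{
    // Assumes the XDP loader pinned its ring buffer map at this path.
    int map_fd = bpf_obj_get("/sys/fs/bpf/guard/events");
    if (map_fd < 0)
        return 1;

    struct ring_buffer *rb = ring_buffer__new(map_fd, handle_event, NULL, NULL);
    if (!rb)
        return 1;

    for (;;)
        ring_buffer__poll(rb, 100 /* ms */);
}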

The Fast Path: How Packets Are Classified

Guard's XDP logic is intentionally minimal, for both performance and verifier safety.

Simplified pseudo-code:

int guard_xdp(struct xdp_md *ctx) {
    struct packet_meta meta = parse_headers(ctx);
    if (!meta.valid) return XDP_PASS;

    struct flow_key key = {
        .src_ip = meta.src_ip,
        .dst_ip = meta.dst_ip,
        .src_port = meta.src_port,
        .dst_port = meta.dst_port,
        .proto = meta.proto,
    };

    // 1. Precomputed allow/block decisions
    __u8 *rule = bpf_map_lookup_elem(&rules_map, &key);
    if (rule) {
        return *rule == RULE_ALLOW ? XDP_PASS : XDP_DROP;
    }

    // 2. First packet of the flow → classify + record
    struct flow_state *state = bpf_map_lookup_elem(&flows_map, &key);
    if (!state) {
        struct flow_state st = classify_initial_packet(&meta);
        bpf_map_update_elem(&flows_map, &key, &st, BPF_ANY);
        emit_sample_event(&meta, &st);
        return st.action == ACTION_ALLOW ? XDP_PASS : XDP_DROP;
    }

    // 3. Existing flow → reuse decision
    return state->action == ACTION_ALLOW ? XDP_PASS : XDP_DROP;
}
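
For context, the maps referenced above could be declared along these lines. These are BTF-style definitions; the exact map types, sizes, and value layouts are illustrative rather than Guard's real configuration.

// Illustrative map declarations backing the pseudo-code above.
struct {
    __uint(type, BPF_MAP_TYPE_HASH);          // operator/Sentinel-pushed rules
    __uint(max_entries, 65536);
    __type(key, struct flow_key);
    __type(value, __u8);                      // RULE_ALLOW / RULE_BLOCK
} rules_map SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_LRU_HASH);      // cached per-flow verdicts;
    __uint(max_entries, 1 << 20);             // LRU eviction ages out old flows
    __type(key, struct flow_key);
    __type(value, struct flow_state);
} flows_map SEC(".maps");

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);       // sampled events for userspace
    __uint(max_entries, 256 * 1024);
} events SEC(".maps");

Hash lookups are O(1), and an LRU map bounds memory without any cleanup logic on the fast path - which is exactly what the design principles below require.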

Design principles:

  • Headers only on the fast path - no payload inspection
  • First-packet classification - decide once, cache the verdict
  • Encrypted-friendly - TLS/gRPC/QUIC all work fine
  • Deterministic O(1) map lookups - no loops, no unpredictable branches
  • Fail-open - if something breaks, traffic flows

What Guard Can See on Encrypted Traffic

Guard does not decrypt packets. We're not doing DPI, and we don't need to.

Instead, we look at what's already visible:

Metadata:

  • IP / port relationships
  • TCP flags & options
  • Packet size distributions
  • TLS handshake lengths
  • gRPC preface patterns

Behaviour:

  • Connection spikes (one IP → 5000 connections in 10 seconds)
  • Port sweeps (sequential scans across 1-65535)
  • High-failure RPC ratios (95% RST packets)
  • Suspicious consensus handshakes (wrong packet sizes)
  • Repeated short-lived connections (connect → RST → repeat)

You'd be surprised how obvious malicious traffic is when you look at the shape of it rather than the content. A masscan sweep looks nothing like legitimate RPC traffic, even when both are encrypted.
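
To show how simple some of these heuristics can be, here is a sketch of a port-sweep detector the control plane might run: flag a source that touches an unusually large number of distinct destination ports inside a short window. The thresholds and surrounding bookkeeping are illustrative, not Guard's tuned values.

// Hedged sketch of one control-plane heuristic: flag a source IP as a
// port sweep when it hits many distinct destination ports in a short window.
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <time.h>

#define SWEEP_WINDOW_SEC  10
#define SWEEP_PORT_LIMIT  100

struct src_tracker {
    uint8_t  port_seen[65536 / 8];   // bitmap of destination ports touched
    uint32_t distinct_ports;
    time_t   window_start;
};

static bool note_port(struct src_tracker *t, uint16_t dst_port)
{
    time_t now = time(NULL);

    // Reset the window so old activity doesn't accumulate forever.
    if (now - t->window_start > SWEEP_WINDOW_SEC) {
        memset(t->port_seen, 0, sizeof(t->port_seen));
        t->distinct_ports = 0;
        t->window_start = now;
    }

    // Count each destination port only once per window.
    if (!(t->port_seen[dst_port / 8] & (1 << (dst_port % 8)))) {
        t->port_seen[dst_port / 8] |= (1 << (dst_port % 8));
        t->distinct_ports++;
    }

    // Legitimate RPC clients hit one or two ports; masscan hits hundreds.
    return t->distinct_ports > SWEEP_PORT_LIMIT;
}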

Guard catches:

  • masscan-style scanning
  • structured reconnaissance
  • malformed consensus traffic
  • RPC abuse
  • fuzzing probes

All without touching the payload.

Sentinel → Guard: Closed-Loop Defense

Guard isn't a standalone firewall. It's the enforcement layer for Sentinel, our distributed scanning fabric.

The loop works like this:

  1. Sentinel discovers attackers scanning your validator's IP ranges
  2. It fingerprints their behaviour (packet timing, scan patterns, misconfigs)
  3. Sentinel pushes a rule update to Guard
  4. Guard updates eBPF maps in real time
  5. Future packets from that fingerprint get dropped at the NIC
  6. Validator risk scores update automatically
  7. Exposure history gets logged for on-chain reporting

You get a self-updating defense that learns from attacks across the entire network. What Sentinel sees scanning any validator, Guard can block on your validator - before the attack even starts.
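
Steps 3-5 are deliberately boring in implementation terms: a verdict from Sentinel becomes a single map update that the fast path sees on the very next packet. The sketch below assumes a pinned per-source blocklist map; the pin path, map name, and value semantics are illustrative.

// Hedged sketch: turn a Sentinel verdict into a DROP entry in a pinned
// source-IP blocklist map. Names and paths are assumptions for the example.
#include <bpf/bpf.h>
#include <linux/types.h>

int push_block_rule(__u32 attacker_ip_be)    // address in network byte order
{
    // Assumes the Guard loader pinned a per-source blocklist here.
    int map_fd = bpf_obj_get("/sys/fs/bpf/guard/blocked_src_map");
    if (map_fd < 0)
        return -1;

    __u8 verdict = 1;    // non-zero: drop at the NIC

    // One syscall updates the live map; the in-kernel program sees the
    // new entry immediately - no reload, no restart, no dropped flows.
    return bpf_map_update_elem(map_fd, &attacker_ip_be, &verdict, BPF_ANY);
}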

Deployment Safety: Fail-Open by Design

Validator uptime can't depend on a security tool. If Guard breaks, your node needs to keep running.

That's why we built it to fail open:

  • Fail-open eBPF design - if the program unloads, traffic passes through
  • Graceful map fallbacks - missing rules default to PASS, not DROP
  • Userspace crash safety - the XDP program keeps running even if the control plane dies
  • No single point of failure - the kernel doesn't care if our daemon restarts

You get inline defense without risking liveness. Security that doesn't become an outage.

What's Next

Right now, Guard handles:

  • L3/L4 enforcement at the NIC
  • Validator port fingerprinting
  • RPC anomaly detection
  • Closed-loop adaptation with Sentinel

We're working on:

  • Chain-specific models - Sui, Arweave, Cosmos each have different traffic patterns
  • Deeper flow tagging - distinguishing RPC from consensus traffic
  • On-chain exposure publishing - cryptographically verifiable attack logs
  • Multi-cloud deployment automation - one-click deploys for AWS, GCP, bare metal
  • Network-wide threat attribution - who's scanning what, and why

Guard is in private beta. If you're running validators and want kernel-level inline defense that actually works at scale, reach out directly at [email protected].

We're looking for operators who understand that security at the application layer is already too late.
