ML Anomaly Detection

3-Model ML Ensemble for Behavioral Anomalies

Rules catch known threats. Machine learning catches the ones you haven't imagined yet. DAT runs three complementary ML models in parallel to detect anomalous agent behavior in real time — all in pure TypeScript, with zero external dependencies.

Three Models, Zero Blind Spots

Each model catches a different class of anomaly. Together, they cover the full threat surface.

Defense in Depth, Powered by ML

A single anomaly detection model is a single point of failure. If an attacker understands the model, they can evade it. DAT's ensemble approach makes evasion dramatically harder: fooling the Isolation Forest still triggers the Autoencoder, and evading both still leaves the LSTM watching for temporal patterns.

  • Isolation Forest — Detects point anomalies. Agents that suddenly behave unlike any agent the system has seen before are flagged instantly
  • Autoencoder — Learns the "shape" of normal behavior. When an agent's pattern deviates from the learned representation, the reconstruction error spikes
  • LSTM Predictor — Watches sequences over time. An agent that gradually shifts its behavior window-by-window is caught by the LSTM even when individual windows look normal
  • Pure TypeScript — No Python, no TensorFlow, no external ML runtime. The entire pipeline runs in Node.js, deploys in Docker, and scales with your existing infrastructure
3-Model Ensemble Architecture
==============================

Events -> Feature Extraction (14 dims)
       -> 1-minute sliding windows

Model 1: Isolation Forest
  Type:   Point anomaly detection
  How:    Random partitioning trees
  Finds:  Single-event outliers
          "This request is bizarre"

Model 2: Autoencoder (Neural Net)
  Type:   Pattern anomaly detection
  How:    Compress -> reconstruct
  Finds:  Behavioral shifts
          "This agent changed"

Model 3: LSTM Predictor
  Type:   Sequence anomaly detection
  How:    Temporal prediction
  Finds:  Gradual drift
          "This trend is dangerous"

Ensemble Decision:
  Any model flags HIGH -> alert
  2+ models flag MEDIUM -> alert
  Consensus required for LOW
  -> feeds into investigation
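The ensemble decision rules above can be sketched in a few lines of TypeScript. The names here (`Severity`, `ModelVerdict`, `decide`) are illustrative, not DAT's actual API:

```typescript
// Decision rules: any HIGH alerts immediately; two or more MEDIUMs alert;
// a LOW result requires consensus from all three models and is routed
// to investigation rather than raising an alert directly.
type Severity = "none" | "low" | "medium" | "high";

interface ModelVerdict {
  model: "isolationForest" | "autoencoder" | "lstm";
  severity: Severity;
}

function decide(verdicts: ModelVerdict[]): "alert" | "investigate" | "pass" {
  if (verdicts.some((v) => v.severity === "high")) return "alert";
  const mediums = verdicts.filter((v) => v.severity === "medium").length;
  if (mediums >= 2) return "alert";
  // LOW only counts when every model agrees; feeds the investigation queue.
  if (verdicts.every((v) => v.severity === "low")) return "investigate";
  return "pass";
}
```

Because each rule checks a different aggregation (any, count, all), an attacker cannot suppress an alert by defeating a single model.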

14 Behavioral Dimensions

Every minute of agent activity is compressed into a 14-dimensional feature vector that captures the full behavioral surface.

What Gets Measured Gets Monitored

Most anomaly detection systems watch simple metrics like error rates. DAT measures 14 dimensions of behavior per one-minute window, including features specifically designed to catch sophisticated attacks: silence gaps that indicate a compromised agent going dark, latency deviations that suggest resource hijacking, and geographic spread that flags impossible travel.

  • Activity Metrics — Actions per minute, unique actions, unique targets. Catches both floods and unusual diversification
  • Performance Metrics — Average response time, error rate, block rate. Sudden slowdowns or failure spikes are immediately visible
  • Financial Metrics — Total amount, maximum single amount. Detects unusual transaction patterns before damage is done
  • Temporal Metrics — Hour of day, day of week. An agent that suddenly operates at 3 AM when it normally works 9-to-5 is flagged
  • Watchdog Features — Time since last event, event frequency, latency deviation. These three features (added in Phase 3) specifically target silent compromises and selective reporting
14-Feature Vector (v2)
==============================

Per 1-minute sliding window:

Activity:
  [0] actionsPerMinute     # volume
  [1] uniqueActions        # diversity
  [2] uniqueTargets        # target spread

Performance:
  [3] avgResponseTime      # latency
  [4] errorRate            # failure %
  [5] blockRate            # denied %

Financial:
  [6] totalAmount          # spend
  [7] maxAmount            # single max

Temporal:
  [8] hourOfDay            # 0-23
  [9] dayOfWeek            # 0-6

Spatial:
  [10] geoSpread           # IP diversity

Watchdog (Phase 3):
  [11] timeSinceLastEvent  # silence gap
  [12] eventFrequencyPerHr # reporting
  [13] latencyDeviation    # std dev

Thresholds:
  silence_gap > 30 min -> alert
  latency_deviation > 500ms -> alert
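A minimal TypeScript sketch of the feature vector and the two hard watchdog thresholds above. Field names mirror the diagram; the `watchdogAlerts` helper is illustrative, not DAT's actual interface:

```typescript
// The 14 behavioral dimensions extracted per one-minute sliding window.
interface FeatureVector {
  actionsPerMinute: number;    // [0]  volume
  uniqueActions: number;       // [1]  diversity
  uniqueTargets: number;       // [2]  target spread
  avgResponseTime: number;     // [3]  latency (ms)
  errorRate: number;           // [4]  failure %
  blockRate: number;           // [5]  denied %
  totalAmount: number;         // [6]  spend
  maxAmount: number;           // [7]  single max
  hourOfDay: number;           // [8]  0-23
  dayOfWeek: number;           // [9]  0-6
  geoSpread: number;           // [10] IP diversity
  timeSinceLastEvent: number;  // [11] silence gap (ms)
  eventFrequencyPerHr: number; // [12] reporting rate
  latencyDeviation: number;    // [13] std dev (ms)
}

// Hard thresholds fire regardless of model scores: a silence gap over
// 30 minutes or a latency deviation over 500 ms raises an alert.
function watchdogAlerts(v: FeatureVector): string[] {
  const alerts: string[] = [];
  if (v.timeSinceLastEvent > 30 * 60 * 1000) alerts.push("silence_gap");
  if (v.latencyDeviation > 500) alerts.push("latency_deviation");
  return alerts;
}
```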

Anomalies Feed the Trust System

Detection is step one. DAT closes the loop by automatically triggering investigations, freezing trust, and enabling DVN consensus.

From Alert to Action in Seconds

Most anomaly detection ends at an alert. Someone reads the alert, opens a ticket, and maybe investigates next week. In DAT, anomaly detections automatically feed into the trust scoring system, trigger investigations, enable shadow scoring, and surface in the admin console with full context. The entire loop — from behavioral anomaly to frozen agent — can happen without human intervention.

  • Trust Signal Generation — HIGH anomalies generate fraud signals (-10 trust). MEDIUM generates violation signals (-5 trust). Scores adjust in real time
  • Investigation Escalation — Critical anomalies automatically escalate the agent to investigating state, freezing its trust at 30 while shadow scoring continues
  • Shadow Scoring — While an agent is under investigation, its "shadow" trust score continues computing in the background. If exonerated, the agent recovers 50% of held gains
  • DVN Consensus — Anomaly context is available to the Decentralized Verifier Network for community-driven consensus on whether the agent should be exonerated or blacklisted
  • SIEM Forwarding — Every anomaly detection event is forwarded to your enterprise SIEM (Splunk, Sentinel, Elastic) in real time via the SIEM webhook export pipeline
Anomaly -> Trust System Pipeline
==============================

1. EVENT INGESTION
   Chatbot, Monitoring, Reputation
   -> POST /webhook/events
   -> Redis buffer (max 10K)

2. AUTO-TRAINING
   >= 100 events -> train ensemble
   Models learn "normal" per-agent

3. REAL-TIME SCORING
   New event -> extract features
   -> Isolation Forest score
   -> Autoencoder reconstruction
   -> LSTM sequence prediction
   -> Ensemble decision

4. TRUST IMPACT
   HIGH   -> fraud signal (-10)
          -> auto-investigate
          -> freeze at 30
   MEDIUM -> violation signal (-5)
          -> admin notification
   LOW    -> logged for review

5. INVESTIGATION FLOW
   Frozen agent:
   -> Shadow scores computed
   -> DVN audit requested
   -> Admin reviews evidence
   -> Exonerate (50% recovery)
      or Blacklist (permanent)

6. SIEM EXPORT
   -> Splunk / Sentinel / Elastic
   -> Ed25519 signed envelope
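Step 4 of the pipeline (trust impact) maps each anomaly severity to a trust signal. This sketch assumes illustrative names (`AnomalySeverity`, `TrustSignal`, `toTrustSignal`) rather than DAT's actual types:

```typescript
// Severity-to-signal mapping from the pipeline diagram:
// HIGH -> fraud signal (-10), auto-investigate, trust frozen at 30;
// MEDIUM -> violation signal (-5), admin notification;
// LOW -> logged for later review.
type AnomalySeverity = "high" | "medium" | "low";

interface TrustSignal {
  kind: "fraud" | "violation" | "log";
  trustDelta: number;
  autoInvestigate: boolean; // true freezes trust at 30, shadow scoring continues
}

function toTrustSignal(sev: AnomalySeverity): TrustSignal {
  switch (sev) {
    case "high":
      return { kind: "fraud", trustDelta: -10, autoInvestigate: true };
    case "medium":
      return { kind: "violation", trustDelta: -5, autoInvestigate: false };
    case "low":
      return { kind: "log", trustDelta: 0, autoInvestigate: false };
  }
}
```

Keeping the mapping in one pure function makes the trust impact auditable: the same severity always produces the same signal.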
3 ML Models · 14 Feature Dimensions · Pure TypeScript Implementation · Real-time Detection Speed

Catch Threats Your Rules Can't See

Deploy ML-powered behavioral monitoring that learns what "normal" looks like for each agent and flags deviations in real time.