Rules catch known threats. Machine learning catches the ones you haven't imagined yet. DAT runs three complementary ML models in parallel to detect anomalous agent behavior in real time — all in pure TypeScript, with zero external dependencies.
Each model catches a different class of anomaly. Together, they cover the full threat surface.
A single anomaly detection model is a single point of failure. If an attacker understands the model, they can evade it. DAT's ensemble approach makes evasion exponentially harder: fooling the Isolation Forest still triggers the Autoencoder, and evading both still leaves the LSTM watching for temporal patterns.
3-Model Ensemble Architecture
==============================
Events -> Feature Extraction (14 dims)
-> 1-minute sliding windows
Model 1: Isolation Forest
Type: Point anomaly detection
How: Random partitioning trees
Finds: Single-event outliers
"This request is bizarre"
Model 2: Autoencoder (Neural Net)
Type: Pattern anomaly detection
How: Compress -> reconstruct
Finds: Behavioral shifts
"This agent changed"
Model 3: LSTM Predictor
Type: Sequence anomaly detection
How: Temporal prediction
Finds: Gradual drift
"This trend is dangerous"
Ensemble Decision:
Any model flags HIGH -> alert
2+ models flag MEDIUM -> alert
Consensus required for LOW
-> feeds into investigation
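The ensemble decision rule in the diagram can be sketched in a few lines of TypeScript. The `Severity` type, the `ModelScores` field names, and the function shape are illustrative assumptions for this sketch, not DAT's actual API:

```typescript
// Illustrative sketch of the ensemble decision rule described above.
// Severity levels and field names are assumptions, not DAT's real types.
type Severity = "high" | "medium" | "low" | "none";

interface ModelScores {
  isolationForest: Severity; // point anomalies ("this request is bizarre")
  autoencoder: Severity;     // pattern shifts ("this agent changed")
  lstm: Severity;            // temporal drift ("this trend is dangerous")
}

function ensembleDecision(scores: ModelScores): boolean {
  const flags = [scores.isolationForest, scores.autoencoder, scores.lstm];

  // Any single HIGH flag triggers an alert.
  if (flags.includes("high")) return true;

  // Two or more MEDIUM flags trigger an alert.
  const mediums = flags.filter((s) => s === "medium").length;
  if (mediums >= 2) return true;

  // LOW alerts only on consensus from all three models.
  return flags.every((s) => s === "low");
}
```

Because each model watches a different class of anomaly, an attacker would need to evade all three paths of this rule at once to stay silent.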
Every minute of agent activity is compressed into a 14-dimensional feature vector that captures the full behavioral surface.
Most anomaly detection systems watch simple metrics like error rates. DAT measures 14 dimensions of behavior per one-minute window, including features specifically designed to catch sophisticated attacks: silence gaps that indicate a compromised agent going dark, latency deviations that suggest resource hijacking, and geographic spread that flags impossible travel.
14-Feature Vector (v2)
==============================
Per 1-minute sliding window:
Activity:
[0] actionsPerMinute # volume
[1] uniqueActions # diversity
[2] uniqueTargets # target spread
Performance:
[3] avgResponseTime # latency
[4] errorRate # failure %
[5] blockRate # denied %
Financial:
[6] totalAmount # spend
[7] maxAmount # single max
Temporal:
[8] hourOfDay # 0-23
[9] dayOfWeek # 0-6
Spatial:
[10] geoSpread # IP diversity
Watchdog (Phase 3):
[11] timeSinceLastEvent # silence gap
[12] eventFrequencyPerHr # reporting
[13] latencyDeviation # std dev
Thresholds:
silence_gap > 30 min -> alert
latency_deviation > 500ms -> alert
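The per-window extraction can be sketched as a pure function over a batch of events. The `AgentEvent` shape below is a hypothetical schema inferred from the feature list above, and the frequency and gap calculations are simplifying assumptions — DAT's real event schema and windowing may differ:

```typescript
// Sketch of 14-dimensional feature extraction for one non-empty
// 1-minute window. AgentEvent is an assumed shape, not DAT's schema.
interface AgentEvent {
  action: string;
  target: string;
  responseTimeMs: number;
  isError: boolean;
  isBlocked: boolean;
  amount: number;
  ip: string;
  timestamp: number; // Unix ms
}

function extractFeatures(events: AgentEvent[], prevEventTs: number): number[] {
  const n = events.length;
  const last = events[n - 1];
  const latencies = events.map((e) => e.responseTimeMs);
  const mean = latencies.reduce((a, b) => a + b, 0) / n;
  const std = Math.sqrt(latencies.reduce((a, b) => a + (b - mean) ** 2, 0) / n);
  const d = new Date(last.timestamp);

  return [
    n,                                            // [0] actionsPerMinute
    new Set(events.map((e) => e.action)).size,    // [1] uniqueActions
    new Set(events.map((e) => e.target)).size,    // [2] uniqueTargets
    mean,                                         // [3] avgResponseTime
    events.filter((e) => e.isError).length / n,   // [4] errorRate
    events.filter((e) => e.isBlocked).length / n, // [5] blockRate
    events.reduce((a, e) => a + e.amount, 0),     // [6] totalAmount
    Math.max(...events.map((e) => e.amount)),     // [7] maxAmount
    d.getUTCHours(),                              // [8] hourOfDay
    d.getUTCDay(),                                // [9] dayOfWeek
    new Set(events.map((e) => e.ip)).size,        // [10] geoSpread (IP proxy)
    (last.timestamp - prevEventTs) / 60_000,      // [11] timeSinceLastEvent (min)
    n * 60,                                       // [12] eventFrequencyPerHr
    std,                                          // [13] latencyDeviation
  ];
}
```

Keeping the extractor a pure array-in, vector-out function is what lets the same features feed all three models without duplication.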
Detection is step one. DAT closes the loop by automatically triggering investigations, freezing trust, and enabling DVN consensus.
Most anomaly detection ends at an alert. Someone reads the alert, opens a ticket, and maybe investigates next week. In DAT, anomaly detections automatically feed into the trust scoring system, trigger investigations, enable shadow scoring, and surface in the admin console with full context. The entire loop — from behavioral anomaly to frozen agent — can happen without human intervention.
HIGH detections generate fraud signals (-10 trust). MEDIUM generates violation signals (-5 trust). Scores adjust in real time. A HIGH detection also moves the agent into an investigating state, freezing its trust at 30 while shadow scoring continues.
Anomaly -> Trust System Pipeline
==============================
1. EVENT INGESTION
Chatbot, Monitoring, Reputation
-> POST /webhook/events
-> Redis buffer (max 10K)
2. AUTO-TRAINING
>= 100 events -> train ensemble
Models learn "normal" per-agent
3. REAL-TIME SCORING
New event -> extract features
-> Isolation Forest score
-> Autoencoder reconstruction
-> LSTM sequence prediction
-> Ensemble decision
4. TRUST IMPACT
HIGH -> fraud signal (-10)
-> auto-investigate
-> freeze at 30
MEDIUM -> violation signal (-5)
-> admin notification
LOW -> logged for review
5. INVESTIGATION FLOW
Frozen agent:
-> Shadow scores computed
-> DVN audit requested
-> Admin reviews evidence
-> Exonerate (50% recovery)
or Blacklist (permanent)
6. SIEM EXPORT
-> Splunk / Sentinel / Elastic
-> Ed25519 signed envelope
Deploy ML-powered behavioral monitoring that learns what "normal" looks like for each agent and flags deviations in real time.