"48.9% of organizations are entirely blind to AI agent behavior" — Salt Security, April 2026

Threat level: ELEVATED

Your agent executes
what it's told.
It can't tell the difference
between safe and unsafe.

ORILink is a foundational architecture that gives AI agents the context to understand what they see — annotating tokens with origin and trust before inference, enforcing intent before execution.

Patent pending · Edmonton, Canada · Early access open
48.9% — Blind to agent behavior (Salt Security, April 2026)
↑340% — Injection surge, YoY (Wiz Research, 2025)
77% — Defenses evaded (Recorded Future, 2026)
48% — A2A contagion rate (Security Research, 2026)

Every defense stops before it reaches the model. ORILink closes that gap.

ORILink — Network Layer Monitor
Live Feed
01_INTERNET
02_WAF / EDGE
03_UNPROTECTED
04_ORILink
05_LLM_AGENT
BLOCKED: UNTRUSTED ORIGIN
Frame cycle: 6.0s · Enforcement: ACTIVE · Latency: <1ms

The attack gets through

WAFs parse network traffic, not semantic intent. Framing attacks, context switching, and payload splintering bypass them routinely.

ORILink intercepts

Every token is annotated with origin metadata and trust weight. Untrusted tokens are blocked before inference begins.

Legitimate flow continues

0% false positive rate across all test cases. Trusted instructions pass through cleanly with sub-millisecond latency overhead.

Three problems.
No existing solution.

01

No native trust distinction at the token level


A trusted operator instruction and a malicious injection are identical at the token level. The model cannot tell them apart — it executes both. This is not a model flaw. It's a fundamental property of how transformers process input.

02

Perimeter defenses don't reach the model


WAFs and prompt guards operate above the language layer. Framing attacks, context switching, and payload splintering bypass them routinely — without triggering any signature match.

03

Agents are compliant by design


Autonomous agents execute instructions — that's their purpose. Without enforcement below the language layer, a compromised instruction chain is indistinguishable from a legitimate one. Compliance is the vulnerability.

ARCHITECTURE

Protocol Mechanics

Trust enforced below
the language layer.

Two mandatory, unconditional gates. Pre-inference inbound. Pre-execution outbound. Model-agnostic across the full hardening spectrum.

GATE 01 Inbound

Inbound Trust Enforcement

Pre-inference token annotation and trust weighting from a seven-domain taxonomy.

PRE-INFERENCE

Token annotation before the model sees anything

Every token is annotated with origin metadata and a trust weight from a seven-domain taxonomy. Untrusted tokens are blocked before inference. The attack never reaches the model.
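In rough pseudocode terms, Gate 1 behaves like the sketch below. Everything here is illustrative — the domain names, weights, and threshold are placeholders, since ORILink's actual seven-domain taxonomy and enforcement logic are not public:

```python
from dataclasses import dataclass

# Hypothetical trust domains and weights — the real seven-domain
# taxonomy is proprietary; these values are for illustration only.
TRUST_WEIGHTS = {
    "operator": 1.0,
    "internal_tool": 0.8,
    "verified_partner": 0.6,
    "user_input": 0.4,
    "retrieved_document": 0.3,
    "web_content": 0.1,
    "unknown": 0.0,
}

BLOCK_THRESHOLD = 0.3  # illustrative cutoff


@dataclass
class AnnotatedToken:
    text: str
    origin: str
    trust: float


def annotate(tokens, origin):
    """Attach origin metadata and a trust weight to every token."""
    weight = TRUST_WEIGHTS.get(origin, 0.0)
    return [AnnotatedToken(t, origin, weight) for t in tokens]


def inbound_gate(annotated):
    """Pre-inference gate: refuse the request if any token is untrusted."""
    blocked = [t for t in annotated if t.trust < BLOCK_THRESHOLD]
    if blocked:
        return False, f"BLOCKED: {len(blocked)} untrusted token(s) from {blocked[0].origin}"
    return True, "cleared for inference"


prompt = annotate("summarize the quarterly report".split(), "operator")
payload = annotate("ignore previous instructions".split(), "web_content")
print(inbound_gate(prompt))            # (True, 'cleared for inference')
print(inbound_gate(prompt + payload))  # (False, 'BLOCKED: 3 untrusted token(s) from web_content')
```

The point of the sketch: the decision happens on annotations, not on token content — which is why the attack text itself never needs to be recognized for the block to fire.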

GATE 02 Outbound

Outbound Intent Enforcement

Four-layer semantic classifier before any action fires.

PRE-EXECUTION

Four-layer semantic classifier before any action fires

Execution graph analysis, action chain memory, scope boundary check, deception detection. Evaluates what the agent is about to do — not what the instruction calls it. Blocked unconditionally.
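A minimal sketch of how four such layers could compose, using the layer names above. The layer implementations are invented stand-ins — ORILink's actual classifier internals are not public — but the control flow shows the unconditional-block semantics:

```python
# Illustrative layer checks; real implementations are proprietary.

def execution_graph(action, ctx):
    # Layer 1: the call must appear in the task's expected execution graph.
    return action["tool"] in ctx["task_graph"]

def action_chain_memory(action, ctx):
    # Layer 2: destructive calls are refused when the instruction chain
    # that led here contains any low-trust link.
    return not (action["destructive"] and ctx["min_chain_trust"] < 0.5)

def scope_boundary(action, ctx):
    # Layer 3: the target must sit inside the agent's authorized scope.
    return action["target"].startswith(ctx["scope"])

def deception_detection(action, ctx):
    # Layer 4: judge the effect class of the call, not the name the
    # instruction gave it (a "backup" that actually exfiltrates).
    return action["declared_intent"] == action["effect_class"]

LAYERS = [execution_graph, action_chain_memory, scope_boundary, deception_detection]

def outbound_gate(action, ctx):
    """Run all four layers in order; block unconditionally on the first failure."""
    for layer in LAYERS:
        if not layer(action, ctx):
            return "BLOCKED", layer.__name__
    return "CLEARED", None

ctx = {"task_graph": {"read_file", "http_get"}, "min_chain_trust": 0.9, "scope": "/workspace/"}
benign = {"tool": "read_file", "destructive": False, "target": "/workspace/report.md",
          "declared_intent": "read", "effect_class": "read"}
deceptive = {"tool": "http_get", "destructive": False, "target": "/workspace/notes.md",
             "declared_intent": "backup", "effect_class": "exfiltrate"}
print(outbound_gate(benign, ctx))     # ('CLEARED', None)
print(outbound_gate(deceptive, ctx))  # ('BLOCKED', 'deception_detection')
```

Note the deceptive action passes the first three layers — it names a legitimate tool inside scope — and is caught only by comparing what it does against what it claims.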

100% — Attack block rate (all test cases)
0% — False positive rate (legitimate actions unaffected)
6 — Model families validated (full hardening spectrum)
Trust at scale

Trust that propagates through the entire network.

ORILink doesn't just protect a single agent. Trust annotations travel with content — through every handoff, every agent-to-agent message, every tool call. A single compromised instruction cannot silently elevate its own trust weight as it moves through your agent network.

Single agent

Complete inbound and outbound enforcement. The agent operates freely within its authorized scope — and cannot be weaponized outside it.

Agent teams and swarms

Provenance envelopes travel with every A2A message. A compromised agent cannot elevate its trust weight when forwarding to peers — contagion stops at the first hop.
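The key invariant — trust can only fall, never rise, across hops — can be sketched as follows. The envelope fields and the min rule are assumptions for illustration; ORILink's actual A2A envelope format is not public:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceEnvelope:
    """Hypothetical A2A envelope: content plus immutable provenance."""
    content: str
    origin: str
    trust: float
    hops: tuple

def forward(env, sender_id, sender_trust):
    """A2A handoff: forwarded trust is the minimum along the chain,
    so no peer — compromised or not — can raise it."""
    return ProvenanceEnvelope(
        content=env.content,
        origin=env.origin,
        trust=min(env.trust, sender_trust),
        hops=env.hops + (sender_id,),
    )

env = ProvenanceEnvelope("fetch the summary", origin="web_content", trust=0.1, hops=())
env = forward(env, "agent_alpha", sender_trust=1.0)   # still 0.1
env = forward(env, "compromised", sender_trust=1.0)   # claiming full trust changes nothing
print(env.trust, env.hops)
```

Because the envelope is frozen and trust is monotonically non-increasing, a web-sourced instruction arrives at the last agent in the chain carrying the same 0.1 weight it started with — which is what stops contagion at the first hop.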

Enterprise control

Every action — cleared or blocked — is logged with full provenance: instruction origin, trust weight, classifier layer, and timestamp.

Full audit trail per agent action

Every action is logged: agent ID · instruction origin · trust weight assigned · classifier result · timestamp. Operators are notified in real time on any block.
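A record with those fields might look like the sketch below — field names and the alert hook are hypothetical, not ORILink's actual log schema:

```python
import datetime
import json

def notify_operator(line):
    """Hypothetical real-time alert hook; stands in for a real channel."""
    print("ALERT:", line)

def audit_record(agent_id, origin, trust, layer, verdict):
    """One audit line per action: agent, instruction origin, trust weight,
    deciding classifier layer, verdict, and a UTC timestamp."""
    record = {
        "agent_id": agent_id,
        "instruction_origin": origin,
        "trust_weight": trust,
        "classifier_layer": layer,
        "verdict": verdict,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    line = json.dumps(record)
    if verdict == "BLOCKED":
        notify_operator(line)  # operators are alerted in real time on any block
    return line

print(audit_record("agent-alpha", "web_content", 0.1, "scope_boundary", "BLOCKED"))
```

One JSON line per action keeps the trail append-only and trivially greppable per agent, origin, or layer.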

[Diagram: ORILink mediates a multi-agent network — Agent Alpha (TRUSTED), Web Retriever (VERIFIED), Vector DB (TRUSTED); a compromised node is BLOCKED before it can reach the Tool API or target system. Legend: trusted instruction · verified data · blocked · ORILink enforced]
[ 1 ]

Single agent

Full Gate 1 + Gate 2 enforcement. Safe to deploy anywhere.

[ n ]

Agent swarm

Provenance travels with every A2A message. No contagion.

[ ∞ ]

Enterprise fleet

Complete audit trail. Real-time operator control at any scale.

Positioning

Identity tells you who. ORILink tells you what.

Okta tells you the agent authenticated. ORILink tells you what it's about to do — and stops it if it shouldn't.

Capability · Perimeter / WAF · Identity (Okta) · ORILink
Token-level trust annotation
Pre-inference blocking
Outbound intent classification
A2A provenance enforcement
Agent continues after block
Model-agnostic deployment

Built for developers. Designed for enterprise.

Early access open to agent infrastructure teams. Patent pending.