FINRA Just Told You What They'll Examine Your AI Agents On

FINRA's 2026 report has a new section on AI agents with seven named risks and four considerations for firms. Here's what that actually means for your engineering team.


FINRA’s 2026 Annual Regulatory Oversight Report dropped in December, and buried on page 27 is a section that should get the attention of every engineering leader building agents in financial services. It’s titled “Emerging Trends in GenAI: Agents,” and it’s new for 2026.

If you’ve been waiting for a regulator to formally acknowledge that AI agents are different from chatbots, this is it. And if you’ve been hoping they wouldn’t notice for another year or two, bad news.


The Regulatory Progression You Should Understand

Regulators follow a predictable pattern with new technology:

  1. Don’t know — the technology exists but hasn’t hit their radar
  2. Don’t care — they’re aware but it’s not material enough to worry about
  3. Guide — they start publishing guidance, defining terms, asking questions
  4. Legislate — formal rules, examination procedures, enforcement

For GenAI in general, FINRA has been in “guide” mode since Regulatory Notice 24-09 in early 2024. But for agents specifically, this report marks the transition from “don’t care” to “guide.” They’ve defined what agents are, named seven specific risks, and laid out four considerations for firms.

That matters. Because guidance is what happens before legislation, and legislation is what happens before examination findings.

What FINRA Actually Said

The report does three things worth paying attention to.

First, it defines agents explicitly: “systems or programs that are capable of autonomously performing and completing tasks on behalf of a user.” It draws a clear line between agents and GenAI more broadly. This isn’t about chatbots anymore.

Second, it names seven agent-specific risks:

  • Autonomy — agents acting without human validation
  • Scope and authority — agents exceeding intended permissions
  • Auditability and transparency — multi-step reasoning that’s hard to trace
  • Data sensitivity — agents unintentionally accessing or exposing sensitive data
  • Domain knowledge — general-purpose agents lacking industry expertise
  • Reward misalignment — agents optimizing for the wrong thing
  • Inherited GenAI risks — hallucinations, bias, and privacy issues don’t go away just because you added tool calling

Third, it tells firms what to think about: monitoring agent access and data handling, human-in-the-loop oversight, tracking agent actions and decisions, and establishing guardrails to limit agent behavior.

None of this is law yet. It’s guidance. But if you’ve been around regulated industries long enough, you know that guidance is the dress rehearsal for examination questions.

What This Actually Means for Your Engineering Team

FINRA’s concerns are abstract. Your infrastructure can’t be. Here’s how each concern translates into engineering decisions you should be making now.

Autonomy → Bounded execution with human checkpoints

FINRA is worried about agents acting without human validation. The engineering answer isn’t “remove all autonomy” — that defeats the purpose. It’s designing systems where the agent’s scope of autonomous action is explicitly bounded, and where consequential actions require human approval.

In practice: read-only access for discovery and analysis, human approval gates before any write operations, and hard limits on the number of actions an agent can take in a single run. Your agents should fail safe, not fail open.
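A minimal sketch of that pattern, with an illustrative tool allowlist and a hypothetical `approve` callback standing in for whatever review workflow your team uses (none of these names come from a specific framework):

```python
# Sketch: bounded agent execution with a human approval gate.
# Tool names, the Agent interface, and the approve() callback are illustrative.

MAX_ACTIONS_PER_RUN = 20  # hard cap on actions in a single run

READ_ONLY_TOOLS = {"search_accounts", "read_ticket"}   # allowed autonomously
WRITE_TOOLS = {"update_crm", "send_email"}             # require human approval


def run_agent(agent, task, approve):
    """Run an agent with bounded autonomy.

    `approve` is a callback (e.g., backed by a review UI) that returns
    True or False for any proposed write action.
    """
    for step in range(MAX_ACTIONS_PER_RUN):
        action = agent.propose_action(task)
        if action is None:                  # agent has finished the task
            return agent.result()
        if action.tool in READ_ONLY_TOOLS:
            agent.observe(action.execute())
        elif action.tool in WRITE_TOOLS:
            if approve(action):             # human checkpoint before any write
                agent.observe(action.execute())
            else:
                agent.observe("action rejected by reviewer")
        else:
            # fail safe: unknown tools are denied, not silently allowed
            raise PermissionError(f"tool not in allowlist: {action.tool}")
    # fail safe: hitting the cap halts the run instead of continuing unbounded
    raise RuntimeError("action budget exhausted; run halted for review")
```

The important property is that the deny paths are the defaults: an unlisted tool or an exhausted budget stops the run rather than letting it continue.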

Auditability → Structured, durable logging with stable identifiers

“Multi-step agent reasoning tasks can make outcomes difficult to trace” is regulator-speak for “we need audit trails.” The engineering answer is structured logging that captures every tool call, every decision, and every output — with stable identifiers that let you correlate findings across runs.

This means: every agent action gets a deterministic ID (not a random UUID that changes each run), every finding gets persisted as it’s discovered (not just in a final report), and every human decision (approve, dismiss, override) is recorded alongside the automated output.
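The deterministic-ID piece is small but easy to get wrong. One way to do it, sketched here with an illustrative record shape, is to hash the inputs that define an action so the same action in a re-run produces the same identifier:

```python
# Sketch: deterministic action IDs and structured audit records.
# The record fields are illustrative; adapt them to your schema.
import hashlib
import json
import time


def action_id(run_id: str, step: int, tool: str, args: dict) -> str:
    """Deterministic ID: the same run inputs always hash to the same
    identifier, so findings can be correlated across runs and exports."""
    payload = json.dumps(
        {"run": run_id, "step": step, "tool": tool, "args": args},
        sort_keys=True,  # canonical ordering, so equal dicts hash equally
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:16]


def audit_record(run_id, step, tool, args, output, decided_by="agent"):
    """One structured, append-only record per agent action. `decided_by`
    captures human decisions (approve/dismiss/override) alongside the
    automated output."""
    return {
        "action_id": action_id(run_id, step, tool, args),
        "run_id": run_id,
        "step": step,
        "tool": tool,
        "args": args,
        "output": output,
        "decided_by": decided_by,
        "ts": time.time(),
    }
```

Each record should be written to durable storage as it is produced, not buffered until the run ends; a crashed run with no audit trail is exactly the failure mode FINRA is describing.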

Scope and authority → Least-privilege access with sandboxed execution

Agents that can read your production database, write to your CRM, and send emails are agents that can cause damage. The engineering principle is the same one you apply to microservices: least-privilege access, scoped credentials, and sandboxed execution environments.

Run your agents in isolated compute with only the credentials they need. If an agent needs to query cloud APIs, give it read-only access in an ephemeral environment. If it needs to execute commands, sandbox that execution away from production state.
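A full sandbox involves containers or microVMs, but the credential-scoping half of the principle fits in a few lines. This sketch runs a tool command in a throwaway directory with an explicit environment allowlist, so ambient secrets in the parent process never reach it:

```python
# Sketch: run an agent tool with an ephemeral working directory and a
# minimal, explicitly allowlisted environment. This scopes credentials;
# it is not a substitute for real isolation (containers, gVisor, VMs).
import os
import subprocess
import tempfile


def run_sandboxed(cmd: list[str], allowed_env: dict[str, str]) -> str:
    """Execute a command with only the credentials/settings named in
    `allowed_env`; everything else in os.environ is withheld."""
    with tempfile.TemporaryDirectory() as workdir:
        result = subprocess.run(
            cmd,
            cwd=workdir,                               # ephemeral state, discarded after
            env={"PATH": os.environ.get("PATH", ""), **allowed_env},
            capture_output=True,
            text=True,
            timeout=60,                                # bound runtime as well as scope
            check=True,                                # fail loudly on nonzero exit
        )
        return result.stdout
```

The point is the inversion: instead of asking "which secrets should I hide from this tool?", you ask "which credentials does this tool provably need?" and pass only those.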

Monitoring → Per-agent observability with cost attribution

“Monitor agent system access and data handling” means you need to know what each agent is doing, how much it costs, and whether its behavior is drifting. That requires per-agent API keys (so you can attribute LLM costs to specific agents), request-level logging through a centralized gateway, and dashboards that show agent behavior over time.

This is harder than it sounds when you have multiple agents calling multiple models across multiple providers. Without a routing layer that provides unified observability, you’re stitching together logs from five different services and hoping you haven’t missed anything.
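In a real deployment the numbers come from your gateway's request logs, but the attribution logic itself is simple. A sketch, with made-up model names and prices (not any provider's real pricing):

```python
# Sketch: per-agent LLM cost attribution from request-level usage data.
# Model names and per-1K-token prices below are illustrative only.
from collections import defaultdict

PRICE_PER_1K = {"model-a": 0.01, "model-b": 0.03}


class CostLedger:
    """Attribute LLM spend to individual agents via their agent ID / API key.
    Feed it one record per request, e.g. from gateway access logs."""

    def __init__(self):
        self.tokens = defaultdict(int)
        self.cost = defaultdict(float)

    def record(self, agent_id: str, model: str,
               prompt_tokens: int, completion_tokens: int) -> None:
        total = prompt_tokens + completion_tokens
        self.tokens[agent_id] += total
        self.cost[agent_id] += total / 1000 * PRICE_PER_1K[model]

    def report(self) -> dict:
        """Per-agent totals, suitable for a dashboard or a budget alert."""
        return {a: {"tokens": self.tokens[a], "usd": round(self.cost[a], 4)}
                for a in self.tokens}
```

This only works if every agent has its own key; the moment two agents share a credential, attribution (and drift detection) is gone.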

How to Use This Internally

This report is ammunition for anyone trying to justify investment in agent infrastructure. Specifically:

For your next architecture review: “FINRA has explicitly identified agent auditability as a risk. Our current agent implementation doesn’t produce structured audit trails. Here’s what we need to change.”

For budget conversations: “FINRA is signaling that agent governance will be part of future examinations. Investing in infrastructure now — centralized routing, observability, guardrails — is cheaper than retrofitting after an examination finding.”

For risk/compliance stakeholders: “Here’s the FINRA report. Here are the seven agent risks they’ve identified. Here’s our current coverage against each one. Here are the gaps.”

The firms that build this infrastructure now, while FINRA is still in “guide” mode, will have a significant advantage over firms that wait until examination questions start arriving.

The Infrastructure Question

Every concern FINRA raised — autonomy bounds, auditability, scope control, monitoring, guardrails — is fundamentally an infrastructure problem, not an application-level problem. You can try to solve these concerns in each individual agent, but that doesn’t scale and it doesn’t give you centralized visibility.

What you need is a governance layer that sits between your agents and the models and tools they use: something that provides centralized routing, consistent observability, and enforceable guardrails across every agent in your organization.


Agent Router Enterprise provides this governance layer through its LLM Gateway for centralized model access with per-agent cost attribution, MCP Gateway for governed tool connectivity with scope controls, and AI Guardrails for continuous supervision of agent behavior. Built on the battle-hardened Envoy AI Gateway, it gives engineering teams the infrastructure to satisfy regulatory expectations without slowing down agent development. Learn more here ›
