Tetrate Agent Router Service Adds Integration Guides and Open Weight Model Support

This week's Tetrate Agent Router Service update brings in-app integration guides for popular AI tools and adds support for Grok, Groq, and DeepInfra providers, expanding access to xAI and open weight models.

Welcome to our weekly update for Tetrate Agent Router Service! If you missed the launch last week, you can catch up on the details in the launch blog. This week we’re excited to share two major enhancements that make it even more powerful and accessible for developers building AI applications.

If you haven’t tried Tetrate Agent Router Service yet, you can sign up in a single step to try these new features.

New In-App Integration Guides

Based on user feedback, we’ve added a dedicated Integration page directly within the application. This new section provides comprehensive guides for integrating with popular AI development tools and frameworks.

[Screenshot: Integration page]
[Screenshot: Integration page detail]

What’s Included

The Integration page covers four key categories:

Code Assistants: Step-by-step instructions for connecting with popular AI coding assistants like Aider, Cline, and Roo Code. Since the service is OpenAI-compatible, integration is as simple as changing the base URL and using your API key.

LLM Libraries: Detailed examples for using popular programming libraries including LangChain, Pydantic, and the Vercel AI SDK.

Agent Frameworks: Configuration guides for advanced agent frameworks like OpenAI Agent SDK, CrewAI, and LocalAI, enabling complex, multi-agent workflows through intelligent routing.

Local AI Agents: Instructions for connecting desktop applications and local AI tools that support OpenAI-compatible endpoints such as Goose and OpenWebUI, bringing Tetrate Agent Router Service benefits to your local development environment.

Each guide includes:

  • Quick setup instructions
  • Code snippets you can copy and paste
  • Links to the official documentation for deeper integration details

Expanded Model Support: Grok, Groq, and DeepInfra

We’re thrilled to announce support for three new providers, significantly expanding the models available through the service:

Grok (xAI)

Access to xAI’s Grok models, known for their real-time knowledge and unique conversational capabilities. Grok models offer competitive performance with a distinctive personality.

Groq

Lightning-fast inference for open weight models including Llama, DeepSeek and others through specialized LPU hardware. Perfect for real-time applications and agentic workflows requiring instant responses.

DeepInfra

Cost-effective and scalable infrastructure for the same open weight models with automatic scaling and flexible deployment options. Ideal for production workloads requiring reliable, budget-friendly inference.

Why Open Weight Models Matter

The addition of Groq and DeepInfra is particularly significant as they provide access to open weight models. This gives you:

  • Flexibility: Use models without vendor lock-in
  • Cost Optimization: Often more affordable than proprietary models
  • Specialized Options: Access to fine-tuned models for specific domains
  • Compliance: Better control for organizations with strict data requirements

With intelligent routing, you can now seamlessly combine proprietary models (OpenAI, Anthropic, Google) with open weight alternatives, optimizing for cost, performance, or specific capabilities as needed.
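One way to picture mixing proprietary and open weight models is a client-side preference list with fallback; the tier names and model identifiers below are illustrative assumptions, not the service’s routing configuration.

```python
# Hedged sketch: preference-ordered model selection with fallback.
# Tier names and model identifiers are illustrative only.
from typing import Callable

MODEL_PREFERENCES = {
    "cheap":    ["llama-3.1-8b", "gpt-4o-mini"],  # open weight first
    "frontier": ["gpt-4o", "claude-sonnet"],       # proprietary first
}

def pick_model(tier: str, is_available: Callable[[str], bool]) -> str:
    """Return the first available model in the tier's preference list."""
    for model in MODEL_PREFERENCES[tier]:
        if is_available(model):
            return model
    raise RuntimeError(f"no model available for tier {tier!r}")
```

With the router, this kind of policy lives server-side in your routing rules rather than in application code, but the optimization trade-off is the same.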

Getting Started

These updates are available immediately for all users. To explore the new features:

  1. Access the Integration page: Log into your dashboard and click on “Integrations” in the navigation menu
  2. Try the new models: Create a new API key with routing rules that include Grok, Groq, or DeepInfra models
  3. Experiment in the Playground: Compare responses from the new models against your existing favorites
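Step 3 above can also be scripted. This is a hypothetical sketch that sends the same prompt to several models through any OpenAI-compatible client and collects the answers side by side:

```python
# Hypothetical sketch of comparing models on one prompt.
# `client` is any OpenAI-compatible client; model names come from
# your own routing rules.
def compare(client, prompt: str, models: list[str]) -> dict[str, str]:
    """Return {model_name: response_text} for the same prompt."""
    results = {}
    for model in models:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results[model] = response.choices[0].message.content
    return results
```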

What’s Next

We’re continuing to enhance our service based on your feedback. Coming soon:

  • Additional provider integrations
  • Additional configuration options for different routing strategies
  • Bring Your Own Provider Key support for using your own existing provider credits
  • Enhanced team management features for collaborative development

Get Started Today

If you haven’t tried Tetrate Agent Router Service yet, sign up now and get $5 free credit when you use your business email. Existing users can access these new features immediately through their dashboard.

Have questions or feedback? Join our Slack community or reach out through the in-app support.

We have many new features coming; follow us to get product updates as they happen.
