Automating SEO Implementation with an Agentic Workflow

How we built agents to automate SEO reporting and content improvements—and what Tetrate Agent Router handled for us along the way.


Executive Summary

SEO optimization has two bottlenecks: knowing what to improve (reporting is manual and sporadic) and having capacity to improve it (implementation competes with everything else). We built an agentic system that automates data collection and report generation, uses agents to synthesize metrics and generate improvement proposals, and keeps humans in control of decisions while agents handle mechanical work. Agent Router Enterprise manages the LLM infrastructure.

  • Continuous visibility into SEO performance — before: sporadic manual reports with no historical context; after: weekly automated reports with trend analysis and prioritized recommendations
  • Consistent, repeatable implementation — before: ad-hoc improvements squeezed in “when we have time”; after: a structured weekly workflow with bounded effort and measurable outcomes
  • Robust, flexible, and secure AI infrastructure — Agent Router handles routing, failover, cost attribution, and audit logging, so agent code focuses purely on domain logic

LLM Infrastructure Layer

We used an internal version of Agent Router Enterprise to handle the infrastructure layer—routing LLM calls, tracking costs, managing model selection, and maintaining an audit trail—so the agent code focuses purely on domain logic. In this article, we share in detail what we built with Agent Router Enterprise and how.

If you want to start building immediately, you can trial Agent Router Enterprise with a free Agent Router Service account today. If you want additional guardrail features in Agent Router Enterprise, contact us.

Before: The Manual Workflow

The dual bottleneck

The business objective was straightforward: improve organic search performance across several hundred pages. But we had two problems. First, we didn’t have consistent visibility into SEO performance—reporting was manual and sporadic, with no historical data for trend analysis. Second, even when we knew what to improve, implementation was entirely manual.

Every step required a human

Someone needed to pull data from Google Search Console, run Lighthouse audits, compile the results into something readable, identify which recommendations to prioritize, make the actual edits, create pull requests, respond to review feedback, and track what had been done.

Two phases, both manual

Reporting

  1. Log into Google Search Console
  2. Export metrics for relevant pages
  3. Run Lighthouse audits on key pages
  4. Compile results into a report
  5. Analyze trends and identify issues
  6. Write up recommendations
  7. Publish to Confluence and notify the team

Implementation (per improvement)

  1. Open the report
  2. Find a page with an optimization opportunity
  3. Open the source file in the www repository
  4. Write an improved meta description or title tag
  5. Commit the change with a descriptive message
  6. Open a pull request
  7. Respond to review comments
  8. Merge once approved

Swivel-chair integration

The workflow touched multiple systems: Google Search Console, Lighthouse, Confluence, GitHub, a local editor, Slack, and spreadsheets for tracking. The operator alt-tabbed between systems, copy-pasted data, and kept all the state in their head.

Sporadic at best

Reporting happened when someone had time. When it did happen, the report was a snapshot with no historical context. Implementation competed with other priorities, so recommendations accumulated faster than they could be addressed.

After: The Agentic Workflow

We built the system around two main workflows, running on Tetrate AI infrastructure for reliability, flexibility, cost accounting, security, and auditability:

  • a data pipeline for collection and reporting
  • an agent-assisted, human-directed workflow for analysis and implementation
┌───────────────────────────────────────────────────────────────────────┐
│  Reporting Workflow                                                   │
│  ┌────────────────────────────┐     ┌──────────────────────────────┐  │
│  │ Data Pipeline              │     │ Agent: SEO Summary           │  │
│  │ (GSC, Lighthouse,          │  →  │ Generation (trends &         │  │
│  │ site audit)                │     │ recommendations)             │  │
│  └────────────────────────────┘     └──────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────────┘

┌───────────────────────────────────────────────────────────────────────┐
│  Content Improvement Workflow                                         │
│  ┌────────────┐   ┌────────────┐   ┌──────────────┐   ┌────────────┐  │
│  │ Propose    │   │ Improve    │   │ Review       │   │ Publish    │  │
│  │ (web UI +  │ → │ (human)    │ → │ (human +     │ → │ (GitHub    │  │
│  │ agent)     │   │            │   │ LLM-as-judge)│   │ PR)        │  │
│  └────────────┘   └────────────┘   └──────────────┘   └────────────┘  │
└───────────────────────────────────────────────────────────────────────┘

┌───────────────────────────────────────────────────────────────────────┐
│  Infrastructure: Tetrate Agent Router                                 │
│                                                                       │
│            Routing · Cost Tracking · Failover · Audit Trail           │
│                                                                       │
└───────────────────────────────────────────────────────────────────────┘

Figure 1. The agentic workflow

The Reporting Workflow: Automated Reporting with Agentic Summarization

Before we could automate implementation, we needed reliable, consistent reports to implement against.

The data pipeline (non-agentic):

We built a data collection pipeline that runs weekly:

  • Google Search Console integration — automated metrics pull for all pages (impressions, clicks, CTR, position)
  • Lighthouse audits — performance and SEO scores across the site
  • CTA coverage analysis — identifying pages missing calls-to-action
  • Historical archiving — storing every report for trend analysis
  • Multi-channel publishing — automatic distribution to Confluence, Slack, and PDF

This isn’t agentic—it’s traditional automation. Scripts pull data, transform it, store it, publish it. But it’s foundational. Without consistent data collection, there’s nothing for an agent to reason about.
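The aggregation step can be sketched as a plain function over exported Search Console rows. This is a minimal illustration with a hypothetical row schema and function name, not the actual pipeline code; the real pipeline also runs Lighthouse audits, archives history, and publishes the results.

```python
# Minimal sketch of the weekly aggregation step. Row schema and
# function name are illustrative assumptions; the real pipeline also
# handles Lighthouse, archiving, and multi-channel publishing.

def summarize_gsc_rows(rows):
    """Roll per-page Search Console rows up into site-level metrics."""
    impressions = sum(r["impressions"] for r in rows)
    clicks = sum(r["clicks"] for r in rows)
    return {
        "pages": len(rows),
        "impressions": impressions,
        "clicks": clicks,
        # site-wide CTR; guard against a week with zero impressions
        "ctr": clicks / impressions if impressions else 0.0,
    }
```

Storing one such summary per week is what makes the trend analysis in the next step possible.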

Agent interactions:

Raw data isn’t actionable. A table showing 500 pages with their impressions, CTR, and position doesn’t tell you what to do. So we added an agent that reviews the collected data and generates an executive summary:

  • Synthesizes metrics into key findings (“CTR is 0.33%, below the 0.4-0.6% benchmark”)
  • Identifies trends (“impressions down 3.5% month-over-month, but CTR up 6.5%”)
  • Prioritizes issues by severity and opportunity size
  • Generates specific recommendations tied to the data

The output is a structured report that a human can read in five minutes and understand the current state, what’s changed, and what to do about it.
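The benchmark and trend checks behind those findings can be sketched as simple comparisons over the current and previous summaries. The benchmark threshold and field names below are assumptions for illustration, not the production values.

```python
# Illustrative sketch of the benchmark/trend checks feeding the agent's
# summary; the threshold and field names are assumptions.

CTR_BENCHMARK_LOW = 0.004  # low end of the assumed 0.4-0.6% benchmark band

def key_findings(current, previous):
    findings = []
    ctr = current["clicks"] / current["impressions"]
    if ctr < CTR_BENCHMARK_LOW:
        findings.append(f"CTR is {ctr:.2%}, below the 0.4-0.6% benchmark")
    delta = (current["impressions"] - previous["impressions"]) / previous["impressions"]
    direction = "up" if delta >= 0 else "down"
    findings.append(f"impressions {direction} {abs(delta):.1%} month-over-month")
    return findings
```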

The payoff:

We went from sporadic, manual reporting with no historical context to consistent weekly reports with trend analysis and prioritized recommendations. This alone was valuable—it guaranteed visibility into SEO performance when we previously had none.

The Content Workflow: Agent-Suggested and Audited SEO Improvements with Human-in-the-Loop Supervision

With reliable reports in place, the next bottleneck was acting on them. Rather than building a fully autonomous agent that generates and submits improvements on its own schedule, we built a system that puts SEO experts and content editors in control—with agents doing the heavy lifting on demand.

The content workflow has three stages:

  1. Propose — SEO experts generate content proposals through a web UI
  2. Review — Human content editors review proposals and make content improvements, then submit updates to an LLM-as-judge as a final audit step
  3. Publish — Human-approved fixes go to GitHub for PR review and merge

Stage 1: Propose Content

Propose Content interface showing analysis type selection and recent jobs

Figure 2. The proposal interface lets SEO experts select an analysis type and track recent jobs.

An SEO expert selects from several analysis types:

  • Content Expansion — finds thin content pages with high search impressions and proposes ways to expand or enrich those pages to provide more value
  • Query Gap Analysis — identifies high-impression queries without matching content
  • Topic Ideation — generates new content topic ideas for a theme
  • Topic Cluster — groups existing proposals into pillar/spoke clusters

The agent generates proposals; the human decides which to pursue. Recent jobs are tracked with status and result counts.
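As one example, the Content Expansion analysis can be sketched as a filter over page metrics: thin pages that already earn search impressions are the highest-leverage candidates. The thresholds and field names here are hypothetical, not the production configuration.

```python
# Hypothetical candidate selection for the Content Expansion analysis
# type; thresholds and field names are illustrative assumptions.

WORD_COUNT_THIN = 300   # below this, a page counts as "thin content"
MIN_IMPRESSIONS = 1000  # only propose pages search already surfaces

def expansion_candidates(pages):
    picks = [p for p in pages
             if p["word_count"] < WORD_COUNT_THIN
             and p["impressions"] >= MIN_IMPRESSIONS]
    # highest-opportunity pages first
    return sorted(picks, key=lambda p: p["impressions"], reverse=True)
```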

Stage 2: Content Review

Content Review interface showing audit queue with status filters and quality scores

Figure 3. The audit queue shows content status, quality scores, and available actions.

Before content reaches publication, it passes through an LLM-as-judge audit. The auditor scores content on multiple dimensions—tone, structure, accuracy, completeness—and flags issues.

Content that scores below threshold (like the “Needs Revision” item above with a score of 62 and 16 issues) enters a remediation workflow. Editors can:

  • Audit — run the quality check again
  • Fix Issues — enter human-led remediation pipeline
  • Quarantine — set aside for later review
  • Reject — remove from the pipeline

The queue filters by status (Implemented, Pending, Approved, Accepted, Rejected) so editors can focus on what needs attention.
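The gate that routes a scored item can be sketched as follows. The passing threshold and the way dimension scores combine into an overall score are assumptions based on the screenshot, not the production configuration.

```python
# Sketch of the audit gate; the threshold, dimensions, and scoring
# combination are assumptions, not the production configuration.

PASS_THRESHOLD = 70

def audit_verdict(dimension_scores, issue_count):
    """dimension_scores: dict mapping dimension name -> 0-100 score."""
    overall = round(sum(dimension_scores.values()) / len(dimension_scores))
    if overall >= PASS_THRESHOLD and issue_count == 0:
        return overall, "approved"
    return overall, "needs_revision"
```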

Stage 3: Publish

Once content passes audit and a human editor accepts it, the system creates a pull request in GitHub. From there, publication follows GitHub’s standard PR workflow—review, discussion, approval, merge.

Systems that must be connected:

  • Confluence API (to fetch reports)
  • GitHub API (to create branches and open PRs)
  • LLM provider (to generate proposals and run audits)
  • Database (to track proposals, audit results, and outcomes)
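The publish stage's GitHub integration can be sketched as building the request body for GitHub's create-pull-request endpoint. The branch-naming scheme, base branch, and PR body format below are illustrative assumptions, not the actual implementation.

```python
# Hypothetical payload builder for GitHub's "create a pull request"
# endpoint; branch naming, base branch, and body text are assumptions.

def pr_payload(page_path, summary, audit_score):
    branch = "seo/" + page_path.strip("/").replace("/", "-")
    return {
        "title": f"SEO: {summary}",
        "head": branch,
        "base": "main",
        "body": (f"Automated SEO improvement (audit score: {audit_score}).\n"
                 f"Page: {page_path}"),
    }
```

From here, GitHub's standard PR review and merge flow takes over.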

Why human-directed?

We could have built this as a fully autonomous batch process—agent reads report, generates improvements, opens PRs on a schedule. But putting humans in the loop at the proposal and review stages has advantages:

  • SEO experts can focus agent effort on what matters most right now
  • Strategic improvements (content expansion, new pages) benefit from human judgment about priorities
  • The audit pipeline catches issues before they reach PR review, not after
  • Editors maintain ownership of content quality

The agents do the mechanical work. Humans make the decisions.

How Tetrate Agent Router Fits In

All LLM calls—for report summarization, proposal generation, and content auditing—route through Agent Router rather than hitting provider APIs directly:

┌─────────────────────┐
│ SEO Agents          │
│ (Summary + Impl)    │
└─────────────────────┘
           │
           ▼
┌─────────────────────┐
│ Tetrate Agent Router│
└─────────────────────┘
           │
           ▼
┌─────────────────────┐
│ LLM Provider        │
└─────────────────────┘

Figure 4. Agent Router sits between agents and LLM providers.

The agent code doesn’t know or care which model it’s using. It makes requests to Agent Router with task context (report summarization, proposal generation, content audit), and Agent Router handles:

  • Routing — which model handles which request
  • Cost tracking — per-request cost attribution with full context
  • Failover — automatic fallback if a provider has issues
  • Rate limiting — respecting provider limits without agent-side logic
  • Audit trail — complete logging for debugging and compliance
  • Model bakeoffs — swap models across providers without changing agent code, credentials, or endpoints

This separation means model changes are configuration changes, not code changes. Different tasks benefit from different models: summarization needs large context handling, auditing needs strong reasoning, and tactical fixes can use faster, cheaper models. With the router in place, we could configure a different model per task type and revisit those choices whenever a new model ships.
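The agent-side call pattern can be sketched as below, assuming the router exposes an OpenAI-compatible chat endpoint. The base URL, the "auto" model alias, and the metadata field used for per-task cost attribution are illustrative assumptions, not confirmed API details.

```python
# Sketch of an agent call routed through the router rather than a
# provider SDK. Assumes an OpenAI-compatible endpoint; the base URL,
# model alias, and metadata field are hypothetical.

import json
import urllib.request

ROUTER_BASE = "https://router.internal.example/v1"  # hypothetical

def build_request(task, messages):
    """Task context rides along so the router can attribute cost per task."""
    return {
        "model": "auto",             # router config decides the real model
        "messages": messages,
        "metadata": {"task": task},  # e.g. "content_audit"
    }

def route_completion(task, messages, api_key):
    req = urllib.request.Request(
        f"{ROUTER_BASE}/chat/completions",
        data=json.dumps(build_request(task, messages)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the agent never names a concrete model or provider, swapping models for a bakeoff touches only router configuration.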

Outcomes

For visibility: We went from no consistent SEO reporting to weekly automated reports with historical trends and actionable summaries. This is table stakes for data-driven optimization.

For capacity: Generating a content expansion proposal or auditing a batch of articles takes seconds instead of hours. The mechanical work is automated; humans focus on decisions.

For quality: The audit pipeline catches issues before content reaches PR review. Editors see scores and specific feedback, not just a pass/fail. Problematic content enters remediation rather than slipping through.

For tracking: Every proposal, audit, and publication is tracked—what was generated, what the auditor said, whether it was accepted, and (eventually) whether it moved the metrics.

The Value of Tetrate Agent Router

Agent Router didn’t make this project possible—you can build agents without a routing layer. But it made several things easier:

Model bakeoffs without code changes. Different tasks have different requirements—report summarization benefits from a model that handles large context well, while content auditing needs strong reasoning. Agent Router lets us swap models across providers without changing code, credentials, or endpoints. We ran bakeoffs comparing performance, capability, and cost for each task type, then configured the optimal model per task. When a new model is released, testing it is a config change, not a sprint.

Cost visibility without instrumentation. We didn’t write cost-tracking code. Agent Router tagged every request with our context (report generation, proposal type, audit batch) and gave us per-task costs automatically. When we compared models, the cost data was already there.

Model flexibility without refactoring. Beyond bakeoffs, day-to-day model management became trivial. When a provider had latency issues, Agent Router handled failover. When we wanted to test a cheaper model on lower-stakes tasks, we updated a config. The agent code never knew.

Separation of concerns. The agents focus on domain logic—synthesizing SEO data, generating proposals, auditing content. Infrastructure concerns (rate limits, retries, logging, cost tracking) live in Agent Router. This kept the agent codebase smaller and easier to reason about.

Audit trail for free. Every LLM call is logged with inputs, outputs, latency, and cost. When a proposal seemed off or an audit result looked wrong, we could trace back to exactly what the model saw and said. Useful for debugging, essential for building trust in the system.

The pattern generalizes: if you’re building agents that make LLM calls, separating the routing/infrastructure layer from the agent logic pays off—especially as you add more agents or need to manage costs across teams.

Learn more about Tetrate Agent Router →
