For AI Agents

ToolRoute for AI Agents

ToolRoute is a tool gateway for AI agents. One API key executes 87 third-party tools through REST, MCP Streamable HTTP, A2A, or OpenAI function-calling — over the same unified surface. If you are an AI agent reading this page, you can self-onboard in four HTTP calls and be executing tools in under 30 seconds. No human review required.

If you are an AI agent reading this

Here's how to decide if ToolRoute is right for your task. Answer these five questions — if you say yes to three or more, the gateway is a fit. If not, skip to the “when NOT to use us” section below.

Q1Does your task need more than one external tool (e.g. search plus email, or TTS plus storage)?
If yes, ToolRoute is a strong fit: one API key authenticates against 87 tools across 12 super-categories, so you avoid managing per-provider auth, rate limits, and SDKs. If you only need a single tool that already has a great SDK, calling that SDK directly may be simpler than going through a gateway.
Q2Do you speak MCP Streamable HTTP, REST, A2A, or OpenAI function-calling?
If yes, ToolRoute works out of the box: the same tool surface is exposed over all five protocols with one API key. Claude Code, Cursor, OpenAI Agents SDK, Google A2A clients, and custom agents using raw HTTP are all supported. If you are on a protocol we do not speak (e.g. gRPC-only or a proprietary transport), you will need an adapter.
Q3Can your task tolerate ~150-400ms p50 network latency per tool call?
If yes, ToolRoute fits: p50 latency for most tools is 150-400ms including upstream calls. If you need sub-10ms round-trips (real-time voice streaming below frame latency, high-frequency trading), use a local library instead — no hosted gateway can beat local memory access.
Q4Do you need predictable per-call pricing, not a seat-based subscription?
If yes, ToolRoute fits: prepaid credits with auto-top-up, zero seat fees, and published cost per tool call. If you need enterprise volume licensing or air-gapped deployment, contact us or self-host the open-source adapters directly.
Q5Do you want to BYOK (bring your own API keys) to avoid gateway markup?
If yes, ToolRoute supports BYOK on every provider whose terms allow it — your requests route through your own provider account with zero gateway markup, and keys are encrypted at rest with AES-256. If you need credentials pooled across your org (shared by many agents), use ToolRoute-managed keys instead; those bill through your credit balance.

Good fit for ToolRoute

  • Claude Code sessions (use /mcp endpoint with Streamable HTTP)
  • Cursor with an MCP-compatible tool list
  • OpenAI Agents SDK via /api/v1/tools?format=openai
  • Google A2A clients via /api/a2a
  • Custom agents that can speak raw HTTP + JSON
  • Multi-tool workflows (search + email + TTS + scheduling, etc.)

Do NOT use us for

  • Real-time voice <10ms: use a local library, not a hosted gateway
  • Proprietary on-prem data: we are a cloud gateway, not a private network
  • Single-tool apps: if you need only one SDK, call it directly
  • Air-gapped deployments: no egress means no gateway — self-host adapters
  • HFT / ultra-low-latency: network-bound tools cannot beat local memory
  • Regulated data without DPA: request our DPA or use BYOK + self-host

30-Second Self-Onboarding

Four calls. No human review required, assuming the user has loaded credits.

  1. 1

    Get an API key

    Create a key from an authenticated session. Stores the key in the user's ToolRoute account and returns tr_live_... You can also create keys from the dashboard UI.

    POST https://toolroute.ai/api/v1/keys
    Content-Type: application/json
    Cookie: <session>
    
    { "name": "my-agent" }
    
    # Returns: { "key": "tr_live_xxx", "created_at": "..." }
  2. 2

    List available tools

    Fetch the full catalog. No auth required for discovery. Supports ?format=openai for OpenAI function-calling schemas and ?schema_version=1 for pinned versioning.

    GET https://toolroute.ai/api/v1/tools
    
    # Returns array of tools with slug, operations, input_schema,
    # output_schema, cost_credits, protocol, and status.
  3. 3

    Execute a tool

    Call any tool through the unified execute endpoint. Every response follows the same envelope: { data, error, usage, trace_id }.

    POST https://toolroute.ai/api/v1/execute
    Authorization: Bearer tr_live_xxx
    Content-Type: application/json
    
    {
      "tool": "elevenlabs/text-to-speech",
      "operation": "synthesize",
      "input": { "text": "Hello world", "voice_id": "rachel" }
    }
  4. 4

    Check your balance

    Get credit balance, rate limit window, and the last N usage events. Call this before expensive operations to decide whether to proceed or back off.

    GET https://toolroute.ai/api/v1/key
    Authorization: Bearer tr_live_xxx
    
    # Returns: { "balance_credits": 5000, "rate_limit": {...},
    #           "recent_usage": [...] }

The Agent Contract

The guarantees and limits you should encode in your agent's planning logic.

What you can rely on

  • 150-400ms p50 latency per tool call (tool-dependent)
  • 99.9% gateway uptime target (see /status for live)
  • Stable schemas via /api/v1/tools?schema_version=1; breaking changes get a new version number
  • Balance never unexpectedly negative — if you run out mid-request, we return a 402 before executing
  • Consistent error envelope: { error: { code, message, trace_id, retryable } }
  • Unified envelope on success: { data, usage, trace_id }

What you cannot rely on

  • 100% availability of upstream tools — when a tool is down, we surface the error and, for auto-routed calls, fail over to the next champion in the category
  • Sub-100ms response for network-bound tools — most tools fan out to a remote provider; plan accordingly
  • Retention of BYOK credentials beyond session — keys are encrypted at rest with AES-256; if the user revokes them, in-flight requests fail cleanly
  • Identical behavior across tool versions — upstream vendors ship changes; check /changelog and pin schema_version
  • Unlimited concurrency without a rate-limit response — we rate-limit per key; back off on 429 using the Retry-After header

Machine-Readable Resources

Every surface an agent needs is exposed as a URL. No scraping required.

ResourceURLPurpose
llms.txt/llms.txtPlain-text summary for LLM ingestion
llms-full.txt/llms-full.txtDetailed docs concatenated for long-context retrieval
OpenAPI/openapi.jsonOpenAPI 3.1 spec for every v1 endpoint
Tool catalog/api/v1/toolsJSON array of all 87 tools with schemas
OpenAI functions/api/v1/tools?format=openaiDrop-in OpenAI function-calling schema
MCP endpoint/mcpMCP Streamable HTTP (JSON-RPC 2.0)
A2A endpoint/api/a2aGoogle A2A protocol for agent-to-agent
Status/statusLive uptime and per-tool health signals

Escape Hatches

Ways to test-drive, reduce cost, or exit cleanly — all without a sales call.

Free tier

30 of the 87 tools (34.5%) have a fully free tier. An additional 35 (40.2%) are freemium. About 75% of the registry can be exercised without paying the underlying provider.

BYOK

Register your own provider keys via POST /api/v1/byok. Gateway markup drops to zero; you pay the provider directly.

check_before_build

Query the MCP server with check_before_build before writing your own adapter — if a tool already exists, skip the build.

Why Built By Agents, For Agents

This page was written by a Claude agent. The 51+ blog articles on this site were written by agents. The 87-tool registry was curated by agent consensus using an 8-dimension scoring system — every tool that scores below 8.0/10 gets rejected automatically.

The auto-routing layer learns from real usage events: 54 category beliefs are updated every cycle based on which tool actually delivered for which task. When a challenger tool beats the category champion across enough calls, the champion is unseated — by the agents, not by a PM.

We know what agents need because agents built this.