Connect Anthropic API
canopy.anthropic.tools() returns Anthropic's [{ name, description, input_schema }] shape directly — no rename, no destructuring. canopy.anthropic.dispatch(content) consumes assistant content blocks and returns tool_result blocks ready to wrap in a user message for the next turn.
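To make the shape concrete, here is an illustrative tool definition in the form tools() returns (the name canopy_pay comes from the tool list below; the description text and input_schema fields are placeholders, not the SDK's real schema):

```typescript
// Illustrative: canopy.anthropic.tools() yields objects already in
// Anthropic's tool format, so they pass straight into messages.create.
const exampleTool = {
  name: 'canopy_pay',                // real tool name from the SDK
  description: 'Send a payment...',  // placeholder text
  input_schema: {
    type: 'object' as const,
    properties: { amountUsd: { type: 'number' } }, // placeholder fields
    required: ['amountUsd'],
  },
};
```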
Run npx @canopy-ai/sdk connect in your project root. It opens a consent page in your browser, then writes credentials to ~/.config/canopy/credentials and merges a canopy MCP server entry into any installed Claude Code, Cursor, Claude Desktop, Windsurf, Cline, VS Code, or Zed. If you use this route, skip Steps 2 and 4 below.

Step 1 — Connect your agent in the dashboard
Canopy is bring-your-own-agent. This step doesn't create the agent itself — you've already built that, or are about to. It registers a Canopy-side record that pairs your agent with a spending policy and gives you an agt_… ID to use in your code.
Sign in at trycanopy.ai and go to Agents → Connect agent. Give the agent a name and pick (or create) a policy. The policy controls the spend cap, recipient allowlist, and approval threshold every payment from this agent will be evaluated against.
Step 2 — Copy your credentials
You need two values in your code:
- Org API key (ak_live_… or ak_test_…) — from Settings → API Keys. Copy it the moment you create it; the plaintext is shown only once.
- Agent ID (agt_…) — from the agent's detail page in /dashboard/agents.
Step 3 — Install the package
npm install @canopy-ai/sdk

Step 4 — Set your environment variables
CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
CANOPY_AGENT_ID=agt_xxxxxxxx

Use a .env file locally and your platform's secret manager in production. Never commit credentials.
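A small guard at startup catches missing or mistyped credentials before the first API call. This is a sketch, not part of the SDK; the prefix checks simply mirror the ak_live_/ak_test_ and agt_ formats above:

```typescript
// Fail fast if a credential is missing or has an unexpected prefix.
function requireEnv(name: string, prefixes: string[]): string {
  const value = process.env[name];
  if (!value) throw new Error(`${name} is not set`);
  if (!prefixes.some((p) => value.startsWith(p))) {
    throw new Error(`${name} should start with ${prefixes.join(' or ')}`);
  }
  return value;
}

// Usage at startup:
// const apiKey = requireEnv('CANOPY_API_KEY', ['ak_live_', 'ak_test_']);
// const agentId = requireEnv('CANOPY_AGENT_ID', ['agt_']);
```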
Step 5 — Connect in your agent code
Paste the snippet below into your existing Anthropic agent.
// 1. Add to your .env:
// CANOPY_API_KEY=ak_live_xxxxxxxxxxxxxxxx
// 2. In your agent code:
import Anthropic from '@anthropic-ai/sdk';
import { Canopy } from '@canopy-ai/sdk';
const canopy = new Canopy({
apiKey: process.env.CANOPY_API_KEY,
agentId: 'agt_xxxxxxxx',
});
const client = new Anthropic();
const messages: Anthropic.MessageParam[] = [
{ role: 'user', content: 'Pay 10 cents to 0x1234...' },
];
const reply = await client.messages.create({
model: 'claude-sonnet-4-6',
max_tokens: 1024,
tools: canopy.anthropic.tools(),
messages,
});
// Run any tool_use blocks through Canopy and feed the results back next turn:
const toolResults = await canopy.anthropic.dispatch(reply.content);
if (toolResults.length) {
messages.push({ role: 'assistant', content: reply.content });
messages.push({ role: 'user', content: toolResults });
}

Step 6 — Verify the connection
Run your agent once. As soon as Canopy receives a request from it, the dashboard flips the agent to connected and shows the first event captured. If nothing happens after a minute, see Troubleshooting.
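The Step 5 snippet handles a single turn. A full agent keeps feeding tool_result blocks back until the model replies without calling tools. Here is a hedged sketch of that loop, with the model call and the dispatcher passed in as plain functions; the helper name runToolLoop and its signature are illustrative, not SDK API:

```typescript
// Illustrative turn loop: call the model, dispatch tool_use blocks,
// feed the results back, and stop once no tool results come back.
type Message = { role: 'user' | 'assistant'; content: unknown };
type Reply = { content: unknown[] };

async function runToolLoop(
  createMessage: (messages: Message[]) => Promise<Reply>,
  dispatch: (content: unknown[]) => Promise<unknown[]>,
  messages: Message[],
  maxTurns = 5, // safety bound so a misbehaving model can't loop forever
): Promise<Reply> {
  let reply = await createMessage(messages);
  for (let turn = 0; turn < maxTurns; turn++) {
    const toolResults = await dispatch(reply.content);
    if (toolResults.length === 0) break; // model is done with tools
    messages.push({ role: 'assistant', content: reply.content });
    messages.push({ role: 'user', content: toolResults });
    reply = await createMessage(messages);
  }
  return reply;
}
```

In real code, createMessage would wrap client.messages.create({ model, max_tokens, tools: canopy.anthropic.tools(), messages }) and dispatch would be canopy.anthropic.dispatch.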
How dispatch behaves
- Skips non-tool_use blocks — text and thinking blocks pass through untouched.
- Skips non-Canopy tool calls — your host loop dispatches user-defined tools; Canopy only handles canopy_pay, canopy_check_url, canopy_discover_services, canopy_approve, canopy_deny.
- Embeds errors as JSON — if a tool throws, the tool_result.content becomes {"error": "..."} so the LLM can react instead of crashing the loop.
- Pending approvals propagate intact — when canopy_pay returns pending_approval, the rich fields (recipientName, amountUsd, expiresAt, chatApprovalEnabled) land in the tool_result. The LLM can ask the user inline and call canopy_approve / canopy_deny next turn.
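The filtering and error-embedding rules above can be sketched as a plain function. This is a simplified model of dispatch for illustration, not the SDK source; the block types are trimmed to the fields used here, and the tool handler is passed in:

```typescript
// Simplified model of canopy.anthropic.dispatch(): keep only Canopy
// tool_use blocks, run a handler, and embed any thrown error as JSON.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown };

type ToolResultBlock = { type: 'tool_result'; tool_use_id: string; content: string };

const CANOPY_TOOLS = new Set([
  'canopy_pay',
  'canopy_check_url',
  'canopy_discover_services',
  'canopy_approve',
  'canopy_deny',
]);

async function dispatchSketch(
  content: ContentBlock[],
  run: (name: string, input: unknown) => Promise<unknown>,
): Promise<ToolResultBlock[]> {
  const results: ToolResultBlock[] = [];
  for (const block of content) {
    // Text/thinking blocks and user-defined tools are skipped entirely.
    if (block.type !== 'tool_use' || !CANOPY_TOOLS.has(block.name)) continue;
    try {
      const out = await run(block.name, block.input);
      results.push({ type: 'tool_result', tool_use_id: block.id, content: JSON.stringify(out) });
    } catch (err) {
      // Errors become {"error": "..."} so the model can react
      // instead of the host loop crashing.
      results.push({
        type: 'tool_result',
        tool_use_id: block.id,
        content: JSON.stringify({ error: String(err) }),
      });
    }
  }
  return results;
}
```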
Where to go next
- Payment outcomes — Anthropic returns the policy outcome verbatim to the model
- TypeScript SDK reference — canopy.anthropic namespace