
Provider Authentication

Configure how AgentSpec connects to a codegen provider for code generation (agentspec generate) and source scanning (agentspec scan).

Overview

AgentSpec supports three codegen providers and automatically picks the best one available.

| Provider | Who it's for | What you need |
|---|---|---|
| Claude subscription (Pro / Max) | Anyone with a Claude.ai paid plan | Claude CLI installed and logged in |
| OpenAI-compatible | Anyone using OpenRouter, Groq, Together, Ollama, OpenAI, Nvidia NIM, or any OpenAI-compatible endpoint | AGENTSPEC_LLM_API_KEY + AGENTSPEC_LLM_MODEL (and optionally AGENTSPEC_LLM_BASE_URL) |
| Anthropic API | Teams using the Anthropic API directly | ANTHROPIC_API_KEY env var |

When multiple providers are available, Claude subscription is used first. You can override this at any time.


Choosing a provider

| | Claude Subscription | OpenAI-compatible | Anthropic API |
|---|---|---|---|
| Cost | Included in Pro/Max plan | Depends on endpoint (free for Ollama) | Pay per token |
| Default model | claude-sonnet-4-6 | None (you must set AGENTSPEC_LLM_MODEL) | claude-opus-4-6 |
| Best for | Local dev, individual use | Anything OpenAI-compatible, local inference, multi-provider routing | CI/CD, teams, high volume on Claude |
| Auth | Browser login (interactive) | API key (non-interactive) | API key (non-interactive) |
| Endpoint override | No | Yes (AGENTSPEC_LLM_BASE_URL) | Yes (ANTHROPIC_BASE_URL) |
| Rate limits | Plan-dependent daily cap | Endpoint-dependent | API tier-dependent |
| CI-compatible | No (requires interactive login) | Yes | Yes |

Check your current status

bash
agentspec provider-status
  AgentSpec -- Provider Status
  ─────────────────────────────

Claude subscription
  ✓ Installed              yes
    Version                2.1.81 (Claude Code)
  ✓ Authenticated          yes
  ✓ Account                you@example.com
  ✓ Plan                   Claude Pro

Anthropic API
  ✗ ANTHROPIC_API_KEY      not set
  - ANTHROPIC_BASE_URL     not set (using default)

OpenAI-compatible
  ✗ AGENTSPEC_LLM_API_KEY  not set

Environment & resolution
  - Provider override      not set (auto-detect)
  - Model override         not set (default: claude-sonnet-4-6)

  ✓ Would use: Claude subscription

──────────────────────────────────────────────────
✓ Ready -- Claude subscription (Claude Pro) · you@example.com
  agentspec generate and scan will use the claude-subscription provider

Machine-readable output for CI:

bash
agentspec provider-status --json

Exit codes: 0 = ready, 1 = no auth configured.
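These exit codes make the command usable as a CI gate. A minimal sketch, assuming agentspec is on the runner's PATH (the command -v guard is only there so the snippet degrades gracefully when it is not):

```shell
# Gate a pipeline step on provider readiness.
# provider-status exits 0 when a provider is ready, 1 when no auth is configured.
if command -v agentspec >/dev/null 2>&1; then
  if agentspec provider-status --json > provider.json; then
    echo "codegen provider ready"
  else
    echo "no codegen provider configured; see provider.json" >&2
    exit 1
  fi
else
  echo "agentspec not installed; skipping provider gate"
fi
```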


Method 1 -- Claude Subscription (Pro / Max)

Use your existing Claude.ai subscription. No API key or per-token cost. Usage is covered by your plan's daily allowance.

Prerequisites

  • [ ] Claude Pro or Max subscription at claude.ai
  • [ ] Claude CLI installed

1. Install the Claude CLI

bash
# macOS
brew install claude

# or download directly
# https://claude.ai/download

Verify:

bash
claude --version

2. Authenticate

bash
claude auth login

This opens a browser window. Sign in with your Claude.ai account. Your session is stored locally.

Verify authentication status:

bash
claude auth status

3. Run AgentSpec

No env vars needed:

bash
agentspec generate agent.yaml --framework langgraph

The spinner shows which provider is active:

  Generating with Claude (subscription) · 12.4k chars

How it works

Under the hood, AgentSpec uses the @anthropic-ai/claude-agent-sdk to call Claude via the query() function. Each generation creates a temporary directory and streams responses with a 5-second heartbeat interval.

Default model

claude-sonnet-4-6. Override with:

bash
export ANTHROPIC_MODEL=claude-opus-4-6

Plan limits

Usage counts against your Claude Pro or Max daily limit. If you hit the cap, AgentSpec throws a quota_exceeded error:

Error: Usage limit reached. Your Claude plan's daily allowance has been consumed.

Wait for the limit to reset (usually midnight UTC) or switch to API mode:

bash
export AGENTSPEC_CODEGEN_PROVIDER=anthropic-api
export ANTHROPIC_API_KEY=sk-ant-...

Session expiry

Claude CLI sessions can expire after extended inactivity. If you see "not authenticated" or "not logged in", re-run:

bash
claude auth login

Not suitable for CI

Claude subscription requires an interactive browser login. For CI/CD pipelines, use the Anthropic API or OpenAI-compatible provider instead.


Method 2 -- Anthropic API Key

Use a direct Anthropic API key. Best for CI pipelines, Docker environments, teams without a subscription, or when you need explicit cost control.

Prerequisites

1. Get an API key

Go to console.anthropic.com > API Keys > Create key.

2. Set the env var

bash
export ANTHROPIC_API_KEY=sk-ant-...

For permanent use, add to your shell profile (~/.zshrc, ~/.bashrc) or a .env file.
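For example (zsh shown; use ~/.bashrc for bash):

```shell
# Persist the key across shells; replace the placeholder with your real key.
echo 'export ANTHROPIC_API_KEY=sk-ant-...' >> ~/.zshrc
source ~/.zshrc
```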

3. Run AgentSpec

bash
agentspec generate agent.yaml --framework langgraph

The spinner shows:

  Generating with claude-opus-4-6 (API) · 12.4k chars

Default model

claude-opus-4-6. Override with:

bash
export ANTHROPIC_MODEL=claude-sonnet-4-6

Token budget

Each generation request uses max_tokens: 32768. A typical agentspec generate call consumes roughly 2,000 input tokens (manifest + skill prompt) and 4,000-12,000 output tokens (generated code), depending on manifest complexity.
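Those numbers let you estimate per-call cost up front. A back-of-the-envelope sketch; the per-million-token rates below are hypothetical placeholders, so substitute the real prices for your model and tier:

```shell
# Rough cost for one generate call using the token budget above.
# input_rate / output_rate are PLACEHOLDER prices in $ per 1M tokens --
# look up the real numbers at anthropic.com/pricing.
input_tokens=2000
output_tokens=12000   # upper end of the 4,000-12,000 range
input_rate=15
output_rate=75
cost=$(awk -v it="$input_tokens" -v ot="$output_tokens" \
           -v ir="$input_rate" -v orate="$output_rate" \
           'BEGIN { printf "%.2f", (it*ir + ot*orate) / 1e6 }')
echo "estimated cost per call: \$$cost"
```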

Rate limits

Governed by your Anthropic API tier. If you hit the rate limit, AgentSpec surfaces a rate_limited error:

Error: Rate limited by the Anthropic API. Back off and retry, or upgrade your API tier.

Cost

Billed per input/output token at your tier's rate. Check anthropic.com/pricing for current token prices.

Proxy / custom base URL

Route all API calls through a custom endpoint (useful for corporate proxies, VPNs, or self-hosted API gateways):

bash
export ANTHROPIC_BASE_URL=https://my-proxy.example.com

Only applies when using the Anthropic API provider. Has no effect on Claude subscription or the OpenAI-compatible provider.

Probing

agentspec provider-status sends GET /v1/models with your API key (6-second timeout) to verify the key is valid and the endpoint is reachable. If the probe fails, the provider is marked as unavailable in the status output.
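You can issue an equivalent probe by hand to debug a failing status check. A sketch with curl, not AgentSpec's exact implementation; the x-api-key and anthropic-version headers are how the Anthropic API authenticates requests:

```shell
# Manual equivalent of the provider-status probe (assumes curl is installed).
curl -sS --max-time 6 \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  "${ANTHROPIC_BASE_URL:-https://api.anthropic.com}/v1/models"
```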


Method 3 -- OpenAI-compatible endpoint

Use any endpoint that speaks the OpenAI wire format: OpenAI.com, OpenRouter, Groq, Together, Ollama, Nvidia NIM, or a local self-hosted model. A single env var family drives all of them.

Prerequisites

  • [ ] An API key for the endpoint you want to use (or a dummy string for local Ollama)
  • [ ] Knowledge of the endpoint's base URL and a valid model ID on that endpoint

1. Set the env vars

bash
export AGENTSPEC_LLM_API_KEY=sk-or-v1-...
export AGENTSPEC_LLM_MODEL=qwen/qwen3-235b-a22b
export AGENTSPEC_LLM_BASE_URL=https://openrouter.ai/api/v1

AGENTSPEC_LLM_API_KEY and AGENTSPEC_LLM_MODEL are both required. AGENTSPEC_LLM_BASE_URL is optional and defaults to https://api.openai.com/v1.

2. Run AgentSpec

bash
agentspec generate agent.yaml --framework langgraph

Concrete setups per backend

| Backend | API_KEY | BASE_URL | MODEL example |
|---|---|---|---|
| OpenAI.com | sk-... | (omit, defaults) | gpt-4o-mini |
| OpenRouter | sk-or-v1-... | https://openrouter.ai/api/v1 | qwen/qwen3-235b-a22b |
| Groq | gsk_... | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| Together | ... | https://api.together.xyz/v1 | meta-llama/Llama-3.3-70B-Instruct-Turbo |
| Ollama (local) | ollama (dummy) | http://localhost:11434/v1 | llama3.2 |
| Nvidia NIM | nvapi-... | https://integrate.api.nvidia.com/v1 | meta/llama-3.3-70b-instruct |

Ollama note: Ollama doesn't require a real API key, but the OpenAI SDK client refuses to initialize with an empty key string. Set AGENTSPEC_LLM_API_KEY=ollama (any non-empty value works).
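Putting the Ollama row together, a complete local setup might look like this (assumes Ollama is installed and serving on its default port, and uses the llama3.2 model from the table above):

```shell
# Local, no-cost setup: point AgentSpec at a local Ollama server.
export AGENTSPEC_LLM_API_KEY=ollama                      # dummy, non-empty
export AGENTSPEC_LLM_BASE_URL=http://localhost:11434/v1
export AGENTSPEC_LLM_MODEL=llama3.2

ollama pull llama3.2        # make sure the model is available locally
agentspec provider-status   # verify the openai-compatible provider is ready
```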

Default model

There is no universal default. Each endpoint exposes different models, so AGENTSPEC_LLM_MODEL is required when AGENTSPEC_LLM_API_KEY is set. If you omit the model, AgentSpec fails fast at resolve time.

Rate limits

Governed by the endpoint you point at. OpenAI-compatible endpoints surface 429 / quota errors through the OpenAI SDK's structured error classes, which AgentSpec maps to:

Error: Rate limited: <original message from the endpoint>

Cost

Depends on the endpoint. Free for local Ollama, pay-per-token for OpenRouter / Groq / Together / OpenAI / Nvidia NIM.

Live probing

agentspec provider-status sends GET {AGENTSPEC_LLM_BASE_URL}/models with Authorization: Bearer {AGENTSPEC_LLM_API_KEY} (6-second timeout) to verify the endpoint is reachable and your key is accepted. The result shows up as ready, misconfigured (e.g. model missing), or unreachable (HTTP 401, HTTP 404, network error).
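The same probe can be issued by hand when diagnosing an unreachable endpoint. A sketch assuming curl is installed, not AgentSpec's exact implementation:

```shell
# Manual equivalent of the provider-status probe for OpenAI-compatible endpoints.
base_url="${AGENTSPEC_LLM_BASE_URL:-https://api.openai.com/v1}"
curl -sS --max-time 6 \
  -H "Authorization: Bearer $AGENTSPEC_LLM_API_KEY" \
  "${base_url%/}/models"
```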

Forcing the OpenAI-compatible provider

If you have both ANTHROPIC_API_KEY and AGENTSPEC_LLM_API_KEY set, the OpenAI-compatible provider wins by default in auto mode (priority order is claude-sub > openai-compatible > anthropic-api). To force it even when the Claude CLI is authenticated:

bash
export AGENTSPEC_CODEGEN_PROVIDER=openai-compatible

Environment variable reference

| Variable | Provider | Default | Description |
|---|---|---|---|
| ANTHROPIC_API_KEY | Anthropic API | -- | API key from console.anthropic.com |
| ANTHROPIC_BASE_URL | Anthropic API | https://api.anthropic.com | Custom API endpoint / proxy |
| ANTHROPIC_MODEL | Subscription, API | claude-sonnet-4-6 (sub) / claude-opus-4-6 (API) | Model override |
| AGENTSPEC_LLM_API_KEY | OpenAI-compatible | -- | API key for the endpoint (dummy for local Ollama) |
| AGENTSPEC_LLM_MODEL | OpenAI-compatible | -- | Model ID on the endpoint (required) |
| AGENTSPEC_LLM_BASE_URL | OpenAI-compatible | https://api.openai.com/v1 | Endpoint root (include /v1) |
| AGENTSPEC_CODEGEN_PROVIDER | All | auto | Force a provider: claude-sub, anthropic-api, openai-compatible |

Resolution order (auto mode)

When AGENTSPEC_CODEGEN_PROVIDER is not set, AgentSpec resolves providers in this order:

1. Claude CLI installed + logged in?  →  use claude-subscription
2. AGENTSPEC_LLM_API_KEY set?         →  use openai-compatible
3. ANTHROPIC_API_KEY set?             →  use anthropic-api
4. None available                     →  error with setup options

Subscription always wins when available. If you have both the CLI and env-based credentials, the env-based providers are ignored unless you force one with AGENTSPEC_CODEGEN_PROVIDER=openai-compatible (or =anthropic-api).
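To see which provider auto mode resolves to without running a generation (assumes agentspec is on PATH; the "Would use" line comes from the status output shown earlier):

```shell
agentspec provider-status | grep "Would use"
```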


Force a specific provider

bash
# Always use subscription (fails fast if not logged in)
export AGENTSPEC_CODEGEN_PROVIDER=claude-sub

# Always use the Anthropic API (skips CLI check entirely)
export AGENTSPEC_CODEGEN_PROVIDER=anthropic-api

# Use any OpenAI-compatible endpoint (OpenRouter, Groq, Ollama, etc.)
export AGENTSPEC_CODEGEN_PROVIDER=openai-compatible

Useful for CI where you want explicit control and no ambiguity. Also useful locally when you want to test a specific provider's output.


CI / CD setup

In CI there is no interactive login, so use an API key provider.

GitHub Actions

yaml
env:
  ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  AGENTSPEC_CODEGEN_PROVIDER: anthropic-api

GitHub Actions (OpenAI-compatible)

yaml
env:
  AGENTSPEC_LLM_API_KEY: ${{ secrets.AGENTSPEC_LLM_API_KEY }}
  AGENTSPEC_LLM_MODEL: qwen/qwen3-235b-a22b
  AGENTSPEC_LLM_BASE_URL: https://openrouter.ai/api/v1
  AGENTSPEC_CODEGEN_PROVIDER: openai-compatible

GitLab CI

yaml
variables:
  ANTHROPIC_API_KEY: $ANTHROPIC_API_KEY
  AGENTSPEC_CODEGEN_PROVIDER: anthropic-api

Always set AGENTSPEC_CODEGEN_PROVIDER explicitly in CI. Auto-detection works but adds a 4-second Claude CLI probe timeout on every run when the CLI isn't installed.


Troubleshooting

| Error | Cause | Fix |
|---|---|---|
| No codegen provider available | No provider could be resolved | Install Claude CLI, set AGENTSPEC_LLM_API_KEY + AGENTSPEC_LLM_MODEL, or set ANTHROPIC_API_KEY |
| AGENTSPEC_CODEGEN_PROVIDER=claude-sub but claude is not authenticated | Forced to subscription, not logged in | Run claude auth login |
| AGENTSPEC_CODEGEN_PROVIDER=anthropic-api but ANTHROPIC_API_KEY is not set | Forced to API, no key | Set ANTHROPIC_API_KEY |
| AGENTSPEC_LLM_API_KEY is not set | Forced to openai-compatible, no key | Set AGENTSPEC_LLM_API_KEY |
| AGENTSPEC_LLM_MODEL is required when AGENTSPEC_LLM_API_KEY is set | Missing model ID | Set AGENTSPEC_LLM_MODEL to a valid model on your endpoint |
| Invalid AGENTSPEC_LLM_API_KEY | Endpoint rejected the key | Re-copy the key from your endpoint's dashboard |
| Model not found (on openai-compatible) | Endpoint doesn't host the requested model | Change AGENTSPEC_LLM_MODEL to a model the endpoint exposes |
| Claude CLI is not authenticated | CLI installed but session expired | Run claude auth login |
| Claude CLI timed out after 300s | Generation too large for default timeout | Switch to anthropic-api or openai-compatible |
| Usage limit reached / quota exceeded / daily limit | Claude subscription plan cap hit | Wait for reset or switch to an env-based provider |
| Rate limited (429) | API rate limit on the active provider | Back off and retry, or upgrade your API tier |
| Invalid API key | Wrong or revoked key | Regenerate at your provider's dashboard |


Released under the Apache 2.0 License.