Provider Authentication
Configure how AgentSpec connects to a codegen provider for code generation (agentspec generate) and source scanning (agentspec scan).
Overview
AgentSpec supports three codegen providers and automatically picks the best one available.
| Provider | Who it's for | What you need |
|---|---|---|
| Claude subscription (Pro / Max) | Anyone with a Claude.ai paid plan | Claude CLI installed and logged in |
| OpenAI-compatible | Anyone using OpenRouter, Groq, Together, Ollama, OpenAI, Nvidia NIM, or any OpenAI-compatible endpoint | AGENTSPEC_LLM_API_KEY + AGENTSPEC_LLM_MODEL (and optionally AGENTSPEC_LLM_BASE_URL) |
| Anthropic API | Teams using the Anthropic API directly | ANTHROPIC_API_KEY env var |
When multiple providers are available, Claude subscription is used first. You can override this at any time.
Choosing a provider
| | Claude Subscription | OpenAI-compatible | Anthropic API |
|---|---|---|---|
| Cost | Included in Pro/Max plan | Depends on endpoint (free for Ollama) | Pay per token |
| Default model | claude-sonnet-4-6 | None (you must set AGENTSPEC_LLM_MODEL) | claude-opus-4-6 |
| Best for | Local dev, individual use | Anything OpenAI-compatible, local inference, multi-provider routing | CI/CD, teams, high volume on Claude |
| Auth | Browser login (interactive) | API key (non-interactive) | API key (non-interactive) |
| Endpoint override | No | Yes (AGENTSPEC_LLM_BASE_URL) | Yes (ANTHROPIC_BASE_URL) |
| Rate limits | Plan-dependent daily cap | Endpoint-dependent | API tier-dependent |
| CI-compatible | No (requires interactive login) | Yes | Yes |
Check your current status
```shell
agentspec provider-status
```

```
AgentSpec -- Provider Status
──────────────────────────────────────────────────
Claude subscription
  ✓ Installed        yes
    Version          2.1.81 (Claude Code)
  ✓ Authenticated    yes
  ✓ Account          you@example.com
  ✓ Plan             Claude Pro
Anthropic API
  ✗ ANTHROPIC_API_KEY not set
  - ANTHROPIC_BASE_URL not set (using default)
OpenAI-compatible
  ✗ AGENTSPEC_LLM_API_KEY not set
Environment & resolution
  - Provider override not set (auto-detect)
  - Model override not set (default: claude-sonnet-4-6)
  ✓ Would use: Claude subscription
──────────────────────────────────────────────────
✓ Ready -- Claude subscription (Claude Pro) · you@example.com
agentspec generate and scan will use the claude-subscription provider
```

Machine-readable output for CI:

```shell
agentspec provider-status --json
```

Exit codes: 0 = ready, 1 = no auth configured.
Method 1 -- Claude Subscription (Pro / Max)
Use your existing Claude.ai subscription. No API key or per-token cost. Usage is covered by your plan's daily allowance.
Prerequisites
- [ ] Claude Pro or Max subscription at claude.ai
- [ ] Claude CLI installed
1. Install the Claude CLI
```shell
# macOS
brew install claude
# or download directly
# https://claude.ai/download
```

Verify:

```shell
claude --version
```

2. Authenticate

```shell
claude auth login
```

This opens a browser window. Sign in with your Claude.ai account. Your session is stored locally.
Verify authentication status:

```shell
claude auth status
```

3. Run AgentSpec
No env vars needed:

```shell
agentspec generate agent.yaml --framework langgraph
```

The spinner shows which provider is active:

```
Generating with Claude (subscription) · 12.4k chars
```

How it works
Under the hood, AgentSpec uses the @anthropic-ai/claude-agent-sdk to call Claude via the query() function. Each generation creates a temporary directory and streams responses with a 5-second heartbeat interval.
Default model
claude-sonnet-4-6. Override with:

```shell
export ANTHROPIC_MODEL=claude-opus-4-6
```

Plan limits
Usage counts against your Claude Pro or Max daily limit. If you hit the cap, AgentSpec throws a quota_exceeded error:
```
Error: Usage limit reached. Your Claude plan's daily allowance has been consumed.
```

Wait for the limit to reset (usually midnight UTC) or switch to API mode:

```shell
export AGENTSPEC_CODEGEN_PROVIDER=anthropic-api
export ANTHROPIC_API_KEY=sk-ant-...
```

Session expiry
Claude CLI sessions can expire after extended inactivity. If you see "not authenticated" or "not logged in", re-run:
```shell
claude auth login
```

Not suitable for CI
Claude subscription requires an interactive browser login. For CI/CD pipelines, use the Anthropic API or OpenAI-compatible provider instead.
Method 2 -- Anthropic API Key
Use a direct Anthropic API key. Best for CI pipelines, Docker environments, teams without a subscription, or when you need explicit cost control.
Prerequisites
- [ ] Anthropic API account at console.anthropic.com
- [ ] API key with sufficient tier limits
1. Get an API key
Go to console.anthropic.com > API Keys > Create key.
2. Set the env var
```shell
export ANTHROPIC_API_KEY=sk-ant-...
```

For permanent use, add it to your shell profile (~/.zshrc, ~/.bashrc) or a .env file.
3. Run AgentSpec
```shell
agentspec generate agent.yaml --framework langgraph
```

The spinner shows:

```
Generating with claude-opus-4-6 (API) · 12.4k chars
```

Default model
claude-opus-4-6. Override with:

```shell
export ANTHROPIC_MODEL=claude-sonnet-4-6
```

Token budget
Each generation request uses max_tokens: 32768. A typical agentspec generate call consumes roughly 2,000 input tokens (manifest + skill prompt) and 4,000-12,000 output tokens (generated code), depending on manifest complexity.
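Those figures make a rough per-call cost estimate easy to script. A back-of-envelope sketch, using the worst-case token counts above; the per-million-token prices are illustrative placeholders, not current Anthropic rates (check anthropic.com/pricing):

```shell
# Worst-case token counts from the paragraph above; prices are placeholders ($ per 1M tokens)
in_tokens=2000
out_tokens=12000
in_price=15
out_price=75
cost=$(awk -v i="$in_tokens" -v o="$out_tokens" -v ip="$in_price" -v op="$out_price" \
  'BEGIN { printf "%.4f", (i * ip + o * op) / 1000000 }')
echo "~\$${cost} per generate call"
```

With these placeholder prices the worst case lands under a dollar per call; substitute your tier's real rates before budgeting.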
Rate limits
Governed by your Anthropic API tier. If you hit the rate limit, AgentSpec surfaces a rate_limited error:
```
Error: Rate limited by the Anthropic API. Back off and retry, or upgrade your API tier.
```

Cost
Billed per input/output token at your tier's rate. Check anthropic.com/pricing for current token prices.
Proxy / custom base URL
Route all API calls through a custom endpoint (useful for corporate proxies, VPNs, or self-hosted API gateways):
```shell
export ANTHROPIC_BASE_URL=https://my-proxy.example.com
```

Only applies when using the Anthropic API provider. Has no effect on Claude subscription or the OpenAI-compatible provider.
Probing
agentspec provider-status sends GET /v1/models with your API key (6-second timeout) to verify the key is valid and the endpoint is reachable. If the probe fails, the provider is marked as unavailable in the status output.
Method 3 -- OpenAI-compatible endpoint
Use any endpoint that speaks the OpenAI wire format: OpenAI.com, OpenRouter, Groq, Together, Ollama, Nvidia NIM, or a local self-hosted model. A single env var family drives all of them.
Prerequisites
- [ ] An API key for the endpoint you want to use (or a dummy string for local Ollama)
- [ ] Knowledge of the endpoint's base URL and a valid model ID on that endpoint
1. Set the env vars
```shell
export AGENTSPEC_LLM_API_KEY=sk-or-v1-...
export AGENTSPEC_LLM_MODEL=qwen/qwen3-235b-a22b
export AGENTSPEC_LLM_BASE_URL=https://openrouter.ai/api/v1
```

AGENTSPEC_LLM_API_KEY and AGENTSPEC_LLM_MODEL are both required. AGENTSPEC_LLM_BASE_URL is optional and defaults to https://api.openai.com/v1.
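To see the fallback in action, a quick illustration (the key and model below are placeholder values, not real credentials):

```shell
# Placeholder values for illustration only
export AGENTSPEC_LLM_API_KEY=sk-or-v1-example
export AGENTSPEC_LLM_MODEL=qwen/qwen3-235b-a22b

unset AGENTSPEC_LLM_BASE_URL  # simulate leaving the optional variable unset
base_url="${AGENTSPEC_LLM_BASE_URL:-https://api.openai.com/v1}"
echo "endpoint: $base_url"
echo "model:    $AGENTSPEC_LLM_MODEL"
```

With AGENTSPEC_LLM_BASE_URL unset, the endpoint falls back to https://api.openai.com/v1.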
2. Run AgentSpec
```shell
agentspec generate agent.yaml --framework langgraph
```

Concrete setups per backend
| Backend | API_KEY | BASE_URL | MODEL example |
|---|---|---|---|
| OpenAI.com | sk-... | (omit, defaults) | gpt-4o-mini |
| OpenRouter | sk-or-v1-... | https://openrouter.ai/api/v1 | qwen/qwen3-235b-a22b |
| Groq | gsk_... | https://api.groq.com/openai/v1 | llama-3.3-70b-versatile |
| Together | ... | https://api.together.xyz/v1 | meta-llama/Llama-3.3-70B-Instruct-Turbo |
| Ollama (local) | ollama (dummy) | http://localhost:11434/v1 | llama3.2 |
| Nvidia NIM | nvapi-... | https://integrate.api.nvidia.com/v1 | meta/llama-3.3-70b-instruct |
Ollama note: Ollama doesn't require a real API key, but the OpenAI SDK refuses to construct a client with an empty string. Set:

```shell
export AGENTSPEC_LLM_API_KEY=ollama
```

(any non-empty value works).
Default model
There is no universal default. Each endpoint exposes different models, so AGENTSPEC_LLM_MODEL is required when AGENTSPEC_LLM_API_KEY is set. If you omit the model, AgentSpec fails fast at resolve time.
Rate limits
Governed by the endpoint you point at. OpenAI-compatible endpoints surface 429 / quota errors through the OpenAI SDK's structured error classes, which AgentSpec maps to:
```
Error: Rate limited: <original message from the endpoint>
```

Cost
Depends on the endpoint. Free for local Ollama, pay-per-token for OpenRouter / Groq / Together / OpenAI / Nvidia NIM.
Live probing
agentspec provider-status sends GET {AGENTSPEC_LLM_BASE_URL}/models with Authorization: Bearer {AGENTSPEC_LLM_API_KEY} (6-second timeout) to verify the endpoint is reachable and your key is accepted. The result shows up as ready, misconfigured (e.g. model missing), or unreachable (HTTP 401, HTTP 404, network error).
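A toy sketch of how a probe outcome could map to those three labels. The mapping below is an illustration assumed from the description above, not AgentSpec's actual classification code:

```shell
# $1 = HTTP status from GET {base_url}/models ("timeout" on network error)
# $2 = value of AGENTSPEC_LLM_MODEL
classify_probe() {
  if [ -z "$2" ]; then
    echo "misconfigured"    # e.g. AGENTSPEC_LLM_MODEL missing
  elif [ "$1" = "200" ]; then
    echo "ready"
  else
    echo "unreachable"      # HTTP 401, HTTP 404, or timeout
  fi
}

classify_probe 200 llama3.2    # -> ready
classify_probe 401 llama3.2    # -> unreachable
```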
Forcing the OpenAI-compatible provider
If you have both ANTHROPIC_API_KEY and AGENTSPEC_LLM_API_KEY set, the OpenAI-compatible provider wins by default in auto mode (priority order is claude-sub > openai-compatible > anthropic-api). To force it even when the Claude CLI is authenticated:
```shell
export AGENTSPEC_CODEGEN_PROVIDER=openai-compatible
```

Environment variable reference
| Variable | Provider | Default | Description |
|---|---|---|---|
| ANTHROPIC_API_KEY | Anthropic API | -- | API key from console.anthropic.com |
| ANTHROPIC_BASE_URL | Anthropic API | https://api.anthropic.com | Custom API endpoint / proxy |
| ANTHROPIC_MODEL | Subscription, API | claude-sonnet-4-6 (sub) / claude-opus-4-6 (API) | Model override |
| AGENTSPEC_LLM_API_KEY | OpenAI-compatible | -- | API key for the endpoint (dummy for local Ollama) |
| AGENTSPEC_LLM_MODEL | OpenAI-compatible | -- | Model ID on the endpoint (required) |
| AGENTSPEC_LLM_BASE_URL | OpenAI-compatible | https://api.openai.com/v1 | Endpoint root (include /v1) |
| AGENTSPEC_CODEGEN_PROVIDER | All | auto | Force a provider: claude-sub, anthropic-api, openai-compatible |
Resolution order (auto mode)
When AGENTSPEC_CODEGEN_PROVIDER is not set, AgentSpec resolves providers in this order:
1. Claude CLI installed + logged in? → use claude-subscription
2. AGENTSPEC_LLM_API_KEY set? → use openai-compatible
3. ANTHROPIC_API_KEY set? → use anthropic-api
4. None available → error with setup options

Subscription always wins when available. If you have both the CLI and env-based credentials, the env-based providers are ignored unless you force one with AGENTSPEC_CODEGEN_PROVIDER=openai-compatible (or =anthropic-api).
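The order above can be sketched as a small function. This is a toy re-implementation for illustration, not AgentSpec's actual code; the CLI login check is passed in as a flag:

```shell
# $1 = AGENTSPEC_CODEGEN_PROVIDER override ("" if unset)
# $2 = "yes" if the Claude CLI is installed and logged in
# $3 = AGENTSPEC_LLM_API_KEY ("" if unset), $4 = ANTHROPIC_API_KEY ("" if unset)
resolve_provider() {
  if [ -n "$1" ]; then echo "$1"                     # explicit override always wins
  elif [ "$2" = "yes" ]; then echo "claude-sub"      # 1. subscription
  elif [ -n "$3" ]; then echo "openai-compatible"    # 2. OpenAI-compatible key
  elif [ -n "$4" ]; then echo "anthropic-api"        # 3. Anthropic API key
  else
    echo "no codegen provider available" >&2         # 4. error with setup options
    return 1
  fi
}

resolve_provider "" yes "" sk-ant-xxx    # -> claude-sub (subscription beats the API key)
```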
Force a specific provider
```shell
# Always use subscription (fails fast if not logged in)
export AGENTSPEC_CODEGEN_PROVIDER=claude-sub

# Always use the Anthropic API (skips CLI check entirely)
export AGENTSPEC_CODEGEN_PROVIDER=anthropic-api

# Use any OpenAI-compatible endpoint (OpenRouter, Groq, Ollama, etc.)
export AGENTSPEC_CODEGEN_PROVIDER=openai-compatible
```

Useful for CI where you want explicit control and no ambiguity. Also useful locally when you want to test a specific provider's output.
CI / CD setup
In CI there is no interactive login, so use an API key provider.
GitHub Actions
```yaml
env:
  ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  AGENTSPEC_CODEGEN_PROVIDER: anthropic-api
```

GitHub Actions (OpenAI-compatible)

```yaml
env:
  AGENTSPEC_LLM_API_KEY: ${{ secrets.AGENTSPEC_LLM_API_KEY }}
  AGENTSPEC_LLM_MODEL: qwen/qwen3-235b-a22b
  AGENTSPEC_LLM_BASE_URL: https://openrouter.ai/api/v1
  AGENTSPEC_CODEGEN_PROVIDER: openai-compatible
```

GitLab CI

```yaml
variables:
  ANTHROPIC_API_KEY: $ANTHROPIC_API_KEY
  AGENTSPEC_CODEGEN_PROVIDER: anthropic-api
```

Always set AGENTSPEC_CODEGEN_PROVIDER explicitly in CI. Auto-detection works but adds a 4-second Claude CLI probe timeout on every run when the CLI isn't installed.
Troubleshooting
| Error | Cause | Fix |
|---|---|---|
| No codegen provider available | No provider could be resolved | Install the Claude CLI, set AGENTSPEC_LLM_API_KEY + AGENTSPEC_LLM_MODEL, or set ANTHROPIC_API_KEY |
| AGENTSPEC_CODEGEN_PROVIDER=claude-sub but claude is not authenticated | Forced to subscription, not logged in | Run claude auth login |
| AGENTSPEC_CODEGEN_PROVIDER=anthropic-api but ANTHROPIC_API_KEY is not set | Forced to API, no key | Set ANTHROPIC_API_KEY |
| AGENTSPEC_LLM_API_KEY is not set | Forced to openai-compatible, no key | Set AGENTSPEC_LLM_API_KEY |
| AGENTSPEC_LLM_MODEL is required when AGENTSPEC_LLM_API_KEY is set | Missing model ID | Set AGENTSPEC_LLM_MODEL to a valid model on your endpoint |
| Invalid AGENTSPEC_LLM_API_KEY | Endpoint rejected the key | Re-copy the key from your endpoint's dashboard |
| Model not found (on openai-compatible) | Endpoint doesn't host the requested model | Change AGENTSPEC_LLM_MODEL to a model the endpoint exposes |
| Claude CLI is not authenticated | CLI installed but session expired | Run claude auth login |
| Claude CLI timed out after 300s | Generation too large for default timeout | Switch to anthropic-api or openai-compatible |
| Usage limit reached / quota exceeded / daily limit | Claude subscription plan cap hit | Wait for reset or switch to an env-based provider |
| Rate limited (429) | API rate limit on the active provider | Back off and retry, or upgrade your API tier |
| Invalid API key | Wrong or revoked key | Regenerate at your provider's dashboard |
See also
- Code Generation -- how generation works under the hood
- agentspec generate -- CLI reference
- agentspec scan -- scan source code into a manifest
- CI Integration -- full CI pipeline examples