
AutoGen Adapter

Generate Python AutoGen agent code from your agent.yaml manifest.

Usage

```bash
export ANTHROPIC_API_KEY=your-api-key-here
agentspec generate agent.yaml --framework autogen --output ./generated/
```

Get an API key at console.anthropic.com.

Generated Files

| File | Description |
| --- | --- |
| `agent.py` | `AssistantAgent` setup, `run_agent()` coroutine, CLI entry point |
| `tools.py` | Tool registry that imports and exposes the `AGENT_TOOLS` list |
| `tool_implementations.py` | Typed async stub functions; fill in the bodies |
| `manifest.py` | Runtime manifest loader + `get_capabilities()` |
| `agent.yaml` | Copy of your manifest (ships with the agent) |
| `server.py` | FastAPI + SSE streaming server (when `spec.api` is set) |
| `guardrails.py` | Input/output guardrail functions (when `spec.guardrails` is set) |
| `requirements.txt` | Runtime dependencies |
| `requirements-test.txt` | Test dependencies (pytest) |
| `.env.example` | All required environment variables |
| `docker-compose.yml` | Agent + AgentSpec sidecar |
| `README.md` | Quick-start guide |

Manifest Mapping

| `agent.yaml` field | Generated code |
| --- | --- |
| `spec.model.provider: openai` | `OpenAIChatCompletionClient` from `autogen_ext.models.openai` |
| `spec.model.provider: anthropic` | `AnthropicChatCompletionClient` from `autogen_ext.models.anthropic` |
| `spec.model.provider: azure` | `AzureOpenAIChatCompletionClient` from `autogen_ext.models.azure` |
| `spec.model.provider: groq` | `OpenAIChatCompletionClient` with a Groq `base_url` |
| `spec.tools[]` | Plain async functions in `tool_implementations.py`, registered in `tools=[]` |
| `spec.memory` | `ListMemory` (in-process) or Redis-backed `ListMemory` |
| `spec.guardrails` | `run_input_guardrails()` + `run_output_guardrails()` in `guardrails.py` |
| `spec.prompts.system` | `system_message` on the `AssistantAgent` |
| `spec.api` | FastAPI `/chat` endpoint with SSE streaming via `agent.run_stream()` |
| `spec.observability` | Langfuse env vars wired before model client init |
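The provider rows above amount to a lookup from `spec.model.provider` to a model-client class. A minimal sketch of that mapping (an illustrative reimplementation, not the generator's actual code; it maps names only, so it runs without autogen installed):

```python
# Illustrative sketch of the provider -> model-client mapping.
# The generated code imports these classes directly; here we only
# resolve (module, class) names.
PROVIDER_CLIENTS = {
    "openai": ("autogen_ext.models.openai", "OpenAIChatCompletionClient"),
    "anthropic": ("autogen_ext.models.anthropic", "AnthropicChatCompletionClient"),
    "azure": ("autogen_ext.models.azure", "AzureOpenAIChatCompletionClient"),
    # Groq exposes an OpenAI-compatible API, so it reuses the
    # OpenAI client with a Groq base_url.
    "groq": ("autogen_ext.models.openai", "OpenAIChatCompletionClient"),
}


def resolve_client(provider: str) -> tuple[str, str]:
    """Return (module, class) for a spec.model.provider value."""
    try:
        return PROVIDER_CLIENTS[provider]
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}") from None
```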

Example

```yaml
metadata:
  name: research-assistant
  version: 1.0.0

spec:
  model:
    provider: openai
    id: gpt-4o
    apiKey: $env:OPENAI_API_KEY

  tools:
    - name: search-web
      description: Search the web for information
      annotations:
        readOnlyHint: true

  memory:
    shortTerm:
      backend: in-memory

  guardrails:
    input:
      - type: prompt-injection
      - type: pii-detector
        action: scrub
```
```bash
agentspec generate agent.yaml --framework autogen --output ./generated/
cd generated
pip install -r requirements.txt
cp .env.example .env
python agent.py "What are the latest AI research papers?"
```
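The `pii-detector` guardrail with `action: scrub` in this manifest rewrites detected spans before they reach the model. A rough sketch of such a scrubber, assuming regex-based detection (the patterns and `scrub_pii` name are illustrative; the generated `guardrails.py` will differ):

```python
import re

# Illustrative PII scrubber in the spirit of the pii-detector guardrail
# with action: scrub. Patterns are deliberately simple examples.
_PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def scrub_pii(text: str) -> str:
    """Replace detected PII spans with [REDACTED:<kind>] placeholders."""
    for kind, pattern in _PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{kind}]", text)
    return text
```

Scrubbing (rather than rejecting) lets the request proceed with the sensitive substrings masked, which is why `action: scrub` pairs well with input guardrails.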


Released under the Apache 2.0 License.