
Providers

OpenCode supports 75+ LLM providers through the AI SDK and Models.dev, covering hosted services as well as locally run models.

Setup Process

  1. Add API keys using the /connect command
  2. Configure the provider in your OpenCode config

Credentials are stored in ~/.local/share/opencode/auth.json.

Directory

Here's a quick reference of supported providers:

Provider | Setup Method | Key Features
Anthropic | OAuth or API key | Claude Pro/Max support
OpenAI | ChatGPT Plus/Pro or API key | GPT-4o, o1 models
GitHub Copilot | Device code auth | Pro+ subscription models
Google Vertex AI | Service account or gcloud auth | 40+ models
Amazon Bedrock | AWS credentials/profile | VPC endpoint support
Azure OpenAI | API key + resource name | Custom deployments
Groq | API key | High-speed inference
DeepSeek | API key | Reasoning models
OpenRouter | API key | Multi-provider routing
GitLab Duo | API key | GitLab integration
Ollama | Local setup | Run models locally
LM Studio | Local setup | Local model management

Additional providers include: 302.AI, Baseten, Cerebras, Cloudflare AI Gateway, Cortecs, Deep Infra, Firmware, Fireworks AI, Hugging Face, Helicone, IO.NET, Moonshot AI, MiniMax, Nebius Token Factory, OVHcloud AI Endpoints, SAP AI Core, Scaleway, Together AI, Venice AI, Vercel AI Gateway, xAI, Z.AI, ZenMux.

Base URL Configuration

You can customize the base URL for any provider by setting the baseURL option. This is useful when using proxy services or custom endpoints.

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "options": {
        "baseURL": "https://api.anthropic.com/v1"
      }
    }
  }
}

OpenCode Zen

OpenCode Zen is a curated list of models that the OpenCode team has tested and verified to work well.

  1. Run /connect, select opencode
  2. Visit opencode.ai/auth to authenticate
  3. Copy and paste your API key
  4. Use /models to view recommended models

Popular Providers

Anthropic

  1. Run /connect and select Anthropic
  2. Choose Claude Pro/Max for browser authentication
  3. Access models via /models command
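If you authenticate with an API key instead of the browser flow, you can reference it from the environment. The snippet below mirrors the apiKey pattern used by the other providers in this guide; the exact option name is an assumption based on that pattern:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anthropic": {
      "options": {
        "apiKey": "{env:ANTHROPIC_API_KEY}"
      }
    }
  }
}
```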

OpenAI

  1. Create API key at platform.openai.com/api-keys
  2. Run /connect and search OpenAI
  3. Enter API key
  4. Select model with /models
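To keep the key out of your config file, you can reference it from the environment, following the same apiKey pattern shown for Groq and DeepSeek below:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "apiKey": "{env:OPENAI_API_KEY}"
      }
    }
  }
}
```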

Groq

Groq provides high-speed inference for various models.

  1. Create API key at console.groq.com
  2. Run /connect and search Groq
  3. Enter API key
  4. Select model with /models

You can also set the API key in your config:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "groq": {
      "options": {
        "apiKey": "{env:GROQ_API_KEY}"
      }
    }
  }
}

DeepSeek

DeepSeek offers powerful reasoning models.

  1. Create API key at platform.deepseek.com
  2. Run /connect and search DeepSeek
  3. Enter API key
  4. Select model with /models

You can also set the API key in your config:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "deepseek": {
      "options": {
        "apiKey": "{env:DEEPSEEK_API_KEY}"
      }
    }
  }
}

GitHub Copilot

GitHub Copilot integration requires a Pro+ subscription.

  1. Run /connect and select GitHub Copilot
  2. Complete device code authentication
  3. Access models via /models command

You can customize how Copilot models are displayed in your config:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "github-copilot": {
      "models": {
        "gpt-4o": {
          "name": "GPT-4o (Copilot)"
        }
      }
    }
  }
}

GitLab Duo

GitLab Duo provides AI features integrated with GitLab.

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "gitlab-duo": {
      "options": {
        "apiKey": "{env:GITLAB_API_KEY}"
      }
    }
  }
}

OpenRouter

OpenRouter accepts per-model routing options, for example pinning a model to a specific upstream provider and disabling fallbacks:

{
  "provider": {
    "openrouter": {
      "models": {
        "moonshotai/kimi-k2": {
          "options": {
            "provider": {
              "order": ["baseten"],
              "allow_fallbacks": false
            }
          }
        }
      }
    }
  }
}

Ollama (Local)

Ollama serves an OpenAI-compatible API on port 11434, so it is configured through the @ai-sdk/openai-compatible package:

{
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "llama2": {
          "name": "Llama 2"
        }
      }
    }
  }
}

LM Studio (Local)

LM Studio works the same way, with its local server listening on port 1234 by default:

{
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "models": {
        "google/gemma-3n-e4b": {
          "name": "Gemma 3n-e4b (local)"
        }
      }
    }
  }
}

Amazon Bedrock

Configure the AWS region and, optionally, a named profile in your config:

{
  "provider": {
    "amazon-bedrock": {
      "options": {
        "region": "us-east-1",
        "profile": "my-aws-profile"
      }
    }
  }
}

Authentication Precedence

When using Amazon Bedrock, authentication follows this precedence order:

  1. Bearer Token - If AWS_BEARER_TOKEN_BEDROCK is set (via /connect or environment variable), it takes precedence over all other methods
  2. AWS Credential Chain - Standard AWS credential resolution:
    • AWS profile configuration
    • Access keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    • IAM roles
    • EKS IRSA (IAM Roles for Service Accounts)

Azure OpenAI

  1. Create Azure OpenAI resource in Azure portal
  2. Deploy model in Azure AI Foundry
  3. Run /connect and search Azure
  4. Set AZURE_RESOURCE_NAME environment variable
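If you prefer to keep everything in your config file, a sketch is shown below. The azure provider id and the resourceName option are assumptions based on the AI SDK's Azure provider; the steps above (environment variable plus /connect) are the documented path:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "azure": {
      "options": {
        "resourceName": "{env:AZURE_RESOURCE_NAME}",
        "apiKey": "{env:AZURE_API_KEY}"
      }
    }
  }
}
```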

Custom Provider Setup

For OpenAI-compatible providers:

{
  "provider": {
    "myprovider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "My AI Provider",
      "options": {
        "baseURL": "https://api.myprovider.com/v1",
        "apiKey": "{env:MY_API_KEY}"
      },
      "models": {
        "my-model": {
          "name": "My Model",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        }
      }
    }
  }
}

Environment Variable Syntax

Use the {env:VARIABLE_NAME} syntax to reference environment variables in your configuration:

{
  "provider": {
    "myprovider": {
      "options": {
        "apiKey": "{env:MY_PROVIDER_API_KEY}"
      }
    }
  }
}

This allows you to keep sensitive credentials out of your config files.

Model Limits

The limit fields help OpenCode understand the context window and output limits of your models:

{
  "provider": {
    "myprovider": {
      "models": {
        "my-model": {
          "name": "My Model",
          "limit": {
            "context": 200000,
            "output": 65536
          }
        }
      }
    }
  }
}
  • context: Maximum input tokens the model can process
  • output: Maximum output tokens the model can generate

Custom Headers

You can add custom headers to API requests:

{
  "provider": {
    "myprovider": {
      "options": {
        "headers": {
          "Authorization": "Bearer custom-token",
          "X-Custom-Header": "value"
        }
      }
    }
  }
}

Troubleshooting

  1. Check authentication: Run opencode auth list to verify credentials
  2. Custom provider issues:
    • Verify provider ID matches between /connect and config
    • Confirm correct npm package
    • Check API endpoint in options.baseURL