
Models

Ash uses named model aliases under [models.*]. You pick a default alias, then optionally route specific workflows to cheaper or stronger models.

Models In 30 Seconds

  • Configure one required alias: [models.default]
  • Point each alias at a provider (openai, openai-oauth, or anthropic) and a provider model id
  • Set provider credentials via OAuth, config, or environment variables
  • Use alias names in CLI (--model) and skill overrides

Quick Start

Use this as a practical baseline:

# Required default model
[models.default]
provider = "openai-oauth"
model = "gpt-5.2"
max_tokens = 4096

# Fast/cheap option for lightweight tasks
[models.fast]
provider = "openai-oauth"
model = "gpt-5.2-mini"
max_tokens = 4096

# Coding-focused option
[models.codex]
provider = "openai-oauth"
model = "gpt-5.2-codex"
max_tokens = 8192

Then authenticate:

uv run ash auth login

Test alias selection:

uv run ash chat --model fast "Summarize this changelog"
uv run ash chat --model codex "Refactor this Python function"

Configure Providers And Credentials

Provider credentials can come from config:

[openai]
api_key = "sk-..."
[anthropic]
api_key = "sk-ant-..."

Or environment variables:

export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...

For openai-oauth, run:

uv run ash auth login
uv run ash auth status

Resolution order (first match wins):

  1. Config ([openai].api_key, [anthropic].api_key)
  2. Environment (OPENAI_API_KEY, ANTHROPIC_API_KEY)

Model Alias Options

Each alias block supports:

[models.default]
provider = "openai-oauth" # Required: "openai" | "openai-oauth" | "anthropic"
model = "gpt-5.2" # Required: provider model id
temperature = 0.7 # Optional: omit/null for reasoning-first models
max_tokens = 4096 # Optional: default is 4096
reasoning = "high" # Optional: OpenAI reasoning effort (low|medium|high)
thinking = "medium" # Optional: Anthropic thinking budget

Tune Reasoning Only When Needed

For OpenAI reasoning effort:

[models.pro]
provider = "openai"
model = "gpt-5.2-pro"
reasoning = "high" # low | medium | high

For Claude extended thinking:

[models.reasoning]
provider = "anthropic"
model = "claude-opus-4-6"
thinking = "medium" # off | minimal | low | medium | high

Start with defaults. Increase reasoning/thinking only for tasks that need deeper analysis.

Per-Skill Model Overrides

Skills can target model aliases directly:

[skills.debug]
model = "codex" # Use coding-focused alias
[skills.research]
model = "default" # Use standard alias

Skill model resolution order:

  1. [skills.<name>].model in config
  2. model in SKILL.md
  3. default alias

Troubleshooting

Unknown model alias

# Check aliases and spelling
uv run ash config show
# Retry with a known alias
uv run ash chat --model default "hello"

Provider auth failures

# Validate config shape and required fields
uv run ash config validate
# Diagnose environment and integration issues
uv run ash doctor

Common fix: ensure the provider key is present in config or exported as an environment variable.

Responses are cut off

Increase max_tokens for the alias used by that workflow.

[models.default]
provider = "openai"
model = "gpt-5.2"
max_tokens = 8192

Reference (Advanced)

Key implementation files:

  • src/ash/llm/base.py - provider interface
  • src/ash/llm/openai.py - OpenAI implementation
  • src/ash/llm/anthropic.py - Anthropic implementation
  • src/ash/llm/registry.py - provider registration and lookup
  • src/ash/llm/types.py - message/tool-call data types

Tool definitions are passed to the model from the tool registry. Providers return assistant messages and tool-call requests in normalized Ash types.
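A rough sketch of what "normalized types" could look like; the real definitions live in src/ash/llm/types.py, and the class and field names below are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    # Hypothetical shape; see src/ash/llm/types.py for the real definitions
    name: str
    arguments: dict

@dataclass
class AssistantMessage:
    # Provider-agnostic assistant reply: text plus any requested tool calls
    content: str
    tool_calls: list[ToolCall] = field(default_factory=list)

msg = AssistantMessage(
    content="Checking the weather.",
    tool_calls=[ToolCall("get_weather", {"city": "Oslo"})],
)
print(msg.tool_calls[0].name)  # get_weather
```

The point of a normalized layer like this is that downstream code handles one message shape regardless of which provider produced it.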