# Models
Ash uses named model aliases under `[models.*]`. You pick a default alias, then optionally route specific workflows to cheaper or stronger models.
## Models In 30 Seconds
- Configure one required alias: `[models.default]`
- Point each alias at a provider model (`openai`, `openai-oauth`, or `anthropic`)
- Set provider credentials via OAuth, config, or environment variables
- Use alias names in the CLI (`--model`) and in skill overrides
## Quick Start
Use this as a practical baseline:
```toml
# Required default model
[models.default]
provider = "openai-oauth"
model = "gpt-5.2"
max_tokens = 4096

# Fast/cheap option for lightweight tasks
[models.fast]
provider = "openai-oauth"
model = "gpt-5.2-mini"
max_tokens = 4096

# Coding-focused option
[models.codex]
provider = "openai-oauth"
model = "gpt-5.2-codex"
max_tokens = 8192
```

Then authenticate:

```bash
uv run ash auth login
```

Test alias selection:

```bash
uv run ash chat --model fast "Summarize this changelog"
uv run ash chat --model codex "Refactor this Python function"
```

## Configure Providers And Credentials
Provider credentials can come from config:
```toml
[openai]
api_key = "sk-..."

[anthropic]
api_key = "sk-ant-..."
```

Or environment variables:

```bash
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-ant-...
```

For `openai-oauth`, run:

```bash
uv run ash auth login
uv run ash auth status
```

Resolution order:
1. Config (`[openai].api_key`, `[anthropic].api_key`)
2. Environment (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`)
## Model Alias Options
Each alias block supports:
```toml
[models.default]
provider = "openai-oauth"  # Required: "openai" | "openai-oauth" | "anthropic"
model = "gpt-5.2"          # Required: provider model id
temperature = 0.7          # Optional: omit/null for reasoning-first models
max_tokens = 4096          # Optional: default is 4096
reasoning = "high"         # Optional: OpenAI reasoning effort (low|medium|high)
thinking = "medium"        # Optional: Anthropic thinking budget
```

## Tune Reasoning Only When Needed
For OpenAI reasoning effort:
```toml
[models.pro]
provider = "openai"
model = "gpt-5.2-pro"
reasoning = "high"  # low | medium | high
```

For Claude extended thinking:

```toml
[models.reasoning]
provider = "anthropic"
model = "claude-opus-4-6"
thinking = "medium"  # off | minimal | low | medium | high
```

Start with defaults. Increase reasoning/thinking only for tasks that need deeper analysis.
## Per-Skill Model Overrides
Skills can target model aliases directly:
```toml
[skills.debug]
model = "codex"  # Use coding-focused alias

[skills.research]
model = "default"  # Use standard alias
```

Skill model resolution order:

1. `[skills.<name>].model` in config
2. `model` in `SKILL.md`
3. The `default` alias
## Troubleshooting
### Unknown model alias

```bash
# Check aliases and spelling
uv run ash config show

# Retry with a known alias
uv run ash chat --model default "hello"
```

### Provider auth failures
```bash
# Validate config shape and required fields
uv run ash config validate

# Diagnose environment and integration issues
uv run ash doctor
```

Common fix: ensure the provider key is present in config or exported as an environment variable.
### Responses are cut off
Increase `max_tokens` for the alias used by that workflow.

```toml
[models.default]
provider = "openai"
model = "gpt-5.2"
max_tokens = 8192
```

## Reference (Advanced)
Key implementation files:
- `src/ash/llm/base.py` - provider interface
- `src/ash/llm/openai.py` - OpenAI implementation
- `src/ash/llm/anthropic.py` - Anthropic implementation
- `src/ash/llm/registry.py` - provider registration and lookup
- `src/ash/llm/types.py` - message/tool-call data types
Tool definitions are passed to the model from the tool registry. Providers return assistant messages and tool-call requests in normalized Ash types.
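A minimal sketch of what such normalized types and a provider interface might look like. The class and method names below are illustrative assumptions, not the actual definitions in `src/ash/llm/types.py` or `src/ash/llm/base.py`.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    """A provider-agnostic tool-call request returned by a model."""
    id: str
    name: str
    arguments: dict

@dataclass
class AssistantMessage:
    """Normalized assistant reply: text plus zero or more tool calls."""
    content: str
    tool_calls: list[ToolCall] = field(default_factory=list)

class Provider:
    """Minimal provider interface: tool definitions in, normalized message out."""
    def complete(self, messages: list[dict], tools: list[dict]) -> AssistantMessage:
        raise NotImplementedError

# A concrete provider would translate its native API response into these types:
reply = AssistantMessage(
    content="",
    tool_calls=[ToolCall(id="call_1", name="read_file", arguments={"path": "a.py"})],
)
```

Keeping the types provider-agnostic is what lets the rest of Ash treat OpenAI and Anthropic responses uniformly.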