Overview
Ash is built from modular systems. This page gives you one baseline config and a map to the right subsystem doc for each task.
Systems In 30 Seconds
- Models define model aliases, provider choice, and credentials
- Tools execute actions (shell, web, files, browser, delegation)
- Skills define reusable subagent workflows in SKILL.md
- Agents run delegated autonomous loops (including nested execution)
- Memory controls context retention and extraction quality
- Sandbox enforces runtime isolation and file/network boundaries
- Chat Providers connect Ash to channels like Telegram
- Vision processes inbound images and injects context
- Browser runs session-based page actions
- Todos provide canonical task tracking via graph-backed lifecycle state
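To make the Models bullet concrete, alias resolution can be sketched in plain Python. This is an illustrative sketch only, not Ash's actual API; the alias names and fields mirror the baseline config in the next section.

```python
# Hypothetical sketch of model-alias resolution (not Ash's real code).
# Alias names and fields mirror the baseline config on this page.
MODELS = {
    "default": {"provider": "openai-oauth", "model": "gpt-5.2", "max_tokens": 4096},
    "fast": {"provider": "openai-oauth", "model": "gpt-5.2-mini"},
}

def resolve(alias: str) -> dict:
    """Return the config for an alias, falling back to 'default'."""
    return MODELS.get(alias, MODELS["default"])

print(resolve("fast")["model"])     # gpt-5.2-mini
print(resolve("unknown")["model"])  # gpt-5.2
```

The fallback-to-default pattern is why a minimal config only strictly needs a models.default entry: anything else is an optimization.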
Core Config Baseline
Start with a minimal working ~/.ash/config.toml:
```toml
[models.default]
provider = "openai-oauth"
model = "gpt-5.2"
max_tokens = 4096

[models.fast]
provider = "openai-oauth"
model = "gpt-5.2-mini"

[sandbox]
runtime = "runc"
network_mode = "bridge"
workspace_access = "rw"

[memory]
context_token_budget = 100000
recency_window = 10
extraction_enabled = true

[telegram]
bot_token = "123456789:ABCdef..."
allowed_users = ["@yourusername"]
```

Then authenticate and verify:
```shell
uv run ash auth login
uv run ash config validate
uv run ash doctor
```

Start By Goal
If you want to:
- Pick model aliases and credential strategy -> Models
- Understand runtime actions and extension points -> Tools
- Build reusable workflow automation -> Skills
- Configure delegated and nested execution -> Agents
- Tune context and extraction behavior -> Memory
- Lock down runtime security and mounts -> Sandbox
- Run Telegram and channel delivery -> Chat Providers
- Enable inbound image handling -> Vision
- Enable browser-driven workflows -> Browser
- Track actionable tasks reliably -> Todos
Troubleshooting
If system behavior is unclear:
```shell
uv run ash config validate
uv run ash doctor
uv run ash logs --component providers
```

Then jump to the specific subsystem page for targeted fixes.
Reference (Advanced)
Core stack snapshot:
- Language/runtime: Python 3.12+
- CLI: Typer
- Server: FastAPI + Uvicorn
- Validation/modeling: Pydantic
- Container runtime integration: Docker (docker-py)