Agents
Ash uses an agentic architecture with a main orchestrator and specialized subagents for complex tasks.
Agent Core
The agent orchestrator manages conversations and coordinates between LLM, tools, and memory.
Agentic Loop
The agent implements an iterative loop:
1. Receive message
2. Build context (system prompt, history, memories)
3. Call LLM
4. If tool calls requested:
   a. Execute tools
   b. Add results to context
   c. Go to step 3
5. Return final response

Agent Class
Location: src/ash/core/agent.py
```python
class Agent:
    async def process(
        self,
        message: str,
        *,
        session_id: str,
        user_id: str,
        stream: bool = True,
    ) -> AsyncIterator[str]:
        """Process a user message and yield response chunks."""
```

Iteration Limits
The agent limits tool iterations to prevent infinite loops:
- Default: 25 iterations
- Configurable: via agent initialization
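The loop and its iteration cap can be sketched as follows. This is a simplified illustration, not the real implementation: `llm_call` and `run_tool` are stand-ins for the actual LLM and tool layers.

```python
from typing import Any, Callable

def agent_loop(
    message: str,
    llm_call: Callable[[list[dict[str, Any]]], dict[str, Any]],
    run_tool: Callable[[dict[str, Any]], str],
    max_iterations: int = 25,
) -> str:
    """Minimal sketch of the agentic loop with an iteration cap."""
    context: list[dict[str, Any]] = [{"role": "user", "content": message}]
    for _ in range(max_iterations):
        reply = llm_call(context)               # step 3: call LLM
        tool_calls = reply.get("tool_calls", [])
        if not tool_calls:                      # no tools requested: done
            return reply["content"]             # step 5: final response
        for call in tool_calls:                 # steps 4a-4c: execute tools,
            context.append(                     # add results, go to step 3
                {"role": "tool", "content": run_tool(call)}
            )
    return "Iteration limit reached."
```

The cap guarantees termination even if the model keeps requesting tools on every turn.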
Context Building
For each LLM call, the agent builds context:
- System prompt - From SOUL.md + capabilities
- Memory retrieval - Relevant memories via semantic search
- Conversation history - Recent messages within token budget
- Tool definitions - Available tools schema
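Putting those pieces together, context assembly might look like the sketch below. The shape is illustrative only: the real helpers (SOUL.md loading, semantic memory search, the tokenizer) are replaced here by plain arguments, and `count_tokens` defaults to character count as a stand-in.

```python
from typing import Any

def build_context(
    system_prompt: str,
    memories: list[str],
    history: list[dict[str, Any]],
    token_budget: int,
    count_tokens=len,  # stand-in tokenizer: counts characters
) -> list[dict[str, Any]]:
    """Sketch of per-call context assembly within a token budget."""
    system = system_prompt
    if memories:
        system += "\n\nRelevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    # Keep the most recent messages that fit the token budget.
    kept: list[dict[str, Any]] = []
    used = 0
    for msg in reversed(history):
        cost = count_tokens(msg["content"])
        if used + cost > token_budget:
            break
        kept.insert(0, msg)
        used += cost
    return [{"role": "system", "content": system}, *kept]
```

Walking history newest-first ensures the most recent exchange is always kept when the budget forces pruning.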
Session Management
Sessions track conversations per provider/chat:
```python
class Session:
    id: str
    provider: str
    chat_id: str
    user_id: str
    messages: list[Message]
```

Sessions are persisted to the database.
Streaming
The agent supports streaming responses:
```python
async for chunk in agent.process(message, stream=True):
    print(chunk, end="")
```

Non-streaming returns the complete response:
```python
async for response in agent.process(message, stream=False):
    print(response)  # Single complete response
```

Error Handling
The agent handles:
- LLM errors - Retries with exponential backoff
- Tool failures - Reports error to LLM for recovery
- Context overflow - Prunes history to fit token budget
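The retry behavior can be sketched as a small wrapper. This is an illustration of the exponential-backoff pattern, not the actual error-handling code; the injectable `sleep` parameter is an assumption added here to keep the sketch testable.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def call_with_backoff(
    fn: Callable[[], T],
    retries: int = 3,
    base_delay: float = 1.0,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Retry transient failures with exponential backoff: 1s, 2s, 4s, ..."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # out of retries: surface the error
            sleep(base_delay * (2 ** attempt))
    raise AssertionError("unreachable")
```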
Subagents
Subagents are autonomous processes that run isolated LLM loops for complex multi-step tasks.
Unlike skills (which are markdown instruction files the main agent reads), subagents:
- Execute in their own LLM context
- Have their own system prompt
- Can restrict available tools
- Run multiple iterations independently
- Return results when complete
Agent Interface
Location: src/ash/agents/base.py
```python
class Agent(ABC):
    @property
    @abstractmethod
    def config(self) -> AgentConfig:
        """Return agent configuration."""
```

AgentConfig
```python
@dataclass
class AgentConfig:
    name: str                  # Unique identifier
    description: str           # Description for UseAgentTool
    system_prompt: str         # Custom system prompt
    tools: list[str]           # Tool whitelist (empty = all)
    max_iterations: int = 10   # Safety limit
    model: str | None = None   # Model override (None = default)
```

AgentContext
Context passed to agent execution:
```python
@dataclass
class AgentContext:
    session_id: str | None = None
    user_id: str | None = None
    chat_id: str | None = None
    input_data: dict[str, Any] = field(default_factory=dict)
```

AgentResult
```python
@dataclass
class AgentResult:
    content: str
    is_error: bool = False
    iterations: int = 0

    @classmethod
    def success(cls, content: str, iterations: int = 0) -> "AgentResult": ...

    @classmethod
    def error(cls, message: str) -> "AgentResult": ...
```

Built-in Subagents
Research Agent
Location: src/ash/agents/builtin/research.py
Performs multi-step research tasks with web search and content extraction:
```python
config = AgentConfig(
    name="research",
    description="Perform multi-step research tasks",
    system_prompt="You are a research assistant...",
    tools=["web_search", "web_fetch", "bash"],
    max_iterations=25,
)
```

Use cases:
- Researching topics across multiple sources
- Gathering information for complex questions
- Comparing different approaches or solutions
Skill Writer Agent
Location: src/ash/agents/builtin/skill_writer.py
Creates new skills autonomously:
```python
config = AgentConfig(
    name="skill-writer",
    description="Create new skills for the assistant",
    system_prompt="You are a skill writer...",
    tools=["bash", "read_file", "write_file"],
    max_iterations=15,
)
```

Use cases:
- Creating new workflow skills
- Generating skill templates
- Automating skill development
Using Subagents
Via UseAgentTool
The main agent invokes subagents via the use_agent tool:
```python
# Agent tool call
{
    "name": "use_agent",
    "input": {
        "agent": "research",
        "message": "Find the best practices for Python async programming",
        "input": {"depth": "thorough"}
    }
}
```

Direct Invocation (Code)
```python
from ash.agents.registry import AgentRegistry
from ash.agents.executor import AgentExecutor

registry = AgentRegistry()
registry.load_builtin()

executor = AgentExecutor(
    registry=registry,
    llm_provider=llm_provider,
    tool_registry=tool_registry,
)

result = await executor.execute(
    agent_name="research",
    message="Research topic X",
    context=AgentContext(session_id="123"),
)
```

Configuration
Override agent settings in config.toml:
```toml
[agents.research]
model = "sonnet"       # Use sonnet model
max_iterations = 50    # Allow more iterations

[agents.skill-writer]
model = "sonnet"
max_iterations = 20
```

Agent vs Skill
| Feature | Agent | Skill |
|---|---|---|
| Execution | Own LLM loop | Main agent follows instructions |
| Tools | Can restrict | Uses main agent’s tools |
| Context | Isolated | Shared with conversation |
| Iterations | Multiple | N/A |
| Definition | Python code | Markdown file |
Creating Custom Subagents
1. Create a new file in src/ash/agents/builtin/:
```python
from ash.agents.base import Agent, AgentConfig, AgentContext

class MyAgent(Agent):
    @property
    def config(self) -> AgentConfig:
        return AgentConfig(
            name="my-agent",
            description="What this agent does",
            system_prompt="""You are a specialized agent...

Your task is to...

Available tools: bash, web_search""",
            tools=["bash", "web_search"],
            max_iterations=15,
        )

    def build_system_prompt(self, context: AgentContext) -> str:
        # Optionally customize prompt based on context
        base = self.config.system_prompt
        if context.input_data.get("verbose"):
            base += "\n\nProvide detailed explanations."
        return base
```

2. Register in src/ash/agents/builtin/__init__.py:
```python
from ash.agents.builtin.my_agent import MyAgent

BUILTIN_AGENTS = [
    ResearchAgent,
    SkillWriterAgent,
    MyAgent,  # Add here
]
```

3. Configure in config.toml:
```toml
[agents.my-agent]
model = "sonnet"
max_iterations = 20
```

Safety
Agents have built-in safety limits:
- Max iterations - Prevents runaway loops
- Tool restrictions - Can whitelist specific tools
- Isolated context - Runs separately from main conversation
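The tool-restriction limit can be illustrated with a small filter. This is a sketch of the whitelist semantics described for AgentConfig.tools (empty list means all tools), not the executor's actual code; `allowed_tools` is a hypothetical helper name.

```python
from typing import Any

def allowed_tools(
    all_tools: dict[str, Any], whitelist: list[str]
) -> dict[str, Any]:
    """Filter the tool registry down to an agent's whitelist.

    An empty whitelist means "all tools", matching AgentConfig.tools.
    """
    if not whitelist:
        return dict(all_tools)
    return {name: tool for name, tool in all_tools.items() if name in whitelist}
```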
The main agent decides when to delegate to subagents and receives their results.