Agent Configuration Reference
This page provides comprehensive documentation for the agent configuration JSON schema. While most agent settings are configurable through the Wabee AI Studio interface, you can use this JSON configuration to override and fine-tune settings during agent creation or updates.
Configuration Overview
The agent configuration allows you to precisely control every aspect of your agent's behavior, from LLM settings to workflow execution parameters. This is particularly useful for:
- Advanced customization beyond the standard UI options
- Programmatic agent deployment via API
- Environment-specific configurations (dev, staging, production)
- Integration with external systems requiring specific parameters
Root Configuration Structure
```json
{
  "name": "My Custom Agent",
  "description": "Agent description",
  "max_input": 4096,
  "max_iterations": 10,
  "max_execution_time": 180,
  "llms": {...},
  "embedding": {...},
  "prompt": {...},
  "workflow": {...},
  "reasoning_type": "simple",
  "local_recursion_limit": 15,
  "session_summarization_config": {...},
  "scratchpad_summarization_config": {...}
}
```
Core Fields
Basic Configuration
Field | Type | Required | Default | Description |
---|---|---|---|---|
`name` | String | Yes | - | Agent name identifier |
`description` | String | No | `""` | Human-readable description of the agent |
`max_input` | Integer | No | null | Maximum input size in tokens (≥0) |
`max_iterations` | Integer | No | 10 | Maximum reasoning iterations (≥1) |
`max_execution_time` | Float | No | 180 | Maximum execution time in seconds (≥30) |
`reasoning_type` | String | No | `"simple"` | Reasoning approach: `"simple"` or `"multi_perspective"` |
`local_recursion_limit` | Integer | No | 15 | Recursion limit for local operations (≥2) |
LLM Configuration
The `llms` field contains a dictionary of language models keyed by their identifier:
```json
{
  "llms": {
    "primary": {
      "model_type": "openai",
      "max_input": 4096,
      "max_output": 2048,
      "timeout": 30.0,
      "temperature": 0.7,
      "top_p": 1.0,
      "fallback": "backup_model",
      "use_stream": false,
      "config": {
        "type": "openai",
        "model": "gpt-4",
        "api_key": "your-api-key",
        "base_url": "https://api.openai.com/v1",
        "context_window": 8192
      }
    }
  }
}
```
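The `fallback` value is expected to name another entry in the same `llms` dictionary, so the example above is only complete once `backup_model` is defined as well. A sketch of a two-model setup with a fallback chain (keys and model names are illustrative):

```json
{
  "llms": {
    "primary": {
      "model_type": "openai",
      "fallback": "backup_model",
      "config": {
        "type": "openai",
        "model": "gpt-4",
        "api_key": "your-api-key",
        "context_window": 8192
      }
    },
    "backup_model": {
      "model_type": "openai",
      "config": {
        "type": "openai",
        "model": "gpt-3.5-turbo",
        "api_key": "your-api-key",
        "context_window": 4096
      }
    }
  }
}
```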
LLM Model Types
OpenAI Configuration
```json
{
  "type": "openai",
  "model": "gpt-4",
  "api_key": "your-api-key",
  "base_url": "https://api.openai.com/v1",
  "context_window": 8192
}
```
Azure OpenAI Configuration
```json
{
  "type": "azure",
  "model": "gpt-4",
  "openai_api_base": "https://your-resource.openai.azure.com/",
  "openai_api_key": "your-api-key",
  "openai_api_version": "2024-02-15-preview",
  "deployment_name": "gpt-4-deployment",
  "context_window": 8192
}
```
Bedrock Configuration
```json
{
  "type": "bedrock",
  "model": "anthropic.claude-v2",
  "aws_access_key_id": "your-access-key",
  "aws_secret_access_key": "your-secret-key",
  "aws_region_name": "us-east-1",
  "context_window": 100000,
  "performance_config": {
    "latency": "optimized"
  }
}
```
Standard/Custom Provider Configuration
```json
{
  "type": "standard",
  "model": "custom-model",
  "api_key": "your-api-key",
  "base_url": "https://custom-provider.com/v1",
  "context_window": 4096
}
```
OpenRouter Configuration
```json
{
  "type": "openrouter",
  "model": "anthropic/claude-3-sonnet",
  "api_key": "your-openrouter-key",
  "base_url": "https://openrouter.ai/api/v1",
  "context_window": 200000,
  "provider": {
    "order": ["anthropic", "openai"],
    "allow_fallbacks": true
  }
}
```
Embedding Configuration
Configure embedding models for semantic search and RAG:
```json
{
  "embedding": {
    "name": "text-embedding-ada-002",
    "embedding_size": 1536,
    "config": {
      "api_key": "your-openai-key",
      "model_name": "text-embedding-ada-002"
    }
  }
}
```
Azure Embedding Configuration
```json
{
  "embedding": {
    "name": "azure-embedding",
    "embedding_size": 1536,
    "config": {
      "api_key": "your-azure-key",
      "endpoint": "https://your-resource.openai.azure.com/",
      "api_version": "2024-02-15-preview",
      "model_name": "text-embedding-ada-002"
    }
  }
}
```
Prompt Configuration
The prompt configuration defines your agent's personality, behavior, and objectives. This configuration allows you to create agents with distinct personalities, communication styles, and behavioral guidelines.
Prompt Configuration Structure
```json
{
  "prompt": {
    "name": "react-en-v2",
    "goal": "Provide expert technical support for our SaaS platform, helping users resolve issues quickly and learn best practices",
    "assistant_control": "Follow company guidelines. Escalate billing issues to human support. Never share internal system details.",
    "personality_traits": [
      "Patient and empathetic with frustrated users",
      "Detail-oriented when gathering issue information",
      "Proactive in suggesting preventive measures"
    ],
    "core_beliefs": [
      "Every user deserves respectful, timely assistance",
      "Clear communication prevents misunderstandings",
      "Teaching users helps reduce future issues"
    ],
    "communication_style": "Professional yet friendly, using clear technical language while avoiding jargon when possible",
    "backstory": "You are an experienced technical support specialist who has helped thousands of users successfully navigate software challenges"
  }
}
```
Prompt Fields
Field | Type | Required | Description |
---|---|---|---|
`name` | String | Yes | Must be `"react-en-v2"` for V2 prompts |
`goal` | String | Yes | The agent's primary objective and mission |
`assistant_control` | String | No | Behavioral guidelines and constraints |
`personality_traits` | Array[String] | No | List of personality characteristics |
`core_beliefs` | Array[String] | No | Fundamental principles guiding behavior |
`communication_style` | String | No | How the agent should communicate |
`backstory` | String | No | Background context for role-playing |
How Prompts Are Consolidated
In V2 configurations, all fields are combined to create a comprehensive agent persona:
- Goal forms the foundation of the agent's purpose
- Personality traits shape behavioral tendencies
- Core beliefs guide decision-making
- Communication style affects response formatting
- Backstory provides contextual grounding
- Assistant control sets hard boundaries and rules
Best Practices
- Be Specific: Detailed goals and traits lead to more consistent behavior
- Set Clear Boundaries: Use `assistant_control` for non-negotiable rules
- Align Traits and Beliefs: Ensure consistency across all personality fields
- Test Personality: Verify the agent behaves as intended in various scenarios
- Consider Your Use Case: Tailor personality to your specific domain and users
Workflow Configuration
Configure agent execution behavior:
```json
{
  "workflow": {
    "type": "react",
    "spec": {
      "implicit_tool_selection": false,
      "workflow_plan": {
        "tasks": [...],
        "constraints": [...],
        "variables_required_to_final_answer": [...],
        "parallelism": false,
        "version": "1.0"
      }
    }
  }
}
```
Workflow Types
- "react" - Standard ReAct reasoning loop
- "hierarchical" - Multi-agent hierarchical execution
- "plan-and-act" - Planning-based execution
- "custom" - Custom workflow implementation
Automation Mode Integration
When `workflow_plan` is specified, the agent operates in Automation Mode. See the Automation Configuration Reference for detailed information about structuring deterministic workflows.
Memory Configuration
Session Memory Summarization
```json
{
  "session_summarization_config": {
    "summarization_model": "gpt-4o-mini",
    "summary_k": 12
  }
}
```
Scratchpad Summarization
```json
{
  "scratchpad_summarization_config": {
    "summary_k": 10,
    "min_k": 3
  }
}
```
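Both summarization blocks live at the root of the agent configuration and can be combined in a single config. Note that `summary_k` must be at least 5 in both blocks (see Validation Rules below):

```json
{
  "session_summarization_config": {
    "summarization_model": "gpt-4o-mini",
    "summary_k": 12
  },
  "scratchpad_summarization_config": {
    "summary_k": 10,
    "min_k": 3
  }
}
```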
Agent Long-Term Memory
Currently, the agent long-term memory model is not configurable. Wabee AI Studio uses a default memory model that is optimized for general use cases. This model automatically manages long-term memory storage and retrieval.
Node-Specific LLM Configuration
Wabee AI Studio provides optimized default models for each node in the agent workflow, configured for the best performance in general use cases. However, you can override these defaults to use specific models for particular nodes when needed.
This is useful when you want to:
- Use a faster model for simple reasoning tasks
- Use a more powerful model for complex analysis
- Optimize costs by using different models for different tasks
- Use specialized models for specific node types
Configuration Example
```json
{
  "node_llm_config": {
    "reason": {
      "model_key": "primary"
    },
    "tool": {
      "model_key": "fast_model"
    },
    "final_answer": {
      "model_key": "primary"
    },
    "result_analysis": {
      "model_key": "powerful_model"
    }
  }
}
```
Valid Node Names
The following node names can be used in the configuration:
Node Name | Description | Common Use Case |
---|---|---|
`reason` | Main reasoning and decision-making node | Primary agent logic |
`final_answer` | Generates the final response to the user | Response formatting |
`tool` | Executes tool calls | Tool invocation |
`tool_selection` | Selects appropriate tools | Tool decision logic |
`tool_scratchpad_preparation` | Prepares context for tool selection | Tool context management |
`agent_selection` | Selects sub-agents in hierarchical workflows | Multi-agent coordination |
`scratchpad_summary` | Summarizes working memory | Memory optimization |
`vision_processing` | Processes image inputs | Visual analysis |
`vision_prompt_preparation` | Prepares prompts for vision tasks | Vision context setup |
`complexity_classifier` | Classifies task complexity | Routing decisions |
`task_planner` | Creates execution plans | Workflow planning |
`task_replanner` | Adjusts plans based on results | Adaptive planning |
`result_analysis` | Analyzes task results | Quality assessment |
`error_analysis` | Analyzes and handles errors | Error recovery |
Important Notes
- The `model_key` must reference a model defined in the `llms` dictionary
- If a node is not specified, it will use the default model configured by Wabee
- Node names are case-sensitive and must match exactly
- Not all nodes are used in every workflow type
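Because each `model_key` must resolve to an entry in `llms`, a node override is typically paired with the model definitions it references. A sketch (keys and model names are illustrative):

```json
{
  "llms": {
    "primary": {
      "model_type": "openai",
      "config": {
        "type": "openai",
        "model": "gpt-4",
        "api_key": "your-key",
        "context_window": 8192
      }
    },
    "fast_model": {
      "model_type": "openai",
      "config": {
        "type": "openai",
        "model": "gpt-4o-mini",
        "api_key": "your-key",
        "context_window": 128000
      }
    }
  },
  "node_llm_config": {
    "reason": { "model_key": "primary" },
    "tool": { "model_key": "fast_model" }
  }
}
```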
Agent Relationships
Configure parent-child agent relationships:
```json
{
  "child_agent_uri_ids": ["child_agent_1", "child_agent_2"],
  "parent_agent_ids": ["parent_agent_1"]
}
```
Validation Rules
Required Fields
- `name` must be a non-empty string
- `llms` must contain at least one valid LLM configuration
- `prompt` must be either V1 or V2 format
Constraints
- `max_iterations` ≥ 1
- `max_execution_time` ≥ 30 seconds
- `local_recursion_limit` ≥ 2
- `session_summarization_config.summary_k` ≥ 5
- `scratchpad_summarization_config.summary_k` ≥ 5
Best Practices
- Use V2 Prompts: Prefer `PromptConfigModelV2` for better personality control
- Configure Fallbacks: Set `fallback` models for reliability
- Optimize Timeouts: Set appropriate `timeout` values for your use case
- Monitor Resources: Use `max_input` and `max_execution_time` to control costs
- Structured Workflows: Use `workflow_plan` for deterministic automation
Common Examples
Basic Chat Agent
```json
{
  "name": "Customer Support Agent",
  "description": "Handles customer inquiries with web search capability",
  "max_iterations": 5,
  "max_execution_time": 120,
  "llms": {
    "primary": {
      "model_type": "openai",
      "temperature": 0.3,
      "config": {
        "type": "openai",
        "model": "gpt-4",
        "api_key": "your-key",
        "context_window": 8192
      }
    }
  },
  "prompt": {
    "name": "react-en-v2",
    "goal": "Provide helpful customer support",
    "personality_traits": ["Empathetic", "Solution-oriented"],
    "communication_style": "Professional and friendly"
  },
  "workflow": {
    "type": "react",
    "spec": {
      "implicit_tool_selection": false
    }
  }
}
```
RAG-Enabled Research Agent
```json
{
  "name": "Research Assistant",
  "llms": {
    "primary": {
      "model_type": "openai",
      "config": {
        "type": "openai",
        "model": "gpt-4",
        "api_key": "your-key"
      }
    }
  },
  "embedding": {
    "name": "research-embeddings",
    "embedding_size": 1536,
    "config": {
      "api_key": "your-key",
      "model_name": "text-embedding-ada-002"
    }
  }
}
```
Related Documentation
- Automation Configuration Reference - For structured workflow configuration
- Memory Management - Advanced memory configuration options