## Praison AI
PraisonAI is a production-ready Multi-AI Agents framework with self-reflection. It's designed to create AI Agents that can automate and solve problems, ranging from simple tasks to complex challenges. By integrating PraisonAI Agents, AG2 (Formerly AutoGen), and CrewAI into a low-code solution, it simplifies the building and management of multi-agent LLM systems, emphasizing simplicity, customization, and effective human-agent collaboration.
## Quick Start
Get started with PraisonAI in under 1 minute:
```bash
# Install
pip install praisonaiagents
# Set API key
export OPENAI_API_KEY=your_key_here
# Create a simple agent
python -c "from praisonaiagents import Agent; Agent(instructions='You are a helpful AI assistant').start('Write a haiku about AI')"
```
## Features

### Core Agents

| Feature | Code | Docs |
|---------|------|------|
| Single Agent | Example | Docs |
| Multi Agents | Example | Docs |
| Auto Agents | Example | Docs |
| Self Reflection AI Agents | Example | Docs |
| Reasoning AI Agents | Example | Docs |
| Multi Modal AI Agents | Example | Docs |
### Workflows

| Feature | Code | Docs |
|---------|------|------|
| Simple Workflow | Example | Docs |
| Workflow with Agents | Example | Docs |
| Agentic Routing (`route()`) | Example | Docs |
| Parallel Execution (`parallel()`) | Example | Docs |
| Loop over List/CSV (`loop()`) | Example | Docs |
| Evaluator-Optimizer (`repeat()`) | Example | Docs |
| Conditional Steps | Example | Docs |
| Workflow Branching | Example | Docs |
| Workflow Early Stop | Example | Docs |
| Workflow Checkpoints | Example | Docs |
### Code & Development

| Feature | Code | Docs |
|---------|------|------|
| Code Interpreter Agents | Example | Docs |
| AI Code Editing Tools | Example | Docs |
| External Agents (All) | Example | Docs |
| Claude Code CLI | Example | Docs |
| Gemini CLI | Example | Docs |
| Codex CLI | Example | Docs |
| Cursor CLI | Example | Docs |
### Memory & Knowledge

| Feature | Code | Docs |
|---------|------|------|
| Memory (Short & Long Term) | Example | Docs |
| File-Based Memory | Example | Docs |
| Claude Memory Tool | Example | Docs |
| Add Custom Knowledge | Example | Docs |
| RAG Agents | Example | Docs |
| Chat with PDF Agents | Example | Docs |
| Data Readers (PDF, DOCX, etc.) | CLI | Docs |
| Vector Store Selection | CLI | Docs |
| Retrieval Strategies | CLI | Docs |
| Rerankers | CLI | Docs |
| Index Types (Vector/Keyword/Hybrid) | CLI | Docs |
| Query Engines (Sub-Question, etc.) | CLI | Docs |
### Research & Intelligence

| Feature | Code | Docs |
|---------|------|------|
| Deep Research Agents | Example | Docs |
| Query Rewriter Agent | Example | Docs |
| Native Web Search | Example | Docs |
| Built-in Search Tools | Example | Docs |
| Unified Web Search | Example | Docs |
| Web Fetch (Anthropic) | Example | Docs |
### Planning & Execution

| Feature | Code | Docs |
|---------|------|------|
| Planning Mode | Example | Docs |
| Planning Tools | Example | Docs |
| Planning Reasoning | Example | Docs |
| Prompt Chaining | Example | Docs |
| Evaluator Optimiser | Example | Docs |
| Orchestrator Workers | Example | Docs |
### Specialized Agents

| Feature | Code | Docs |
|---------|------|------|
| Data Analyst Agent | Example | Docs |
| Finance Agent | Example | Docs |
| Shopping Agent | Example | Docs |
| Recommendation Agent | Example | Docs |
| Wikipedia Agent | Example | Docs |
| Programming Agent | Example | Docs |
| Math Agents | Example | Docs |
| Markdown Agent | Example | Docs |
| Prompt Expander Agent | Example | Docs |
### Media & Multimodal

| Feature | Code | Docs |
|---------|------|------|
| Image Generation Agent | Example | Docs |
| Image to Text Agent | Example | Docs |
| Video Agent | Example | Docs |
| Camera Integration | Example | Docs |
### Protocols & Integration

| Feature | Code | Docs |
|---------|------|------|
| MCP Transports | Example | Docs |
| WebSocket MCP | Example | Docs |
| MCP Security | Example | Docs |
| MCP Resumability | Example | Docs |
| MCP Config Management | Example | Docs |
| LangChain Integrated Agents | Example | Docs |
### Safety & Control

| Feature | Code | Docs |
|---------|------|------|
| Guardrails | Example | Docs |
| Human Approval | Example | Docs |
| Rules & Instructions | Example | Docs |
| Stateful Agents | Example | Docs |
| Autonomous Workflow | Example | Docs |
| Structured Output Agents | Example | Docs |
| Model Router | Example | Docs |
| Prompt Caching | Example | Docs |
| Fast Context | Example | Docs |
### Tools & Configuration

| Feature | Code | Docs |
|---------|------|------|
| Advanced Callback Systems | Example | Docs |
| Hooks | Example | Docs |
| Middleware System | Example | Docs |
| Configurable Model | Example | Docs |
| Rate Limiter | Example | Docs |
| Injected Tool State | Example | Docs |
| Shadow Git Checkpoints | Example | Docs |
| Background Tasks | Example | Docs |
| Policy Engine | Example | Docs |
| Thinking Budgets | Example | Docs |
| Output Styles | Example | Docs |
| Context Compaction | Example | Docs |
### Monitoring & Management

| Feature | Code | Docs |
|---------|------|------|
| Sessions Management | Example | Docs |
| Auto-Save Sessions | Example | Docs |
| History in Context | Example | Docs |
| Telemetry | Example | Docs |
| Project Docs (.praison/docs/) | Example | Docs |
| AI Commit Messages | Example | Docs |
| @Mentions in Prompts | Example | Docs |
### CLI Features

| Feature | Code | Docs |
|---------|------|------|
| Slash Commands | Example | Docs |
| Autonomy Modes | Example | Docs |
| Cost Tracking | Example | Docs |
| Repository Map | Example | Docs |
| Interactive TUI | Example | Docs |
| Git Integration | Example | Docs |
| Sandbox Execution | Example | Docs |
| CLI Compare | Example | Docs |
| Profile/Benchmark | Example | Docs |
| Auto Mode | Example | Docs |
| Init | Example | Docs |
| File Input | Example | Docs |
| Final Agent | Example | Docs |
| Max Tokens | Example | Docs |
### Evaluation

| Feature | Code | Docs |
|---------|------|------|
| Accuracy Evaluation | Example | Docs |
| Performance Evaluation | Example | Docs |
| Reliability Evaluation | Example | Docs |
| Criteria Evaluation | Example | Docs |
### Agent Skills

| Feature | Code | Docs |
|---------|------|------|
| Skills Management | Example | Docs |
| Custom Skills | Example | Docs |
### 24/7 Scheduling

| Feature | Code | Docs |
|---------|------|------|
| Agent Scheduler | Example | Docs |
## Installation

### Python SDK

Lightweight package dedicated to coding:

```bash
pip install praisonaiagents
```

For the full framework with CLI support:

```bash
pip install praisonai
```

### JavaScript SDK

```bash
npm install praisonai
```
### Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `OPENAI_API_KEY` | Yes* | OpenAI API key |
| `ANTHROPIC_API_KEY` | No | Anthropic Claude API key |
| `GOOGLE_API_KEY` | No | Google Gemini API key |
| `GROQ_API_KEY` | No | Groq API key |
| `OPENAI_BASE_URL` | No | Custom API endpoint (for Ollama, Groq, etc.) |

*At least one LLM provider API key is required.

```bash
# OpenAI (default)
export OPENAI_API_KEY=your_key_here

# Ollama (local, OpenAI-compatible endpoint)
export OPENAI_BASE_URL=http://localhost:11434/v1

# Groq
export OPENAI_API_KEY=your_groq_key
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```
## Usage Examples

### Python Code Examples

Single agent:

```python
from praisonaiagents import Agent

agent = Agent(instructions="You are a helpful AI assistant")
agent.start("Write a movie script about a robot on Mars")
```

Multi-agent:

```python
from praisonaiagents import Agent, PraisonAIAgents

research_agent = Agent(instructions="Research about AI")
summarise_agent = Agent(instructions="Summarise research agent's findings")
agents = PraisonAIAgents(agents=[research_agent, summarise_agent])
agents.start()
```
Planning with tools:

```python
from praisonaiagents import Agent

def search_web(query: str) -> str:
    return f"Search results for: {query}"

agent = Agent(
    name="AI Assistant",
    instructions="Research and write about topics",
    planning=True,
    planning_tools=[search_web],
    planning_reasoning=True
)
result = agent.start("Research AI trends in 2025 and write a summary")
```
Deep research agents:

```python
from praisonaiagents import DeepResearchAgent

agent = DeepResearchAgent(
    model="o4-mini-deep-research",
    verbose=True
)
result = agent.research("What are the latest AI trends in 2025?")
print(result.report)
print(f"Citations: {len(result.citations)}")

# Alternative deep research model
agent = DeepResearchAgent(
    model="deep-research-pro",
    verbose=True
)
result = agent.research("Research quantum computing advances")
print(result.report)
```
Query rewriting:

```python
from praisonaiagents import QueryRewriterAgent, RewriteStrategy

agent = QueryRewriterAgent(model="gpt-4o-mini")
result = agent.rewrite("AI trends")
print(result.primary_query)

result = agent.rewrite("What is quantum computing?", strategy=RewriteStrategy.HYDE)
result = agent.rewrite("GPT-4 vs Claude 3?", strategy=RewriteStrategy.STEP_BACK)
result = agent.rewrite("RAG setup and best embedding models?", strategy=RewriteStrategy.SUB_QUERIES)
result = agent.rewrite("What about cost?", chat_history=[...])
```
Agent memory:

```python
from praisonaiagents import Agent

agent = Agent(
    name="Personal Assistant",
    instructions="You are a helpful assistant that remembers user preferences.",
    memory=True,
    user_id="user123"
)
result = agent.start("My name is John and I prefer Python")
```

File-based memory:

```python
from praisonaiagents import Agent
from praisonaiagents.memory import FileMemory, AutoMemory

agent = Agent(name="Assistant", instructions="You are helpful.")
memory = FileMemory(user_id="user123")
auto = AutoMemory(memory, enabled=True)
memories = auto.process_interaction(
    "My name is John and I prefer Python for backend work"
)
```
Workflow with agents:

```python
from praisonaiagents import Agent, Workflow

researcher = Agent(
    name="Researcher",
    role="Research Analyst",
    goal="Research topics thoroughly",
    instructions="Provide concise, factual information."
)
writer = Agent(
    name="Writer",
    role="Content Writer",
    goal="Write engaging content",
    instructions="Write clear, engaging content based on research."
)
workflow = Workflow(steps=[researcher, writer])
result = workflow.start("What are the benefits of AI agents?")
print(result["output"])
```
Hooks:

```python
from praisonaiagents.hooks import (
    HookRegistry, HookRunner, HookEvent, HookResult,
    BeforeToolInput
)

registry = HookRegistry()

@registry.on(HookEvent.BEFORE_TOOL)
def log_tools(event_data: BeforeToolInput) -> HookResult:
    print(f"Tool: {event_data.tool_name}")
    return HookResult.allow()

@registry.on(HookEvent.BEFORE_TOOL)
def security_check(event_data: BeforeToolInput) -> HookResult:
    if "delete" in event_data.tool_name.lower():
        return HookResult.deny("Delete operations blocked")
    return HookResult.allow()

runner = HookRunner(registry)
```
Shadow Git checkpoints (run inside an async function):

```python
from praisonaiagents.checkpoints import CheckpointService

service = CheckpointService(workspace_dir="./my_project")
await service.initialize()
result = await service.save("Before refactoring")
await service.restore(result.checkpoint.id)
diff = await service.diff()
```
Background tasks:

```python
import asyncio
from praisonaiagents.background import BackgroundRunner, BackgroundConfig

async def main():
    config = BackgroundConfig(max_concurrent_tasks=3)
    runner = BackgroundRunner(config=config)

    async def my_task(name: str) -> str:
        await asyncio.sleep(2)
        return f"Task {name} completed"

    task = await runner.submit(my_task, args=("example",), name="my_task")
    await task.wait(timeout=10.0)
    print(task.result)

asyncio.run(main())
```
Policy engine:

```python
from praisonaiagents.policy import (
    PolicyEngine, Policy, PolicyRule, PolicyAction
)

engine = PolicyEngine()
policy = Policy(
    name="no_delete",
    rules=[
        PolicyRule(
            action=PolicyAction.DENY,
            resource="tool:delete_*",
            reason="Delete operations blocked"
        )
    ]
)
engine.add_policy(policy)
result = engine.check("tool:delete_file", {})
print(f"Allowed: {result.allowed}")
```
Thinking budgets:

```python
from praisonaiagents.thinking import ThinkingBudget, ThinkingTracker

budget = ThinkingBudget.high()
tracker = ThinkingTracker()
session = tracker.start_session(budget_tokens=16000)
tracker.end_session(session, tokens_used=12000)
summary = tracker.get_summary()
print(f"Utilization: {summary['average_utilization']:.1%}")
```
Output styles:

```python
from praisonaiagents.output import OutputStyle, OutputFormatter

style = OutputStyle.concise()
formatter = OutputFormatter(style)
text = "# Hello\n\nThis is **bold** text."
plain = formatter.format(text)
print(plain)
```
Context compaction:

```python
from praisonaiagents.compaction import (
    ContextCompactor, CompactionStrategy
)

compactor = ContextCompactor(
    max_tokens=4000,
    strategy=CompactionStrategy.SLIDING,
    preserve_recent=3
)
messages = [...]
compacted, result = compactor.compact(messages)
print(f"Compression: {result.compression_ratio:.1%}")
```
PraisonAI accepts both old (agents.yaml) and new (workflow.yaml) field names. Use the **canonical names** for new projects:
| Canonical (Recommended) | Alias (Also Works) | Purpose |
|-------------------------|-------------------|---------|
| `agents` | `roles` | Define agent personas |
| `instructions` | `backstory` | Agent behavior/persona |
| `action` | `description` | What the step does |
| `steps` | `tasks` (nested) | Define work items |
| `name` | `topic` | Workflow identifier |
Example workflow YAML:

```yaml
framework: praisonai
process: workflow
topic: "Research AI trends"
workflow:
  planning: true
  reasoning: true
  verbose: true
variables:
  topic: AI trends
agents:
  classifier:
    role: Request Classifier
    instructions: "Classify requests into categories"
    goal: Classify requests
  researcher:
    role: Research Analyst
    instructions: "Expert researcher"
    goal: Research topics
    tools:
      - tavily_search
steps:
  - agent: classifier
    action: "Classify: {{topic}}"
  - name: routing
    route:
      technical: [tech_expert]
      default: [researcher]
  - name: parallel_research
    parallel:
      - agent: researcher
        action: "Research market trends"
      - agent: researcher
        action: "Research competitors"
  - agent: researcher
    action: "Analyze {{item}}"
    loop:
      over: topics
  - agent: aggregator
    action: "Synthesize findings"
    repeat:
      until: "comprehensive"
      max_iterations: 3
```
MCP servers as tools:

```python
from praisonaiagents import Agent, MCP

# stdio transport
agent = Agent(tools=MCP("npx @modelcontextprotocol/server-memory"))
# Streamable HTTP transport
agent = Agent(tools=MCP("https://api.example.com/mcp"))
# WebSocket transport with auth
agent = Agent(tools=MCP("wss://api.example.com/mcp", auth_token="token"))
# SSE transport
agent = Agent(tools=MCP("http://localhost:8080/sse"))

# Explicit command with environment variables
agent = Agent(
    tools=MCP(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-brave-search"],
        env={"BRAVE_API_KEY": "your-key"}
    )
)

# Mix MCP servers with custom tools
def my_custom_tool(query: str) -> str:
    """Custom tool function."""
    return f"Result: {query}"

agent = Agent(
    name="MultiToolAgent",
    instructions="Agent with multiple MCP servers",
    tools=[
        MCP("uvx mcp-server-time"),
        MCP("npx @modelcontextprotocol/server-memory"),
        my_custom_tool
    ]
)
```
Expose tools as an MCP server:

```python
from praisonaiagents.mcp import ToolsMCPServer

def search_web(query: str, max_results: int = 5) -> dict:
    """Search the web for information."""
    return {"results": [f"Result for {query}"]}

def calculate(expression: str) -> dict:
    """Evaluate a mathematical expression."""
    return {"result": eval(expression)}

server = ToolsMCPServer(name="my-tools")
server.register_tools([search_web, calculate])
server.run()
```
A2A (Agent-to-Agent) server:

```python
from praisonaiagents import Agent, A2A
from fastapi import FastAPI

def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

agent = Agent(
    name="Research Assistant",
    role="Research Analyst",
    goal="Help users research topics",
    tools=[search_web]
)
a2a = A2A(agent=agent, url="http://localhost:8000/a2a")
app = FastAPI()
app.include_router(a2a.get_router())
```
### JavaScript Code Examples

```javascript
const { Agent } = require('praisonai');
const agent = new Agent({ instructions: 'You are a helpful AI assistant' });
agent.start('Write a movie script about a robot on Mars');
```
## Documentation
## Supported Providers

PraisonAI supports 100+ LLM providers through seamless integration:

| Provider | Example |
|----------|---------|
| OpenAI | Example |
| Anthropic | Example |
| Google Gemini | Example |
| Ollama | Example |
| Groq | Example |
| DeepSeek | Example |
| xAI Grok | Example |
| Mistral | Example |
| Cohere | Example |
| Perplexity | Example |
| Fireworks | Example |
| Together AI | Example |
| OpenRouter | Example |
| HuggingFace | Example |
| Azure OpenAI | Example |
| AWS Bedrock | Example |
| Google Vertex | Example |
| Databricks | Example |
| Cloudflare | Example |
| AI21 | Example |
| Replicate | Example |
| SageMaker | Example |
| Moonshot | Example |
| vLLM | Example |
## Process Types & Patterns

### AI Agents Flow

```mermaid
graph LR
    %% Define the main flow
    Start([Start]) --> Agent1
    Agent1 --> Process[Process]
    Process --> Agent2
    Agent2 --> Output([Output])
    Process -.-> Agent1

    %% Define subgraphs for agents and their tasks
    subgraph Agent1[ ]
        Task1[Task]
        AgentIcon1[AI Agent]
        Tools1[Tools]
        Task1 --- AgentIcon1
        AgentIcon1 --- Tools1
    end
    subgraph Agent2[ ]
        Task2[Task]
        AgentIcon2[AI Agent]
        Tools2[Tools]
        Task2 --- AgentIcon2
        AgentIcon2 --- Tools2
    end

    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef tools fill:#2E8B57,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none
    class Start,Output,Task1,Task2 input
    class Process,AgentIcon1,AgentIcon2 process
    class Tools1,Tools2 tools
    class Agent1,Agent2 transparent
```
### AI Agents with Tools

```mermaid
flowchart TB
    subgraph Tools
        direction TB
        T3[Internet Search]
        T1[Code Execution]
        T2[Formatting]
    end
    Input[Input] ---> Agents
    subgraph Agents
        direction LR
        A1[Agent 1]
        A2[Agent 2]
        A3[Agent 3]
    end
    Agents ---> Output[Output]
    T3 --> A1
    T1 --> A2
    T2 --> A3
    style Tools fill:#189AB4,color:#fff
    style Agents fill:#8B0000,color:#fff
    style Input fill:#8B0000,color:#fff
    style Output fill:#8B0000,color:#fff
```
### AI Agents with Memory

```mermaid
flowchart TB
    subgraph Memory
        direction TB
        STM[Short Term]
        LTM[Long Term]
    end
    subgraph Store
        direction TB
        DB[(Vector DB)]
    end
    Input[Input] ---> Agents
    subgraph Agents
        direction LR
        A1[Agent 1]
        A2[Agent 2]
        A3[Agent 3]
    end
    Agents ---> Output[Output]
    Memory <--> Store
    Store <--> A1
    Store <--> A2
    Store <--> A3
    style Memory fill:#189AB4,color:#fff
    style Store fill:#2E8B57,color:#fff
    style Agents fill:#8B0000,color:#fff
    style Input fill:#8B0000,color:#fff
    style Output fill:#8B0000,color:#fff
```
### AI Agents with Different Processes

#### Sequential Process

```mermaid
graph LR
    Input[Input] --> A1
    subgraph Agents
        direction LR
        A1[Agent 1] --> A2[Agent 2] --> A3[Agent 3]
    end
    A3 --> Output[Output]
    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none
    class Input,Output input
    class A1,A2,A3 process
    class Agents transparent
```
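The sequential process above can be sketched in plain Python: each agent's output becomes the next agent's input. This is a conceptual illustration only, not the PraisonAI API; `run_sequential` and the stub agents are hypothetical names.

```python
# Conceptual sketch of a sequential process: each step consumes the
# previous step's output. The stub "agents" are plain functions.
def run_sequential(agents, task):
    output = task
    for agent in agents:
        output = agent(output)
    return output

research = lambda task: f"notes on: {task}"
write = lambda notes: f"article based on {notes}"

print(run_sequential([research, write], "AI agents"))
# article based on notes on: AI agents
```

In PraisonAI itself, passing a list of agents to `PraisonAIAgents` (as in the multi-agent usage example above) runs them in this sequential fashion.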
#### Hierarchical Process

```mermaid
graph TB
    Input[Input] --> Manager
    subgraph Agents
        Manager[Manager Agent]
        subgraph Workers
            direction LR
            W1[Worker 1]
            W2[Worker 2]
            W3[Worker 3]
        end
        Manager --> W1
        Manager --> W2
        Manager --> W3
    end
    W1 --> Manager
    W2 --> Manager
    W3 --> Manager
    Manager --> Output[Output]
    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none
    class Input,Output input
    class Manager,W1,W2,W3 process
    class Agents,Workers transparent
```
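The hierarchical flow can be sketched the same way: a manager splits the task, fans the pieces out to workers, and merges what comes back. Again a conceptual sketch with hypothetical names, not the framework's implementation.

```python
# Conceptual sketch of a hierarchical process: a manager fans subtasks
# out to workers and merges their answers. Names are illustrative only.
def manager(task, workers):
    subtasks = [f"{task} (part {i})" for i in range(1, len(workers) + 1)]
    results = [worker(sub) for worker, sub in zip(workers, subtasks)]
    return " | ".join(results)

workers = [lambda t, i=i: f"worker{i}:{t}" for i in range(1, 4)]
print(manager("audit", workers))
# worker1:audit (part 1) | worker2:audit (part 2) | worker3:audit (part 3)
```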
#### Workflow Process

```mermaid
graph LR
    Input[Input] --> Start
    subgraph Workflow
        direction LR
        Start[Start] --> C1{Condition}
        C1 --> |Yes| A1[Agent 1]
        C1 --> |No| A2[Agent 2]
        A1 --> Join
        A2 --> Join
        Join --> A3[Agent 3]
    end
    A3 --> Output[Output]
    classDef input fill:#8B0000,stroke:#7C90A0,color:#fff
    classDef process fill:#189AB4,stroke:#7C90A0,color:#fff
    classDef decision fill:#2E8B57,stroke:#7C90A0,color:#fff
    classDef transparent fill:none,stroke:none
    class Input,Output input
    class Start,A1,A2,A3,Join process
    class C1 decision
    class Workflow transparent
```
### Agentic Routing Workflow

```mermaid
flowchart LR
    In[In] --> Router[LLM Call Router]
    Router --> LLM1[LLM Call 1]
    Router --> LLM2[LLM Call 2]
    Router --> LLM3[LLM Call 3]
    LLM1 --> Out[Out]
    LLM2 --> Out
    LLM3 --> Out
    style In fill:#8B0000,color:#fff
    style Router fill:#2E8B57,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
```
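The routing pattern boils down to one decision: inspect the input, then hand it to exactly one downstream handler. A minimal sketch (the keyword-matching router, `handlers`, and the stub handlers are all hypothetical; in PraisonAI this role is played by an LLM call or the workflow `route()` step):

```python
# Conceptual sketch of agentic routing: a router inspects the input and
# forwards it to exactly one downstream handler.
def route(text, handlers, default):
    for keyword, handler in handlers.items():
        if keyword in text.lower():
            return handler(text)
    return default(text)

handlers = {
    "refund": lambda t: "billing handles: " + t,
    "bug": lambda t: "engineering handles: " + t,
}
default = lambda t: "general support handles: " + t

print(route("Found a bug in export", handlers, default))
# engineering handles: Found a bug in export
```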
### Agentic Orchestrator Worker

```mermaid
flowchart LR
    In[In] --> Router[LLM Call Router]
    Router --> LLM1[LLM Call 1]
    Router --> LLM2[LLM Call 2]
    Router --> LLM3[LLM Call 3]
    LLM1 --> Synthesizer[Synthesizer]
    LLM2 --> Synthesizer
    LLM3 --> Synthesizer
    Synthesizer --> Out[Out]
    style In fill:#8B0000,color:#fff
    style Router fill:#2E8B57,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Synthesizer fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
```
### Agentic Autonomous Workflow

```mermaid
flowchart LR
    Human[Human] <--> LLM[LLM Call]
    LLM -->|ACTION| Environment[Environment]
    Environment -->|FEEDBACK| LLM
    LLM --> Stop[Stop]
    style Human fill:#8B0000,color:#fff
    style LLM fill:#2E8B57,color:#fff
    style Environment fill:#8B0000,color:#fff
    style Stop fill:#333,color:#fff
```
### Agentic Parallelization

```mermaid
flowchart LR
    In[In] --> LLM2[LLM Call 2]
    In --> LLM1[LLM Call 1]
    In --> LLM3[LLM Call 3]
    LLM1 --> Aggregator[Aggregator]
    LLM2 --> Aggregator
    LLM3 --> Aggregator
    Aggregator --> Out[Out]
    style In fill:#8B0000,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Aggregator fill:#fff,color:#000
    style Out fill:#8B0000,color:#fff
```
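Parallelization is the same idea in code: independent calls run concurrently and an aggregator merges the ordered results. A minimal sketch using a thread pool in place of real LLM calls (`call` and the prompts are illustrative):

```python
# Conceptual sketch of parallelization: independent calls run
# concurrently, then an aggregator merges the results in order.
from concurrent.futures import ThreadPoolExecutor

def call(prompt):
    return f"answer({prompt})"  # stand-in for an LLM call

prompts = ["trends", "risks", "costs"]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(call, prompts))  # map preserves input order

aggregated = "; ".join(results)
print(aggregated)  # answer(trends); answer(risks); answer(costs)
```

In PraisonAI workflows, the `parallel()` step shown in the Workflows table plays the role of the fan-out.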
### Agentic Prompt Chaining

```mermaid
flowchart LR
    In[In] --> LLM1[LLM Call 1] --> Gate{Gate}
    Gate -->|Pass| LLM2[LLM Call 2] -->|Output 2| LLM3[LLM Call 3] --> Out[Out]
    Gate -->|Fail| Exit[Exit]
    style In fill:#8B0000,color:#fff
    style LLM1 fill:#2E8B57,color:#fff
    style LLM2 fill:#2E8B57,color:#fff
    style LLM3 fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
    style Exit fill:#8B0000,color:#fff
```
### Agentic Evaluator Optimizer

```mermaid
flowchart LR
    In[In] --> Generator[LLM Call Generator]
    Generator -->|SOLUTION| Evaluator[LLM Call Evaluator] -->|ACCEPTED| Out[Out]
    Evaluator -->|REJECTED + FEEDBACK| Generator
    style In fill:#8B0000,color:#fff
    style Generator fill:#2E8B57,color:#fff
    style Evaluator fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
```
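The evaluator-optimizer loop above is a generate-evaluate-revise cycle with a bounded number of iterations. A toy sketch with stub generator and evaluator (all names and the acceptance criterion are illustrative; in PraisonAI workflows this corresponds to the `repeat()` step):

```python
# Conceptual sketch of the evaluator-optimizer loop: generate, evaluate,
# feed rejection feedback back to the generator until accepted.
def generate(task, feedback=None):
    return task + (" [revised]" if feedback else "")

def evaluate(solution):
    return "[revised]" in solution  # toy acceptance criterion

def evaluator_optimizer(task, max_iterations=3):
    feedback = None
    for _ in range(max_iterations):
        solution = generate(task, feedback)
        if evaluate(solution):
            return solution
        feedback = "needs revision"
    return solution  # best effort after max_iterations

print(evaluator_optimizer("draft summary"))  # draft summary [revised]
```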
### Repetitive Agents

```mermaid
flowchart LR
    In[Input] --> LoopAgent[("Looping Agent")]
    LoopAgent --> Task[Task]
    Task --> |Next iteration| LoopAgent
    Task --> |Done| Out[Output]
    style In fill:#8B0000,color:#fff
    style LoopAgent fill:#2E8B57,color:#fff
    style Task fill:#2E8B57,color:#fff
    style Out fill:#8B0000,color:#fff
```
## Configuration & Integration

### Ollama Integration

```bash
export OPENAI_BASE_URL=http://localhost:11434/v1
```

### Groq Integration

Replace `xxxx` with your Groq API key:

```bash
export OPENAI_API_KEY=xxxxxxxxxxx
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
```
### 100+ Models Support

PraisonAI supports 100+ LLM models from various providers. Visit our models documentation for the complete list.
## Agents Playbook

### Simple Playbook Example

Create an `agents.yaml` file and add the code below:

```yaml
framework: praisonai
topic: Artificial Intelligence
agents:
  screenwriter:
    instructions: "Skilled in crafting scripts with engaging dialogue about {topic}."
    goal: Create scripts from concepts.
    role: Screenwriter
    tasks:
      scriptwriting_task:
        description: "Develop scripts with compelling characters and dialogue about {topic}."
        expected_output: "Complete script ready for production."
```

To run the playbook:

```bash
praisonai agents.yaml
```
## Custom Tools / Create Plugins

### Using the @tool Decorator

```python
from praisonaiagents import Agent, tool

@tool
def search(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

@tool
def calculate(expression: str) -> float:
    """Evaluate a math expression."""
    return eval(expression)

agent = Agent(
    instructions="You are a helpful assistant",
    tools=[search, calculate]
)
agent.start("Search for AI news and calculate 15*4")
```

### Using the BaseTool Class

```python
from praisonaiagents import Agent, BaseTool

class WeatherTool(BaseTool):
    name = "weather"
    description = "Get current weather for a location"

    def run(self, location: str) -> str:
        return f"Weather in {location}: 72°F, Sunny"

agent = Agent(
    instructions="You are a weather assistant",
    tools=[WeatherTool()]
)
agent.start("What's the weather in Paris?")
```
### Creating a Tool Package (pip installable)

```toml
[project]
name = "my-praisonai-tools"
version = "1.0.0"
dependencies = ["praisonaiagents"]

[project.entry-points."praisonaiagents.tools"]
my_tool = "my_package:MyTool"
```

```python
from praisonaiagents import BaseTool

class MyTool(BaseTool):
    name = "my_tool"
    description = "My custom tool"

    def run(self, param: str) -> str:
        return f"Result: {param}"
```

After `pip install`, tools are auto-discovered:

```python
agent = Agent(tools=["my_tool"])
```
## Memory & Context

PraisonAI provides zero-dependency persistent memory for agents. For detailed examples, see the Agent Memory examples in the Python Code Examples section.
## Knowledge & Retrieval (RAG)

PraisonAI provides a complete knowledge stack for building RAG applications with multiple vector stores, retrieval strategies, rerankers, and query modes.
### Knowledge CLI Commands

| Command | Description |
|---------|-------------|
| `praisonai knowledge add <file\|dir\|url>` | Add documents to knowledge base |
| `praisonai knowledge query <question>` | Query knowledge base with RAG |
| `praisonai knowledge list` | List indexed documents |
| `praisonai knowledge clear` | Clear knowledge base |
| `praisonai knowledge stats` | Show knowledge base statistics |
### Knowledge CLI Options

| Option | Values | Description |
|--------|--------|-------------|
| `--vector-store` | memory, chroma, pinecone, qdrant, weaviate | Vector store backend |
| `--retrieval` | basic, fusion, recursive, auto_merge | Retrieval strategy |
| `--reranker` | simple, llm, cross_encoder, cohere | Reranking method |
| `--index-type` | vector, keyword, hybrid | Index type |
| `--query-mode` | default, sub_question, summarize | Query mode |
### Knowledge CLI Examples

```bash
praisonai knowledge add ./docs/
praisonai knowledge add https://example.com/page.html
praisonai knowledge add "*.pdf"
praisonai knowledge query "How to authenticate?" --retrieval fusion --reranker llm
praisonai knowledge query "authentication flow" \
  --vector-store chroma \
  --retrieval fusion \
  --reranker llm \
  --index-type hybrid \
  --query-mode sub_question
```
### Knowledge SDK Usage

```python
from praisonaiagents import Agent, Knowledge

agent = Agent(
    name="Research Assistant",
    knowledge=["docs/manual.pdf", "data/faq.txt"],
    knowledge_config={
        "vector_store": {"provider": "chroma"}
    }
)
response = agent.chat("How do I authenticate?")

knowledge = Knowledge()
knowledge.add("document.pdf")
results = knowledge.search("authentication", limit=5)
```
### Knowledge Stack Features

| Feature | Description | SDK Docs | CLI Docs |
|---------|-------------|----------|----------|
| Data Readers | Load PDF, Markdown, Text, HTML, URLs | SDK | CLI |
| Vector Stores | ChromaDB, Pinecone, Qdrant, Weaviate, In-Memory | SDK | CLI |
| Retrieval Strategies | Basic, Fusion (RRF), Recursive, Auto-Merge | SDK | CLI |
| Rerankers | Simple, LLM, Cross-Encoder, Cohere | SDK | CLI |
| Index Types | Vector, Keyword (BM25), Hybrid | SDK | CLI |
| Query Engines | Default, Sub-Question, Summarize | SDK | CLI |
## Technical Details

### Research & Intelligence

- Deep Research Agents - OpenAI & Gemini support for automated research
- Query Rewriter Agent - HyDE, Step-back, Multi-query strategies for RAG optimization
- Native Web Search - Real-time search via OpenAI, Gemini, Anthropic, xAI, Perplexity
- Web Fetch - Retrieve full content from URLs (Anthropic)
- Prompt Expander Agent - Expand short prompts into detailed instructions

### Memory & Caching

- Prompt Caching - Reduce costs & latency (OpenAI, Anthropic, Bedrock, Deepseek)
- Claude Memory Tool - Persistent cross-conversation memory (Anthropic Beta)
- File-Based Memory - Zero-dependency persistent memory for all agents
- Built-in Search Tools - Tavily, You.com, Exa for web search, news, content extraction

### Planning & Workflows

- Planning Mode - Plan before execution for agents & multi-agent systems
- Planning Tools - Research with tools during planning phase
- Planning Reasoning - Chain-of-thought planning for complex tasks
- Prompt Chaining - Sequential prompt workflows with conditional gates
- Evaluator Optimiser - Generate and optimize through iterative feedback
- Orchestrator Workers - Distribute tasks among specialised workers
- Parallelisation - Execute tasks in parallel for improved performance
- Repetitive Agents - Handle repetitive tasks through automated loops
- Autonomous Workflow - Monitor, act, adapt based on environment feedback

### Specialised Agents

- Image Generation Agent - Create images from text descriptions
- Image to Text Agent - Extract text and descriptions from images
- Video Agent - Analyse and process video content
- Data Analyst Agent - Analyse data and generate insights
- Finance Agent - Financial analysis and recommendations
- Shopping Agent - Price comparison and shopping assistance
- Recommendation Agent - Personalised recommendations
- Wikipedia Agent - Search and extract Wikipedia information
- Programming Agent - Code development and analysis
- Markdown Agent - Generate and format Markdown content
- Model Router - Smart model selection based on task complexity

### MCP Protocol

- MCP Transports - stdio, Streamable HTTP, WebSocket, SSE (Protocol 2025-11-25)
- WebSocket MCP - Real-time bidirectional connections with auto-reconnect
- MCP Security - Origin validation, DNS rebinding prevention, secure sessions
- MCP Resumability - SSE stream recovery via Last-Event-ID

### A2A & A2UI Protocols

- A2A Protocol - Agent-to-Agent communication for inter-agent collaboration
- A2UI Protocol - Agent-to-User Interface for generating rich UIs from agents
- UI Templates - ChatTemplate, ListTemplate, FormTemplate, DashboardTemplate
- Surface Builder - Fluent API for building declarative UIs

### Safety & Control

- Agent Handoffs - Transfer context between specialised agents
- Guardrails - Input/output validation and safety checks
- Human Approval - Require human confirmation for critical actions
- Tool Approval CLI - `--trust` (auto-approve all) and `--approve-level` (risk-based approval)
- Sessions Management - Isolated conversation contexts
- Stateful Agents - Maintain state across interactions

### Developer Tools

- Fast Context - Rapid parallel code search (10-20x faster)
- Rules & Instructions - Auto-discover CLAUDE.md, AGENTS.md, GEMINI.md
- Hooks - Pre/post operation hooks for custom logic
- Telemetry - Track agent performance and usage
- Camera Integration - Capture and analyse camera input

### Other Features

- CrewAI & AG2 Integration - Use the CrewAI or AG2 (formerly AutoGen) framework
- Codebase Chat - Chat with an entire codebase
- Interactive UIs - Multiple interactive interfaces
- YAML Configuration - YAML-based agent and workflow configuration
- Custom Tools - Easy custom tool integration
- Internet Search - Multiple providers (Tavily, You.com, Exa, DuckDuckGo, Crawl4AI)
- VLM Support - Vision Language Model support
- Voice Interaction - Real-time voice interaction
## Persistence (Databases)

Enable automatic conversation persistence with two lines of code:

```python
from praisonaiagents import Agent, db

agent = Agent(
    name="Assistant",
    db=db(database_url="postgresql://localhost/mydb"),
    session_id="my-session"
)
agent.chat("Hello!")
```
### Persistence CLI Commands

| Command | Description |
|---------|-------------|
| `praisonai persistence doctor` | Validate DB connectivity |
| `praisonai persistence run` | Run agent with persistence |
| `praisonai persistence resume` | Resume existing session |
| `praisonai persistence export` | Export session to JSONL |
| `praisonai persistence import` | Import session from JSONL |
| `praisonai persistence migrate` | Apply schema migrations |
| `praisonai persistence status` | Show schema status |
### Knowledge CLI Commands {#knowledge-cli}

| Command | Description |
|---------|-------------|
| `praisonai knowledge add <source>` | Add file, directory, URL, or glob pattern |
| `praisonai knowledge query "<question>"` | Query knowledge base with RAG |
| `praisonai knowledge list` | List indexed documents |
| `praisonai knowledge clear` | Clear knowledge base |
| `praisonai knowledge stats` | Show knowledge base statistics |
Knowledge query flags:

| Flag | Values | Default |
|------|--------|---------|
| `--vector-store` | memory, chroma, pinecone, qdrant, weaviate | chroma |
| `--retrieval-strategy` | basic, fusion, recursive, auto_merge | basic |
| `--reranker` | none, simple, llm, cross_encoder, cohere | none |
| `--index-type` | vector, keyword, hybrid | vector |
| `--query-mode` | default, sub_question, summarize | default |
| `--workspace` | Path to workspace directory | Current dir |
| `--session` | Session ID for persistence | - |
Examples:

```bash
praisonai knowledge add document.pdf
praisonai knowledge add ./docs/
praisonai knowledge add "*.md"
praisonai knowledge query "How to authenticate?" \
  --vector-store chroma \
  --retrieval-strategy fusion \
  --reranker simple \
  --query-mode sub_question
```
### Databases Table

| Database | Store Type | Install | Example | Docs |
|----------|-----------|---------|---------|------|
| PostgreSQL | Conversation | `pip install "praisonai[tools]"` | simple_db_agent.py | docs |
| MySQL | Conversation | `pip install "praisonai[tools]"` | - | docs |
| SQLite | Conversation | `pip install "praisonai[tools]"` | - | docs |
| SingleStore | Conversation | `pip install "praisonai[tools]"` | - | docs |
| Supabase | Conversation | `pip install "praisonai[tools]"` | - | docs |
| SurrealDB | Conversation | `pip install "praisonai[tools]"` | - | docs |
| Qdrant | Knowledge | `pip install "praisonai[tools]"` | knowledge_qdrant.py | docs |
| ChromaDB | Knowledge | `pip install "praisonai[tools]"` | - | docs |
| Pinecone | Knowledge | `pip install pinecone` | pinecone_wow.py | docs |
| Weaviate | Knowledge | `pip install weaviate-client` | weaviate_wow.py | docs |
| LanceDB | Knowledge | `pip install lancedb` | lancedb_real_wow.py | docs |
| Milvus | Knowledge | `pip install "praisonai[tools]"` | - | docs |
| PGVector | Knowledge | `pip install psycopg2-binary` | pgvector_real_wow.py | docs |
| Redis Vector | Knowledge | `pip install "praisonai[tools]"` | - | docs |
| Cassandra | Knowledge | `pip install "praisonai[tools]"` | - | docs |
| ClickHouse | Knowledge | `pip install "praisonai[tools]"` | - | docs |
| Redis | State | `pip install "praisonai[tools]"` | state_redis.py | docs |
| MongoDB | State | `pip install "praisonai[tools]"` | - | docs |