Design how your
agent thinks.
Every other framework gives you a tool-calling loop. Promptise gives you composable reasoning — snap together pre-built nodes to define exactly how your agent plans, acts, reflects, and decides. Build any reasoning pattern. Ship it to production.
from promptise import build_agent
from promptise.engine import PromptGraph
from promptise.engine.reasoning_nodes import (
    PlanNode, ThinkNode, ReflectNode,
    SynthesizeNode, PromptNode,
)

# Define how your agent reasons
agent = await build_agent(
    model="openai:gpt-5-mini",
    servers=my_servers,
    agent_pattern=PromptGraph("analyst", nodes=[
        PlanNode("plan", is_entry=True),
        PromptNode("act", inject_tools=True),
        ThinkNode("think"),
        ReflectNode("reflect"),
        SynthesizeNode("answer", is_terminal=True),
    ]),
)

The problem
Every agent framework gives you
the same tool-calling loop.
LangGraph, CrewAI, AutoGen — they all run the same pattern: LLM calls tools, sees results, calls more tools, produces an answer. You can't make your agent think differently. You can't add self-reflection. You can't add planning. You can't add adversarial critique. You're stuck with ReAct.
One-size-fits-all
Generic tool loop for every task. Data analysis, research, coding, planning — all get the same reasoning pattern.
No self-correction
Agent makes a mistake? It keeps going. No reflection, no re-planning, no quality gates. Hope for the best.
No production safety
No retry on failure. No budget enforcement. No circuit breakers. No way to abort a critical step that went wrong.
The solution
Composable reasoning nodes.
Like building bricks for agent logic.
Pick the nodes you need. The agent builds its own execution path at runtime. 7 built-in patterns, 20 node types, or build your own.
Agent builds its own path from the pool.
PlanNode: Break task into subgoals, self-evaluate quality
ThinkNode: Analyze gaps, rate confidence, recommend next step
PromptNode: Execute tools from MCP servers
ObserveNode: Interpret tool results, extract entities and facts
ReflectNode: Self-evaluate, identify mistakes, suggest corrections
SynthesizeNode: Combine all findings into a final answer
Why this matters
Every task deserves its
own reasoning strategy.
A research agent shouldn't think the same way as a code reviewer. A financial analyst shouldn't reason like a customer support bot. With Promptise, you design the exact thinking process your agent uses — purpose-built for the task, not forced into a generic loop.
Click each use case to see the reasoning pattern and code.
Deep Research Agent
A research agent that plans its investigation, searches multiple sources, cross-references findings for accuracy, and produces a cited report. Generic ReAct would just call search tools randomly until it has enough text.
Why this pattern wins
The PlanNode creates a research strategy upfront. The ValidateNode catches contradictions between sources. The SynthesizeNode produces a structured report with citations — not a stream-of-consciousness dump.
agent = await build_agent(
    model="openai:gpt-5-mini",
    servers={"search": search_server},
    agent_pattern=PromptGraph("researcher", nodes=[
        PlanNode("plan", is_entry=True),
        PromptNode("search", inject_tools=True),
        ObserveNode("observe"),
        ValidateNode("verify",
            criteria=["Sources agree", "No contradictions"],
            on_fail="search"),  # Loop back if invalid
        SynthesizeNode("report", is_terminal=True),
    ]),
)

The key insight
The agent decides
its own path.
In autonomous mode (the default), you don't wire edges between nodes. You give the agent a pool of reasoning nodes — and it decides at each step which one to use next. The LLM sees the available nodes with their descriptions and picks the most productive next step.
This means the same node pool can handle different questions differently. A simple question might go Act → Synthesize in 2 steps. A complex one might go Plan → Act → Think → Act → Reflect → Act → Synthesize in 7 steps. The agent adapts its reasoning depth to the task.
You can also wire static edges for predictable pipelines. Or mix both — some paths fixed, some decided at runtime. Full control over the reasoning architecture.
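The loop behind autonomous mode can be sketched in a few lines of plain Python. Everything below is illustrative, not Promptise's engine: a stand-in `pick_next` heuristic plays the role of the LLM, and `POOL`, `run`, and the node names are hypothetical.

```python
# Illustrative sketch of autonomous node selection.
POOL = {
    "plan":   "Break the task into subgoals",
    "act":    "Execute tools",
    "think":  "Interpret results gathered so far",
    "answer": "Synthesize the final answer (terminal)",
}

def pick_next(history):
    # Stand-in for the LLM: in a real engine the model reads
    # the POOL descriptions and picks the most productive step.
    if len(history) >= 4:
        return "answer"
    return "act" if history[-1] != "act" else "think"

def run(entry="plan", terminal="answer"):
    history = [entry]
    while history[-1] != terminal:
        history.append(pick_next(history))
    return history

print(run())  # ['plan', 'act', 'think', 'act', 'answer']
```

Swapping `pick_next` for an LLM call is the whole difference between this toy and a real autonomous engine: the pool stays fixed, only the selector gets smarter.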
Autonomous mode (default)
Agent picks from the pool at each step. No edges needed.
PromptGraph("agent", nodes=[
    PlanNode("plan", is_entry=True),
    PromptNode("act", inject_tools=True),
    ThinkNode("think"),
    SynthesizeNode("answer", is_terminal=True),
])
# Agent decides: plan → act → think → act → answer

Static mode (opt-in)
You wire every edge explicitly. Predictable execution.
graph = PromptGraph("pipeline", mode="static")
graph.sequential("extract", "analyze", "report")
# Always: extract → analyze → report

Production safety
16 typed flags.
One line each.
Add production behavior to any node with a single flag. The engine handles retry logic, caching, error recovery, and budget enforcement automatically.
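As an illustration of what an engine does with such flags, here is a toy executor in plain Python. It sketches the general techniques (retry with exponential backoff, memoize by input, abort on final failure) and is not Promptise's implementation; `run_node`, `flaky_fetch`, and the string flags are invented for the example.

```python
import asyncio

RETRYABLE, CACHEABLE, CRITICAL = "retryable", "cacheable", "critical"

async def run_node(fn, args, flags, cache, retries=3, base_delay=0.01):
    key = (fn.__name__, args)
    # CACHEABLE: reuse a memoized result keyed by the node's input
    if CACHEABLE in flags and key in cache:
        return cache[key]
    attempt = 0
    while True:
        try:
            result = await fn(*args)
            if CACHEABLE in flags:
                cache[key] = result
            return result
        except Exception:
            attempt += 1
            # RETRYABLE: exponential backoff, then try again
            if RETRYABLE in flags and attempt < retries:
                await asyncio.sleep(base_delay * 2 ** attempt)
                continue
            # CRITICAL: abort the whole graph on final failure
            if CRITICAL in flags:
                raise RuntimeError("aborting graph")
            return None

calls = 0

async def flaky_fetch(url):
    # Fails twice, then succeeds -- simulates a transient network error
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionError("transient")
    return f"payload from {url}"

async def main():
    cache = {}
    flags = {RETRYABLE, CACHEABLE, CRITICAL}
    out = await run_node(flaky_fetch, ("https://example.com",), flags, cache)
    # Second run hits the cache: flaky_fetch is not called again
    again = await run_node(flaky_fetch, ("https://example.com",), flags, cache)
    return out, again, calls

out, again, total_calls = asyncio.run(main())
print(out, total_calls)  # payload from https://example.com 3
```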
Abort graph on failure
Retry with exponential backoff
Cache results by input
Skip if previous failed
Clean context per node
Use smaller model
# One flag = production behavior
PromptNode("fetch",
    flags={NodeFlag.CRITICAL,
           NodeFlag.RETRYABLE,
           NodeFlag.CACHEABLE},
)
# Engine handles: retry with backoff,
# cache by input keys, abort graph on
# final failure. Zero extra code.

Fully extensible
Build your own nodes.
Fine-tune reasoning to your use case.
The 20 built-in nodes are the starting point — not the ceiling. Create domain-specific reasoning nodes with custom logic, instructions, tools, strategies, models, data flow, and guards. Configure every detail.
This is what makes Promptise unique. Every other framework hands you a generic ReAct loop and hopes it works for everything. We let you match the reasoning structure to the actual shape of your problem — which means you don't just get a faster and cheaper agent. You get a measurably more accurate one. A research agent that plans → searches → verifies → synthesizes will always beat a tool loop at the same task. Fine-tuned reasoning is the single biggest lever for production quality — and you control all of it.
@node decorator
Turn any async function into a graph node. Simplest way to add custom logic.
@node("fetch_data")
async def fetch_data(state):
    data = await api.get(...)
    state.context["data"] = data
    return NodeResult(
        node_name="fetch_data",
        output=data,
    )

BaseNode subclass
Full control over execution. Add custom state, validation, and routing logic.
class SQLNode(BaseNode):
    def __init__(self, name, *, db):
        super().__init__(name,
            flags={NodeFlag.RETRYABLE})
        self.db = db

    async def execute(self, state, cfg):
        rows = await self.db.query(...)
        return NodeResult(...)

Custom flags
Define domain-specific flags. Check them in hooks for cross-cutting behavior.
class MyFlags:
    PII_SENSITIVE = "pii"
    AUDIT_REQUIRED = "audit"

PromptNode("handle_user",
    flags={MyFlags.PII_SENSITIVE,
           NodeFlag.OBSERVABLE})

# Check in hooks:
if node.has_flag(MyFlags.AUDIT_REQUIRED):
    await audit_log.record(...)

Cost optimization
Different model per node.
Use a cheap model for routing and classification. A powerful model for complex reasoning. A local model for sensitive data. Each node picks its own LLM.
# Cheap for simple tasks
ThinkNode("think",
    model_override="openai:gpt-4o-mini")

# Powerful for reasoning
PromptNode("analyze",
    model_override="anthropic:claude-sonnet-4")

# Local for sensitive data
PromptNode("pii_handler",
    model_override=local_llama)

Data flow
Nodes pass data seamlessly.
output_key writes to shared state. input_keys reads from it. inherit_context_from passes one node's full output to the next. No manual plumbing.
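Conceptually, this is just reads and writes against one shared state dict. A minimal sketch of the mechanism, with a hypothetical `run_node` helper (not Promptise internals):

```python
# Toy shared-state plumbing: output_key writes, input_keys read.
state = {}

def run_node(name, fn, *, input_keys=(), output_key=None):
    inputs = {k: state[k] for k in input_keys}   # read from shared state
    result = fn(**inputs)
    if output_key:
        state[output_key] = result               # write to shared state
    return result

run_node("search", lambda: ["doc1", "doc2"], output_key="findings")
summary = run_node(
    "analyze",
    lambda findings: f"{len(findings)} findings reviewed",
    input_keys=["findings"],
)
print(summary)  # 2 findings reviewed
```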
# Node A writes output
PromptNode("search",
    output_key="findings",
    inject_tools=True)

# Node B reads it
PromptNode("analyze",
    input_keys=["findings"])

# Or inherit everything
SynthesizeNode("answer",
    inherit_context_from="analyze")

Comparison
What you get that
others don't.
| Capability | Promptise | LangGraph | CrewAI |
|---|---|---|---|
| Custom reasoning patterns | ✓ | — | — |
| 20 pre-built node types | ✓ | — | — |
| Autonomous path building | ✓ | — | — |
| Per-node model override | ✓ | — | — |
| 16 typed execution flags | ✓ | — | — |
| Parallel tool execution | ✓ | — | — |
| Auto schema injection | ✓ | — | — |
| 0.02ms engine overhead | ✓ | — | — |
| YAML serialization | ✓ | — | — |
| Custom node types | ✓ | ✓ | — |
| Tool calling loop | ✓ | ✓ | ✓ |
Performance
0.02ms overhead.
All time goes to the LLM.
Precomputed edge indexes, cached system prompts, parallel tool execution. The engine adds near-zero latency — your agent is only as slow as the LLM provider.
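A precomputed edge index is a standard trick: build the successor table once when the graph is compiled, so each routing step is a dict lookup rather than an edge-list scan. A hedged sketch of the technique (not Promptise source; the edges and names are made up):

```python
# Build a successor index once; each routing step is then O(1)
# instead of an O(E) scan over the edge list.
edges = [
    ("plan", "act"), ("act", "think"),
    ("think", "act"), ("act", "answer"),
]

def build_index(edge_list):
    index = {}
    for src, dst in edge_list:
        index.setdefault(src, []).append(dst)
    return index

INDEX = build_index(edges)   # computed once, at graph build time
print(INDEX["act"])          # ['think', 'answer']
```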
Build your first
custom reasoning pattern.
5 lines of code. No PhD required.