Not a pipeline.
A living engine.
Every other framework runs your agent as a static graph — same nodes, same order, every single call. Promptise doesn't. The execution engine is dynamic: the agent picks its own path at every step, based on what it just saw.
The graph is built
at runtime.
Other frameworks ask you to draw a flowchart and hope the LLM follows it. Promptise gives the agent the nodes and lets it compose its own execution path every turn.
An easy question? Three nodes and done. A research task? Fifteen nodes with two loops. A simple lookup that turned ambiguous halfway through? The engine notices and reroutes — without you writing a single if-statement.
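The idea above can be sketched in a few lines. This is not Promptise's API, just a minimal illustration of runtime path composition: nodes are a flat pool, and a planner step picks the next one from the current state instead of following pre-drawn edges. All names (`State`, `choose_next`, the node functions) are invented for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class State:
    question: str
    draft: str = ""
    confidence: float = 0.0
    path: list = field(default_factory=list)  # the graph, as actually walked

# Illustrative node functions: each takes the state and returns it updated.
def lookup(s: State) -> State:
    s.draft, s.confidence = f"answer({s.question})", 0.6
    return s

def refine(s: State) -> State:
    s.confidence += 0.3
    return s

def respond(s: State) -> State:
    return s

NODES: dict[str, Callable[[State], State]] = {
    "lookup": lookup, "refine": refine, "respond": respond,
}

def choose_next(s: State) -> str:
    # The planner step: pick the next node from what was just observed,
    # instead of following a hardcoded edge.
    if not s.draft:
        return "lookup"
    if s.confidence < 0.8:
        return "refine"
    return "respond"

def run(question: str) -> State:
    s = State(question)
    while True:
        node = choose_next(s)
        s.path.append(node)
        s = NODES[node](s)
        if node == "respond":
            return s
```

An easy question walks a short path; an ambiguous one naturally picks up extra `refine` hops, because the route is decided per step, not per flowchart.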
The agent
picks its own next move.
At every node, the engine evaluates the possible next steps against three signals: confidence in the current answer, budget remaining, and anomaly detection (loops, stuck patterns, empty outputs).
The LLM sees the options. Scores them. Picks. If the score is too low, the node is skipped. If budget is tight, expensive paths drop out. You get emergent reasoning — not hardcoded.
Orchestrating the LLM.
Not calling it.
A generic framework stuffs everything into one prompt and hopes the context window holds. Promptise assembles the prompt from 14 context providers, each with its own priority, token budget, and scope.
Identity and rules go in first. Tools and memory come next. Conversation history last. When the budget is tight, lowest priority drops gracefully. Every LLM call is engineered — not improvised.
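The priority-and-budget mechanics can be sketched like this. The `Provider` shape and `assemble` function are invented for illustration; the real `ContextProvider` interface will differ, but the ordering and graceful-drop behavior match the description above.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    priority: int   # lower number = injected first, kept longest
    tokens: int     # this provider's token budget
    text: str

def assemble(providers: list[Provider], budget: int) -> str:
    # Fill the prompt in priority order; blocks that would overflow
    # the budget are dropped gracefully rather than truncated.
    used, parts = 0, []
    for p in sorted(providers, key=lambda p: p.priority):
        if used + p.tokens > budget:
            continue
        used += p.tokens
        parts.append(p.text)
    return "\n".join(parts)
```

Identity (priority 0) always survives; conversation history (the highest number) is the first thing to fall out when the window gets tight.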
Every capability.
One loop.
The engine doesn't just call tools. It calls MCP tool servers (discovered at runtime), peer agents (over JWT-authenticated HTTP), sandboxed code execution, vector memory, guardrail scanners, and semantic caches — as a unified capability set.
The agent sees them all as typed capabilities it can invoke. You wire none of this by hand. You just declare the servers and peers on build_agent().
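Roughly what that declaration looks like, as a sketch: the real `build_agent()` signature is not shown here, so the `Agent` stub, parameter names, and URLs below are all assumptions made for illustration. The point is the shape: you declare servers and peers once, and everything surfaces as one typed capability set.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    mcp_servers: list = field(default_factory=list)
    peers: dict = field(default_factory=dict)
    guardrails: list = field(default_factory=list)

    def capabilities(self) -> list[str]:
        # Everything — MCP tools, peer agents, guardrails — is surfaced
        # to the LLM as one unified, typed capability set.
        return ([f"mcp:{url}" for url in self.mcp_servers]
                + [f"peer:{n}" for n in self.peers]
                + [f"guard:{g}" for g in self.guardrails])

def build_agent(**kwargs) -> Agent:  # stand-in for the real builder
    return Agent(**kwargs)

agent = build_agent(
    name="researcher",
    mcp_servers=["http://localhost:8765/mcp"],       # tools discovered at runtime
    peers={"summarizer": "https://peers.example"},   # JWT-authenticated HTTP
    guardrails=["pii"],
)
```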
Context that
breathes.
Context isn't a static string you paste once. It's a live assembly: memory searches fire, blackboard state updates, team messages arrive, tool results enter. The engine re-computes the prompt envelope on every turn.
When budget tightens, low-priority blocks trim. When the task deepens, memory grows. When another agent sends a message, it slots in as TeamContext. The context window is managed for you — not something you wrestle with.
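Per-turn recomputation, reduced to its skeleton (the block names mirror the description above; the function is illustrative): the envelope is rebuilt from whatever is live this turn, so a memory hit or a peer's message changes its shape without any hand-wiring.

```python
def envelope(turn: dict) -> list[str]:
    # Rebuild the prompt envelope from this turn's live state.
    blocks = ["identity"]
    if turn.get("memory_hits"):
        blocks.append("memory")        # a memory search fired this turn
    if turn.get("team_message"):
        blocks.append("team_context")  # a peer's message slots in
    blocks.append("history")
    return blocks
```

Turn one is lean; by turn three, with memory hits and an incoming team message, the same call produces a richer envelope.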
Self-correcting —
because the graph is.
ValidateNode can reject the last output. CritiqueNode can flag it as weak. When that happens the engine doesn't throw an error — it routes execution back to an earlier node and tries again with the rejection in context.
No outer retry loop. No brittle scaffolding around your agent. The correction is part of the graph. The LLM sees the rejection reason. Next attempt is smarter.
Fully customizable.
End to end.
Every layer is replaceable. Swap the model per node: a cheap model for planning, a premium one for synthesis. Write your own NodeType. Register a new ContextProvider. Compose strategies. Add flags.
The engine is not a black box. It's a composition of primitives. You can shape every one of them to your domain — and ship the result as a reusable PromptGraph.
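One way the per-node model and custom-node ideas compose, as a hedged sketch: the decorator, registry, and model names below are invented, not the real NodeType API, but they show the intent — every node carries its own model choice, and registering a new node type is a few lines.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    model: str                   # per-node model choice
    fn: Callable[[str], str]

REGISTRY: dict[str, Node] = {}

def register(name: str, model: str):
    # Illustrative decorator for registering a custom node type.
    def deco(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[name] = Node(name, model, fn)
        return fn
    return deco

@register("plan", model="small-cheap")          # cheap model for planning
def plan(task: str) -> str:
    return f"steps for {task}"

@register("synthesize", model="large-premium")  # premium model for synthesis
def synthesize(task: str) -> str:
    return f"final answer for {task}"
```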
Node types
Per-node model
Context providers
Strategies
Guards & flags
One engine. Every
agent you'll ever build.
Dynamic graphs. Autonomous path selection. Context assembly from 14 providers. Capability orchestration. Self-correcting loops. Shape every layer — or take the defaults and ship today.