How openrappter works under the hood
openrappter is a layered, local-first AI agent framework. Channels funnel messages into a WebSocket gateway, which dispatches them through a registry of agents. Those agents talk to pluggable LLM providers, a hybrid memory system, and a persistent storage layer — all on your own machine.
Every agent is a single file. The metadata contract, documentation, and deterministic logic all live together using native language constructs — Python dicts or TypeScript objects. No YAML. No config files. No magic parsing. The code IS the contract.
Three rules apply to every agent:

1. The metadata contract lives in the constructor, right next to the code it describes.
2. perform() is the abstract method subclasses implement. It receives kwargs, returns a JSON string, and runs deterministically.
3. The returned JSON may include a data_slush key to pass curated signals to downstream agents in a chain.

```typescript
export class WeatherAgent extends BasicAgent {
  constructor() {
    const metadata: AgentMetadata = {
      name: 'WeatherAgent',
      description: 'Fetches current weather',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' }
        },
        required: ['city']
      }
    };
    super('WeatherAgent', metadata);
  }

  async perform(kwargs: Record<string, unknown>) {
    const city = kwargs.city as string;
    // ... fetch weather ...
    return JSON.stringify({ temperature: 72, condition: 'sunny', city });
  }
}
```
```python
import json

class WeatherAgent(BasicAgent):
    def __init__(self):
        self.name = 'WeatherAgent'
        self.metadata = {
            "name": self.name,
            "description": "Fetches current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
        super().__init__(name=self.name, metadata=self.metadata)

    def perform(self, **kwargs):
        city = kwargs.get('city')
        # ... fetch weather ...
        return json.dumps({"temperature": 72, "condition": "sunny", "city": city})
```
Before every agent action, openrappter automatically enriches the call with temporal awareness, memory echoes, query signals, and behavioral hints. Agents never run "blind" — they always have orientation context synthesized from all available signals.
The sloshing pipeline gathers signals across five categories and synthesizes them into an Orientation — a confidence score, recommended approach, and contextual hints — before your perform() runs.
Access signals via the dot-notation helper in both runtimes:
```typescript
async perform(kwargs: Record<string, unknown>) {
  // Dot-notation helper — traverses nested context
  const timeOfDay = this.getSignal('temporal.time_of_day');
  const confidence = this.getSignal('orientation.confidence');
  const approach = this.getSignal('orientation.approach');

  // Full context object is also available directly
  const memories = this.context?.memory?.recent_memories ?? [];

  // Upstream slush from a chained agent
  const upstream = this.context?.upstream_slush;
  const prevTemp = upstream?.temp_f;
}
```
```python
def perform(self, **kwargs):
    # Dot-notation helper — traverses nested context
    time_of_day = self.get_signal('temporal.time_of_day')
    confidence = self.get_signal('orientation.confidence')
    approach = self.get_signal('orientation.approach')

    # Full context dict is also available directly
    memories = self.context.get('memory', {}).get('recent_memories', [])

    # Upstream slush from a chained agent
    upstream = self.context.get('upstream_slush', {})
    prev_temp = upstream.get('temp_f')
```
After perform() returns, if the result JSON contains a data_slush key, the framework automatically extracts it to lastDataSlush (TypeScript) / last_data_slush (Python). Pass that to the next agent as upstream_slush and it merges into that agent's context — no LLM needed between calls.
Pipeline flow: Agent A perform() → data_slush extracted → passed as upstream_slush → Agent B execute() → merged into Agent B's context.
Key rules: data_slush should be curated (not a raw data dump), keys should be descriptive, and source_agent is a recommended convention for traceability.
```typescript
// Agent A — curate signals in data_slush
async perform(kwargs: Record<string, unknown>) {
  const weather = await fetchWeather(kwargs.city as string);
  return JSON.stringify({
    status: 'success',
    result: weather,
    data_slush: {  // extracted to lastDataSlush automatically
      source_agent: this.name,
      temp_f: weather.temperature,
      condition: weather.condition,
    }
  });
}

// Orchestrator: chain Agent A → Agent B
const resultA = await agentA.execute({ city: 'Atlanta' });
const resultB = await agentB.execute({
  query: 'Should I bring a jacket?',
  upstream_slush: agentA.lastDataSlush, // agent B sees temp_f + condition
});
```
```python
# Agent A — curate signals in data_slush
def perform(self, **kwargs):
    weather = fetch_weather(kwargs.get('city'))
    return json.dumps({
        "status": "success",
        "result": weather,
        "data_slush": {  # extracted to last_data_slush automatically
            "source_agent": self.name,
            "temp_f": weather["temperature"],
            "condition": weather["condition"],
        }
    })

# Orchestrator: chain Agent A → Agent B
result_a = agent_a.execute(city='Atlanta')
result_b = agent_b.execute(
    query='Should I bring a jacket?',
    upstream_slush=agent_a.last_data_slush,  # agent B sees temp_f + condition
)
```
Every agent invocation runs the same pipeline: gather implicit context, merge upstream signals, run your deterministic code, then extract output signals for downstream agents.
The execute() method is the public entry point and should never be overridden. Subclasses implement only perform(). This separation guarantees that sloshing, upstream merging, and slush extraction always run regardless of which agent is invoked.
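This execute()/perform() split is the classic template-method pattern. A minimal sketch of the lifecycle, assuming simplified names (SketchAgent, a bare temporal context); the real BasicAgent synthesizes far richer context:

```typescript
type Slush = Record<string, unknown>;

// Simplified sketch of the template-method split, not the real BasicAgent.
abstract class SketchAgent {
  lastDataSlush: Slush | null = null;
  context: Record<string, unknown> = {};

  // Public entry point — subclasses never override this.
  async execute(kwargs: Record<string, unknown>): Promise<string> {
    // 1. Gather implicit context (stand-in for the sloshing pipeline).
    this.context = { temporal: { time_of_day: new Date().getHours() } };
    // 2. Merge upstream signals from a chained agent, if provided.
    if (kwargs.upstream_slush) {
      this.context.upstream_slush = kwargs.upstream_slush;
    }
    // 3. Run the subclass's deterministic logic.
    const result = await this.perform(kwargs);
    // 4. Extract data_slush for downstream agents.
    const parsed = JSON.parse(result);
    this.lastDataSlush = (parsed.data_slush as Slush) ?? null;
    return result;
  }

  protected abstract perform(kwargs: Record<string, unknown>): Promise<string>;
}
```

Because the lifecycle lives in the base class, steps 1, 2, and 4 run for every agent no matter how perform() is written.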
TypeScript includes three built-in orchestration primitives for coordinating multiple agents: BroadcastManager for fan-out, AgentRouter for rule-based dispatch, and SubAgentManager for nested invocation with depth and loop guards.
Send a message to multiple agents simultaneously. Three modes control how results are collected.
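A minimal sketch of the fan-out idea, not the actual BroadcastManager API; it collects every result (one plausible collection behavior), with per-agent failures captured rather than aborting the batch:

```typescript
// Hypothetical shape for anything broadcastable; illustrative only.
interface Respondent {
  name: string;
  execute(kwargs: Record<string, unknown>): Promise<string>;
}

// Fan a single message out to every agent and collect all outcomes.
async function broadcast(agents: Respondent[], kwargs: Record<string, unknown>) {
  const settled = await Promise.allSettled(agents.map((a) => a.execute(kwargs)));
  return settled.map((r, i) => ({
    agent: agents[i].name,
    ok: r.status === 'fulfilled',
    result: r.status === 'fulfilled' ? r.value : String(r.reason),
  }));
}
```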
Rule-based routing dispatches messages to specific agents based on match criteria. Rules are evaluated in priority order, and session key isolation prevents context bleed between different callers.
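The priority-ordered matching can be sketched as follows; the Rule shape and route helper are illustrative, and session-key isolation is omitted:

```typescript
// Hypothetical rule shape; lower priority number = evaluated first (assumption).
interface Rule {
  priority: number;
  match: (message: string) => boolean; // match criteria
  agent: string;                       // target agent name
}

function route(rules: Rule[], message: string): string | null {
  // Evaluate rules in priority order; first match wins.
  const ordered = [...rules].sort((a, b) => a.priority - b.priority);
  for (const rule of ordered) {
    if (rule.match(message)) return rule.agent;
  }
  return null; // no rule matched
}
```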
Enables agents to invoke other agents recursively. Two guard mechanisms prevent runaway recursion: a maximum nesting depth and loop detection on the call chain.
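Both guards can be sketched together, assuming a call-chain representation; the class name, method, and depth limit are illustrative, not the SubAgentManager API:

```typescript
// Hypothetical sketch of depth and loop guards for nested invocation.
class GuardedInvoker {
  constructor(private maxDepth = 3) {}

  // Returns the extended call chain; throws if either guard trips.
  invoke(agentName: string, chain: string[] = []): string[] {
    if (chain.length >= this.maxDepth) {
      throw new Error(`max sub-agent depth ${this.maxDepth} exceeded`);
    }
    if (chain.includes(agentName)) {
      throw new Error(`loop detected: ${[...chain, agentName].join(' -> ')}`);
    }
    return [...chain, agentName]; // pass this chain to nested invocations
  }
}
```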
TypeScript and Python share the same repo. The agent contract is identical across both runtimes. Drop a new agent file in the appropriate agents/ directory and the registry discovers it automatically.
```
openrappter/
├── typescript/
│   ├── src/
│   │   ├── agents/      # All agent implementations
│   │   ├── channels/    # 15+ messaging channels
│   │   ├── providers/   # LLM provider integrations
│   │   ├── gateway/     # WebSocket server
│   │   ├── memory/      # Chunking, embeddings, search
│   │   ├── storage/     # SQLite + in-memory adapters
│   │   ├── config/      # YAML/JSON config + Zod
│   │   ├── clawhub.ts   # ClawHub client
│   │   └── skills/      # Skill registry
│   ├── dist/            # Compiled output
│   └── package.json
├── python/
│   └── openrappter/
│       ├── agents/      # Python agent mirror
│       └── clawhub.py   # ClawHub client
├── docs/                # GitHub Pages site
└── CLAUDE.md            # Agent architecture guide
```
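The drop-in discovery described above can be sketched as a directory scan; isAgentModule and discoverAgentFiles are hypothetical helpers, not the registry's real API:

```typescript
import { readdirSync } from 'node:fs';

// Decide which files in agents/ count as agent modules (assumed convention:
// .ts sources, excluding declaration files).
function isAgentModule(filename: string): boolean {
  return filename.endsWith('.ts') && !filename.endsWith('.d.ts');
}

// Scan an agents/ directory and return paths to load into the registry.
function discoverAgentFiles(agentsDir: string): string[] {
  return readdirSync(agentsDir).filter(isAgentModule).map((f) => `${agentsDir}/${f}`);
}
```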