Architecture

How openrappter works under the hood

System Overview

Architecture at a glance

openrappter is a layered, local-first AI agent framework. Channels funnel messages into a WebSocket gateway, which dispatches them through a registry of agents. Those agents talk to pluggable LLM providers, a hybrid memory system, and a persistent storage layer — all on your own machine.

Channels
┌─────────┬──────────┬───────────┬──────────┬─────────────┐
│   CLI   │  Slack   │  Discord  │ Telegram │ ...15+ more │
└────┬────┴────┬─────┴─────┬─────┴────┬─────┴─────────────┘
     │         │           │          │
     ▼         ▼           ▼          ▼
┌──────────────────────────────────────────┐
│            WebSocket Gateway             │
│        (JSON-RPC 2.0 / Streaming)        │
└────────────────────┬─────────────────────┘
                     │
                     ▼
┌──────────────────────────────────────────┐
│              Agent Registry              │
│  ┌──────┐ ┌──────┐ ┌──────┐ ┌───────┐    │
│  │Shell │ │Memory│ │ Web  │ │Browser│    │
│  │Agent │ │Agent │ │Agent │ │ Agent │    │
│  └──┬───┘ └──┬───┘ └──┬───┘ └───┬───┘    │
│     └────────┴───┬────┴─────────┘        │
└──────────────────┬───────────────────────┘
                   │
     ┌─────────────┼─────────────┐
     ▼             ▼             ▼
┌─────────┐  ┌──────────┐  ┌──────────┐
│Providers│  │  Memory  │  │ Storage  │
│ Copilot │  │ Chunker  │  │ SQLite   │
│Anthropic│  │Embeddings│  │In-Memory │
│ OpenAI  │  │  Search  │  │          │
│ Gemini  │  │          │  │          │
│ Ollama  │  │          │  │          │
└─────────┘  └──────────┘  └──────────┘
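The gateway speaks JSON-RPC 2.0 over WebSocket. As a rough sketch, a request/response pair could look like the following; the method name `agent.execute` and the params shape are illustrative assumptions, not openrappter's published schema:

```python
import json

# A JSON-RPC 2.0 request as a channel might send it to the gateway.
# Method name and params are hypothetical, for illustration only.
request = {
    'jsonrpc': '2.0',
    'id': 1,
    'method': 'agent.execute',
    'params': {'agent': 'WeatherAgent', 'kwargs': {'city': 'Atlanta'}},
}
wire = json.dumps(request)  # what actually crosses the WebSocket

# A well-formed success response echoes the request id.
response = {'jsonrpc': '2.0', 'id': 1, 'result': {'temperature': 72}}
```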
Agent Contract

One file = one agent

Every agent is a single file. The metadata contract, documentation, and deterministic logic all live together using native language constructs — Python dicts or TypeScript objects. No YAML. No config files. No magic parsing. The code IS the contract.

Three rules apply to every agent: declare the metadata contract inline using native language constructs, implement perform() (and never override execute()), and return the result as a JSON string. The same contract in both runtimes:

export class WeatherAgent extends BasicAgent {
  constructor() {
    const metadata: AgentMetadata = {
      name: 'WeatherAgent',
      description: 'Fetches current weather',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' }
        },
        required: ['city']
      }
    };
    super('WeatherAgent', metadata);
  }

  async perform(kwargs: Record<string, unknown>) {
    const city = kwargs.city as string;
    // ... fetch weather ...
    return JSON.stringify({ temperature: 72, condition: 'sunny', city });
  }
}
import json

class WeatherAgent(BasicAgent):
    def __init__(self):
        self.name = 'WeatherAgent'
        self.metadata = {
            "name": self.name,
            "description": "Fetches current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"]
            }
        }
        super().__init__(name=self.name, metadata=self.metadata)

    def perform(self, **kwargs):
        city = kwargs.get('city')
        # ... fetch weather ...
        return json.dumps({"temperature": 72, "condition": "sunny", "city": city})
Data Sloshing

Implicit context enrichment

Before every agent action, openrappter automatically enriches the call with temporal awareness, memory echoes, query signals, and behavioral hints. Agents never run "blind" — they always have orientation context synthesized from all available signals.

The sloshing pipeline gathers signals in four categories (temporal, query, memory, behavioral) and synthesizes them into a fifth: an Orientation carrying a confidence score, recommended approach, and contextual hints, all computed before your perform() runs.

Temporal: time_of_day, day_of_week, is_weekend, hour, minute

Query: intent, entities, sentiment, complexity, keywords

Memory: recent_memories, relevant_context

Behavioral: session_duration, interaction_count, patterns

Orientation (synthesized): confidence, approach, hints
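To make the categories concrete, here is a hypothetical sketch of gathering temporal signals and folding populated categories into an Orientation. The field names come from the list above; the heuristics (counting populated categories, naming an approach) are invented for illustration:

```python
from datetime import datetime

def gather_temporal_signals(now=None):
    """Gather the temporal signal category (field names from the docs)."""
    now = now or datetime.now()
    hour = now.hour
    if hour < 12:
        time_of_day = 'morning'
    elif hour < 18:
        time_of_day = 'afternoon'
    else:
        time_of_day = 'evening'
    return {
        'time_of_day': time_of_day,
        'day_of_week': now.strftime('%A'),
        'is_weekend': now.weekday() >= 5,
        'hour': hour,
        'minute': now.minute,
    }

def synthesize_orientation(context):
    """Fold the gathered categories into an Orientation (invented heuristic)."""
    populated = [k for k in ('temporal', 'query', 'memory', 'behavioral')
                 if context.get(k)]
    return {
        'confidence': len(populated) / 4,
        'approach': 'informed' if 'memory' in populated else 'exploratory',
        'hints': [f'{k} signals available' for k in populated],
    }

context = {'temporal': gather_temporal_signals(),
           'query': {'intent': 'weather'}}
context['orientation'] = synthesize_orientation(context)
```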

Access signals via the dot-notation helper in both runtimes:

async perform(kwargs: Record<string, unknown>) {
  // Dot-notation helper — traverses nested context
  const timeOfDay  = this.getSignal('temporal.time_of_day');
  const confidence = this.getSignal('orientation.confidence');
  const approach   = this.getSignal('orientation.approach');

  // Full context object is also available directly
  const memories = this.context?.memory?.recent_memories ?? [];

  // Upstream slush from a chained agent
  const upstream = this.context?.upstream_slush;
  const prevTemp = upstream?.temp_f;
}
def perform(self, **kwargs):
    # Dot-notation helper — traverses nested context
    time_of_day  = self.get_signal('temporal.time_of_day')
    confidence   = self.get_signal('orientation.confidence')
    approach     = self.get_signal('orientation.approach')

    # Full context dict is also available directly
    memories = self.context.get('memory', {}).get('recent_memories', [])

    # Upstream slush from a chained agent
    upstream  = self.context.get('upstream_slush', {})
    prev_temp = upstream.get('temp_f')
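The dot-notation helper itself is simple to picture. A minimal standalone sketch of the traversal (not the framework's actual implementation):

```python
def get_signal(context, path, default=None):
    """Walk a nested context dict along a dotted path, e.g. 'temporal.hour'."""
    node = context
    for key in path.split('.'):
        if not isinstance(node, dict) or key not in node:
            return default  # missing links resolve to the default, never raise
        node = node[key]
    return node

ctx = {'temporal': {'time_of_day': 'morning'},
       'orientation': {'confidence': 0.8}}
get_signal(ctx, 'temporal.time_of_day')               # 'morning'
get_signal(ctx, 'orientation.confidence')             # 0.8
get_signal(ctx, 'memory.recent_memories', default=[]) # []
```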
Data Slush Pipeline

Agent-to-agent signal chaining

After perform() returns, if the result JSON contains a data_slush key, the framework automatically extracts it to lastDataSlush (TypeScript) / last_data_slush (Python). Pass that to the next agent as upstream_slush and it merges into that agent's context — no LLM needed between calls.

Pipeline flow: Agent A perform() → data_slush extracted → passed as upstream_slush → Agent B execute() → merged into Agent B's context.

Key rules: data_slush should be curated (not a raw data dump), keys should be descriptive, and source_agent is a recommended convention for traceability.

// Agent A — curate signals in data_slush
async perform(kwargs: Record<string, unknown>) {
  const weather = await fetchWeather(kwargs.city as string);
  return JSON.stringify({
    status: 'success',
    result: weather,
    data_slush: {           // extracted to lastDataSlush automatically
      source_agent: this.name,
      temp_f: weather.temperature,
      condition: weather.condition,
    }
  });
}

// Orchestrator: chain Agent A → Agent B
const resultA = await agentA.execute({ city: 'Atlanta' });
const resultB = await agentB.execute({
  query: 'Should I bring a jacket?',
  upstream_slush: agentA.lastDataSlush,   // agent B sees temp_f + condition
});
# Agent A — curate signals in data_slush
def perform(self, **kwargs):
    weather = fetch_weather(kwargs.get('city'))
    return json.dumps({
        "status": "success",
        "result": weather,
        "data_slush": {          # extracted to last_data_slush automatically
            "source_agent": self.name,
            "temp_f": weather["temperature"],
            "condition": weather["condition"],
        }
    })

# Orchestrator: chain Agent A → Agent B
result_a = agent_a.execute(city='Atlanta')
result_b = agent_b.execute(
    query='Should I bring a jacket?',
    upstream_slush=agent_a.last_data_slush,  # agent B sees temp_f + condition
)
Execution Flow

What happens when you call execute()

Every agent invocation runs the same pipeline: gather implicit context, merge upstream signals, run your deterministic code, then extract output signals for downstream agents.

execute(kwargs)
 │
 ├──► slosh(query)            // gather implicit context
 │     ├── temporal signals
 │     ├── query analysis
 │     ├── memory echoes
 │     └── behavioral hints
 │           │
 │           ▼
 │     Orientation
 │     (confidence, approach, hints)
 │
 ├──► merge upstream_slush    // from previous agent
 │
 ├──► perform(kwargs)         // YOUR code runs here
 │
 └──► extract data_slush      // for downstream agents

The execute() method is the public entry point and should never be overridden. Subclasses implement only perform(). This separation guarantees that sloshing, upstream merging, and slush extraction always run regardless of which agent is invoked.
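This guarantee is the classic template-method pattern. A hypothetical Python sketch of the split, with slosh() stubbed out; class and field names are illustrative, not openrappter's internals:

```python
import json

class BasicAgentSketch:
    """Sketch of the execute()/perform() split: execute() owns the pipeline."""
    def __init__(self):
        self.context = {}
        self.last_data_slush = None

    def execute(self, **kwargs):
        # 1. Gather implicit context (stubbed here).
        self.context = self.slosh(kwargs.get('query', ''))
        # 2. Merge signals chained from a previous agent.
        upstream = kwargs.pop('upstream_slush', None)
        if upstream:
            self.context['upstream_slush'] = upstream
        # 3. Run the subclass's deterministic logic.
        raw = self.perform(**kwargs)
        # 4. Extract curated signals for downstream agents.
        result = json.loads(raw)
        self.last_data_slush = result.get('data_slush')
        return result

    def slosh(self, query):
        return {'query': {'keywords': query.split()}}  # placeholder sloshing

    def perform(self, **kwargs):
        raise NotImplementedError  # subclasses implement only this

class EchoAgent(BasicAgentSketch):
    def perform(self, **kwargs):
        return json.dumps({'status': 'success',
                           'data_slush': {'echo': kwargs.get('query')}})

agent = EchoAgent()
out = agent.execute(query='hello world')
```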

Multi-Agent Orchestration

Broadcast, route, and nest agents

TypeScript includes three built-in orchestration primitives for coordinating multiple agents: BroadcastManager for fan-out, AgentRouter for rule-based dispatch, and SubAgentManager for nested invocation with depth and loop guards.

BroadcastManager

Send a message to multiple agents simultaneously. Three modes control how results are collected:

BroadcastManager (all mode)

        broadcast(message)
               │
     ┌─────────┼─────────┐
     ▼         ▼         ▼
 ┌───────┐ ┌───────┐ ┌───────┐
 │Agent A│ │Agent B│ │Agent C│
 └───┬───┘ └───┬───┘ └───┬───┘
     └─────────┼─────────┘
               │
        results[A, B, C]
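In Python terms, the "all" collection mode of a fan-out could be sketched with asyncio.gather; the function shape below is an assumption for illustration, not the TypeScript BroadcastManager API:

```python
import asyncio

async def broadcast(agents, message, mode='all'):
    """Fan a message out to several agents; sketch of the 'all' mode only."""
    tasks = [agent(message) for agent in agents]
    if mode == 'all':
        return await asyncio.gather(*tasks)  # wait for every agent's result
    raise ValueError(f'unsupported mode: {mode}')

# Stand-in agents: any awaitable taking a message works here.
async def agent_a(msg): return f'A:{msg}'
async def agent_b(msg): return f'B:{msg}'
async def agent_c(msg): return f'C:{msg}'

results = asyncio.run(broadcast([agent_a, agent_b, agent_c], 'ping'))
# results == ['A:ping', 'B:ping', 'C:ping']
```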

AgentRouter

Rule-based routing dispatches messages to specific agents based on match criteria. Rules are evaluated in priority order, and session key isolation prevents context bleed between different callers.

AgentRouter

incoming message
       │
       ▼
┌───────────────┐
│ Rule matching │
│ (by priority) │
└───────┬───────┘
        │
        ├─ sender == "alice"     ► CustomerAgent
        ├─ channel == "billing"  ► BillingAgent
        ├─ /^deploy/i            ► DeployAgent
        └─ (default)             ► FallbackAgent
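A rule table evaluated in ascending priority order might look like this sketch; the rule shape and agent names are illustrative, mirroring the diagram above:

```python
import re

# Rules evaluated in priority order; first match wins (shapes are illustrative).
rules = [
    {'priority': 0, 'match': lambda m: m.get('sender') == 'alice',
     'agent': 'CustomerAgent'},
    {'priority': 1, 'match': lambda m: m.get('channel') == 'billing',
     'agent': 'BillingAgent'},
    {'priority': 2, 'match': lambda m: re.match(r'^deploy', m.get('text', ''), re.I),
     'agent': 'DeployAgent'},
]

def route(message, default='FallbackAgent'):
    """Dispatch to the first rule that matches; fall back when none do."""
    for rule in sorted(rules, key=lambda r: r['priority']):
        if rule['match'](message):
            return rule['agent']
    return default

route({'sender': 'alice', 'text': 'hi'})  # 'CustomerAgent'
route({'text': 'deploy to prod'})         # 'DeployAgent'
route({'text': 'hello'})                  # 'FallbackAgent'
```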

SubAgentManager

Enables agents to invoke other agents recursively. Two guard mechanisms prevent runaway recursion:

SubAgentManager

AgentA.perform()
  │
  └─► subAgent.invoke(AgentB, depth=1)
        │
        └─► subAgent.invoke(AgentC, depth=2)
              │
              └─► depth check / loop check
                    │
                    AgentC.execute()
                    │
                    result          ↑ result propagates up
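The two guards could be sketched as a depth limit plus a call-stack loop check; MAX_DEPTH, the dict-based agent shape, and the call-stack convention are assumptions for illustration:

```python
MAX_DEPTH = 3

def invoke(agent, kwargs, depth=0, call_stack=None):
    """Nested invocation with both guards: a depth limit and a loop check."""
    call_stack = call_stack or []
    if depth > MAX_DEPTH:
        raise RecursionError(f'max sub-agent depth {MAX_DEPTH} exceeded')
    if agent['name'] in call_stack:
        raise RuntimeError(f"loop detected: {agent['name']} already on the call stack")
    frame = call_stack + [agent['name']]  # record this hop for descendants
    return agent['perform'](kwargs, depth, frame)

# A → B → C chain; C's result propagates back up unchanged.
agent_c = {'name': 'AgentC', 'perform': lambda kw, d, s: 'C-result'}
agent_b = {'name': 'AgentB', 'perform': lambda kw, d, s: invoke(agent_c, kw, d + 1, s)}
agent_a = {'name': 'AgentA', 'perform': lambda kw, d, s: invoke(agent_b, kw, d + 1, s)}

invoke(agent_a, {})  # 'C-result'
```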
Directory Structure

Monorepo layout

TypeScript and Python share the same repo. The agent contract is identical across both runtimes. Drop a new agent file in the appropriate agents/ directory and the registry discovers it automatically.

openrappter/
├── typescript/
│   ├── src/
│   │   ├── agents/         # All agent implementations
│   │   ├── channels/       # 15+ messaging channels
│   │   ├── providers/      # LLM provider integrations
│   │   ├── gateway/        # WebSocket server
│   │   ├── memory/         # Chunking, embeddings, search
│   │   ├── storage/        # SQLite + in-memory adapters
│   │   ├── config/         # YAML/JSON config + Zod
│   │   ├── clawhub.ts      # ClawHub client
│   │   └── skills/         # Skill registry
│   ├── dist/               # Compiled output
│   └── package.json
├── python/
│   └── openrappter/
│       ├── agents/         # Python agent mirror
│       └── clawhub.py      # ClawHub client
├── docs/                   # GitHub Pages site
└── CLAUDE.md               # Agent architecture guide
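Auto-discovery amounts to scanning an agents/ directory and registering what it finds. A hedged Python sketch, assuming agent classes are identifiable by an `Agent` name suffix (the real registry's convention may differ):

```python
import importlib.util
from pathlib import Path

def discover_agents(agents_dir):
    """Load every .py file in agents_dir and register classes named *Agent.

    The name-suffix convention is an assumption for this sketch; the actual
    registry may key on a base class or metadata instead.
    """
    registry = {}
    for path in sorted(Path(agents_dir).glob('*.py')):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # run the agent file
        for obj in vars(module).values():
            if isinstance(obj, type) and obj.__name__.endswith('Agent'):
                registry[obj.__name__] = obj
    return registry
```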