Installation
OpenRappter is a local-first AI agent framework with parallel implementations in TypeScript and Python. Everything runs on your machine — no cloud account required to get started.
Prerequisites
- Node.js 20+ — required for the TypeScript runtime and CLI
- Python 3.10+ — optional, required only if using the Python runtime
- GitHub Copilot — the default zero-config LLM provider (uses your existing subscription)
- Git — required for the clone-based install method
Method 1 — One-line Install (recommended)
The fastest way to get started. The install script detects your platform, downloads the latest release, and adds openrappter to your PATH.
```bash
curl -fsSL https://kody-w.github.io/openrappter/install.sh | bash
```
After the script completes, open a new terminal and verify the install:
```bash
openrappter --version
```
Method 2 — Git Clone
Clone the repository directly for full access to source code, examples, and the ability to contribute.
```bash
git clone https://github.com/kody-w/openrappter.git
cd openrappter/typescript
npm install
npm run build
npm link   # adds openrappter to PATH globally
```
For the Python runtime:
```bash
cd openrappter/python
pip install -e .
```
Method 3 — Teach-Your-Agent Install
If you are already running an AI assistant, you can paste the skills.md link into your conversation and ask it to install OpenRappter for you. The agent will follow the instructions in that file, run the install script, and confirm the result — no terminal required on your part.
Verify
```bash
openrappter --version
# openrappter v1.9.1
openrappter status
# Runtime: TypeScript/Node 20.x
# Provider: copilot (authenticated)
# Memory: ~/.openrappter/memory.json
# Skills: 0 installed
```
If the Copilot provider is not yet authenticated, openrappter runs gh auth login inline without requiring you to run any commands manually.
Configuration
OpenRappter reads its configuration from ~/.openrappter/config.yaml. The file is created with sensible defaults on first run. All values can be overridden with environment variables.
Example config.yaml
```yaml
# ~/.openrappter/config.yaml
provider:
  default: copilot          # copilot | anthropic | openai | gemini | ollama
  copilot:
    model: gpt-4o           # model passed to the Copilot API
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-opus-4-6
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
  ollama:
    base_url: http://localhost:11434
    model: llama3.2

memory:
  backend: sqlite           # sqlite | json
  path: ~/.openrappter/memory.db
  embedding_model: all-MiniLM-L6-v2
  chunk_size: 512
  chunk_overlap: 64

gateway:
  enabled: true
  port: 8765
  host: 127.0.0.1
  rate_limit: 100           # requests per minute per connection

shell:
  allowlist: []             # if non-empty, only these commands are permitted
  blocklist:
    - "rm -rf /"
    - "sudo rm"
  require_approval: false   # set true to approve every shell command

skills:
  auto_load: true
  directory: ~/.openrappter/skills

logging:
  level: info               # debug | info | warn | error
  file: ~/.openrappter/logs/openrappter.log
```
Environment Variable Expansion
Any value in the YAML can reference environment variables using ${VAR_NAME} syntax. The config loader expands these at parse time. This is the recommended way to handle API keys — keep them out of the config file and source them from your shell profile or a .env file.
Zod Validation
The configuration schema is defined and validated with Zod v4 at typescript/src/config/schema.ts. On startup, if any required field is missing or has the wrong type, openrappter prints a structured error with the exact path and expected type — not a raw crash.
Live Reload
The config system uses a file watcher. When config.yaml is saved, the running process picks up the new values without a restart. Provider keys, log levels, rate limits, and gateway settings all hot-reload. Changes to the memory backend require a restart.
Environment Variables Reference
| Variable | Purpose | Default |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic Claude API key | — |
| OPENAI_API_KEY | OpenAI API key | — |
| GEMINI_API_KEY | Google Gemini API key | — |
| OPENRAPPTER_CONFIG | Override config file path | ~/.openrappter/config.yaml |
| OPENRAPPTER_LOG_LEVEL | Override log level | info |
| OPENRAPPTER_PORT | Override gateway port | 8765 |
| OPENRAPPTER_PROVIDER | Override default provider | copilot |
Agents Reference
An agent is a single-responsibility unit of AI-assisted computation. Every agent in OpenRappter is a single file — the metadata contract, documentation, and implementation all live together. There is no YAML, no config file, no magic parsing. The code is the contract.
Single-File Agent Pattern
All agents extend BasicAgent and implement one method: perform(). The constructor declares a metadata object that describes the agent's name, purpose, and accepted parameters as a JSON Schema fragment. This metadata is used by the orchestration layer to route requests and validate inputs.
```typescript
import { BasicAgent, AgentMetadata } from 'openrappter';

export class MyAgent extends BasicAgent {
  constructor() {
    const metadata: AgentMetadata = {
      name: 'MyAgent',
      description: 'Does something useful',
      parameters: {
        type: 'object',
        properties: { query: { type: 'string' } },
        required: ['query']
      }
    };
    super('MyAgent', metadata);
  }

  async perform(kwargs: Record<string, unknown>) {
    const query = kwargs.query as string;
    // access sloshed context signals
    const timeOfDay = this.getSignal('temporal.time_of_day');
    return { result: `Hello from MyAgent at ${timeOfDay}` };
  }
}
```
```python
from openrappter.agents.basic_agent import BasicAgent

class MyAgent(BasicAgent):
    def __init__(self):
        self.name = 'MyAgent'
        self.metadata = {
            "name": self.name,
            "description": "Does something useful",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"]
            }
        }
        super().__init__(name=self.name, metadata=self.metadata)

    def perform(self, **kwargs):
        query = kwargs.get("query", "")
        # access sloshed context signals
        time_of_day = self.get_signal("temporal.time_of_day")
        return {"result": f"Hello from MyAgent at {time_of_day}"}
```
Execution Flow
When you call execute(kwargs), the framework runs this pipeline:
- `execute(kwargs)` is the public entry point
- `slosh(query)` gathers implicit context (temporal signals, query signals, memory echoes, behavioral hints, priors) and synthesizes an `Orientation` object with a confidence score, suggested approach, and contextual hints
- Any `upstream_slush` passed from a previous agent is merged into `this.context`
- `perform(kwargs)` is called — this is the method you implement
- If the result JSON contains a `data_slush` key, it is extracted to `lastDataSlush` (TypeScript) / `last_data_slush` (Python) for downstream agent chaining
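The pipeline above can be sketched in simplified form. This is an illustrative model, not the actual BasicAgent source — the `SketchAgent` and `Orientation` names here are hypothetical stand-ins, and the real slosh logic is far richer than this placeholder:

```typescript
// Hypothetical sketch of the execute() pipeline described above.
type Orientation = { confidence: number; hints: string[] };

abstract class SketchAgent {
  protected context: Record<string, unknown> = {};
  lastDataSlush: unknown = null;

  // 1. public entry point
  async execute(
    kwargs: Record<string, unknown>,
    upstreamSlush?: Record<string, unknown>
  ): Promise<unknown> {
    // 2. gather implicit context from the query
    this.context.orientation = this.slosh(String(kwargs.query ?? ''));
    // 3. merge context sloshed forward by a previous agent
    if (upstreamSlush) Object.assign(this.context, upstreamSlush);
    // 4. run the subclass implementation
    const result = await this.perform(kwargs);
    // 5. extract data_slush for downstream chaining
    if (result && typeof result === 'object' && 'data_slush' in result) {
      this.lastDataSlush = (result as { data_slush: unknown }).data_slush;
    }
    return result;
  }

  protected slosh(query: string): Orientation {
    // placeholder heuristic standing in for the real signal gathering
    return { confidence: query.length > 0 ? 0.8 : 0.1, hints: [] };
  }

  abstract perform(kwargs: Record<string, unknown>): Promise<unknown>;
}
```

The key design point is that subclasses only ever implement `perform()`; the surrounding steps are applied uniformly by the base class.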
Built-in Agents
BasicAgent
Abstract base class. All agents extend this. Provides data sloshing, signal access, upstream context merging, and the execution pipeline.
| Method | Description |
|---|---|
| execute(kwargs) | Public entry point. Runs slosh, merges upstream, calls perform. |
| perform(kwargs) | Abstract. Implement this in your subclass. |
| slosh(query) | Gathers implicit context. Called automatically by execute(). |
| getSignal(key) | Dot-notation access to sloshed context, e.g. getSignal('temporal.time_of_day') |
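Dot-notation lookup of the kind `getSignal` provides can be sketched as a reduce over path segments. This is an illustrative standalone function, not the framework's actual implementation:

```typescript
// Hypothetical sketch of dot-notation signal lookup, as described for getSignal().
function getSignal(context: Record<string, unknown>, key: string): unknown {
  return key.split('.').reduce<unknown>(
    (node, part) =>
      node && typeof node === 'object' ? (node as Record<string, unknown>)[part] : undefined,
    context
  );
}

const sloshed = { temporal: { time_of_day: 'morning' }, query: { length: 12 } };
getSignal(sloshed, 'temporal.time_of_day'); // 'morning'
```

Missing paths resolve to `undefined` rather than throwing, so agents can probe for optional signals safely.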
ShellAgent
Executes shell commands and performs file system operations. Supports natural language query parsing so agents upstream can pass a free-text request and ShellAgent will derive the right action.
| Parameter | Type | Description |
|---|---|---|
| action | string | One of: bash, read, write, list |
| command | string | Shell command to run (action: bash) |
| path | string | File or directory path (actions: read, write, list) |
| content | string | Content to write (action: write) |
| query | string | Natural language request — agent will determine action |
```typescript
// Execute a shell command
const result = await shellAgent.execute({ action: 'bash', command: 'ls -la ~/projects' });

// Read a file
const file = await shellAgent.execute({ action: 'read', path: '/etc/hosts' });

// Write a file
await shellAgent.execute({ action: 'write', path: './notes.txt', content: 'hello world' });
```
MemoryAgent
Stores and retrieves information from the persistent memory store. In TypeScript the backend is SQLite with hybrid search; in Python it is a JSON file at ~/.openrappter/memory.json.
| Parameter | Type | Description |
|---|---|---|
| action | string | One of: store, recall, forget |
| key | string | Named identifier for the memory entry |
| content | string | Text to store (action: store) |
| query | string | Search query for semantic recall (action: recall) |
Assistant
A configurable AI assistant with a custom system prompt. Wraps the active LLM provider, forwards sloshed context as additional system context, and streams responses through the gateway when available. Accepts system_prompt, message, and optional conversation_id parameters.
BrowserAgent
Headless browser automation powered by Playwright. Supports navigation, clicking, form filling, screenshot capture, and structured data extraction from web pages. Requires playwright to be installed separately (npm install playwright then npx playwright install chromium).
WebAgent
HTTP request agent for fetching web content. Supports GET and POST with configurable headers, automatic HTML-to-text extraction, JSON response parsing, and rate-limit-aware retry logic. Does not require a browser runtime — uses native fetch under the hood.
MessageAgent
Multi-channel message dispatch. Sends a message to one or more configured channels (Slack, Discord, Telegram, etc.) by channel name. Handles serialization, authentication, and delivery confirmation. Parameters: channel, message, optional thread_id.
TTSAgent
Text-to-speech synthesis via edge-tts. Converts text to an MP3 audio file or plays it directly through the system audio output. Supports all Microsoft Edge TTS voices. Parameters: text, voice (e.g. en-US-JennyNeural), optional output_path.
SessionsAgent
Session state management for multi-turn conversations. Stores and retrieves keyed session data so agents can maintain context across separate execute() calls without polluting the shared memory store. Parameters: action (get, set, clear), session_id, key, value.
CronAgent
Scheduled task execution. Accepts a cron expression and an agent invocation payload. Registers the task with the internal scheduler and fires it at the specified interval. Tasks persist across restarts when the memory backend is SQLite. Parameters: schedule (cron string), agent, kwargs, optional name.
ImageAgent
Image processing powered by Sharp. Supports resize, crop, format conversion (JPEG, PNG, WebP, AVIF), metadata extraction, thumbnail generation, and basic filter application. Parameters: action, input_path, output_path, and action-specific options like width, height, format.
HackerNewsAgent
Hacker News feed aggregation and summarization. Fetches top stories, new stories, or Ask HN / Show HN posts via the official HN API. Can return raw story data or request an LLM-powered summary of the top items for a digest workflow. Parameters: feed (top, new, ask, show), limit, optional summarize boolean.
OuroborosAgent
Self-evolution capability scoring with RPG lineage tracking. Evaluates an agent's output quality across multiple capability dimensions — word statistics, sentiment detection, Caesar cipher encoding, pattern recognition, and reflection accuracy — and produces a quality score from 0–100. Maintains a persistent lineage file tracking scores over time with streak multipliers, evolution tiers, and XP gain. Used for benchmarking model improvements and driving the self-improving agent feedback loop.
```typescript
// Evaluate capability quality
const result = await ouroborosAgent.execute({
  input: 'The quick brown fox jumps over the lazy dog',
  capabilities: ['word_stats', 'sentiment', 'patterns']
});
// { score: 87, tier: 'Adept', xp_gained: 43, streak_multiplier: 1.5 }
```
Multi-Agent Patterns
OpenRappter provides three composable primitives for coordinating multiple agents: BroadcastManager for fan-out execution, AgentRouter for rule-based message routing, and SubAgentManager for nested hierarchical invocation. All three are available in the TypeScript runtime at typescript/src/agents/.
BroadcastManager
Send the same request to multiple agents simultaneously. Three dispatch modes control how results are collected:
- all — dispatch to all agents and wait for every response before returning. Results are an array in dispatch order.
- race — dispatch to all agents and return as soon as the first one succeeds. Useful for redundancy or speed-sensitive paths.
- fallback — try agents in order, moving to the next only if the current one fails. Useful for graceful degradation across providers.
```typescript
import { BroadcastManager } from 'openrappter';

const broadcast = new BroadcastManager([agentA, agentB, agentC]);

// Wait for all three
const allResults = await broadcast.send({ query: 'status check' }, { mode: 'all' });

// Return the fastest
const firstResult = await broadcast.send({ query: 'translate hello' }, { mode: 'race' });

// Try agentA, fall back to agentB, then agentC
const safeResult = await broadcast.send({ query: 'generate report' }, { mode: 'fallback' });
```
AgentRouter
Rule-based message routing. Each rule specifies a match condition (sender, channel, group, or regex pattern) and a target agent. Rules are evaluated in priority order. Session key isolation ensures that concurrent conversations on the same router do not cross-contaminate context.
```typescript
import { AgentRouter } from 'openrappter';

const router = new AgentRouter([
  { match: { channel: 'slack', pattern: /^!deploy/ }, agent: deployAgent, priority: 10 },
  { match: { sender: 'cron' }, agent: schedulerAgent, priority: 5 },
  { match: { group: 'default' }, agent: assistantAgent, priority: 0 }
]);

// Routes to deployAgent (highest priority match)
await router.route({
  sender: 'alice',
  channel: 'slack',
  message: '!deploy production',
  session_key: 'alice:slack'
});
```
SubAgentManager
Nested agent invocation with depth limits and loop detection. Allows an agent to spawn child agents as part of its own execution. The manager tracks the call stack and refuses to execute if the same agent appears more than once in the current chain, preventing infinite recursion. The default max depth is 5.
```typescript
import { SubAgentManager } from 'openrappter';

class PlannerAgent extends BasicAgent {
  private sub: SubAgentManager;

  constructor() {
    super('PlannerAgent', metadata);
    this.sub = new SubAgentManager({ maxDepth: 3 });
  }

  async perform(kwargs) {
    // Invoke a child agent — depth and loop tracking automatic
    const research = await this.sub.invoke(webAgent, {
      query: kwargs.topic,
      upstream_slush: this.lastDataSlush
    });
    return { plan: research.summary };
  }
}
```
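The depth-limit and loop-detection rules can be sketched as a call chain that rejects re-entry. This `CallChain` class is illustrative only and does not reflect the actual SubAgentManager internals:

```typescript
// Hypothetical sketch of depth limiting and loop detection for nested agents.
class CallChain {
  private stack: string[] = [];
  constructor(private maxDepth = 5) {}

  // Called before invoking a child agent; throws if the invocation is unsafe.
  enter(agentName: string): void {
    if (this.stack.length >= this.maxDepth) {
      throw new Error(`max sub-agent depth ${this.maxDepth} exceeded`);
    }
    if (this.stack.includes(agentName)) {
      // The same agent appearing twice in the chain would recurse forever
      throw new Error(`loop detected: ${agentName} already in chain ${this.stack.join(' -> ')}`);
    }
    this.stack.push(agentName);
  }

  // Called after the child agent returns.
  exit(): void {
    this.stack.pop();
  }
}
```

Because the check happens on entry, a planner can still invoke the same agent twice sequentially; only re-entry while that agent is still on the stack is refused.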
LLM Providers
OpenRappter uses a provider registry to abstract LLM backends. Switching providers is a single config change — agent code never imports provider-specific SDKs directly. The BasicAgent base class exposes a callLLM(messages, options) method that routes to whichever provider is active.
GitHub Copilot (default)
Zero-configuration. Uses your existing GitHub Copilot subscription via the gh CLI token. No API key setup required. This is the recommended provider for getting started because authentication is handled inline on first use.
```yaml
# config.yaml
provider:
  default: copilot
  copilot:
    model: gpt-4o   # or gpt-4o-mini, claude-3.5-sonnet, o1-mini
```
Anthropic
Access Claude models including claude-opus-4-6, claude-sonnet-4-5, and claude-haiku-3-5. Requires an Anthropic API key.
```yaml
# config.yaml
provider:
  default: anthropic
  anthropic:
    api_key: ${ANTHROPIC_API_KEY}
    model: claude-opus-4-6
    max_tokens: 8192
```
OpenAI
Supports all GPT-4o and o-series models. Compatible with any OpenAI-compatible endpoint by setting a custom base_url.
```yaml
# config.yaml
provider:
  default: openai
  openai:
    api_key: ${OPENAI_API_KEY}
    model: gpt-4o
    base_url: https://api.openai.com/v1   # override for compatible endpoints
```
Google Gemini
Access Gemini 2.0 Flash, Gemini 1.5 Pro, and experimental models via Google AI Studio credentials.
```yaml
# config.yaml
provider:
  default: gemini
  gemini:
    api_key: ${GEMINI_API_KEY}
    model: gemini-2.0-flash-exp
```
Ollama (local)
Run models entirely locally with no API key or internet connection required. Requires a running Ollama instance. Pull models with ollama pull llama3.2 before use.
```yaml
# config.yaml
provider:
  default: ollama
  ollama:
    base_url: http://localhost:11434
    model: llama3.2   # any model available in your Ollama instance
    timeout: 120000   # ms — local models can be slow on first load
```
Provider Registry Pattern
Providers are registered at typescript/src/providers/registry.ts. To add a custom provider, implement the LLMProvider interface and register it before starting the agent runtime:
```typescript
import { registerProvider } from 'openrappter';

registerProvider('my-provider', {
  async chat(messages, options) {
    // call your API here
    return { content: response.text };
  }
});
```
Individual calls can also override the active provider by passing { provider: 'anthropic' } to callLLM(). This is useful for routing cheap queries to a fast model and expensive queries to a more capable one.
Messaging Channels
OpenRappter connects to 15+ messaging platforms through a unified channel interface. All channels implement the same Channel interface: send(message), receive(handler), and connect(). Add a channel in config.yaml and it is automatically registered with the router.
Supported Channels
| Channel | Auth Method | Status |
|---|---|---|
| CLI | None (local) | Built-in |
| Slack | Bot token + App manifest | Stable |
| Discord | Bot token | Stable |
| Telegram | BotFather token | Stable |
| WhatsApp | Meta Cloud API | Stable |
| Signal | signal-cli daemon | Beta |
| Microsoft Teams | Azure App registration | Stable |
| Google Chat | Service account | Stable |
| Matrix | Access token | Beta |
| Mattermost | Bot token | Stable |
| Feishu / Lark | App credentials | Beta |
| Line | Channel access token | Beta |
| Twitch | OAuth token | Beta |
| Nostr | Private key (nsec) | Experimental |
| iMessage | macOS + AppleScript | macOS only |
Channel Configuration
```yaml
# config.yaml
channels:
  slack:
    enabled: true
    bot_token: ${SLACK_BOT_TOKEN}
    signing_secret: ${SLACK_SIGNING_SECRET}
    default_channel: "#general"
  discord:
    enabled: true
    bot_token: ${DISCORD_BOT_TOKEN}
    guild_id: ${DISCORD_GUILD_ID}
  telegram:
    enabled: true
    bot_token: ${TELEGRAM_BOT_TOKEN}
    allowed_chat_ids: []   # empty = allow all chats
```
Channel Registry Pattern
Channels are loaded from typescript/src/channels/. To implement a custom channel, extend the BaseChannel class and register it by name. The channel will then be available in the router and reachable via MessageAgent.
```typescript
import { BaseChannel, registerChannel } from 'openrappter';

class MyChannel extends BaseChannel {
  async connect() { /* establish connection */ }
  async send(message) { /* deliver message */ }
  async receive(handler) { /* subscribe to incoming messages */ }
}

registerChannel('my-channel', MyChannel);
```
WebSocket Gateway
The WebSocket gateway provides a real-time bidirectional interface to the agent runtime using the JSON-RPC 2.0 protocol. It enables web frontends, external services, and other processes to invoke agents, subscribe to events, and stream responses without polling.
Starting the Gateway
```bash
openrappter gateway start --port 8765
```
Or enable it in config.yaml under gateway.enabled: true to start it automatically with the main process.
Connection Lifecycle
- Client connects to `ws://localhost:8765`
- Server sends a `connected` event with session ID and server version
- Client sends JSON-RPC requests; server responds with results or streaming chunks
- Client subscribes to event channels using the `subscribe` method
- Server pushes events to subscribed clients as JSON-RPC notifications
JSON-RPC Request / Response
```jsonc
// Request: invoke an agent
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "method": "agent.execute",
  "params": {
    "agent": "ShellAgent",
    "kwargs": { "action": "bash", "command": "uptime" },
    "stream": false
  }
}

// Response
{
  "jsonrpc": "2.0",
  "id": "req-001",
  "result": {
    "output": "12:34 up 3 days, 4:21, 2 users, load averages: 1.23 0.98 0.84",
    "exit_code": 0
  }
}
```
Streaming Responses
Set "stream": true in the request params to receive incremental token delivery. The server sends multiple agent.chunk notifications followed by a final agent.done notification with the complete result.
```jsonc
// Streaming chunk notification
{ "jsonrpc": "2.0", "method": "agent.chunk", "params": { "id": "req-001", "delta": "Here is the " } }

// Final notification with the complete result
{ "jsonrpc": "2.0", "method": "agent.done", "params": { "id": "req-001", "result": { "output": "Here is the analysis..." } } }
```
Event Types
| Event | Description | Subscribe Method |
|---|---|---|
| agent.* | Agent execution lifecycle (start, chunk, done, error) | subscribe.agent |
| chat.* | Incoming and outgoing chat messages | subscribe.chat |
| channel.* | Channel connect/disconnect/error events | subscribe.channel |
| cron.* | Scheduled job fire and completion events | subscribe.cron |
| presence.* | Connected client join/leave events | subscribe.presence |
Rate Limiting
The gateway enforces a per-connection rate limit (default: 100 requests/minute). Exceeding the limit results in a JSON-RPC error response with code -32029 (rate limit exceeded) and a retry_after field in milliseconds. Configure the limit in config.yaml under gateway.rate_limit.
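A per-connection limit with a retry_after hint can be modeled as a sliding window of request timestamps. This is a sketch under the assumption of a simple timestamp-window algorithm; the gateway's actual implementation is not specified here:

```typescript
// Hypothetical sliding-window rate limiter: N requests per rolling window.
class SlidingWindowLimiter {
  private hits: number[] = [];
  constructor(private limit = 100, private windowMs = 60_000) {}

  // Returns null if the request is allowed, or a retry_after delay in ms.
  check(now = Date.now()): number | null {
    // drop timestamps that have aged out of the window
    this.hits = this.hits.filter(t => now - t < this.windowMs);
    if (this.hits.length >= this.limit) {
      // the oldest hit leaving the window frees the next slot
      return this.windowMs - (now - this.hits[0]);
    }
    this.hits.push(now);
    return null;
  }
}
```

A server using this would map the non-null return value into the JSON-RPC error's retry_after field.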
Skills System
Skills extend an agent's capabilities at runtime without modifying the agent's source code. A skill is a SKILL.md file that describes a capability in natural language, plus an optional scripts/ directory with executable code. Skills are distributed through ClawHub and stored locally at ~/.openrappter/skills/.
ClawHub Integration
ClawHub is the skills registry. It is accessed via npx clawhub@latest — no global install required. The ClawHubClient in OpenRappter wraps these commands and exposes them as a programmatic API.
```bash
# Search for skills
openrappter skills search "github pull requests"

# Install a skill by name
openrappter skills install gh-pr-reviewer

# List installed skills
openrappter skills list

# Remove a skill
openrappter skills remove gh-pr-reviewer
```
SKILL.md Format
A skill is defined by a SKILL.md file with a structured frontmatter block followed by documentation:
```markdown
---
name: gh-pr-reviewer
version: 1.2.0
description: Review GitHub pull requests and suggest improvements
author: kody-w
tags: [github, code-review, productivity]
parameters:
  pr_url:
    type: string
    required: true
    description: Full GitHub PR URL to review
  focus:
    type: string
    enum: [security, performance, style, all]
    default: all
scripts:
  - fetch_pr.sh
  - analyze_diff.py
---

# gh-pr-reviewer

Fetches a pull request diff from GitHub and runs a multi-pass review focusing
on the specified concern area. Returns structured feedback grouped by file
with line-level comments.
```
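Separating the frontmatter block from the markdown body is the first step any SKILL.md loader performs. The following is a minimal sketch of that split, with the function name chosen for illustration; a real loader would additionally parse the frontmatter as YAML:

```typescript
// Hypothetical sketch: split a SKILL.md source into frontmatter and body.
function splitSkillMd(source: string): { frontmatter: string; body: string } {
  // frontmatter is delimited by a leading and a closing '---' line
  const match = source.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!match) throw new Error('SKILL.md is missing its frontmatter block');
  return { frontmatter: match[1], body: match[2] };
}
```

The lazy `[\s\S]*?` group stops at the first closing delimiter, so a literal `---` inside the documentation body is left untouched.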
Skill Execution
When a skill is installed, OpenRappter wraps it as a ClawHubSkillAgent instance that extends BasicAgent. The skill's parameters become the agent's metadata parameters. Scripts in the scripts/ directory are executed via ShellAgent with the skill's directory as the working directory.
```typescript
import { loadSkill } from 'openrappter';

// Skills auto-load at startup if config.skills.auto_load is true
// Or load manually:
const prReviewer = await loadSkill('gh-pr-reviewer');

const review = await prReviewer.execute({
  pr_url: 'https://github.com/org/repo/pull/42',
  focus: 'security'
});
```
Lock File
Installed skills are tracked in a lock file at ~/.openrappter/skills/.clawhub/lock.json. The lock file records the installed version, install date, and a content hash. Running openrappter skills verify checks all installed skills against their hashes and reports any that have been modified locally.
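The modification check behind the verify command can be sketched as a stable digest over a skill's files. The SHA-256 choice and path-sorted hashing here are assumptions for illustration; the lock file's actual hash scheme is not documented above:

```typescript
// Hypothetical sketch of content-hash verification for installed skills.
import { createHash } from 'node:crypto';

function contentHash(files: Record<string, string>): string {
  const hash = createHash('sha256');
  // hash paths and contents in sorted order so the digest is deterministic
  for (const path of Object.keys(files).sort()) {
    hash.update(path).update('\0').update(files[path]);
  }
  return hash.digest('hex');
}

function isModified(lockedHash: string, files: Record<string, string>): boolean {
  return contentHash(files) !== lockedHash;
}
```

Sorting the paths first means the digest does not depend on directory traversal order, which keeps verification stable across platforms.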
Writing a Skill
To publish a skill to ClawHub, create a directory with a SKILL.md file and optional scripts, then run:
```bash
npx clawhub@latest publish ./my-skill-directory
```
Memory System
The memory system provides persistent, searchable storage for agent knowledge. It supports both exact key-value retrieval and semantic similarity search via embeddings. The TypeScript runtime uses SQLite with hybrid search; the Python runtime uses a JSON file.
Content Chunking
Before storing, large documents are split into overlapping chunks using a sliding window algorithm. This improves recall precision — a query matches the specific passage that contains the answer rather than the entire document. Default settings: 512 token chunks with 64 token overlap.
```typescript
// Chunking behavior (typescript/src/memory/chunker.ts)
// Input: "The quick brown fox..." (2000 tokens)
// Output: chunks of 512 tokens each, overlapping by 64 tokens
// Chunk boundaries are respected at sentence boundaries where possible
const chunker = new ContentChunker({ chunkSize: 512, overlap: 64 });
const chunks = chunker.split(longDocument);
```
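The sliding-window idea itself is straightforward. The self-contained sketch below uses whitespace word splitting as a stand-in for token counting and ignores the sentence-boundary snapping, so it illustrates only the window-and-overlap arithmetic, not the real chunker:

```typescript
// Illustrative sliding-window chunking with overlap ("tokens" = words here).
function chunkWords(text: string, chunkSize = 512, overlap = 64): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const step = chunkSize - overlap;      // each window starts `overlap` tokens early
  const chunks: string[] = [];
  for (let start = 0; start < words.length; start += step) {
    chunks.push(words.slice(start, start + chunkSize).join(' '));
    if (start + chunkSize >= words.length) break;  // final window reached the end
  }
  return chunks;
}
```

With the defaults, consecutive chunks share their last/first 64 tokens, so a passage straddling a chunk boundary still appears whole in at least one chunk.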
Embeddings
The TypeScript runtime generates embeddings using a local model (default: all-MiniLM-L6-v2 via @xenova/transformers). Embeddings run entirely on-device with no API calls. The first run downloads the model (~25MB) and caches it at ~/.openrappter/models/.
To use an API-based embedding provider instead, set memory.embedding_provider: openai in config.yaml. This uses the text-embedding-3-small model by default.
Hybrid Search
Memory queries use hybrid search: a combination of vector similarity (cosine distance on embeddings) and keyword BM25 scoring. The two scores are combined with a configurable weight (default: 60% semantic, 40% keyword). This outperforms pure vector search on queries that contain specific identifiers, names, or technical terms.
```typescript
// Recall with hybrid search (typescript)
const results = await memoryAgent.execute({
  action: 'recall',
  query: 'deployment configuration for staging',
  limit: 5,
  min_score: 0.7
});
```
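The score blending reduces to a weighted sum. Assuming the BM25 score has already been normalized to the 0..1 range (the normalization scheme is not specified above), the combination looks like:

```typescript
// Weighted blend of semantic and keyword scores, 60/40 by default.
function hybridScore(cosine: number, bm25Normalized: number, semanticWeight = 0.6): number {
  return semanticWeight * cosine + (1 - semanticWeight) * bm25Normalized;
}

hybridScore(0.9, 0.5); // 0.6 * 0.9 + 0.4 * 0.5 ≈ 0.74
```

Raising `semanticWeight` favors paraphrase matches; lowering it favors exact identifiers and technical terms.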
Python Backend
The Python runtime stores all memory entries in a single JSON file at ~/.openrappter/memory.json. Recall is performed with a simple TF-IDF + cosine similarity implementation that requires no external dependencies. For high-volume use cases, switch to the SQLite backend by setting memory.backend: sqlite in your config.
```python
# Python: store and recall
memory_agent.perform(action="store", key="project_notes",
                     content="Deploy to prod on Fridays only.")
results = memory_agent.perform(action="recall", query="deployment schedule")
```
ContextMemoryAgent and ManageMemoryAgent (Python)
The Python runtime includes two higher-level memory agents built on top of MemoryAgent:
- ContextMemoryAgent — Automatically stores conversation turns and retrieves relevant context before each agent call. Designed to be wired into the `upstream_slush` pipeline.
- ManageMemoryAgent — Exposes CRUD operations for memory management via natural language commands. Useful for admin tasks like bulk deletion, key renaming, and export.
Plugin System
Plugins extend the OpenRappter runtime itself — adding new channels, providers, storage backends, middleware, or gateway event handlers — without forking the codebase. Plugins are loaded at startup from the ~/.openrappter/plugins/ directory.
Plugin Manifest
Every plugin requires a plugin.json manifest at its root:
```json
{
  "name": "my-plugin",
  "version": "1.0.0",
  "description": "Adds Notion as a memory backend",
  "author": "yourname",
  "main": "./index.js",
  "hooks": ["onLoad", "onRequest", "onResponse"],
  "permissions": ["memory.write", "config.read"]
}
```
Plugin Lifecycle
Plugins implement hooks that are called at defined points in the runtime lifecycle:
| Hook | Called When | Can Modify |
|---|---|---|
| onLoad | Plugin is first loaded at startup | Registration, config |
| onUnload | Runtime is shutting down | Cleanup only |
| onRequest | Before any agent execute() call | kwargs, context |
| onResponse | After any agent execute() returns | Result object |
| onMessage | Any channel receives a message | Message object |
| onError | Any unhandled error in the runtime | Error handling |
Example Plugin Structure
```
~/.openrappter/plugins/notion-memory/
  plugin.json
  index.js
  README.md
```
```javascript
// index.js — minimal plugin implementing the onLoad hook
import { registerStorageAdapter } from 'openrappter/plugins';
import { NotionStorageAdapter } from './notion-adapter.js';

export async function onLoad(ctx) {
  registerStorageAdapter('notion', new NotionStorageAdapter({
    token: ctx.config.get('notion.token'),
    databaseId: ctx.config.get('notion.database_id')
  }));
  ctx.logger.info('Notion memory backend registered');
}
```
SDK Hooks API
The plugin SDK (openrappter/plugins) exports registration functions for each extensible subsystem: registerStorageAdapter, registerChannel, registerProvider, registerMiddleware, and registerEventHandler. The ctx object passed to each hook provides access to config, logger, memory, and the agent registry.
Security
OpenRappter operates with the same OS-level permissions as the user who runs it. Because agents can execute shell commands and read/write files, the framework includes several layers of protection. The stricter controls are off by default for ease of development; enable them before deploying in shared or production environments.
ShellAgent Sandboxing
ShellAgent is the primary attack surface for prompt injection via shell execution. Two mechanisms limit its blast radius:
- Allowlist mode — Set `shell.allowlist` to an array of permitted command prefixes. Any command not matching an entry is rejected before execution. Recommended for production deployments.
- Blocklist mode — The default. A configurable list of dangerous patterns (e.g. `rm -rf /`, `sudo rm`) is checked against the command string before execution. Overlapping glob patterns are supported.
```yaml
# config.yaml — production lockdown example
shell:
  allowlist:
    - "git status"
    - "git log"
    - "npm test"
    - "ls"
    - "cat"
  require_approval: false
```
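The decision logic for the two modes can be sketched as follows. This sketch uses plain prefix and substring matching; the real ShellAgent also supports glob patterns, which are omitted here:

```typescript
// Illustrative allowlist/blocklist gate for shell commands.
function isCommandAllowed(command: string, allowlist: string[], blocklist: string[]): boolean {
  // Allowlist mode: a non-empty allowlist means only matching prefixes pass
  if (allowlist.length > 0) {
    return allowlist.some(prefix => command.startsWith(prefix));
  }
  // Blocklist mode (the default): reject commands containing a dangerous pattern
  return !blocklist.some(pattern => command.includes(pattern));
}
```

Note that the allowlist, when present, takes over entirely: the blocklist is never consulted, which keeps the production behavior easy to reason about.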
Approval Workflows
Enable shell.require_approval: true to pause before executing any shell command and prompt the operator for confirmation. In gateway mode this sends an approval.required event to subscribed clients before proceeding. Useful for high-stakes operations like database migrations or deployments.
Rate Limiting
The WebSocket gateway enforces per-connection rate limits. For channel adapters, each channel can set its own rate limit in addition to the global gateway limit. Rate limits apply to both inbound message handling and outbound LLM API calls, protecting against runaway loops.
Audit Logging
When logging.level: debug or when logging.audit: true is set, all agent invocations, shell commands executed, files written, and channel messages sent are written to the audit log at ~/.openrappter/logs/audit.jsonl. Each entry is a newline-delimited JSON object with timestamp, agent name, action, arguments, result status, and duration.
Gateway Authentication
By default the gateway binds to 127.0.0.1 and is not accessible from external networks. For remote access, set a shared secret in config.yaml:
```yaml
# config.yaml
gateway:
  host: 0.0.0.0
  secret: ${GATEWAY_SECRET}   # required in Authorization: Bearer header
```
Never bind the gateway to 0.0.0.0 without setting a gateway.secret. Without a secret, any process on the network can invoke agents and execute shell commands.
Prompt Injection Mitigations
The slosh pipeline includes a simple injection detector that flags inputs containing common injection patterns (e.g. "ignore previous instructions", "you are now DAN"). Flagged inputs are not blocked by default but are logged at warn level and included in the sloshed orientation as a suspicious_input: true signal that agents can act on.
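A pattern-based flagger of this kind reduces to a list of regular expressions. The patterns below are illustrative stand-ins; the detector's actual pattern list is not published here:

```typescript
// Illustrative injection flagger producing the suspicious_input signal.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?previous instructions/i,
  /you are now dan/i,
  /disregard your system prompt/i,
];

function flagSuspiciousInput(input: string): { suspicious_input: boolean } {
  return { suspicious_input: INJECTION_PATTERNS.some(p => p.test(input)) };
}
```

Because flagged inputs are logged rather than blocked, agents downstream decide what the signal means: refuse, sanitize, or simply proceed with extra caution.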
Config System
The config system is responsible for loading, validating, hot-reloading, and providing typed access to all runtime configuration. It is implemented in TypeScript at typescript/src/config/ and mirrors the Python implementation at python/openrappter/config.py.
Loading Order
Configuration is resolved in this priority order (highest wins):
- Environment variables (e.g. `OPENRAPPTER_LOG_LEVEL=debug`)
- Config file specified by the `OPENRAPPTER_CONFIG` env var
- `~/.openrappter/config.yaml` (default location)
- `~/.openrappter/config.json` (JSON alternative)
- Built-in defaults compiled into the binary
Zod Schema Validation
The full config schema is defined in typescript/src/config/schema.ts using Zod v4. Validation runs on every load and reload. Errors are surfaced as structured messages that identify the exact field path and the expected vs received type:
```typescript
import { configSchema } from 'openrappter/config';

// Validate a config object programmatically
const result = configSchema.safeParse(rawConfig);
if (!result.success) {
  result.error.issues.forEach(issue =>
    console.error(`[${issue.path.join('.')}] ${issue.message}`)
  );
}
```
File Watcher and Live Reload
The config loader registers a file system watcher on the config file. When a change is detected, the new file is read, validated, and diffed against the current config. Fields that support hot-reload are updated in place; fields that require a restart (e.g. memory.backend) emit a config.restart_required warning without crashing the process.
```typescript
import { getConfig, onConfigChange } from 'openrappter/config';

onConfigChange((newConfig, diff) => {
  console.log('Config updated:', diff);
  // diff contains only the fields that changed
});
```
Migration System
As OpenRappter evolves, config schemas change. The migration system tracks the config format version in a _version field and automatically upgrades older config files to the current schema. Migrations are defined as transform functions in typescript/src/config/migrations/. The original file is backed up to config.yaml.bak before any migration is applied.
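The idea of versioned transform functions can be sketched as follows (the v1-to-v2 rename shown is hypothetical; real migrations live in typescript/src/config/migrations/):

```typescript
// Sketch of a version-keyed migration chain. Each transform upgrades
// a config FROM the version it is keyed by. The `llm` -> `provider`
// rename below is a hypothetical example, not a real migration.
type RawConfig = Record<string, any>;

const migrations: Record<number, (c: RawConfig) => RawConfig> = {
  1: (c) => {
    const { llm, ...rest } = c;
    return { ...rest, provider: llm, _version: 2 };
  },
};

function migrate(config: RawConfig, target: number): RawConfig {
  let current = config;
  while ((current._version ?? 1) < target) {
    const step = migrations[current._version ?? 1];
    if (!step) throw new Error(`no migration from v${current._version}`);
    current = step(current); // apply one upgrade step at a time
  }
  return current;
}

const upgraded = migrate({ _version: 1, llm: { default: 'copilot' } }, 2);
// upgraded: { _version: 2, provider: { default: 'copilot' } }
```

Because each step only upgrades by one version, a config several versions old is brought forward by chaining transforms, and the `config.yaml.bak` backup guards against a bad transform.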
Environment Variable Expansion
Values using ${VAR_NAME} syntax are expanded at load time. Unset variables resolve to an empty string by default. Use ${VAR_NAME:-default_value} syntax to provide a fallback:
```yaml
# Uses ANTHROPIC_API_KEY if set, otherwise 'sk-placeholder'
provider:
  anthropic:
    api_key: ${ANTHROPIC_API_KEY:-sk-placeholder}
```
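The expansion rule above (set variable wins, `:-` fallback next, empty string last) can be sketched with a single regex substitution. This is illustrative; the real loader reads `process.env` rather than a supplied map:

```typescript
// Illustrative expansion of ${VAR} and ${VAR:-default} against a
// supplied environment map (the real loader uses process.env).
function expandEnv(value: string, env: Record<string, string>): string {
  return value.replace(
    /\$\{([A-Z0-9_]+)(?::-([^}]*))?\}/g,
    (_match, name: string, fallback: string | undefined) =>
      env[name] ?? fallback ?? '' // set var > fallback > empty string
  );
}

const env = { ANTHROPIC_API_KEY: 'sk-real' };
expandEnv('${ANTHROPIC_API_KEY:-sk-placeholder}', env); // 'sk-real'
expandEnv('${OPENAI_API_KEY:-none}', env);              // 'none'
expandEnv('${MISSING}', env);                           // ''
```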
Programmatic Config Access
```typescript
import { getConfig } from 'openrappter/config';

const config = getConfig();
console.log(config.provider.default); // "copilot"
console.log(config.gateway.port);     // 8765
```
API Reference
CLI Commands
| Command | Description |
|---|---|
| `openrappter` | Start the interactive CLI agent |
| `openrappter --version` | Print version and exit |
| `openrappter status` | Show runtime, provider, and memory status |
| `openrappter gateway start` | Start the WebSocket gateway |
| `openrappter gateway stop` | Stop a running gateway instance |
| `openrappter skills search <query>` | Search ClawHub for skills |
| `openrappter skills install <name>` | Install a skill from ClawHub |
| `openrappter skills list` | List installed skills |
| `openrappter skills remove <name>` | Uninstall a skill |
| `openrappter skills verify` | Verify integrity of installed skills |
| `openrappter memory recall <query>` | Search memory from the CLI |
| `openrappter memory clear` | Delete all memory entries |
| `openrappter config validate` | Validate the current config file |
Programmatic API
Import OpenRappter as a library in your TypeScript or JavaScript project:
```typescript
import {
  BasicAgent, ShellAgent, MemoryAgent,
  BroadcastManager, AgentRouter, SubAgentManager,
  loadSkill, getConfig, onConfigChange,
  registerProvider, registerChannel, startGateway
} from 'openrappter';

// Instantiate and run an agent
const shell = new ShellAgent();
const result = await shell.execute({ action: 'bash', command: 'date' });
console.log(result.output);

// Start the gateway programmatically
const gw = await startGateway({ port: 8765 });
gw.on('agent.done', (event) => console.log(event));
```
Data Sloshing Signals
The slosh pipeline populates a context object before each perform() call. Access these signals with getSignal(key) using dot notation.
| Signal Key | Type | Description |
|---|---|---|
| `temporal.time_of_day` | string | morning / afternoon / evening / night |
| `temporal.day_of_week` | string | monday … sunday |
| `temporal.iso_date` | string | ISO 8601 date string |
| `temporal.unix_ms` | number | Current Unix timestamp in milliseconds |
| `query.intent` | string | Inferred intent category of the query |
| `query.entities` | string[] | Named entities extracted from query |
| `query.sentiment` | string | positive / neutral / negative |
| `memory.echoes` | object[] | Top memory recall results for the query |
| `memory.recent_keys` | string[] | Keys written in the last 10 minutes |
| `behavioral.session_length` | number | Number of turns in current session |
| `behavioral.last_agent` | string | Name of last agent called in session |
| `orientation.confidence` | number | 0–1 confidence score for chosen approach |
| `orientation.approach` | string | Suggested reasoning approach |
| `orientation.hints` | string[] | Contextual hints derived from slosh |
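Dot-notation resolution over the signal context can be sketched as a nested lookup. This is illustrative only; the framework's getSignal(key) reads the pipeline's own context rather than taking it as an argument:

```typescript
// Illustrative dot-notation lookup over a nested signal context,
// mimicking how getSignal('temporal.time_of_day') resolves a key.
// The real getSignal(key) uses the slosh pipeline's internal context.
function getSignal(context: Record<string, any>, key: string): unknown {
  return key
    .split('.')
    .reduce<any>((node, part) => (node == null ? undefined : node[part]), context);
}

const context = {
  temporal: { time_of_day: 'morning', unix_ms: 1700000000000 },
  orientation: { confidence: 0.82, hints: ['user prefers brevity'] },
};

getSignal(context, 'temporal.time_of_day');   // 'morning'
getSignal(context, 'orientation.confidence'); // 0.82
getSignal(context, 'memory.echoes');          // undefined (signal absent)
```

Absent signals resolve to undefined rather than throwing, so agents can probe for optional context safely.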
Environment Variables
| Variable | Default | Description |
|---|---|---|
| `ANTHROPIC_API_KEY` | — | Anthropic Claude API key |
| `OPENAI_API_KEY` | — | OpenAI API key |
| `GEMINI_API_KEY` | — | Google Gemini API key |
| `SLACK_BOT_TOKEN` | — | Slack bot token for channel integration |
| `DISCORD_BOT_TOKEN` | — | Discord bot token |
| `TELEGRAM_BOT_TOKEN` | — | Telegram BotFather token |
| `GATEWAY_SECRET` | — | Bearer secret for gateway authentication |
| `OPENRAPPTER_CONFIG` | `~/.openrappter/config.yaml` | Override config file location |
| `OPENRAPPTER_LOG_LEVEL` | `info` | debug / info / warn / error |
| `OPENRAPPTER_PORT` | `8765` | Gateway WebSocket port |
| `OPENRAPPTER_PROVIDER` | `copilot` | Override default LLM provider |
| `OPENRAPPTER_NO_TELEMETRY` | — | Set to any value to disable anonymous usage stats |
TypeScript Exports
```typescript
// Core agents
export { BasicAgent, AgentMetadata, AgentResult } from './agents/BasicAgent';
export { ShellAgent } from './agents/ShellAgent';
export { MemoryAgent } from './agents/MemoryAgent';
export { OuroborosAgent } from './agents/OuroborosAgent';

// Multi-agent
export { BroadcastManager } from './agents/broadcast';
export { AgentRouter } from './agents/router';
export { SubAgentManager } from './agents/subagent';

// Skills
export { ClawHubClient, loadSkill, ClawHubSkillAgent } from './clawhub';

// Gateway
export { startGateway, GatewayServer } from './gateway';

// Config
export { getConfig, onConfigChange, configSchema } from './config';

// Providers
export { registerProvider, LLMProvider } from './providers';

// Channels
export { BaseChannel, registerChannel } from './channels';
```