Deconstructing OpenClaw & Rebuilding It for Teams
# Details
OpenClaw is the fastest-growing open-source project in recent history, with 200K+ GitHub stars in under three months. Its architecture offers a masterclass in building AI agents: a hub-and-spoke gateway pattern, skills-as-markdown extensibility, sequential session processing, and a clean separation between the integration layer and the intelligence layer. But OpenClaw was designed for one user on one machine. What happens when you try to take these patterns to an org-level deployment with multiple users, shared knowledge, and per-user tool authentication?
The first half of this talk deconstructs OpenClaw's architecture layer by layer: the Gateway control plane, channel normalization, session management, the skills system, and the deliberate separation between the gateway and the agent runtime. Each pattern is examined for what it gets right and why it works.
The second half takes these patterns and stress-tests them for multi-user, team-level deployment. Drawing on hands-on experience building Sketch (an open-source, org-level AI assistant inspired by OpenClaw), the talk covers what breaks, what needs to be rethought, and the new architectural challenges that emerge when you go from personal to org: workspace isolation, identity resolution, shared vs. private knowledge, per-user tool auth, and the security model. Attendees walk away with concrete, reusable patterns for building agent systems at any scale.
# Outline
## Part 1: Deconstructing OpenClaw
- Why OpenClaw Matters for Agent Builders
  - The explosive growth story: 0 to 200K stars in under 3 months
  - Why OpenClaw's architecture deserves study, not just its features
  - The core thesis: the hard problem in agents is not the LLM loop, it is everything around it
- The Gateway: Single Process as Control Plane
  - How a single Node.js process on port 18789 manages sessions, routing, tool dispatch, and events
  - WebSocket control messages, OpenAI-compatible HTTP APIs, and Control UI from one multiplexed port
  - Why single-process is a conscious tradeoff, not a limitation, for the personal assistant use case
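The single-port control plane boils down to one dispatch decision per incoming connection. Below is a minimal, illustrative sketch of that idea; the route names, the `/v1/` path prefix, and the `routeRequest` function are assumptions for exposition, not OpenClaw's actual API surface.

```typescript
// Hypothetical sketch: one process inspects each incoming request and
// routes it to the right subsystem on a single multiplexed port.

type GatewayRoute = "control-ws" | "openai-http" | "control-ui";

interface IncomingRequest {
  path: string;
  upgrade?: string; // value of the Upgrade header, if present
}

function routeRequest(req: IncomingRequest): GatewayRoute {
  // WebSocket upgrade requests carry control-plane messages.
  if (req.upgrade?.toLowerCase() === "websocket") return "control-ws";
  // OpenAI-compatible HTTP endpoints are assumed to live under /v1/.
  if (req.path.startsWith("/v1/")) return "openai-http";
  // Everything else is served by the Control UI.
  return "control-ui";
}
```

Because all three surfaces share one process and one port, there is no inter-service coordination to get wrong: the tradeoff is throughput ceiling, not correctness.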
- Channel Normalization and Session Management
  - One assistant, many interfaces: WhatsApp, Slack, Telegram, Discord, iMessage, and 50+ channels
  - The unified message format as the contract between integration and intelligence layers
  - Fault isolation: each adapter starts independently, one failing channel does not take down the Gateway
  - Memory management
  - Sequential session processing: why single-threaded message handling per session eliminates race conditions
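The two load-bearing ideas above can be sketched together: a unified message shape that every channel adapter normalizes into, and a per-session queue that chains handlers so turns within one session never interleave. The type and class names here are illustrative, not OpenClaw's actual definitions.

```typescript
// Illustrative sketch: the normalized message contract plus sequential
// per-session processing via promise chaining.

interface UnifiedMessage {
  sessionId: string; // stable key for the conversation
  channel: string;   // "slack", "whatsapp", "telegram", ...
  senderId: string;  // channel-native sender identity
  text: string;
}

class SessionQueue {
  // One promise chain per session: the next handler starts only after the
  // previous one settles, so two turns in a session can never race.
  private tails = new Map<string, Promise<void>>();

  enqueue(msg: UnifiedMessage, handler: (m: UnifiedMessage) => Promise<void>): Promise<void> {
    const tail = this.tails.get(msg.sessionId) ?? Promise.resolve();
    const next = tail
      .then(() => handler(msg))
      .catch(() => {}); // a failed turn must not wedge the rest of the session
    this.tails.set(msg.sessionId, next);
    return next;
  }
}
```

Different sessions still run concurrently; only messages sharing a `sessionId` are serialized, which is what eliminates read-modify-write races on session state without any locking.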
- Skills, Extensibility, and the Agent Runtime
  - Skills-as-markdown: SKILL.md files with YAML frontmatter instead of code modules
  - Selective skill injection: only relevant skills loaded per turn, not everything on every prompt
  - Self-modifying agents: writing and deploying new skills mid-conversation
  - The Pi separation: OpenClaw handles connect/queue/remember/extend, Pi handles the agent loop
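A minimal sketch of the skills-as-markdown idea: parse a SKILL.md-style file (YAML frontmatter plus a markdown body) and inject only the skills whose declared keywords match the current turn. The frontmatter fields (`name`, `keywords`) and the matching heuristic are assumptions for illustration, not OpenClaw's actual schema.

```typescript
// Hypothetical SKILL.md handling: frontmatter parse + selective injection.

interface Skill {
  name: string;
  keywords: string[];
  body: string; // markdown instructions appended to the prompt when selected
}

function parseSkill(md: string): Skill {
  // Split "---\n<frontmatter>\n---\n<body>".
  const m = md.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  if (!m) throw new Error("missing frontmatter");
  const fields = new Map<string, string>();
  for (const line of m[1].split("\n")) {
    const i = line.indexOf(":");
    if (i > 0) fields.set(line.slice(0, i).trim(), line.slice(i + 1).trim());
  }
  return {
    name: fields.get("name") ?? "unnamed",
    keywords: (fields.get("keywords") ?? "").split(",").map(k => k.trim()).filter(Boolean),
    body: m[2].trim(),
  };
}

// Selective injection: load only skills relevant to this turn, so the
// prompt does not grow with the size of the skill library.
function selectSkills(skills: Skill[], userTurn: string): Skill[] {
  const turn = userTurn.toLowerCase();
  return skills.filter(s => s.keywords.some(k => turn.includes(k.toLowerCase())));
}
```

Because a skill is just a text file, a self-modifying agent can write a new one mid-conversation and have it picked up on the next turn without any code deployment.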
- The Latency Budget and Data Model
  - End-to-end message flow: access control (<10ms), session load (<50ms), prompt assembly (<100ms), first token (200-500ms)
  - Local-first data: JSONL session transcripts, plaintext config, and the ~/.openclaw directory structure
  - Canvas (A2UI): agent-driven visual workspace as a separate process on port 18793
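The JSONL transcript format is worth making concrete: one JSON object per line, so a session is cheap to append to and can be replayed by streaming line by line. This is a sketch under assumed field names (`role`, `content`, `ts`), not OpenClaw's on-disk schema.

```typescript
// Illustrative append-only JSONL session transcript.

interface TranscriptEntry {
  role: "user" | "assistant" | "tool";
  content: string;
  ts: number; // epoch milliseconds
}

function appendEntry(transcript: string, entry: TranscriptEntry): string {
  // Append-only: serialize one entry per line, never rewrite history.
  return transcript + JSON.stringify(entry) + "\n";
}

function loadTranscript(raw: string): TranscriptEntry[] {
  return raw
    .split("\n")
    .filter(line => line.trim().length > 0)
    .map(line => JSON.parse(line) as TranscriptEntry);
}
```

The local-first payoff is inspectability: a session is a plain text file you can grep, tail, or diff, which matters when debugging why a prompt was assembled the way it was.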
## Part 2: Rebuilding It for Teams
- The Single-User Ceiling
  - Where personal AI assistants break down for orgs: no shared knowledge, no shared infrastructure, everyone must be a power user
  - The real-world moment: running OpenClaw personally and hitting the wall when trying to extend it to a team
  - Introducing Sketch: one deployment, multiple users, each with their own workspace, memory, and tool integrations
- Workspace Isolation and Identity Resolution
  - From ~/.openclaw/ to data/workspaces/{user_id}/: scoped file operations, memory, and sessions that do not leak
  - Identity resolution: mapping Slack IDs, phone numbers, and channel identities to user accounts
  - Keeping the single-process simplicity while adding multi-user routing
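Both halves of this section reduce to small, mechanical checks. A hedged sketch, assuming the `data/workspaces/{user_id}/` layout described above (the function names and identity-key format are illustrative): every file operation resolves under the user's workspace root and is rejected if it escapes, and every channel-native identity maps through a single index to one user account.

```typescript
import * as path from "path";

// Workspace isolation: resolve a user-relative path and reject traversal
// (e.g. "../") out of data/workspaces/{user_id}/.
function resolveWorkspacePath(userId: string, relative: string): string {
  const root = path.resolve("data", "workspaces", userId);
  const resolved = path.resolve(root, relative);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`path escapes workspace: ${relative}`);
  }
  return resolved;
}

// Identity resolution: channel-native identities all map to one account,
// so the same person is the same user on Slack and over SMS.
const identityIndex = new Map<string, string>([
  ["slack:U123ABC", "u42"],
  ["phone:+15551234567", "u42"],
]);

function resolveUser(channel: string, channelId: string): string | undefined {
  return identityIndex.get(`${channel}:${channelId}`);
}
```

The traversal check is the whole isolation story for file operations: the agent runtime never sees an absolute path, only paths already scoped to the resolved user.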
- Shared Org Knowledge vs. Private User Memory
  - The two-layer knowledge problem: org-wide documents everyone can access, plus personal context per user
  - Preventing context leakage: one user's memory must never surface in another user's session
  - How this differs from OpenClaw's model where everything is one person's data on one machine
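The leakage guarantee is easiest to see as a retrieval filter: visibility is checked before relevance, so another user's private memory can never match, no matter what the query is. A minimal sketch with assumed types (`scope`, `ownerId`) and naive substring relevance standing in for real retrieval:

```typescript
// Two-layer knowledge: org-wide entries visible to everyone, private
// entries visible only to their owner.

interface KnowledgeEntry {
  scope: "org" | "user";
  ownerId?: string; // set when scope === "user"
  text: string;
}

function retrieve(entries: KnowledgeEntry[], userId: string, query: string): KnowledgeEntry[] {
  const q = query.toLowerCase();
  return entries.filter(e => {
    // Visibility first, relevance second: invisible entries are never scored.
    const visible = e.scope === "org" || e.ownerId === userId;
    return visible && e.text.toLowerCase().includes(q);
  });
}
```

This is the structural difference from the single-user model: in OpenClaw everything on disk belongs to one person, so there is no visibility check to get wrong; at org scale the check becomes the security boundary.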
- Per-User Tool Auth and the Security Model
  - From plaintext credentials for one user to per-user OAuth for Gmail, GitHub, Jira, and more
  - The skill supply chain problem at org scale: ClawHub's malicious skills and what it teaches us about governance
  - Workspace-scoped tool permissions: controlling what the agent can do, per user
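Workspace-scoped permissions amount to a deny-by-default gate in front of every tool dispatch. A sketch under assumed names (the grant shape, tool names, and `dispatchTool` are placeholders, not Sketch's actual API):

```typescript
// Per-user tool permissions: the gateway checks the calling user's grant
// list before any tool call reaches the agent runtime.

type ToolGrant = { tool: string; scopes: string[] };

const grants = new Map<string, ToolGrant[]>([
  ["u42", [{ tool: "github", scopes: ["read"] }]],
]);

function canInvoke(userId: string, tool: string, scope: string): boolean {
  const userGrants = grants.get(userId) ?? [];
  return userGrants.some(g => g.tool === tool && g.scopes.includes(scope));
}

function dispatchTool(userId: string, tool: string, scope: string, run: () => string): string {
  // Deny-by-default: no explicit grant means no tool call.
  if (!canInvoke(userId, tool, scope)) {
    throw new Error(`denied: ${userId} -> ${tool}:${scope}`);
  }
  return run();
}
```

The same gate is the natural place to enforce skill governance: a skill installed from a registry can only ever call tools the invoking user was already granted, which bounds the blast radius of a malicious skill.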
- Patterns to Take Home
  - Gateway vs. framework: when each approach fits
  - Start single-user, design for multi-user: which architectural seams to leave open from day one
  - Why intentional constraints (single-process, sequential processing) remain a superpower even at team scale
  - The agent deployment checklist every team should work through before going multi-user
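One concrete example of a seam to leave open from day one, sketched with illustrative names: make user identity part of every session key even while there is only one user. A single-user deployment hardcodes a default id; a team deployment swaps in real ids without touching session storage.

```typescript
// Hypothetical session key: carry a user id from day one so the storage
// layout survives the jump from personal to multi-user.

function sessionKey(userId: string, channel: string, peerId: string): string {
  return `${userId}:${channel}:${peerId}`;
}

// Single-user today: every key carries a fixed default identity...
const personal = sessionKey("default", "whatsapp", "+15550001111");
// ...multi-user tomorrow: same layout, real identities.
const team = sessionKey("u42", "whatsapp", "+15550001111");
```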
