Before: copy-paste loop between Claude and terminal. One terminal. No visibility.
After: your AI controls every terminal directly. Parallel workers. Full visibility. You just watch.
Zero dependencies. No Python. No pip. No brew. Just Node.js (ships with VS Code).
CLAWS banner in every terminal. Worker tabs in the panel. "Claws Wrapped Terminal" in the dropdown. Shell commands ready.
Every developer runs code, builds, AI agents, and servers inside VS Code terminals. But there was no programmatic API to control them from the outside. You couldn't list what's running. Couldn't send a command from a script. Couldn't read the output of an interactive TUI session.
Claws changes that with one extension and 5 lines of code. A socket server inside VS Code accepts JSON commands from any external process — Python scripts, AI orchestrators, CI runners, or other machines on your network.
# Raw socket — works from any language
echo '{"id":1,"cmd":"create","name":"build","wrapped":true}' | nc -U .claws/claws.sock
echo '{"cmd":"exec","id":"1","command":"npm test"}' | nc -U .claws/claws.sock
echo '{"cmd":"close","id":"1"}' | nc -U .claws/claws.sock
# Or via MCP — Claude Code gets 8 tools natively, zero code needed
Claws runs inside VS Code as an extension. It creates a Unix socket server that external clients connect to. Wrapped terminals capture full pty output via script(1) — readable even for TUI sessions like Claude Code, vim, and htop.
Regular terminals are write-only from the outside. Wrapped terminals log every pty byte to disk. Claws reads it back with ANSI escapes stripped — giving you clean text of everything that happened, including interactive TUI sessions.
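The stripping step can be sketched in a few lines of Python. This is an approximation of what "ANSI escapes stripped" means in practice, not Claws's actual implementation:

```python
import re

# CSI sequences, OSC sequences, and two-character escapes.
# Approximation: the real stripper may cover more terminal control forms.
ANSI_RE = re.compile(
    r"\x1b\[[0-9;?]*[ -/]*[@-~]"           # CSI: colors, cursor movement
    r"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"  # OSC: window titles, hyperlinks
    r"|\x1b[@-Z\\^_]"                      # other two-character escapes
)

def clean_log(raw: bytes) -> str:
    text = raw.decode("utf-8", errors="replace")
    text = ANSI_RE.sub("", text)
    # TUI redraws emit bare carriage returns; normalize to newlines
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(clean_log(b"\x1b[1;32mPASS\x1b[0m all tests"))  # PASS all tests
```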
The terminal looks and behaves identically to a regular one. The script(1) layer is invisible. Create one from the dropdown: Claws Wrapped Terminal.
Each capability is a building block. Compose them to orchestrate any terminal workflow — from a single command to a fleet of autonomous AI workers.
List every terminal with PID, name, and status. Create new ones with custom names and working directories. Focus, show, or close any terminal programmatically. Stable numeric IDs that persist for the session.
Wrapped terminals log every pty byte via script(1). Read back Claude Code conversations, vim sessions, build logs, REPL outputs — all as clean ANSI-stripped text. The terminal looks and feels completely normal.
Run commands and get structured results: stdout, stderr, and exit code. File-based capture that works in every terminal type — no shell integration dependency. Configurable timeouts for long-running processes.
Detects when a terminal is running a TUI (vim, claude, less, htop). Warns before sending text that would land as TUI input instead of a shell command. Non-blocking by default. Strict mode available for hard-block.
Register one JSON block in any project's settings. Claude Code instantly gets 8 terminal control tools as native MCP calls. No imports, no client library, no socket code. Your AI gets terminal superpowers in one line.
Control terminals on remote machines via WebSocket with token auth + TLS. Team configuration with per-device access control. SSH tunnel pattern works today. WebSocket transport coming in v0.3.
Newline-delimited JSON. Every request has an id for correlation. Works with any language that can open a socket.
| Command | Input | Output |
|---|---|---|
| list | {} | All terminals with PID, name, log path |
| create | {name, wrapped?} | Terminal ID + log path |
| send | {id, text} | Delivered to terminal input |
| exec | {id, command} | stdout + stderr + exit code |
| readLog | {id} | Clean text from pty log |
| poll | {since?} | Command-completion events |
| close | {id} | Terminal disposed |
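A minimal Python client is a few lines, under the assumption that the server answers each request with a single JSON line (consult the protocol reference for exact response shapes):

```python
import json
import socket

def claws_request(payload: dict, sock_path: str = ".claws/claws.sock") -> dict:
    """Send one newline-delimited JSON request and read one JSON-line reply.
    Assumption: the server replies with exactly one line per request."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall((json.dumps(payload) + "\n").encode())
        buf = b""
        while not buf.endswith(b"\n"):  # read until the reply line is complete
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
    return json.loads(buf)

# terminals = claws_request({"id": 1, "cmd": "list"})
```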
Claws was designed for a specific use case: one AI session controlling multiple terminal sessions in parallel. Spawn workers, send mission prompts, monitor progress via pty log tailing, react to errors in real time, clean up when done.
# Spawn 3 parallel workers ("client" is an assumed Claws socket client)
import time

workers = {}
for name, cmd in [("lint", "npm run lint"), ("test", "npm test"), ("build", "npm run build")]:
    term = client.create(f"worker-{name}", wrapped=True)
    client.send(term.id, cmd)
    workers[name] = term

# Monitor all workers
time.sleep(10)
for name, term in workers.items():
    log = client.read_log(term.id, lines=20)
    print(f"=== {name} ===\n{log}")
    client.close(term.id)
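A fixed sleep is the simplest monitor; to react in real time, follow each worker's pty log as it grows. A minimal tail -F analogue in Python (the log path is the one returned by create; the polling interval and idle timeout are illustrative):

```python
import time

def follow(path: str, timeout: float = 30.0, poll: float = 0.5):
    """Yield new text appended to a pty log file.
    Stops after `timeout` seconds with no new output."""
    idle = 0.0
    with open(path, "r", errors="replace") as f:
        while idle < timeout:
            chunk = f.read()  # "" at EOF, new text once the log grows
            if chunk:
                idle = 0.0
                yield chunk
            else:
                idle += poll
                time.sleep(poll)
```

Feed each chunk through your event logic (error patterns, completion markers) instead of sleeping a fixed interval.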
Without wrapping, you can send but can't read. Every autonomous worker should be wrapped: true so you can observe its state.
Terminal names can be duplicated or changed. The numeric ID from create is unique and stable for the terminal's lifetime.
Every create should have a matching close. Stale terminals clutter the panel and leak pty log disk space.
exec captures output and waits. send is fire-and-forget. Use the right tool for the job.
Pty logs can contain passwords, tokens, and secrets. Never commit them. The default .gitignore already excludes .claws/.
For real-time observation, tail -F logfile | grep --line-buffered pattern is more responsive than periodic readLog calls.
script(1)'s -F flush flag splits Ink-based TUI frames and causes visual corruption in Claude Code. The default buffering is correct.
If you don't know what's running in a terminal (shell vs TUI), check with list first. Blind sends into unknown TUIs corrupt their input state.
An orchestrator spawned 3 wrapped terminals simultaneously, each running an independent analysis task (latency profiling, token auditing, critical-path optimization). Each worker received a scoped mission prompt via send, ran autonomously for ~5 minutes, and wrote its findings to a designated output file. The orchestrator monitored all 3 via tail -F on their pty logs, reacting to events as they arrived. Total: 3 audit reports (140 + 265 + 420 lines) produced in parallel in ~6 minutes.
A single worker terminal received a mission to ship 6 atomic git commits implementing pipeline optimizations. The worker read prior analysis files, edited runbooks and Python code, ran verification after each commit, and produced 6 clean commits in 6 minutes 42 seconds — 393 insertions across committee.py, 4 runbook YAMLs, and a cost-log hook. The orchestrator monitored via pty log events and narrated each commit as it landed.
The orchestrator created a wrapped terminal, launched Claude Code inside it, and fired a /infographic-new command — triggering a 10-tier content generation pipeline. The orchestrator monitored tier-by-tier progress via background polling tasks, inspected intermediate artifacts (research JSON, angle selection, design brief, jury verdicts), and read the final committee verdict + 30/30 visual critic score. Total pipeline time: 25 minutes. Result: a production-ready infographic with committee composite score 8.56.
A worker terminal was directed to use a knowledge graph (graphify) as its primary reasoning surface to identify pipeline bottlenecks. The worker ran graph queries to surface god nodes, cross-file dependencies, and prior audit findings — producing a 100-line optimization plan that identified $30/run savings from CLAUDE.md trimming, $5.69/run from committee-chair fusion, and a $165 cache-read tax that no top-down code review would have surfaced. The graph-derived insights validated 2 of the 5 optimization proposals independently.
Control terminals on remote machines via WebSocket. Token-based auth + TLS. Discover other Claws instances on your LAN automatically.
Available today via SSH tunnel:
ssh -L /tmp/remote-claws.sock:/remote/workspace/.claws/claws.sock user@remote
# Then connect locally via raw socket or MCP
echo '{"id":1,"cmd":"list"}' | nc -U /tmp/remote-claws.sock
Register Claws as an MCP server in any project. Every Claude Code session instantly gets 8 terminal control tools — no client library, no imports, no socket code. Just register and go.
// .claude/settings.json (add to ANY project)
{
  "mcpServers": {
    "claws": {
      "command": "node",
      "args": ["/path/to/claws/mcp_server.js"],
      "env": { "CLAWS_SOCKET": ".claws/claws.sock" }
    }
  }
}
Tools injected: claws_list claws_create claws_send claws_exec claws_read_log claws_poll claws_close claws_worker
Zero dependencies. Pure Node.js. Works on macOS and Linux. Your AI writes code in one terminal, tests in another, deploys in a third — all through native MCP tool calls.
12-chapter course from first install to production fleet orchestration. Every feature, pattern, and edge case. 830 lines of detailed walkthroughs.
Every command, parameter, response field, and edge case. The API reference for builders integrating Claws.
Full JSON socket protocol — 8 commands, request/response schemas, error codes, interactive session transcript.
7 battle-tested mission prompt templates for AI orchestration: workers, fleets, pair programming, debugging.
Dev setup, workflow, code style, protocol change process. High-priority: TypeScript rewrite, Windows support, Node client.
Socket permissions, pty log sensitivity, trust model, planned WebSocket auth. Read before shared deployments.
Stop manually running commands your AI asks you to run. Stop pasting output back into chat. Give your AI direct terminal control — it reads, writes, monitors, and reacts on its own.