feat: add web frontend and server-side SSE streaming; extend multi-model compatibility

Backend:
  - server: full HTTP session management (CRUD) plus SSE event-stream push,
    with a dual-channel architecture (POST to send messages + GET SSE to
    receive streamed responses)
  - runtime: add Thinking / RedactedThinking variants to ContentBlock,
    with serialization/deserialization for thinking and redacted thinking
  - api: register the GLM model family (glm-4/5, etc.) in the model registry;
    extend request building for the XAI/OpenAI-compatible providers

Frontend:
  - Full chat UI built on Ant Design X: Bubble.List message list,
    Sender input box, Conversations session management, collapsible Think
    panels, ThoughtChain tool-call chain display
  - XMarkdown integration: code highlighting, Mermaid diagrams, LaTeX
    formulas, custom footnotes, streaming rendering (incomplete placeholder)
  - SSE hook wired to the server event stream, manually accumulating deltas
    in an AssistantBuffer
  - Dark/light theme toggle; session sidebar (new/switch/delete)
fengmengqi 2026-04-10 16:29:27 +08:00
parent 0782159ecd
commit 4a04faf926
56 changed files with 8611 additions and 172 deletions

.claw.json — new file (5 lines)

@ -0,0 +1,5 @@
{
"permissions": {
"defaultMode": "dontAsk"
}
}

.env — new file (3 lines)

@ -0,0 +1,3 @@
ANTHROPIC_API_KEY="9494feba6f7c45f48c3dfc35a85ffd89.2WUCscxcSp92ETNg"
ANTHROPIC_BASE_URL="https://open.bigmodel.cn/api/anthropic"
CLAW_MODEL="glm-5"

.gitignore — vendored (3 lines changed)

@ -1,3 +1,6 @@
target/
.omx/
.clawd-agents/
# Claw Code local artifacts
.claw/
.claude/

CLAUDE.md — new file (113 lines)

@ -0,0 +1,113 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Claw Code is a local coding-agent CLI tool written in safe Rust. It is a clean-room implementation inspired by Claude Code, providing an interactive REPL, one-shot prompts, workspace-aware tools, local agent workflows, and plugin support. The project name and references throughout use "claw" / "Claw Code".
## Build & Run Commands
```bash
# Build release binary (produces target/release/claw)
cargo build --release -p claw-cli
# Run from source (interactive REPL)
cargo run --bin claw --
# Run one-shot prompt
cargo run --bin claw -- prompt "summarize this workspace"
# Install locally
cargo install --path crates/claw-cli --locked
# Run the HTTP server binary
cargo run --bin claw-server
```
## Verification Commands
```bash
# Format (append --check to verify without modifying files)
cargo fmt
# Lint (workspace-level clippy with deny warnings)
cargo clippy --workspace --all-targets -- -D warnings
# Run all tests
cargo test --workspace
# Run tests for a specific crate
cargo test -p api
cargo test -p runtime
# Run a single test by name
cargo test -p <crate> -- <test_name>
# Integration tests in crates/api/tests/ use mock TCP servers (no network needed)
# One smoke test is #[ignore] — run with: cargo test -p api -- --ignored
```
## Workspace Architecture
Cargo workspace with `resolver = "2"`. All crates live under `crates/`.
### Crate Dependency Graph
```
claw-cli ──→ api, runtime, tools, commands, plugins, compat-harness
server ──→ api, runtime, tools, plugins, commands
tools ──→ api, runtime, plugins
commands ──→ runtime, plugins
api ──→ runtime
runtime ──→ lsp, plugins
plugins ──→ (standalone, serde only)
lsp ──→ (standalone)
compat-harness ──→ commands, tools, runtime
```
### Core Crates
- **`claw-cli`** — User-facing binary (`claw`). REPL loop with markdown rendering (pulldown-cmark + syntect), argument parsing, OAuth flow. Entry point: `crates/claw-cli/src/main.rs`.
- **`runtime`** — Session management, conversation runtime, permissions, system prompt construction, context compaction, MCP stdio management, and hook execution. Key types: `Session` (versioned message history), `ConversationRuntime<C, T>` (generic over `ApiClient` + `ToolExecutor` traits), `PermissionMode`, `McpServerManager`, `HookRunner`.
- **`api`** — HTTP client for LLM providers with SSE streaming. `ClawApiClient` (Anthropic-compatible), `OpenAiCompatClient`, and `Provider` trait. `ProviderKind` enum distinguishes ClawApi, Xai, OpenAi. Request/response types: `MessageRequest`, `StreamEvent`, `ToolDefinition`.
- **`tools`** — Built-in tool definitions and dispatch. `GlobalToolRegistry` is a lazy-static singleton. Tools: Read, Write, Edit, Glob, Grep, Bash, LSP, Task*, Cron*, Worktree*. Each tool has a `ToolSpec` with JSON schema.
- **`commands`** — Slash command registry and handlers (`/help`, `/config`, `/compact`, `/resume`, `/plugins`, `/agents`, `/doctor`, etc.). `SlashCommandSpec` defines each command's name, aliases, description, and category.
- **`plugins`** — Plugin discovery and lifecycle. `PluginManager` loads builtin, bundled, and external (from `~/.claw/plugins/`) plugins. Plugins can provide additional tools via `PluginTool`.
- **`server`** — Axum-based HTTP server (`claw-server`). REST endpoints for session CRUD + SSE event streaming. `AppState` holds shared session store.
- **`lsp`** — Language Server Protocol types and process management for code intelligence features.
- **`compat-harness`** — Extracts command/tool/bootstrap-plan manifests from upstream TypeScript source files (for compatibility tracking). Uses `CLAUDE_CODE_UPSTREAM` env var to locate the upstream repo.
## Key Architectural Patterns
- **Trait-based abstraction**: `ApiClient`, `ToolExecutor`, `Provider` traits enable swappable implementations. `ConversationRuntime` is generic over client and executor.
- **Static registries**: `GlobalToolRegistry` and slash command specs use lazy-static initialization with compile-time definitions.
- **SSE streaming**: API responses stream through `MessageStream` (async iterator) to the terminal renderer or server SSE endpoints.
- **Permission model**: `PermissionMode` enum — ReadOnly, WorkspaceWrite, DangerFullAccess. Configurable via `.claw.json` (`permissions.defaultMode`).
- **Hook system**: Pre/post tool execution hooks via `HookRunner` in the runtime crate.
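A minimal sketch of that generic pattern (hypothetical simplified traits; the workspace's real `ApiClient`/`ToolExecutor` are async and take structured requests):

```rust
// Hypothetical, simplified illustration of the generic-runtime pattern;
// the real ApiClient / ToolExecutor traits in this workspace are async
// and carry far more context.
trait ApiClient {
    fn complete(&self, prompt: &str) -> String;
}

trait ToolExecutor {
    fn run(&self, tool: &str) -> String;
}

// Generic over both traits, so tests can inject mocks (no HTTP, no shell).
struct ConversationRuntime<C: ApiClient, T: ToolExecutor> {
    client: C,
    tools: T,
}

impl<C: ApiClient, T: ToolExecutor> ConversationRuntime<C, T> {
    fn turn(&self, prompt: &str) -> String {
        let reply = self.client.complete(prompt);
        // If the model "requested" a tool, dispatch it to the executor.
        match reply.strip_prefix("use:") {
            Some(tool) => self.tools.run(tool),
            None => reply,
        }
    }
}

struct MockClient;
impl ApiClient for MockClient {
    fn complete(&self, prompt: &str) -> String {
        prompt.to_string() // a real client would call the provider API
    }
}

struct NoopTools;
impl ToolExecutor for NoopTools {
    fn run(&self, tool: &str) -> String {
        format!("ran {tool}")
    }
}

fn main() {
    let runtime = ConversationRuntime { client: MockClient, tools: NoopTools };
    assert_eq!(runtime.turn("hello"), "hello");
    assert_eq!(runtime.turn("use:grep"), "ran grep");
}
```

Because the runtime only sees the traits, swapping a mock for the real streaming client is a type-parameter change, not a code change.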
## Configuration & Environment
- `.claw.json` — Project-level config (permissions, etc.)
- `.claw/` — Project directory for hooks, plugins, local settings
- `~/.claw/` — User-level config directory
- `.env` — API keys (gitignored): `ANTHROPIC_API_KEY`, `ANTHROPIC_BASE_URL`, `XAI_API_KEY`, `XAI_BASE_URL`
- `CLAW.md` — Workspace instructions loaded into the system prompt (analogous to CLAUDE.md but for claw itself)
## Lint Rules
- `unsafe_code` is **forbidden** at workspace level
- Clippy `all` + `pedantic` lints are warnings; some pedantic lints are allowed (`module_name_repetitions`, `missing_panics_doc`, `missing_errors_doc`)
- CI runs: `cargo check --workspace`, `cargo test --workspace`, `cargo build --release` on Ubuntu and macOS
## Language
Code comments, commit messages, and documentation are primarily in Chinese (中文). UI strings and exported symbol names are in English.

CLAW.md — new file (15 lines)

@ -0,0 +1,15 @@
# CLAW.md
This file provides guidance to Claw Code (clawcode.dev) when working with code in this repository.
## Detected stack
- Languages: Rust.
- Frameworks: none detected from the supported starter markers.
## Verification
- Run Rust verification from the repo root: `cargo fmt`, `cargo clippy --workspace --all-targets -- -D warnings`, `cargo test --workspace`
## Working agreement
- Prefer small, reviewable changes and keep generated bootstrap files aligned with actual repo workflows.
- Keep shared defaults in `.claw.json`; reserve `.claw/settings.local.json` for machine-local overrides.
- Do not overwrite existing `CLAW.md` content automatically; update it intentionally when repo workflows change.

Cargo.lock — generated (135 lines changed)

@ -17,6 +17,15 @@ dependencies = [
"memchr",
]
[[package]]
name = "android_system_properties"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "819e7219dbd41043ac279b19830f2efc897156490d7fd6ea916720117ee66311"
dependencies = [
"libc",
]
[[package]]
name = "api"
version = "0.1.0"
@ -56,6 +65,12 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"
[[package]]
name = "autocfg"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c08606f8c3cbf4ce6ec8e28fb0014a2c086708fe954eaa885384a6165172e7e8"
[[package]]
name = "axum"
version = "0.8.8"
@ -178,6 +193,19 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"
[[package]]
name = "chrono"
version = "0.4.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c673075a2e0e5f4a1dde27ce9dee1ea4558c7ffe648f576438a20ca1d2acc4b0"
dependencies = [
"iana-time-zone",
"js-sys",
"num-traits",
"wasm-bindgen",
"windows-link",
]
[[package]]
name = "claw-cli"
version = "0.1.0"
@ -186,6 +214,7 @@ dependencies = [
"commands",
"compat-harness",
"crossterm",
"dotenvy",
"plugins",
"pulldown-cmark",
"runtime",
@ -223,6 +252,12 @@ dependencies = [
"tools",
]
[[package]]
name = "core-foundation-sys"
version = "0.8.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
[[package]]
name = "cpufeatures"
version = "0.2.17"
@ -306,6 +341,12 @@ dependencies = [
"syn",
]
[[package]]
name = "dotenvy"
version = "0.15.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1aaf95b3e5c8f23aa320147307562d361db0ae0d51242340f558153b4eb2439b"
[[package]]
name = "endian-type"
version = "0.1.2"
@ -619,6 +660,30 @@ dependencies = [
"tracing",
]
[[package]]
name = "iana-time-zone"
version = "0.1.65"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e31bc9ad994ba00e440a8aa5c9ef0ec67d5cb5e5cb0cc7f8b744a35b389cc470"
dependencies = [
"android_system_properties",
"core-foundation-sys",
"iana-time-zone-haiku",
"js-sys",
"log",
"wasm-bindgen",
"windows-core",
]
[[package]]
name = "iana-time-zone-haiku"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f31827a206f56af32e590ba56d5d2d085f558508192593743f16b2306495269f"
dependencies = [
"cc",
]
[[package]]
name = "icu_collections"
version = "2.1.1"
@ -907,6 +972,15 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c6673768db2d862beb9b39a78fdcb1a69439615d5794a1be50caa9bc92c81967"
[[package]]
name = "num-traits"
version = "0.2.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "071dfc062690e90b734c0b2273ce72ad0ffa95f0c74596bc250dcfd960262841"
dependencies = [
"autocfg",
]
[[package]]
name = "once_cell"
version = "1.21.4"
@ -1490,13 +1564,21 @@ dependencies = [
name = "server"
version = "0.1.0"
dependencies = [
"api",
"async-stream",
"axum",
"chrono",
"commands",
"dotenvy",
"plugins",
"reqwest",
"runtime",
"serde",
"serde_json",
"tokio",
"tools",
"tower",
"tower-http",
]
[[package]]
@ -2078,12 +2160,65 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
[[package]]
name = "windows-core"
version = "0.62.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8e83a14d34d0623b51dce9581199302a221863196a1dde71a7663a4c2be9deb"
dependencies = [
"windows-implement",
"windows-interface",
"windows-link",
"windows-result",
"windows-strings",
]
[[package]]
name = "windows-implement"
version = "0.60.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "053e2e040ab57b9dc951b72c264860db7eb3b0200ba345b4e4c3b14f67855ddf"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-interface"
version = "0.59.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f316c4a2570ba26bbec722032c4099d8c8bc095efccdc15688708623367e358"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "windows-link"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0805222e57f7521d6a62e36fa9163bc891acd422f971defe97d64e70d0a4fe5"
[[package]]
name = "windows-result"
version = "0.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7781fa89eaf60850ac3d2da7af8e5242a5ea78d1a11c49bf2910bb5a73853eb5"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-strings"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7837d08f69c77cf6b07689544538e017c1bfcf57e34b4c0ff58e6c2cd3b37091"
dependencies = [
"windows-link",
]
[[package]]
name = "windows-sys"
version = "0.52.0"


@ -19,12 +19,11 @@ The Rust workspace is the current primary product surface. The `claw` binary, in a single
- Cargo
- Provider credentials for the model you want to use
### Authentication
Anthropic-compatible models:
You can configure API keys through environment variables or by creating a **`.env`** file in the project root:
**Configure Claude (recommended):**
```bash
export ANTHROPIC_API_KEY="..."
ANTHROPIC_API_KEY="..."
# Optional when using a compatible endpoint
export ANTHROPIC_BASE_URL="https://api.anthropic.com"
```

crates/api/README.md — new file (53 lines)

@ -0,0 +1,53 @@
# API module (api)
This module provides high-level abstractions and clients for talking to large language model (LLM) providers, primarily Anthropic's Claude and OpenAI-compatible services.
## Overview
The `api` module is responsible for:
- Standardizing communication with different AI providers.
- Handling streaming responses via Server-Sent Events (SSE).
- Managing authentication sources (API keys, OAuth tokens).
- Providing shared data structures for messages, tools, and usage tracking.
## Key Features
- **Provider Abstraction**: supports multiple AI backends, including:
  - `ClawApiClient`: the primary provider for Claude models.
  - `OpenAiCompatClient`: supports OpenAI-compatible APIs (e.g. local models, specialized providers).
- **Streaming Support**: a robust SSE parser implementation (`SseParser`) for handling real-time content generation.
- **Tool Integration**: strongly typed definitions for `ToolDefinition`, `ToolChoice`, and `ToolResultContentBlock`, enabling agentic workflows.
- **Auth Management**: utilities for resolving startup authentication sources and managing OAuth tokens.
- **Model Intelligence**: metadata and helpers for resolving model aliases and computing maximum token limits.
## Implementation
### Core modules
- **`client.rs`**: defines the `ProviderClient` trait and base client logic. It uses `reqwest` for HTTP requests and manages the lifecycle of message streams.
- **`types.rs`**: contains the API's core data models, such as `InputMessage`, `OutputContentBlock`, and `MessageRequest`/`MessageResponse`.
- **`sse.rs`**: implements a stateful SSE parser that handles fragmented chunks and emits typed `StreamEvent`s.
- **`providers/`**: provider-specific logic for different LLM endpoints, mapping their unique formats onto the shared types used by this module.
### Data flow
1. Build a `MessageRequest` with model details, messages, and tool definitions.
2. The `ApiClient` converts the request into a provider-specific HTTP request.
3. If streaming is enabled, the client returns a `MessageStream` that uses the `SseParser` to yield `StreamEvent`s.
4. The final response includes `Usage` information for tracking token consumption.
## Usage example
```rust
use api::{ApiClient, MessageRequest, InputMessage};
// Example initialization (simplified)
let client = ApiClient::new(auth_source);
let request = MessageRequest {
    model: "claude-3-5-sonnet-20241022".to_string(),
    messages: vec![InputMessage::user("Hello, world!")],
    ..Default::default()
};
let stream = client.create_message_stream(request).await?;
```
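Draining the stream might then look roughly like the following simplified, synchronous sketch (illustrative only: the real `MessageStream` is async and its `StreamEvent` variants are richer):

```rust
// Illustrative stand-in for draining a MessageStream. The real loop is
// async (`while let Some(event) = stream.next().await?`); variant names
// loosely mirror the StreamEvent types described above.
enum StreamEvent {
    ContentBlockDelta { text: String },
    MessageStop,
}

fn collect_text(events: Vec<StreamEvent>) -> String {
    let mut out = String::new();
    for event in events {
        match event {
            StreamEvent::ContentBlockDelta { text } => out.push_str(&text),
            StreamEvent::MessageStop => break,
        }
    }
    out
}

fn main() {
    let events = vec![
        StreamEvent::ContentBlockDelta { text: "Hel".into() },
        StreamEvent::ContentBlockDelta { text: "lo".into() },
        StreamEvent::MessageStop,
    ];
    assert_eq!(collect_text(events), "Hello");
}
```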


@ -634,7 +634,7 @@ struct ApiErrorEnvelope {
#[derive(Debug, Deserialize)]
struct ApiErrorBody {
#[serde(rename = "type")]
#[serde(alias = "code", rename = "type")]
error_type: String,
message: String,
}


@ -138,6 +138,69 @@ const MODEL_REGISTRY: &[(&str, ProviderMetadata)] = &[
default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
},
),
(
"glm-4-plus",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-4-0520",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-4",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-4-air",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-4-flash",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-5",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
(
"glm-5.1",
ProviderMetadata {
provider: ProviderKind::ClawApi,
auth_env: "ANTHROPIC_API_KEY",
base_url_env: "ANTHROPIC_BASE_URL",
default_base_url: claw_provider::DEFAULT_BASE_URL,
},
),
];
#[must_use]


@ -251,7 +251,7 @@ impl MessageStream {
}
if self.done {
self.pending.extend(self.state.finish()?);
self.pending.extend(self.state.finish());
if let Some(event) = self.pending.pop_front() {
return Ok(Some(event));
}
@ -261,7 +261,7 @@ impl MessageStream {
match self.response.chunk().await? {
Some(chunk) => {
for parsed in self.parser.push(&chunk)? {
self.pending.extend(self.state.ingest_chunk(parsed)?);
self.pending.extend(self.state.ingest_chunk(parsed));
}
}
None => {
@ -297,6 +297,7 @@ impl OpenAiSseParser {
}
#[derive(Debug)]
#[allow(clippy::struct_excessive_bools)]
struct StreamState {
model: String,
message_started: bool,
@ -322,7 +323,7 @@ impl StreamState {
}
}
fn ingest_chunk(&mut self, chunk: ChatCompletionChunk) -> Result<Vec<StreamEvent>, ApiError> {
fn ingest_chunk(&mut self, chunk: ChatCompletionChunk) -> Vec<StreamEvent> {
let mut events = Vec::new();
if !self.message_started {
self.message_started = true;
@ -377,7 +378,7 @@ impl StreamState {
state.apply(tool_call);
let block_index = state.block_index();
if !state.started {
if let Some(start_event) = state.start_event()? {
if let Some(start_event) = state.start_event() {
state.started = true;
events.push(StreamEvent::ContentBlockStart(start_event));
} else {
@ -410,12 +411,12 @@ impl StreamState {
}
}
Ok(events)
events
}
fn finish(&mut self) -> Result<Vec<StreamEvent>, ApiError> {
fn finish(&mut self) -> Vec<StreamEvent> {
if self.finished {
return Ok(Vec::new());
return Vec::new();
}
self.finished = true;
@ -429,7 +430,7 @@ impl StreamState {
for state in self.tool_calls.values_mut() {
if !state.started {
if let Some(start_event) = state.start_event()? {
if let Some(start_event) = state.start_event() {
state.started = true;
events.push(StreamEvent::ContentBlockStart(start_event));
if let Some(delta_event) = state.delta_event() {
@ -464,7 +465,7 @@ impl StreamState {
}));
events.push(StreamEvent::MessageStop(MessageStopEvent {}));
}
Ok(events)
events
}
}
@ -497,22 +498,20 @@ impl ToolCallState {
self.openai_index + 1
}
fn start_event(&self) -> Result<Option<ContentBlockStartEvent>, ApiError> {
let Some(name) = self.name.clone() else {
return Ok(None);
};
fn start_event(&self) -> Option<ContentBlockStartEvent> {
let name = self.name.clone()?;
let id = self
.id
.clone()
.unwrap_or_else(|| format!("tool_call_{}", self.openai_index));
Ok(Some(ContentBlockStartEvent {
Some(ContentBlockStartEvent {
index: self.block_index(),
content_block: OutputContentBlock::ToolUse {
id,
name,
input: json!({}),
},
}))
})
}
fn delta_event(&mut self) -> Option<ContentBlockDeltaEvent> {
@ -678,6 +677,14 @@ fn translate_message(message: &InputMessage) -> Vec<Value> {
}
})),
InputContentBlock::ToolResult { .. } => {}
InputContentBlock::Thinking { thinking, .. } => {
text.push_str("<thinking>\n");
text.push_str(thinking);
text.push_str("\n</thinking>\n");
}
InputContentBlock::RedactedThinking { .. } => {
text.push_str("<thinking>\n<redacted>\n</thinking>\n");
}
}
}
if text.is_empty() && tool_calls.is_empty() {
@ -708,7 +715,9 @@ fn translate_message(message: &InputMessage) -> Vec<Value> {
"content": flatten_tool_result_content(content),
"is_error": is_error,
})),
InputContentBlock::ToolUse { .. } => None,
InputContentBlock::ToolUse { .. }
| InputContentBlock::Thinking { .. }
| InputContentBlock::RedactedThinking { .. } => None,
})
.collect(),
}
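The diff above hands events out through a `pending` queue: each incoming chunk may translate into several Anthropic-style events, which are buffered and returned one per poll. A sketch of that pattern with illustrative types (not the crate's actual `MessageStream`):

```rust
use std::collections::VecDeque;

// Simplified buffering pattern: pre-translated events per chunk are queued
// in `pending` and drained one at a time, as the real adapter does.
struct PendingStream {
    chunks: Vec<Vec<&'static str>>, // stand-in for incoming parsed chunks
    pending: VecDeque<&'static str>,
    done: bool,
}

impl PendingStream {
    fn next_event(&mut self) -> Option<&'static str> {
        loop {
            if let Some(event) = self.pending.pop_front() {
                return Some(event);
            }
            if self.done {
                return None;
            }
            match self.chunks.pop() {
                Some(events) => self.pending.extend(events),
                None => self.done = true, // a finish()-style flush would go here
            }
        }
    }
}

fn main() {
    let mut stream = PendingStream {
        chunks: vec![vec!["message_start", "content_block_start"]],
        pending: VecDeque::new(),
        done: false,
    };
    assert_eq!(stream.next_event(), Some("message_start"));
    assert_eq!(stream.next_event(), Some("content_block_start"));
    assert_eq!(stream.next_event(), None);
}
```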


@ -1,5 +1,7 @@
use crate::error::ApiError;
use crate::types::StreamEvent;
use serde_json::Value;
use reqwest::StatusCode;
#[derive(Debug, Default)]
pub struct SseParser {
@ -95,9 +97,75 @@ pub fn parse_frame(frame: &str) -> Result<Option<StreamEvent>, ApiError> {
return Ok(None);
}
serde_json::from_str::<StreamEvent>(&payload)
.map(Some)
.map_err(ApiError::from)
if matches!(event_name, Some("error")) {
return Err(parse_error_event(&payload));
}
// Some "Anthropic-compatible" gateways put the event type in the SSE `event:` field,
// and omit the `{ "type": ... }` discriminator from the JSON `data:` payload.
// Our Rust enums are tagged with `#[serde(tag = "type")]`, so we synthesize it here.
match serde_json::from_str::<StreamEvent>(&payload) {
Ok(event) => Ok(Some(event)),
Err(error) => {
// Best-effort: if we have an SSE event name and the payload is a JSON object
// without a `type` field, inject it and retry.
let Some(event_name) = event_name else {
return Err(ApiError::from(error));
};
let Ok(Value::Object(mut object)) = serde_json::from_str::<Value>(&payload) else {
return Err(ApiError::from(error));
};
if object
.get("type")
.and_then(Value::as_str)
.is_some_and(|value| value == "error")
{
return Err(parse_error_object(&object, payload));
}
if object.contains_key("type") {
return Err(ApiError::from(error));
}
object.insert("type".to_string(), Value::String(event_name.to_string()));
serde_json::from_value::<StreamEvent>(Value::Object(object))
.map(Some)
.map_err(ApiError::from)
}
}
}
fn parse_error_event(payload: &str) -> ApiError {
match serde_json::from_str::<Value>(payload) {
Ok(Value::Object(object)) => parse_error_object(&object, payload.to_string()),
_ => ApiError::Api {
status: StatusCode::BAD_GATEWAY,
error_type: Some("stream_error".to_string()),
message: Some(payload.to_string()),
body: payload.to_string(),
retryable: false,
},
}
}
fn parse_error_object(object: &serde_json::Map<String, Value>, body: String) -> ApiError {
let nested = object.get("error").and_then(Value::as_object);
let error_type = nested
.and_then(|error| error.get("type"))
.or_else(|| object.get("type"))
.and_then(Value::as_str)
.map(ToOwned::to_owned);
let message = nested
.and_then(|error| error.get("message"))
.or_else(|| object.get("message"))
.and_then(Value::as_str)
.map(ToOwned::to_owned);
ApiError::Api {
status: StatusCode::BAD_GATEWAY,
error_type,
message,
body,
retryable: false,
}
}
#[cfg(test)]
@ -195,6 +263,26 @@ mod tests {
assert_eq!(event, None);
}
#[test]
fn parses_event_name_when_payload_omits_type() {
let frame = concat!("event: message_stop\n", "data: {}\n\n");
let event = parse_frame(frame).expect("frame should parse");
assert_eq!(event, Some(StreamEvent::MessageStop(crate::types::MessageStopEvent {})));
}
#[test]
fn surfaces_stream_error_events() {
let frame = concat!(
"event: error\n",
"data: {\"error\":{\"type\":\"invalid_request_error\",\"message\":\"bad input\"}}\n\n"
);
let error = parse_frame(frame).expect_err("error frame should surface");
assert_eq!(
error.to_string(),
"api returned 502 Bad Gateway (invalid_request_error): bad input"
);
}
#[test]
fn parses_split_json_across_data_lines() {
let frame = concat!(


@ -75,6 +75,14 @@ pub enum InputContentBlock {
#[serde(default, skip_serializing_if = "std::ops::Not::not")]
is_error: bool,
},
Thinking {
thinking: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
signature: Option<String>,
},
RedactedThinking {
data: Value,
},
}
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]


@ -22,6 +22,7 @@ serde_json.workspace = true
syntect = "5"
tokio = { version = "1", features = ["rt-multi-thread", "time"] }
tools = { path = "../tools" }
dotenvy = "0.15"
[lints]
workspace = true

crates/claw-cli/README.md — new file (58 lines)

@ -0,0 +1,58 @@
# Claw CLI module (claw-cli)
This module implements the main command-line interface (CLI) for the Claw application. It provides an interactive REPL environment as well as non-interactive command execution.
## Overview
The `claw-cli` module is the "glue" of the project, orchestrating interactions between the `runtime`, `api`, `tools`, and `plugins` modules. It captures user input, manages application state, and renders AI responses in a user-friendly format.
## Key Features
- **Interactive REPL**: a full-featured Read-Eval-Print Loop for conversing with the AI, supporting:
  - Multi-line input and command history via `rustyline`.
  - Real-time streamed Markdown display with code syntax highlighting.
  - Animated spinners and progress indicators for long-running tool calls.
- **Subcommands**:
  - `prompt`: run a single prompt and exit (one-shot mode).
  - `login`/`logout`: handle OAuth authentication with the Claw platform.
  - `init`: initialize a new project/repository for use with Claw.
  - `resume`: restore and continue a previous conversation session.
  - `agents`/`skills`: manage and discover available agents and skills.
- **Permission management**: fine-grained control over tool execution permissions:
  - `read-only`: safe analysis mode.
  - `workspace-write`: allow modifications within the current workspace.
  - `danger-full-access`: unrestricted access for advanced tasks.
- **OAuth flow**: integrates a local HTTP server to seamlessly handle OAuth callback redirects on the user's machine.
## Implementation
### Core modules
- **`main.rs`**: the main entry point. Parses command-line arguments, initializes the environment, and dispatches to the appropriate action (REPL or subcommand).
- **`render/`**: contains the `TerminalRenderer` and Markdown stream-rendering logic. Uses `syntect` for syntax highlighting and `crossterm` for terminal manipulation.
- **`input/`**: handles user input capture, including special handling for multi-line prompts and slash commands.
- **`init.rs`**: handles project-level initialization and repository setup.
### Interaction loop
1. The CLI initializes a `ConversationRuntime` and loads project context.
2. It enters a loop that captures user input via `rustyline`.
3. Input is checked for "slash commands" (such as `/compact`, `/model`).
4. Regular prompts are sent to the AI through `runtime`.
5. AI events (text deltas, tool calls) are rendered incrementally to the terminal.
6. Sessions are saved periodically so they can be resumed later.
## Usage
The main binary is named `claw`:
```bash
# Start the interactive REPL
claw
# Run a one-shot prompt
claw prompt "Explain the architecture of this project"
# Log in to the service
claw login
```


@ -59,6 +59,7 @@ const INTERNAL_PROGRESS_HEARTBEAT_INTERVAL: Duration = Duration::from_secs(3);
type AllowedToolSet = BTreeSet<String>;
fn main() {
dotenvy::dotenv().ok();
if let Err(error) = run() {
eprintln!("{}", render_cli_error(&error.to_string()));
std::process::exit(1);
@ -171,7 +172,7 @@ impl CliOutputFormat {
#[allow(clippy::too_many_lines)]
fn parse_args(args: &[String]) -> Result<CliAction, String> {
let mut model = DEFAULT_MODEL.to_string();
let mut model = env::var("CLAW_MODEL").unwrap_or_else(|_| DEFAULT_MODEL.to_string());
let mut output_format = CliOutputFormat::Text;
let mut permission_mode = default_permission_mode();
let mut wants_version = false;
@ -2504,6 +2505,12 @@ fn render_export_text(session: &Session) -> String {
for block in &message.blocks {
match block {
ContentBlock::Text { text } => lines.push(text.clone()),
ContentBlock::Thinking { thinking, .. } => {
lines.push(format!("[thinking] {thinking}"));
}
ContentBlock::RedactedThinking { .. } => {
lines.push("[thinking] <redacted>".to_string());
}
ContentBlock::ToolUse { id, name, input } => {
lines.push(format!("[tool_use id={id} name={name}] {input}"));
}
@ -3158,8 +3165,18 @@ impl ApiClient for DefaultRuntimeClient {
input.push_str(&partial_json);
}
}
ContentBlockDelta::ThinkingDelta { .. }
| ContentBlockDelta::SignatureDelta { .. } => {}
ContentBlockDelta::ThinkingDelta { thinking } => {
if !thinking.is_empty() {
if self.emit_output {
// Use a dimmed style for thinking blocks
write!(out, "\x1b[2m{thinking}\x1b[0m")
.and_then(|()| out.flush())
.map_err(|error| RuntimeError::new(error.to_string()))?;
}
events.push(AssistantEvent::ThinkingDelta(thinking));
}
}
ContentBlockDelta::SignatureDelta { .. } => {}
},
ApiStreamEvent::ContentBlockStop(_) => {
if let Some(rendered) = markdown_stream.flush(&renderer) {
@ -3237,7 +3254,10 @@ fn final_assistant_text(summary: &runtime::TurnSummary) -> String {
.iter()
.filter_map(|block| match block {
ContentBlock::Text { text } => Some(text.as_str()),
_ => None,
ContentBlock::Thinking { thinking, .. } => Some(thinking.as_str()),
ContentBlock::RedactedThinking { .. }
| ContentBlock::ToolUse { .. }
| ContentBlock::ToolResult { .. } => None,
})
.collect::<Vec<_>>()
.join("")
@ -3256,7 +3276,10 @@ fn collect_tool_uses(summary: &runtime::TurnSummary) -> Vec<serde_json::Value> {
"name": name,
"input": input,
})),
_ => None,
ContentBlock::Thinking { .. }
| ContentBlock::RedactedThinking { .. }
| ContentBlock::Text { .. }
| ContentBlock::ToolResult { .. } => None,
})
.collect()
}
@ -3278,7 +3301,10 @@ fn collect_tool_results(summary: &runtime::TurnSummary) -> Vec<serde_json::Value
"output": output,
"is_error": is_error,
})),
_ => None,
ContentBlock::Thinking { .. }
| ContentBlock::RedactedThinking { .. }
| ContentBlock::Text { .. }
| ContentBlock::ToolUse { .. } => None,
})
.collect()
}
@ -3819,7 +3845,16 @@ fn push_output_block(
};
*pending_tool = Some((id, name, initial_input));
}
OutputContentBlock::Thinking { .. } | OutputContentBlock::RedactedThinking { .. } => {}
OutputContentBlock::Thinking { thinking, .. } => {
if !thinking.is_empty() {
// Dimmed style for thinking
write!(out, "\x1b[2m{thinking}\x1b[0m")
.and_then(|()| out.flush())
.map_err(|error| RuntimeError::new(error.to_string()))?;
events.push(AssistantEvent::ThinkingDelta(thinking));
}
}
OutputContentBlock::RedactedThinking { .. } => {}
}
Ok(())
}
@ -3928,6 +3963,16 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
.iter()
.map(|block| match block {
ContentBlock::Text { text } => InputContentBlock::Text { text: text.clone() },
ContentBlock::Thinking {
thinking,
signature,
} => InputContentBlock::Thinking {
thinking: thinking.clone(),
signature: signature.clone(),
},
ContentBlock::RedactedThinking { data } => InputContentBlock::RedactedThinking {
data: serde_json::from_str(&data.render()).unwrap_or(serde_json::Value::Null),
},
ContentBlock::ToolUse { id, name, input } => InputContentBlock::ToolUse {
id: id.clone(),
name: name.clone(),

crates/commands/README.md — new file (55 lines)

@ -0,0 +1,55 @@
# 命令模块 (commands)
本模块负责定义和管理 Claw 交互界面中使用的“斜杠命令”(Slash Commands),并提供相关的解析和执行逻辑。
## 概览
`commands` 模块的主要职责包括:
- 定义所有可用的斜杠命令及其元数据(别名、说明、类别等)。
- 提供命令注册表 (`CommandRegistry`),用于在 CLI 中发现和分发命令。
- 实现复杂的管理命令,如插件管理 (`/plugins`)、智能体查看 (`/agents`) 和技能查看 (`/skills`)。
- 提供命令建议功能,支持基于编辑距离 (Levenshtein distance) 的模糊匹配。
## 关键特性
- **斜杠命令规范 (SlashCommandSpec)**每个命令都包含详尽的元数据包括所属类别核心、工作区、会话、Git、自动化以及是否支持在恢复会话时执行。
- **命令分类**
- **核心 (Core)**`/help`, `/status`, `/model`, `/permissions`, `/cost` 等。
- **工作区 (Workspace)**`/config`, `/memory`, `/diff`, `/teleport` 等。
- **会话 (Session)**`/clear`, `/resume`, `/export`, `/session` 等。
- **Git 交互**`/branch`, `/commit`, `/pr`, `/issue` 等。
- **自动化 (Automation)**`/plugins`, `/agents`, `/skills`, `/ultraplan` 等。
- **模糊匹配与建议**:当用户输入错误的命令时,系统会自动推荐最接近的合法命令。
- **插件集成**`/plugins` 命令允许用户动态安装、启用、禁用或卸载插件,并能通知运行时重新加载环境。
## 实现逻辑
### 核心模块
- **`lib.rs`**: 包含了绝大部分逻辑。
- **`SlashCommand` 枚举**: 定义了所有命令的强类型表示。
- **`SlashCommandSpec` 结构体**: 存储命令的静态配置信息。
- **`handle_plugins_slash_command`**: 处理复杂的插件管理工作流。
- **`suggest_slash_commands`**: 实现基于 Levenshtein 距离的建议算法。
### 工作流程
1. 用户在 REPL 中输入以 `/` 开头的字符串。
2. `claw-cli` 调用 `SlashCommand::parse` 进行解析。
3. 解析后的命令被分发到相应的处理器。
4. 处理结果(通常包含要显示给用户的消息,以及可选的会话更新或运行时重新加载请求)返回给 CLI。
## 使用示例 (内部)
```rust
use commands::{SlashCommand, suggest_slash_commands};
// 解析命令
if let Some(cmd) = SlashCommand::parse("/model sonnet") {
// 处理模型切换逻辑
}
// 获取建议
let suggestions = suggest_slash_commands("hpel", 3);
// 返回 ["/help"]
```
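The ranking step can be sketched as follows (a hypothetical, simplified version of `suggest_slash_commands`; the real function also deduplicates entries and considers aliases):

```rust
// Classic two-row Levenshtein edit distance.
fn levenshtein(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut row = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = usize::from(ca != cb);
            row.push((prev[j] + cost).min(prev[j + 1] + 1).min(row[j] + 1));
        }
        prev = row;
    }
    prev[b.len()]
}

// Rank known commands by distance to the (slash-stripped) input.
fn suggest<'a>(input: &str, commands: &[&'a str], limit: usize) -> Vec<&'a str> {
    let mut ranked: Vec<(usize, &'a str)> = commands
        .iter()
        .map(|cmd| (levenshtein(input, cmd.trim_start_matches('/')), *cmd))
        .collect();
    ranked.sort(); // distance first, then lexicographic tie-break
    ranked.into_iter().take(limit).map(|(_, cmd)| cmd).collect()
}

fn main() {
    let commands = ["/help", "/status", "/model"];
    assert_eq!(suggest("hpel", &commands, 1), vec!["/help"]);
}
```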


@ -603,7 +603,7 @@ pub fn suggest_slash_commands(input: &str, limit: usize) -> Vec<String> {
})
.collect::<Vec<_>>();
ranked.sort_by(|left, right| left.cmp(right));
ranked.sort();
ranked.dedup_by(|left, right| left.2 == right.2);
ranked
.into_iter()
@ -842,7 +842,7 @@ pub fn handle_branch_slash_command(
Ok(if trimmed.is_empty() {
"Branch\n Result no branches found".to_string()
} else {
format!("Branch\n Result listed\n\n{}", trimmed)
format!("Branch\n Result listed\n\n{trimmed}")
})
}
Some("create") => {
@ -882,7 +882,7 @@ pub fn handle_worktree_slash_command(
Ok(if trimmed.is_empty() {
"Worktree\n Result no worktrees found".to_string()
} else {
format!("Worktree\n Result listed\n\n{}", trimmed)
format!("Worktree\n Result listed\n\n{trimmed}")
})
}
Some("add") => {


@ -0,0 +1,48 @@
# Compatibility harness module (compat-harness)
This module provides tooling for analyzing a reference upstream implementation (such as the original `claude-code` TypeScript source) and extracting metadata from it, to keep the Rust implementation functionally compatible.
## Overview
The main responsibilities of `compat-harness` are:
- Locating the upstream repository's source paths.
- Extracting definitions of commands, tools, and bootstrap phases from TypeScript source files.
- Generating a feature manifest (`ExtractedManifest`) for use by the runtime or tests to verify coverage in the Rust implementation.
## Key Features
- **Upstream path resolution (`UpstreamPaths`)**: automatically recognizes several common upstream repository layouts and can be overridden via the `CLAUDE_CODE_UPSTREAM` environment variable.
- **Static source analysis**: parses the TypeScript source to recognize specific code patterns (such as `export const INTERNAL_ONLY_COMMANDS` or `feature()`-gated flags).
- **Manifest extraction**:
  - **Commands**: identifies built-in commands, internal-only commands, and feature-gated commands.
  - **Tools**: identifies base tools and conditionally loaded tools.
  - **Bootstrap plan**: analyzes the CLI entry file to reconstruct the startup phases (e.g. `FastPathVersion`, `MainRuntime`).
## Implementation
### Core modules
- **`lib.rs`**: contains the core extraction logic.
  - **`UpstreamPaths` struct**: encapsulates the logic for finding `commands.ts`, `tools.ts`, and `cli.tsx`.
  - **`extract_commands` & `extract_tools`**: use string parsing to recognize TypeScript `import`s and assignments and extract symbol names.
  - **`extract_bootstrap_plan`**: searches for marker strings (such as `--version` and `daemon-worker`) to infer the upstream program's startup flow.
### Workflow
1. The module locates the upstream `claude-code` repository via preset paths or the environment variable.
2. It reads the key `.ts` and `.tsx` files.
3. It performs regex-style line parsing to extract all defined command and tool names.
4. It assembles the results into an `ExtractedManifest`.
## Usage example (internal tests)
```rust
use compat_harness::{UpstreamPaths, extract_manifest};
// Point at a workspace directory; upstream paths are discovered automatically
let paths = UpstreamPaths::from_workspace_dir("path/to/workspace");
// Extract the feature manifest
if let Ok(manifest) = extract_manifest(&paths) {
    println!("Found {} upstream tools", manifest.tools.entries().len());
}
```

crates/lsp/README.md (new file)

@ -0,0 +1,56 @@
# LSP module (lsp)
This module implements Language Server Protocol (LSP) client functionality, allowing the system to obtain semantic information, diagnostics, and symbol navigation from integrated language servers.
## Overview
The main responsibilities of the `lsp` module are:
- Managing the lifecycle of multiple LSP servers (start, initialize, shutdown).
- Asynchronous JSON-RPC communication with the servers.
- Cross-language code intelligence, such as:
  - **Go to Definition**
  - **Find References**
  - **Workspace Diagnostics**
- Context enrichment for AI prompts, feeding live errors and symbol relationships from the code back to the LLM.
## Key features
- **LspManager**: the central manager, coordinating per-language server configuration and document state.
- **Context enrichment**: the `LspContextEnrichment` struct converts complex LSP responses (such as diagnostics and definitions) into Markdown that is easy for the AI to consume.
- **Multi-server support**: routes requests to different language servers (such as `rust-analyzer`, `pyright`) based on file extension.
- **Synchronization**: handles `didOpen`, `didChange`, and `didSave` messages so servers always have the latest view of the code.
## Implementation
### Core modules
- **`manager.rs`**: implements `LspManager`. It maintains a server pool and exposes high-level APIs for cross-server requests.
- **`client.rs`**: implements the low-level LSP client logic, handling `tokio`-based async I/O plus JSON-RPC message framing and parsing.
- **`types.rs`**: defines this module's internal data types, simplifying and wrapping types from the `lsp-types` crate for internal use.
- **`error.rs`**: defines LSP-related error handling.
### Workflow
1. The system initializes `LspManager` from configuration.
2. When a file is opened, `LspManager` starts the corresponding server and sends an `initialize` request.
3. `LspManager` tracks open documents and synchronizes content changes to the server.
4. When a symbol needs analysis, methods such as `go_to_definition` send the request and parse the returned `Location`.
5. Diagnostics arrive asynchronously via `textDocument/publishDiagnostics` notifications; the module caches them for later queries.
## Usage example (internal)
```rust
use lsp::{LspManager, LspServerConfig};
// Configure and initialize the manager
let configs = vec![LspServerConfig {
    name: "rust-analyzer".to_string(),
    command: "rust-analyzer".to_string(),
    ..Default::default()
}];
let manager = LspManager::new(configs)?;
// Get context-enrichment information for a position
let enrichment = manager.context_enrichment(&file_path, position).await?;
println!("{}", enrichment.render_prompt_section());
```


@ -15,11 +15,13 @@ use tokio::sync::{oneshot, Mutex};
use crate::error::LspError;
use crate::types::{LspServerConfig, SymbolLocation};
type PendingRequestMap = BTreeMap<i64, oneshot::Sender<Result<Value, LspError>>>;
pub(crate) struct LspClient {
config: LspServerConfig,
writer: Mutex<BufWriter<ChildStdin>>,
child: Mutex<Child>,
pending_requests: Arc<Mutex<BTreeMap<i64, oneshot::Sender<Result<Value, LspError>>>>>,
pending_requests: Arc<Mutex<PendingRequestMap>>,
diagnostics: Arc<Mutex<BTreeMap<String, Vec<Diagnostic>>>>,
open_documents: Mutex<BTreeMap<PathBuf, i32>>,
next_request_id: AtomicI64,
@ -59,7 +61,7 @@ impl LspClient {
client.spawn_reader(stdout);
if let Some(stderr) = stderr {
client.spawn_stderr_drain(stderr);
Self::spawn_stderr_drain(stderr);
}
client.initialize().await?;
Ok(client)
@ -282,8 +284,8 @@ impl LspClient {
if let Err(error) = result {
let mut pending = pending_requests.lock().await;
let drained = pending
.iter()
.map(|(id, _)| *id)
.keys()
.copied()
.collect::<Vec<_>>();
for id in drained {
if let Some(sender) = pending.remove(&id) {
@ -294,7 +296,7 @@ impl LspClient {
});
}
fn spawn_stderr_drain<R>(&self, stderr: R)
fn spawn_stderr_drain<R>(stderr: R)
where
R: AsyncRead + Unpin + Send + 'static,
{

crates/plugins/README.md (new file)

@ -0,0 +1,59 @@
# Plugins module (plugins)
This module implements Claw's plugin system, which extends the AI's capabilities through external extensions: custom tools, commands, and hooks that run before or after tool execution.
## Overview
The main responsibilities of the `plugins` module are:
- Defining the plugin manifest format (`plugin.json`) and metadata structures.
- Managing the full plugin lifecycle: install, load, initialize, enable/disable, update, and uninstall.
- Abstracting over plugin types:
  - **Builtin**: plugins compiled into the program.
  - **Bundled**: plugins shipped with the application as separate files.
  - **External**: plugins installed by the user or downloaded from a remote registry.
- Implementing plugin isolation and execution, including plugin-defined custom tools.
## Key features
- **Plugin manifest (PluginManifest)**: every plugin must include a `plugin.json` describing its name, version, required permissions, hooks, lifecycle scripts, and the tools it exposes.
- **Custom tools (PluginTool)**: plugins can define entirely new tools for the AI to call. These tools run as separate external processes.
- **Hook system (Hooks)**: supports `PreToolUse` and `PostToolUse` hooks, letting a plugin run custom logic before or after the AI invokes any tool.
- **Lifecycle management**: `Init` and `Shutdown` phases let plugins prepare their environment at load time and clean up at unload or shutdown.
- **Permission model**: plugins must declare their permissions (`read`, `write`, `execute`) and assign a security level to each tool they define (`read-only`, `workspace-write`, `danger-full-access`).
## Implementation
### Core modules
- **`lib.rs`**: contains the plugin definition structs (Manifest, Metadata, Tool, Permission, etc.) and the plugin trait definitions.
- **`manager.rs`**: implements `PluginManager`, responsible for on-disk layout, registry maintenance, and install/update logic.
- **`hooks.rs`**: implements the hook executor (`HookRunner`), which triggers plugin-defined scripts at the right moments.
### Load and execution flow
1. `PluginManager` scans the configured directories (builtin, bundled, and external install locations).
2. It reads and validates each plugin's `plugin.json`.
3. If a plugin is enabled, it is initialized and its tools are registered in the system's global tool registry.
4. When the AI calls a plugin tool, the system spawns a child process using the command described in the manifest, passing arguments via stdin and environment variables.
5. Hook logic fires automatically around the tool-execution lifecycle.
## Usage example (sample plugin definition)
```json
{
  "name": "my-custom-plugin",
  "version": "1.0.0",
  "description": "A demo plugin",
  "permissions": ["read", "execute"],
  "tools": [
    {
      "name": "custom_search",
      "description": "Run a custom search",
      "inputSchema": { "type": "object", "properties": { "query": { "type": "string" } } },
      "command": "python3",
      "args": ["search_script.py"],
      "requiredPermission": "read-only"
    }
  ]
}
```


@ -1,4 +1,5 @@
use std::ffi::OsStr;
#[cfg(not(windows))]
use std::path::Path;
use std::process::Command;

crates/runtime/README.md (new file)

@ -0,0 +1,60 @@
# Runtime module (runtime)
This module is Claw's core engine, coordinating every interaction between the AI model, tool execution, session management, and permission control.
## Overview
The `runtime` module is the system's central nervous system. Its main responsibilities include:
- **Conversation driving**: managing the iterative user → assistant → tool loop.
- **Session persistence**: loading and saving sessions, and compacting history.
- **MCP client**: implementing the Model Context Protocol for talking to external MCP servers.
- **Sandbox and permissions**: policy-based permission checks for tool execution.
- **Context building**: dynamically generating the system prompt with workspace context.
- **Usage accounting**: precise tracking of token usage and token cache state.
## Key features
- **ConversationRuntime**: the core driver, supporting streaming response handling and multi-round tool-call iteration.
- **Permission engine (Permissions)**: multiple modes (`ReadOnly`, `WorkspaceWrite`, `DangerFullAccess`) plus interactive permission confirmation.
- **Compaction**: when the conversation history grows long enough to hurt performance or cost, older messages are automatically summarized to keep the context lean.
- **Hook integration (Hooks)**: plugin-defined hooks fire around tool execution and can intervene on tool input or process results.
- **Sandboxed execution (Sandbox)**: a restricted execution environment for sensitive tools such as Bash.
## Implementation
### Core submodules
- **`conversation.rs`**: defines the core `ConversationRuntime` and the `ApiClient`/`ToolExecutor` traits.
- **`mcp_stdio.rs` / `mcp_client.rs`**: a full implementation of the MCP specification, talking to external tool servers over stdio.
- **`session.rs`**: defines the message model (`ConversationMessage`), content blocks (`ContentBlock`), and session serialization.
- **`permissions.rs`**: implements permission review logic and the prompter interface.
- **`compact.rs`**: LLM-based session summarization and history-trimming algorithms.
- **`config.rs`**: loads and merges the layered configuration files.
### Conversation loop (run_turn)
1. Push the user input into the `Session`.
2. Call `ApiClient` to start a streaming request.
3. Listen for `AssistantEvent`s, parsing text content and tool-call requests.
4. **Permission review**: for each `ToolUse`, the `PermissionPolicy` decides whether to allow, deny, or ask the user.
5. **Tool execution**: if allowed, run the tool via the `ToolExecutor` (or the MCP client), running any `Pre/Post Hooks`.
6. Feed the tool results back to the AI and iterate until the AI produces a final reply.
## Usage example (internal)
```rust
use runtime::{ConversationRuntime, Session, PermissionPolicy, PermissionMode};
// Initialize the runtime
let mut runtime = ConversationRuntime::new(
    Session::new(),
    api_client,
    tool_executor,
    PermissionPolicy::new(PermissionMode::WorkspaceWrite),
    system_prompt,
);
// Run one turn of conversation
let summary = runtime.run_turn("Refactor src/lib.rs for me", Some(&mut cli_prompter))?;
println!("{} iterations, {} tokens used", summary.iterations, summary.usage.total_tokens());
```


@ -160,7 +160,9 @@ fn summarize_messages(messages: &[ConversationMessage]) -> String {
.filter_map(|block| match block {
ContentBlock::ToolUse { name, .. } => Some(name.as_str()),
ContentBlock::ToolResult { tool_name, .. } => Some(tool_name.as_str()),
ContentBlock::Text { .. } => None,
ContentBlock::Text { .. }
| ContentBlock::Thinking { .. }
| ContentBlock::RedactedThinking { .. } => None,
})
.collect::<Vec<_>>();
tool_names.sort_unstable();
@ -275,6 +277,8 @@ fn summarize_block(block: &ContentBlock) -> String {
"tool_result {tool_name}: {}{output}",
if *is_error { "error " } else { "" }
),
ContentBlock::Thinking { thinking, .. } => format!("thinking: {thinking}"),
ContentBlock::RedactedThinking { .. } => "thinking: <redacted>".to_string(),
};
truncate_summary(&raw, 160)
}
@ -324,6 +328,8 @@ fn collect_key_files(messages: &[ConversationMessage]) -> Vec<String> {
.flat_map(|message| message.blocks.iter())
.map(|block| match block {
ContentBlock::Text { text } => text.as_str(),
ContentBlock::Thinking { thinking, .. } => thinking.as_str(),
ContentBlock::RedactedThinking { .. } => "",
ContentBlock::ToolUse { input, .. } => input.as_str(),
ContentBlock::ToolResult { output, .. } => output.as_str(),
})
@ -348,7 +354,9 @@ fn first_text_block(message: &ConversationMessage) -> Option<&str> {
ContentBlock::Text { text } if !text.trim().is_empty() => Some(text.as_str()),
ContentBlock::ToolUse { .. }
| ContentBlock::ToolResult { .. }
| ContentBlock::Text { .. } => None,
| ContentBlock::Text { .. }
| ContentBlock::Thinking { .. }
| ContentBlock::RedactedThinking { .. } => None,
})
}
@ -394,6 +402,8 @@ fn estimate_message_tokens(message: &ConversationMessage) -> usize {
.iter()
.map(|block| match block {
ContentBlock::Text { text } => text.len() / 4 + 1,
ContentBlock::Thinking { thinking, .. } => thinking.len() / 4 + 1,
ContentBlock::RedactedThinking { .. } => 1,
ContentBlock::ToolUse { name, input, .. } => (name.len() + input.len()) / 4 + 1,
ContentBlock::ToolResult {
tool_name, output, ..

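The hunk above extends `estimate_message_tokens` with the new thinking variants, reusing the same bytes-divided-by-four heuristic. A self-contained sketch of that estimator (the `Block` enum here is a simplified stand-in for the crate's `ContentBlock`, not its actual definition):

```rust
// Standalone sketch of the byte-length token estimator used in compact.rs.
// `Block` is a simplified stand-in for the crate's ContentBlock.
enum Block {
    Text(String),
    Thinking(String),
    RedactedThinking,
    ToolUse { name: String, input: String },
}

fn estimate_tokens(blocks: &[Block]) -> usize {
    blocks
        .iter()
        .map(|block| match block {
            // Roughly four bytes per token, plus one so empty blocks still count.
            Block::Text(text) => text.len() / 4 + 1,
            Block::Thinking(thinking) => thinking.len() / 4 + 1,
            // Redacted thinking carries opaque data; charge a single token.
            Block::RedactedThinking => 1,
            Block::ToolUse { name, input } => (name.len() + input.len()) / 4 + 1,
        })
        .sum()
}

fn main() {
    let blocks = vec![
        Block::Text("hello world!".to_string()), // 12 bytes -> 4 tokens
        Block::RedactedThinking,                 // 1 token
    ];
    println!("estimated tokens: {}", estimate_tokens(&blocks)); // prints 5
}
```

The deliberately cheap heuristic avoids a tokenizer dependency; compaction only needs a rough size signal, not exact counts.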

@ -19,6 +19,7 @@ pub struct ApiRequest {
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AssistantEvent {
TextDelta(String),
ThinkingDelta(String),
ToolUse {
id: String,
name: String,
@ -122,6 +123,7 @@ where
)
}
#[allow(clippy::needless_pass_by_value)]
#[must_use]
pub fn new_with_features(
session: Session,
@ -299,6 +301,16 @@ fn build_assistant_message(
for event in events {
match event {
AssistantEvent::TextDelta(delta) => text.push_str(&delta),
AssistantEvent::ThinkingDelta(delta) => {
if let Some(ContentBlock::Thinking { thinking, .. }) = blocks.last_mut() {
thinking.push_str(&delta);
} else {
blocks.push(ContentBlock::Thinking {
thinking: delta,
signature: None,
});
}
}
AssistantEvent::ToolUse { id, name, input } => {
flush_text_block(&mut text, &mut blocks);
blocks.push(ContentBlock::ToolUse { id, name, input });

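The `ThinkingDelta` arm above appends to an open `Thinking` block or starts a new one. A self-contained sketch of that accumulation logic (simplified `Block`/`Event` types, not the crate's own; the real `build_assistant_message` also flushes pending text around tool-use events, while this sketch flushes once at the end):

```rust
// Sketch of streaming-delta accumulation: consecutive thinking deltas
// extend the trailing Thinking block instead of creating new blocks.
#[derive(Debug, PartialEq)]
enum Block {
    Thinking(String),
    Text(String),
}

enum Event {
    ThinkingDelta(String),
    TextDelta(String),
}

fn accumulate(events: Vec<Event>) -> Vec<Block> {
    let mut blocks = Vec::new();
    let mut text = String::new();
    for event in events {
        match event {
            Event::ThinkingDelta(delta) => {
                // Extend the trailing Thinking block if one is open.
                if let Some(Block::Thinking(thinking)) = blocks.last_mut() {
                    thinking.push_str(&delta);
                } else {
                    blocks.push(Block::Thinking(delta));
                }
            }
            Event::TextDelta(delta) => text.push_str(&delta),
        }
    }
    if !text.is_empty() {
        blocks.push(Block::Text(text));
    }
    blocks
}

fn main() {
    let blocks = accumulate(vec![
        Event::ThinkingDelta("step 1".into()),
        Event::ThinkingDelta(", step 2".into()),
        Event::TextDelta("done".into()),
    ]);
    // One merged Thinking block followed by one Text block.
    println!("{blocks:?}");
}
```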

@ -74,7 +74,7 @@ impl HookRunner {
#[must_use]
pub fn run_pre_tool_use(&self, tool_name: &str, tool_input: &str) -> HookRunResult {
self.run_commands(
Self::run_commands(
HookEvent::PreToolUse,
self.config.pre_tool_use(),
tool_name,
@ -92,7 +92,7 @@ impl HookRunner {
tool_output: &str,
is_error: bool,
) -> HookRunResult {
self.run_commands(
Self::run_commands(
HookEvent::PostToolUse,
self.config.post_tool_use(),
tool_name,
@ -103,7 +103,6 @@ impl HookRunner {
}
fn run_commands(
&self,
event: HookEvent,
commands: &[String],
tool_name: &str,
@ -238,7 +237,7 @@ fn format_hook_warning(command: &str, code: i32, stdout: Option<&str>, stderr: &
fn shell_command(command: &str) -> CommandWithStdin {
#[cfg(windows)]
let mut command_builder = {
let command_builder = {
let mut command_builder = Command::new("cmd");
command_builder.arg("/C").arg(command);
CommandWithStdin::new(command_builder)


@ -1,7 +1,8 @@
use std::collections::BTreeMap;
use std::fmt::{Display, Formatter};
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, PartialEq, Eq)]
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub enum JsonValue {
Null,
Bool(bool),


@ -892,9 +892,12 @@ mod tests {
]
.join("\n");
fs::write(&script_path, script).expect("write script");
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
#[cfg(unix)]
{
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
}
script_path
}
@ -1018,9 +1021,12 @@ mod tests {
]
.join("\n");
fs::write(&script_path, script).expect("write script");
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
#[cfg(unix)]
{
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
}
script_path
}
@ -1122,9 +1128,12 @@ mod tests {
]
.join("\n");
fs::write(&script_path, script).expect("write script");
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
#[cfg(unix)]
{
let mut permissions = fs::metadata(&script_path).expect("metadata").permissions();
permissions.set_mode(0o755);
fs::set_permissions(&script_path, permissions).expect("chmod");
}
script_path
}


@ -20,6 +20,14 @@ pub enum MessageRole {
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum ContentBlock {
Thinking {
thinking: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
signature: Option<String>,
},
RedactedThinking {
data: JsonValue,
},
Text {
text: String,
},
@ -261,6 +269,26 @@ impl ContentBlock {
object.insert("type".to_string(), JsonValue::String("text".to_string()));
object.insert("text".to_string(), JsonValue::String(text.clone()));
}
Self::Thinking { thinking, signature } => {
object.insert("type".to_string(), JsonValue::String("thinking".to_string()));
object.insert(
"thinking".to_string(),
JsonValue::String(thinking.clone()),
);
if let Some(signature) = signature {
object.insert(
"signature".to_string(),
JsonValue::String(signature.clone()),
);
}
}
Self::RedactedThinking { data } => {
object.insert(
"type".to_string(),
JsonValue::String("redacted_thinking".to_string()),
);
object.insert("data".to_string(), data.clone());
}
Self::ToolUse { id, name, input } => {
object.insert(
"type".to_string(),
@ -312,6 +340,13 @@ impl ContentBlock {
name: required_string(object, "name")?,
input: required_string(object, "input")?,
}),
"thinking" => Ok(Self::Thinking {
thinking: required_string(object, "thinking")?,
signature: object.get("signature").and_then(JsonValue::as_str).map(ToOwned::to_owned),
}),
"redacted_thinking" => Ok(Self::RedactedThinking {
data: object.get("data").cloned().ok_or_else(|| SessionError::Format("missing data".to_string()))?,
}),
"tool_result" => Ok(Self::ToolResult {
tool_use_id: required_string(object, "tool_use_id")?,
tool_name: required_string(object, "tool_name")?,

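On the wire, the new variants serialize with a snake_case `type` tag, and `signature` is omitted entirely when `None`. A hand-rolled illustration of that shape (the crate serializes through its own `JsonValue`; this sketch does no string escaping and is for illustration only):

```rust
// Hand-rolled illustration of the tagged snake_case wire shape for a
// thinking block ("signature" omitted when None). No JSON escaping here.
fn thinking_to_json(thinking: &str, signature: Option<&str>) -> String {
    match signature {
        Some(sig) => format!(
            r#"{{"type":"thinking","thinking":"{thinking}","signature":"{sig}"}}"#
        ),
        // Mirrors #[serde(skip_serializing_if = "Option::is_none")]:
        // the field disappears rather than serializing as null.
        None => format!(r#"{{"type":"thinking","thinking":"{thinking}"}}"#),
    }
}

fn main() {
    println!("{}", thinking_to_json("plan the refactor", None));
    println!("{}", thinking_to_json("plan the refactor", Some("sig-abc")));
}
```

Skipping the absent field (rather than emitting `"signature":null`) keeps the payload compatible with parsers that treat an explicit null and a missing key differently.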

@ -0,0 +1,64 @@
# Rusty Claude CLI module (rusty-claude-cli)
This module provides an alternative, fully featured implementation of the Claw command-line interface. It integrates conversation, tool execution, plugin extensions, and authentication.
## Overview
`rusty-claude-cli` is a full CLI application. Its main responsibilities include:
- **User interaction**: an interactive REPL plus non-interactive execution (the `prompt` subcommand).
- **Environment setup**: project initialization (`init`) and configuration loading.
- **Authentication**: an OAuth login flow handled via a local loopback server.
- **Rendering**: rich terminal UI, including Markdown rendering, syntax highlighting, and animated spinners.
- **Session management**: resuming sessions from saved files and executing appended slash commands.
## Relationship to `claw-cli`
Although both `rusty-claude-cli` and `claw-cli` produce a binary named `claw`, `rusty-claude-cli` carries more of the integration logic:
- It depends directly on almost every core crate (`runtime`, `api`, `tools`, `plugins`, `commands`).
- Its `main.rs` is large and contains most of the business orchestration.
- It serves as a standalone, highly integrated CLI reference implementation.
## Key features
- **Subcommands**:
  - `prompt`: run a single inference quickly.
  - `login`/`logout`: OAuth authentication management.
  - `init`: project bootstrap.
  - `bootstrap-plan`: show the system's startup phases.
  - `dump-manifests`: extract and display feature manifests from the upstream sources.
- **Enhanced REPL**:
  - Multi-line input and history.
  - An integrated slash-command engine.
  - Detailed usage statistics and permission-mode switch reports.
- **Flexible permission control**: the permission level can be adjusted via the `--permission-mode` flag or environment variables.
## Implementation
### Core submodules
- **`main.rs`**: the program entry point, with argument parsing and the REPL loop.
- **`render.rs`**: wraps `TerminalRenderer` and `Spinner` for all terminal output styling.
- **`input.rs`**: reads from stdin and parses commands.
- **`init.rs`**: repository initialization and `.claw.md` generation.
- **`app.rs`**: may hold app-level high-level state management (depending on the implementation).
### Workflow
1. The program starts and parses command-line arguments.
2. Based on the arguments, it either runs a one-shot task or enters REPL mode.
3. In REPL mode, it initializes a `ConversationRuntime`.
4. It then loops: read user input → handle slash commands or send to the AI → render the response → execute tools → repeat.
5. Session data is saved or restored as needed.
## Usage
```bash
# Start interactive mode
cargo run -p rusty-claude-cli --bin claw
# Run a one-shot prompt
cargo run -p rusty-claude-cli --bin claw prompt "check the code for memory leaks"
# Resume a saved session and run compaction
cargo run -p rusty-claude-cli --bin claw --resume session.json /compact
```


@ -35,7 +35,7 @@ use runtime::{
};
use serde_json::json;
use tools::{execute_tool, mvp_tool_specs, ToolSpec};
use plugins::{self, PluginManager, PluginManagerConfig};
use plugins::{self};
const DEFAULT_MODEL: &str = "claude-opus-4-6";
const DEFAULT_MAX_TOKENS: u32 = 32;
@ -1870,6 +1870,12 @@ fn render_export_text(session: &Session) -> String {
for block in &message.blocks {
match block {
ContentBlock::Text { text } => lines.push(text.clone()),
ContentBlock::Thinking { thinking, .. } => {
lines.push(format!("[thinking] {thinking}"));
}
ContentBlock::RedactedThinking { .. } => {
lines.push("[thinking] <redacted>".to_string());
}
ContentBlock::ToolUse { id, name, input } => {
lines.push(format!("[tool_use id={id} name={name}] {input}"));
}
@ -2352,6 +2358,16 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
.iter()
.map(|block| match block {
ContentBlock::Text { text } => InputContentBlock::Text { text: text.clone() },
ContentBlock::Thinking {
thinking,
signature,
} => InputContentBlock::Thinking {
thinking: thinking.clone(),
signature: signature.clone(),
},
ContentBlock::RedactedThinking { data } => InputContentBlock::RedactedThinking {
data: serde_json::from_str(&data.render()).unwrap_or(serde_json::Value::Null),
},
ContentBlock::ToolUse { id, name, input } => InputContentBlock::ToolUse {
id: id.clone(),
name: name.clone(),


@ -5,6 +5,10 @@ edition.workspace = true
license.workspace = true
publish.workspace = true
[[bin]]
name = "claw-server"
path = "src/main.rs"
[dependencies]
async-stream = "0.3"
axum = "0.8"
@ -12,6 +16,14 @@ runtime = { path = "../runtime" }
serde = { version = "1", features = ["derive"] }
serde_json.workspace = true
tokio = { version = "1", features = ["macros", "rt-multi-thread", "sync", "net", "time"] }
tower = "0.5"
tower-http = { version = "0.6", features = ["cors"] }
api = { path = "../api" }
tools = { path = "../tools" }
plugins = { path = "../plugins" }
commands = { path = "../commands" }
dotenvy = "0.15"
chrono = "0.4"
[dev-dependencies]
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "stream"] }

crates/server/README.md (new file)

@ -0,0 +1,57 @@
# Server module (server)
This module exposes an HTTP-based RESTful API plus a Server-Sent Events (SSE) streaming interface, allowing Claw sessions to be managed and interacted with remotely over the network.
## Overview
The `server` module wraps the core `runtime` functionality as a web service. Its main responsibilities include:
- **Session management**: endpoints to create, list, and fetch session details.
- **Message dispatch**: receiving user messages and routing them to the right session instance.
- **Real-time streaming**: pushing session events (AI response messages, state snapshots) over SSE.
- **State maintenance**: managing the lifecycle of multiple active sessions in memory.
## Key features
- **RESTful API**: built on the `axum` framework, following modern web-service conventions.
- **Event streaming (SSE)**: `text/event-stream` support so clients can subscribe to session updates in real time.
- **Concurrency**: `tokio` `broadcast` channels let multiple clients listen to the same session's events simultaneously.
- **Snapshot mechanism**: on connect, the server sends a full snapshot of the current session so clients can synchronize history.
## Implementation
### API routes
- `POST /sessions`: create a new conversation session.
- `GET /sessions`: list summaries of all active sessions.
- `GET /sessions/{id}`: fetch full details for a session.
- `POST /sessions/{id}/message`: send a new message to a session.
- `GET /sessions/{id}/events`: open an SSE connection and subscribe to the session's event stream.
### Core structures
- **`AppState`**: global state, including the `SessionStore` (a hash map guarded by an `RwLock`) and the session ID allocator.
- **`Session`**: wraps a `runtime::Session` and carries a `broadcast::Sender` for event fan-out.
- **`SessionEvent`**: the event types carried on the stream, including `Snapshot` and `Message`.
### Workflow
1. Start the service and initialize `AppState`.
2. A client opens a new session via `POST /sessions`.
3. The client connects to `GET /sessions/{id}/events` to listen for responses.
4. The client sends a prompt via `POST /sessions/{id}/message`.
5. The server stores the message in the `runtime::Session` and broadcasts it; the SSE stream pushes the message and the subsequent AI response back to the client in real time.
## Usage example (internal)
```rust
use server::{app, AppState};
use axum::Router;
// Build the application router
let state = AppState::new();
let router = app(state);
// Start the service (example)
let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, router).await.unwrap();
```

File diff suppressed because it is too large.

crates/server/src/main.rs (new file)

@ -0,0 +1,74 @@
use std::env;
use std::net::SocketAddr;
use server::{app, AppState};
use tokio::net::TcpListener;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Try to load .env
let _ = dotenvy::dotenv();
let host = env::var("SERVER_HOST").unwrap_or_else(|_| "127.0.0.1".to_string());
let port = env::var("SERVER_PORT")
.unwrap_or_else(|_| "3000".to_string())
.parse::<u16>()?;
let addr = format!("{host}:{port}").parse::<SocketAddr>()?;
// Resolve the model (aliases supported, e.g. "opus" -> "claude-opus-4-6")
let raw_model = env::var("CLAW_MODEL").unwrap_or_else(|_| "claude-opus-4-6".to_string());
let model = api::resolve_model_alias(&raw_model).to_string();
// Dynamic date
let today = chrono::Local::now().format("%Y-%m-%d").to_string();
// Build the system prompt (loaded from the project directory, or a default)
let cwd = env::current_dir()?;
let system_prompt =
runtime::load_system_prompt(&cwd, &today, env::consts::OS, "unknown")
.unwrap_or_else(|_| {
vec![
"You are a helpful AI assistant running inside the Claw web interface."
.to_string(),
]
});
// Resolve the permission mode (from env var or config; defaults to WorkspaceWrite)
let permission_mode = match env::var("CLAW_PERMISSION_MODE")
.unwrap_or_default()
.as_str()
{
"danger" | "DangerFullAccess" => runtime::PermissionMode::DangerFullAccess,
"readonly" | "ReadOnly" => runtime::PermissionMode::ReadOnly,
_ => {
// Fall back to the .claw.json config
let loader = runtime::ConfigLoader::default_for(&cwd);
loader
.load()
.ok()
.and_then(|config| config.permission_mode())
.map(|resolved| match resolved {
runtime::ResolvedPermissionMode::ReadOnly => runtime::PermissionMode::ReadOnly,
runtime::ResolvedPermissionMode::WorkspaceWrite => runtime::PermissionMode::WorkspaceWrite,
runtime::ResolvedPermissionMode::DangerFullAccess => runtime::PermissionMode::DangerFullAccess,
})
.unwrap_or(runtime::PermissionMode::WorkspaceWrite)
}
};
// Initialize the application state
let state = AppState::new(model.clone(), system_prompt, permission_mode, cwd)?;
// Build the router
let router = app(state);
println!("Claw Server started");
println!(" address: http://{addr}");
println!(" pid: {}", std::process::id());
println!(" model: {model}");
println!(" tip: curl -X POST http://{addr}/sessions");
let listener = TcpListener::bind(addr).await?;
axum::serve(listener, router).await?;
Ok(())
}

crates/tools/README.md (new file)

@ -0,0 +1,51 @@
# Tool specification module (tools)
This module defines the schemas, permission requirements, and dispatch logic for every built-in tool available to the AI agent.
## Overview
The `tools` module bridges the AI's reasoning and physical operations. Its main responsibilities include:
- **Tool definitions**: JSON Schema definitions of each tool's input parameters so the AI can call them correctly.
- **Permission mapping**: a security level for each tool (read-only, workspace write, full access).
- **Tool registry (GlobalToolRegistry)**: unified management of built-in tools and dynamic plugin-provided tools.
- **Dispatch**: routing AI-generated JSON calls to their concrete implementations in the `runtime` module.
## Key features
- **Built-in tool set (MVP tools)**:
  - **System interaction**: `bash`, `PowerShell`, `REPL`.
  - **File operations**: `read_file`, `write_file`, `edit_file`.
  - **Search and discovery**: `glob_search`, `grep_search`, `ToolSearch`.
  - **Network and helpers**: `WebSearch`, `WebFetch`, `Sleep`.
  - **Higher-order orchestration**: `Agent` (spawn a subagent), `Skill` (load a specialized skill), `TodoWrite` (task management).
- **Name normalization**: tool aliases (for example, mapping `grep` to `grep_search`) make AI tool calls more robust.
- **Plugin integration**: the `plugins` module can register custom tools, with checks against name collisions with built-ins.
## Implementation
### Core structures
- **`ToolSpec`**: the core configuration struct holding a tool's metadata (name, description, schema, permission).
- **`GlobalToolRegistry`**: maintains the tool list and provides a `definitions` method that generates the tool API declarations consumed by the LLM.
- **`execute_tool`**: the top-level dispatch function that hands deserialized input to the underlying implementation.
### Workflow
1. At startup, the `GlobalToolRegistry` is built from user configuration and loaded plugins.
2. Tool definitions are converted into the format the AI model understands (handled by the `api` module).
3. When an AI tool-call request arrives, `runtime::ConversationRuntime` invokes the `ToolExecutor`.
4. The `ToolExecutor` delegates to this module's `execute_tool` function.
5. This module validates the input format and calls the low-level file or process APIs provided by `runtime`.
## Usage example (tool definitions)
```rust
use tools::{ToolSpec, mvp_tool_specs};
use serde_json::json;
// Get the specs of all MVP tools
let specs = mvp_tool_specs();
for spec in specs {
    println!("tool: {}, permission level: {:?}", spec.name, spec.required_permission);
}
```

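The name normalization described above can be sketched as a simple alias table. Only the `grep` → `grep_search` mapping is stated in this README; the other pair below is a hypothetical example of the same idea:

```rust
// Illustrative sketch of tool-name normalization via an alias table.
// Only grep -> grep_search comes from the README; glob -> glob_search
// is a hypothetical alias shown for illustration.
fn normalize_tool_name(name: &str) -> &str {
    match name {
        "grep" => "grep_search",
        "glob" => "glob_search",
        // Canonical names (and unknown names) pass through unchanged.
        other => other,
    }
}

fn main() {
    println!("{}", normalize_tool_name("grep"));      // grep_search
    println!("{}", normalize_tool_name("read_file")); // read_file
}
```

Passing unknown names through unchanged lets the registry reject them with a clear error instead of silently remapping them.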

@ -1628,7 +1628,7 @@ fn build_agent_runtime(
.clone()
.unwrap_or_else(|| DEFAULT_AGENT_MODEL.to_string());
let allowed_tools = job.allowed_tools.clone();
let api_client = ProviderRuntimeClient::new(model, allowed_tools.clone())?;
let api_client = ProviderRuntimeClient::new(&model, allowed_tools.clone())?;
let tool_executor = SubagentToolExecutor::new(allowed_tools);
Ok(ConversationRuntime::new(
Session::new(),
@ -1805,8 +1805,8 @@ struct ProviderRuntimeClient {
}
impl ProviderRuntimeClient {
fn new(model: String, allowed_tools: BTreeSet<String>) -> Result<Self, String> {
let model = resolve_model_alias(&model).to_string();
fn new(model: &str, allowed_tools: BTreeSet<String>) -> Result<Self, String> {
let model = resolve_model_alias(model).to_string();
let client = ProviderClient::from_model(&model).map_err(|error| error.to_string())?;
Ok(Self {
runtime: tokio::runtime::Runtime::new().map_err(|error| error.to_string())?,
@ -1818,6 +1818,7 @@ impl ProviderRuntimeClient {
}
impl ApiClient for ProviderRuntimeClient {
#[allow(clippy::too_many_lines)]
fn stream(&mut self, request: ApiRequest) -> Result<Vec<AssistantEvent>, RuntimeError> {
let tools = tool_specs_for_allowed_tools(Some(&self.allowed_tools))
.into_iter()
@ -1991,6 +1992,13 @@ fn convert_messages(messages: &[ConversationMessage]) -> Vec<InputMessage> {
}],
is_error: *is_error,
},
ContentBlock::Thinking { thinking, signature } => InputContentBlock::Thinking {
thinking: thinking.clone(),
signature: signature.clone(),
},
ContentBlock::RedactedThinking { data } => InputContentBlock::RedactedThinking {
data: serde_json::from_str(&data.render()).unwrap_or(serde_json::Value::Null),
},
})
.collect::<Vec<_>>();
(!content.is_empty()).then(|| InputMessage {
@ -2061,7 +2069,10 @@ fn final_assistant_text(summary: &runtime::TurnSummary) -> String {
.iter()
.filter_map(|block| match block {
ContentBlock::Text { text } => Some(text.as_str()),
_ => None,
ContentBlock::Thinking { .. }
| ContentBlock::RedactedThinking { .. }
| ContentBlock::ToolUse { .. }
| ContentBlock::ToolResult { .. } => None,
})
.collect::<Vec<_>>()
.join("")

frontend/.gitignore (new file)

@ -0,0 +1,3 @@
node_modules
dist
*.local

frontend/README.md (new file)

@ -0,0 +1,50 @@
# Claw Code Frontend
The web frontend for Claw Code, built with [Ant Design X](https://x.ant.design/).
## Tech stack
- React 19 + TypeScript
- Ant Design X 2.5 (Bubble / Sender / Conversations / Think / ThoughtChain / XMarkdown)
- Vite 6
## Development
```bash
npm install
npm run dev
```
The frontend reaches the backend through the Vite dev proxy, which defaults to `http://localhost:3000`.
Start the backend first:
```bash
# From the project root
cargo run --bin claw-server
```
## Build
```bash
npm run build
```
Output directory: `dist/`.
## Project structure
```
src/
  main.tsx             # Entry point
  App.tsx              # XProvider theming + layout + state management
  api.ts               # REST API client
  types.ts             # Type definitions
  hooks/
    useSSE.ts          # SSE event stream
  components/
    ChatView.tsx       # Chat area (Bubble.List + Sender + XMarkdown)
    SessionSidebar.tsx # Session sidebar (Conversations)
    ToolChain.tsx      # Tool-call chain (ThoughtChain)
    WelcomeScreen.tsx  # Welcome screen
```

frontend/index.html (new file)

@ -0,0 +1,12 @@
<!doctype html>
<html lang="zh-CN">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Claw Code</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>

frontend/package-lock.json (generated; diff suppressed because it is too large)

frontend/package.json (new file)

@ -0,0 +1,29 @@
{
"name": "claw-frontend",
"private": true,
"version": "0.1.0",
"type": "module",
"scripts": {
"dev": "vite",
"build": "tsc -b && vite build",
"preview": "vite preview"
},
"dependencies": {
"@ant-design/icons": "^6.0.0",
"@ant-design/x": "^2.5.0",
"@ant-design/x-markdown": "^2.5.0",
"@ant-design/x-sdk": "^2.5.0",
"@antv/infographic": "^0.2.16",
"antd": "^6.1.1",
"marked-emoji": "^2.0.3",
"react": "^19.1.0",
"react-dom": "^19.1.0"
},
"devDependencies": {
"@types/react": "^19.1.0",
"@types/react-dom": "^19.1.0",
"@vitejs/plugin-react": "^4.4.1",
"typescript": "~5.8.3",
"vite": "^6.3.2"
}
}


@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100"><text y=".9em" font-size="90">🐾</text></svg>


frontend/src/App.tsx (new file)

@ -0,0 +1,343 @@
import React, { useState, useCallback, useRef } from 'react';
import { XProvider } from '@ant-design/x';
import zhCN_X from '@ant-design/x/locale/zh_CN';
import { theme } from 'antd';
import zhCN from 'antd/locale/zh_CN';
import SessionSidebar from './components/SessionSidebar';
import ChatView from './components/ChatView';
import type { ChatDisplayMessage } from './components/ChatView';
import type { ContentBlock, ConversationMessage, SessionEvent, TokenUsage } from './types';
import { useSSE } from './hooks/useSSE';
import * as api from './api';
// Merge the server message format (tool messages are separate) into the frontend display format.
// Server:   user → assistant(text+tool_use) → tool(tool_result) → tool(tool_result) → assistant(text+tool_use) → ...
// Frontend: user → assistant(text+tool_use+tool_result) → assistant(text+tool_use+tool_result) → ...
function mergeMessages(raw: ConversationMessage[]): ChatDisplayMessage[] {
const result: ChatDisplayMessage[] = [];
let assistantIdx = -1; // index in `result` of the most recent assistant message
for (let i = 0; i < raw.length; i++) {
const m = raw[i];
if (m.role === 'assistant') {
result.push({
key: `msg-${i}`,
role: 'assistant',
blocks: [...m.blocks],
streaming: false,
});
assistantIdx = result.length - 1;
} else if (m.role === 'user') {
result.push({
key: `msg-${i}`,
role: 'user',
blocks: [...m.blocks],
streaming: false,
});
assistantIdx = -1;
} else if (m.role === 'tool') {
// Merge tool_result blocks into the previous assistant message
if (assistantIdx >= 0) {
result[assistantIdx].blocks = [
...result[assistantIdx].blocks,
...m.blocks,
];
}
}
}
return result;
}
// Accumulation buffer for the in-flight assistant message
interface AssistantBuffer {
text: string;
thinking: string;
toolCalls: Map<string, { id: string; name: string; input: string; output?: string; isError?: boolean }>;
}
function blocksFromBuffer(buffer: AssistantBuffer, _streaming: boolean): ContentBlock[] {
const blocks: ContentBlock[] = [];
if (buffer.thinking) {
blocks.push({ type: 'thinking', thinking: buffer.thinking });
}
if (buffer.text) {
blocks.push({ type: 'text', text: buffer.text });
}
for (const tool of buffer.toolCalls.values()) {
blocks.push({ type: 'tool_use', id: tool.id, name: tool.name, input: tool.input });
if (tool.output !== undefined) {
blocks.push({ type: 'tool_result', tool_use_id: tool.id, tool_name: tool.name, output: tool.output, is_error: tool.isError ?? false });
}
}
return blocks;
}
const App: React.FC = () => {
const [isDark, setIsDark] = useState(() => {
const saved = localStorage.getItem('claw-theme');
if (saved) return saved === 'dark';
return window.matchMedia('(prefers-color-scheme: dark)').matches;
});
const [activeSessionId, setActiveSessionId] = useState<string | null>(null);
const [messages, setMessages] = useState<ChatDisplayMessage[]>([]);
const [isStreaming, setIsStreaming] = useState(false);
const [_usage, setUsage] = useState<TokenUsage | null>(null);
// Assistant message buffer
const bufferRef = useRef<AssistantBuffer | null>(null);
const msgCounterRef = useRef(0);
const toggleTheme = useCallback(() => {
setIsDark((prev) => {
const next = !prev;
localStorage.setItem('claw-theme', next ? 'dark' : 'light');
return next;
});
}, []);
// Handle SSE events
const handleEvent = useCallback((event: SessionEvent) => {
switch (event.type) {
case 'snapshot': {
// Initialize the full message state (merging tool messages into assistant)
setMessages(mergeMessages(event.messages));
setIsStreaming(false);
bufferRef.current = null;
break;
}
case 'message_delta': {
// Accumulate text deltas
if (!bufferRef.current) return;
bufferRef.current.text += event.text;
setMessages((prev) =>
updateLastAssistant(prev, bufferRef.current!)
);
break;
}
case 'thinking_delta': {
if (!bufferRef.current) return;
bufferRef.current.thinking += event.thinking;
setMessages((prev) =>
updateLastAssistant(prev, bufferRef.current!)
);
break;
}
case 'tool_use_start': {
if (!bufferRef.current) return;
bufferRef.current.toolCalls.set(event.tool_use_id, {
id: event.tool_use_id,
name: event.tool_name,
input: event.input,
});
setMessages((prev) =>
updateLastAssistant(prev, bufferRef.current!)
);
break;
}
case 'tool_result': {
if (!bufferRef.current) return;
const existing = bufferRef.current.toolCalls.get(event.tool_use_id);
if (existing) {
existing.output = event.output;
existing.isError = event.is_error;
} else {
bufferRef.current.toolCalls.set(event.tool_use_id, {
id: event.tool_use_id,
name: event.tool_name,
input: '',
output: event.output,
isError: event.is_error,
});
}
setMessages((prev) =>
updateLastAssistant(prev, bufferRef.current!)
);
break;
}
case 'usage': {
setUsage(event.usage);
break;
}
case 'turn_complete': {
setIsStreaming(false);
setUsage(event.usage);
// Mark the last assistant message as no longer streaming
setMessages((prev) => {
if (prev.length === 0) return prev;
const last = prev[prev.length - 1];
if (last.role !== 'assistant') return prev;
return [
...prev.slice(0, -1),
{ ...last, streaming: false },
];
});
bufferRef.current = null;
break;
}
case 'message': {
      // Ignore full message events; the deltas already handle streaming assembly
break;
}
}
}, []);
  // SSE connection
useSSE(activeSessionId, handleEvent);
  // Create a new session
const handleNewSession = useCallback(async () => {
try {
const res = await api.createSession();
setActiveSessionId(res.session_id);
setMessages([]);
setUsage(null);
setIsStreaming(false);
bufferRef.current = null;
} catch (err) {
      console.error('Failed to create session:', err);
}
}, []);
  // Switch sessions
const handleSessionChange = useCallback(async (id: string) => {
try {
const details = await api.getSession(id);
setActiveSessionId(id);
setMessages(mergeMessages(details.messages));
setUsage(null);
setIsStreaming(false);
bufferRef.current = null;
} catch (err) {
      console.error('Failed to load session:', err);
}
}, []);
  // Delete a session
const handleDeleteSession = useCallback(async (id: string) => {
try {
await api.deleteSession(id);
if (activeSessionId === id) {
setActiveSessionId(null);
setMessages([]);
setUsage(null);
}
} catch (err) {
      console.error('Failed to delete session:', err);
}
}, [activeSessionId]);
  // Send a message
const handleSend = useCallback(async (message: string) => {
if (!activeSessionId || isStreaming) return;
    // Append the user message
const userKey = `user-${++msgCounterRef.current}`;
const assistantKey = `assistant-${msgCounterRef.current}`;
    // Initialize the assistant message buffer
bufferRef.current = {
text: '',
thinking: '',
toolCalls: new Map(),
};
const userMsg: ChatDisplayMessage = {
key: userKey,
role: 'user',
blocks: [{ type: 'text', text: message }],
};
const assistantMsg: ChatDisplayMessage = {
key: assistantKey,
role: 'assistant',
blocks: [],
streaming: true,
};
setMessages((prev) => [...prev, userMsg, assistantMsg]);
setIsStreaming(true);
try {
await api.sendMessage(activeSessionId, message);
} catch (err) {
      console.error('Failed to send message:', err);
setIsStreaming(false);
}
}, [activeSessionId, isStreaming]);
  // Cancel (abort)
const handleCancel = useCallback(() => {
setIsStreaming(false);
setMessages((prev) => {
if (prev.length === 0) return prev;
const last = prev[prev.length - 1];
if (last.role !== 'assistant') return prev;
return [
...prev.slice(0, -1),
{ ...last, streaming: false },
];
});
bufferRef.current = null;
}, []);
return (
<XProvider
locale={{ ...zhCN_X, ...zhCN }}
theme={{
algorithm: isDark ? theme.darkAlgorithm : theme.defaultAlgorithm,
token: { colorPrimary: '#1677ff' },
}}
>
<div style={{
width: '100%',
height: '100vh',
display: 'flex',
overflow: 'hidden',
background: isDark ? '#141414' : '#fff',
}}>
<SessionSidebar
activeSessionId={activeSessionId}
onSessionChange={handleSessionChange}
onNewSession={handleNewSession}
onDeleteSession={handleDeleteSession}
isDark={isDark}
onToggleTheme={toggleTheme}
/>
<ChatView
messages={messages}
isStreaming={isStreaming}
hasActiveSession={activeSessionId !== null}
onSend={handleSend}
onCancel={handleCancel}
/>
</div>
</XProvider>
);
};
// Update the last assistant message
function updateLastAssistant(
prev: ChatDisplayMessage[],
buffer: AssistantBuffer,
): ChatDisplayMessage[] {
if (prev.length === 0) return prev;
const last = prev[prev.length - 1];
if (last.role !== 'assistant') return prev;
return [
...prev.slice(0, -1),
{
...last,
blocks: blocksFromBuffer(buffer, true),
},
];
}
export default App;

frontend/src/api.ts Normal file

@ -0,0 +1,57 @@
import type {
CreateSessionResponse,
ListSessionsResponse,
SessionDetailsResponse,
UsageResponse,
CompactResponse,
} from './types';
const BASE = '/sessions';
async function request<T>(url: string, init?: RequestInit): Promise<T> {
const res = await fetch(url, {
headers: { 'Content-Type': 'application/json' },
...init,
});
if (!res.ok) {
const body = await res.json().catch(() => ({ error: res.statusText }));
throw new Error(body.error || res.statusText);
}
if (res.status === 202 || res.status === 204) return undefined as T;
const contentLength = res.headers.get('content-length');
if (contentLength === '0') return undefined as T;
const text = await res.text();
if (!text.trim()) return undefined as T;
return JSON.parse(text) as T;
}
export async function createSession(): Promise<CreateSessionResponse> {
return request<CreateSessionResponse>(BASE, { method: 'POST' });
}
export async function listSessions(): Promise<ListSessionsResponse> {
return request<ListSessionsResponse>(BASE);
}
export async function getSession(id: string): Promise<SessionDetailsResponse> {
return request<SessionDetailsResponse>(`${BASE}/${id}`);
}
export async function deleteSession(id: string): Promise<void> {
return request<void>(`${BASE}/${id}`, { method: 'DELETE' });
}
export async function sendMessage(sessionId: string, message: string): Promise<void> {
return request<void>(`${BASE}/${sessionId}/message`, {
method: 'POST',
body: JSON.stringify({ message }),
});
}
export async function compactSession(sessionId: string): Promise<CompactResponse> {
return request<CompactResponse>(`${BASE}/${sessionId}/compact`, { method: 'POST' });
}
export async function getUsage(sessionId: string): Promise<UsageResponse> {
return request<UsageResponse>(`${BASE}/${sessionId}/usage`);
}
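
The `request` helper above prefers a server-provided error message over the HTTP status text (`body.error || res.statusText`). That fallback can be isolated as a pure helper for testing; `extractError` below is a hypothetical name for illustration, not part of the actual module:

```typescript
// Hypothetical helper mirroring the error-body fallback in request() above.
interface ErrorBody {
  error?: string;
}

function extractError(body: ErrorBody, statusText: string): string {
  // Prefer the server-provided error message; fall back to the HTTP status text.
  return body.error || statusText;
}

console.log(extractError({ error: 'session not found' }, 'Not Found')); // session not found
console.log(extractError({}, 'Internal Server Error')); // Internal Server Error
```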


@ -0,0 +1,475 @@
import React, { useCallback } from 'react';
import { Bubble, Sender, Think, ThoughtChain, Actions, CodeHighlighter, Mermaid, Sources } from '@ant-design/x';
import { UserOutlined, RobotOutlined, GlobalOutlined } from '@ant-design/icons';
import { theme, Skeleton, Spin, Popover } from 'antd';
import { XMarkdown } from '@ant-design/x-markdown';
import type { ComponentProps, Token } from '@ant-design/x-markdown';
import Latex from '@ant-design/x-markdown/plugins/latex';
import '@ant-design/x-markdown/themes/light.css';
import '@ant-design/x-markdown/themes/dark.css';
import type { ContentBlock } from '../types';
import ToolChain from './ToolChain';
import WelcomeScreen from './WelcomeScreen';
// Stretch the assistant bubble body to the available width, so content such as
// Mermaid diagrams is not constrained by the text line length.
const bubbleStyle = document.createElement('style');
bubbleStyle.textContent = '.ant-bubble-start > .ant-bubble-body { width: 80%; }';
document.head.appendChild(bubbleStyle);
// ── XMarkdown plugin configuration ────────────────────────────────────
// LaTeX math plugin: parses $...$ / $$...$$ / \(...\) / \[...\]
// Custom footnote plugin: parses [^1] syntax into <footnote> tags
const footnoteExtension = {
name: 'footnote',
level: 'inline' as const,
start(src: string) {
const idx = src.indexOf('[^');
return idx !== -1 ? idx : undefined;
},
tokenizer(src: string) {
const match = src.match(/^\[\^(\d+)\]/);
if (!match) return;
return {
type: 'footnote',
raw: match[0],
text: match[1],
renderType: 'component' as const,
};
},
renderer(token: Token) {
return `<footnote data-key="${token.text}">${token.text}</footnote>`;
},
};
const xMarkdownConfig = { extensions: [...Latex(), footnoteExtension] };
// ── XMarkdown components mapping ──────────────────────────────────────
// Infographic renderer (lazily loads @antv/infographic)
const InfographicBlock: React.FC<{ content: string }> = ({ content }) => {
const containerRef = React.useRef<HTMLDivElement>(null);
const instanceRef = React.useRef<{ render: (spec: string) => void; destroy: () => void } | null>(null);
const [loading, setLoading] = React.useState(true);
const [error, setError] = React.useState(false);
React.useEffect(() => {
if (!containerRef.current) return;
let mounted = true;
import('@antv/infographic')
.then(({ Infographic }) => {
if (!mounted || !containerRef.current) return;
instanceRef.current = new Infographic({ container: containerRef.current });
instanceRef.current.render(content);
setLoading(false);
})
.catch(() => {
if (mounted) { setLoading(false); setError(true); }
});
return () => { mounted = false; instanceRef.current?.destroy(); };
}, [content]);
if (error) {
return (
<div style={{ padding: 12, border: '1px solid #ff4d4f', borderRadius: 8, color: '#ff4d4f', fontSize: 13 }}>
        Infographic 渲染失败,请检查 @antv/infographic 依赖
</div>
);
}
return (
<div style={{ position: 'relative', border: '1px solid var(--ant-color-border-secondary)', borderRadius: 8, padding: 16 }}>
{loading && (
<div style={{ display: 'flex', alignItems: 'center', justifyContent: 'center', minHeight: 200 }}>
<Spin tip="渲染信息图..." />
</div>
)}
<div ref={containerRef} style={{ display: loading ? 'none' : 'block' }} />
</div>
);
};
// Full code-block renderer
const CodeBlock: React.FC<ComponentProps> = ({ children, lang, block, streamStatus, ...rest }) => {
  // Inline code
if (!block) {
return <code {...rest}>{children}</code>;
}
const content = String(children).replace(/\n$/, '');
  // Mermaid diagrams: render directly so the Mermaid component shows its built-in animation
if (lang === 'mermaid') {
if (!content) return null;
return <Mermaid>{content}</Mermaid>;
}
  // Infographic charts
if (lang === 'infographic') {
if (!content) return null;
return <InfographicBlock content={content} />;
}
  // Regular code blocks: syntax highlighting
return (
<CodeHighlighter lang={lang} header={lang || undefined}>
{content}
</CodeHighlighter>
);
};
// Streaming: unclosed image → skeleton placeholder
const IncompleteImage = () => <Skeleton.Image active style={{ width: 60, height: 60 }} />;
// Streaming: unclosed link → show the text received so far
const IncompleteLink: React.FC<ComponentProps> = (props) => {
const text = decodeURIComponent(String(props['data-raw'] || ''));
const match = text.match(/^\[([^\]]*)\]/);
const displayText = match ? match[1] : text.slice(1);
return <a style={{ pointerEvents: 'none' }} href="#">{displayText}</a>;
};
// Streaming: unclosed table → skeleton placeholder
const IncompleteTable = () => <Skeleton.Node active style={{ width: 160 }} />;
// Streaming: unclosed HTML → skeleton placeholder
const IncompleteHtml = () => <Skeleton.Node active style={{ width: 383, height: 120 }} />;
// Streaming: unclosed emphasis → show the text received so far
const IncompleteEmphasis: React.FC<ComponentProps> = (props) => {
const text = decodeURIComponent(String(props['data-raw'] || ''));
const match = text.match(/^([*_]{1,3})([^*_]*)/);
if (!match || !match[2]) return null;
const [, symbols, content] = match;
const level = symbols.length;
if (level === 1) return <em>{content}</em>;
if (level === 2) return <strong>{content}</strong>;
return <em><strong>{content}</strong></em>;
};
// Streaming: unclosed inline code → show the text received so far
const IncompleteInlineCode: React.FC<ComponentProps> = (props) => {
const rawData = String(props['data-raw'] || '');
if (!rawData) return null;
return <code>{decodeURIComponent(rawData).slice(1)}</code>;
};
// Render <think /> tags embedded in Markdown (state switches automatically based on streamStatus)
const ThinkInMarkdown: React.FC<ComponentProps> = React.memo((props) => {
const isDone = props.streamStatus === 'done';
return (
<Think
title={isDone ? '思考完成' : '思考中...'}
loading={!isDone}
defaultExpanded={!isDone}
>
{props.children}
</Think>
);
});
// <sup> citations → inline Sources component (search-augmented scenarios)
const SupComponent: React.FC<ComponentProps> = React.memo((props) => {
const key = parseInt(String(props.children) || '0', 10);
return (
<Sources
activeKey={key}
title={props.children}
items={[{ key, title: `来源 ${key}`, url: '#' }]}
inline
/>
);
});
// Custom footnote [^1] → clickable citation marker
const FootnoteComponent: React.FC<ComponentProps> = React.memo((props) => {
const key = String(props['data-key'] || props.children);
return (
<Popover content={`脚注 ${key}`} trigger="hover">
<sup
style={{
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
width: 18,
height: 18,
borderRadius: '50%',
background: 'var(--ant-color-fill-quaternary)',
fontSize: 11,
cursor: 'pointer',
marginLeft: 2,
transition: 'background 0.2s',
}}
>
{key}
</sup>
</Popover>
);
});
const xMarkdownComponents = {
code: CodeBlock,
think: ThinkInMarkdown,
sup: SupComponent,
footnote: FootnoteComponent,
'incomplete-image': IncompleteImage,
'incomplete-link': IncompleteLink,
'incomplete-table': IncompleteTable,
'incomplete-html': IncompleteHtml,
'incomplete-emphasis': IncompleteEmphasis,
'incomplete-inline-code': IncompleteInlineCode,
};
interface ChatViewProps {
messages: ChatDisplayMessage[];
isStreaming: boolean;
hasActiveSession: boolean;
onSend: (message: string) => void;
onCancel: () => void;
}
export interface ChatDisplayMessage {
key: string;
role: 'user' | 'assistant';
blocks: ContentBlock[];
streaming?: boolean;
}
const ChatView: React.FC<ChatViewProps> = ({
messages,
isStreaming,
hasActiveSession,
onSend,
onCancel,
}) => {
const [inputValue, setInputValue] = React.useState('');
const handleSubmit = useCallback((msg: string) => {
const trimmed = msg.trim();
if (!trimmed) return;
onSend(trimmed);
setInputValue('');
}, [onSend]);
if (!hasActiveSession) {
return <WelcomeScreen onSelect={handleSubmit} />;
}
  // Index messages by key so contentRender can look them up
const msgMap = new Map(messages.map((m) => [m.key, m]));
const items = messages.map((msg) => {
const textContent = msg.blocks
.filter((b): b is Extract<ContentBlock, { type: 'text' }> => b.type === 'text')
.map((b) => b.text)
.join('\n');
return {
key: msg.key,
role: msg.role,
content: msg.role === 'user' ? textContent : '',
loading: msg.role === 'assistant' && msg.streaming && !textContent,
};
});
  // Use the v2 role (singular) configuration
const role = {
user: {
placement: 'end' as const,
variant: 'filled' as const,
shape: 'round' as const,
avatar: <UserOutlined />,
// styles: { content: { width: '80%' } },
},
assistant: {
placement: 'start' as const,
variant: 'borderless' as const,
avatar: <RobotOutlined />,
streaming: true,
styles: { content: { width: '80%' } },
header: (_content: unknown, { status }: { status?: string }) => {
if (status === 'loading' || status === 'updating') {
return (
<ThoughtChain.Item
style={{ marginBottom: 8 }}
status="loading"
variant="solid"
icon={<GlobalOutlined />}
title="模型运行中"
/>
);
}
if (status === 'success') {
return (
<ThoughtChain.Item
style={{ marginBottom: 8 }}
status="success"
variant="solid"
icon={<GlobalOutlined />}
title="执行完成"
/>
);
}
return null;
},
footer: (_content: string, { key, status }: { key?: string | number; status?: string }) => {
if (status === 'updating' || status === 'loading') return null;
const msg = msgMap.get(String(key));
if (!msg || msg.role !== 'assistant') return null;
const textBlocks = msg.blocks
.filter((b): b is Extract<ContentBlock, { type: 'text' }> => b.type === 'text')
.map((b) => b.text)
.join('\n');
return (
<div style={{ display: 'flex' }}>
<Actions
items={[
{ key: 'copy', actionRender: <Actions.Copy text={textBlocks} /> },
]}
/>
</div>
);
},
contentRender: (_content: unknown, { key }: { key?: string | number }) => {
const msg = msgMap.get(String(key));
if (!msg) return '';
return <AssistantContent blocks={msg.blocks} streaming={msg.streaming} />;
},
},
};
return (
<div style={{
height: '100%',
width: 'calc(100% - 280px)',
display: 'flex',
flexDirection: 'column',
}}>
<div style={{ flex: 1, overflow: 'hidden', display: 'flex', justifyContent: 'center' }}>
<Bubble.List
items={items}
role={role}
autoScroll
style={{ height: '100%' }}
/>
</div>
<div style={{ padding: '0 16px 16px' }}>
<Sender
value={inputValue}
onChange={setInputValue}
onSubmit={handleSubmit}
onCancel={onCancel}
loading={isStreaming}
placeholder="输入消息..."
/>
</div>
</div>
);
};
// ── Assistant message content rendering ───────────────────────────────
const AssistantContent: React.FC<{ blocks: ContentBlock[]; streaming?: boolean }> = ({
blocks,
streaming,
}) => {
const { theme: antdTheme } = theme.useToken();
const mdClassName = antdTheme.id === 0 ? 'x-markdown-light' : 'x-markdown-dark';
const elements: React.ReactNode[] = [];
  // Collect tool calls
const toolCalls = new Map<string, {
id: string; name: string; input: string;
output?: string; isError?: boolean;
}>();
let firstToolIndex = -1;
for (let i = 0; i < blocks.length; i++) {
const block = blocks[i];
if (block.type === 'tool_use') {
if (firstToolIndex === -1) firstToolIndex = i;
toolCalls.set(block.id, { id: block.id, name: block.name, input: block.input });
} else if (block.type === 'tool_result') {
if (firstToolIndex === -1) firstToolIndex = i;
const existing = toolCalls.get(block.tool_use_id);
if (existing) {
existing.output = block.output;
existing.isError = block.is_error;
} else {
toolCalls.set(block.tool_use_id, {
id: block.tool_use_id, name: block.tool_name, input: '',
output: block.output, isError: block.is_error,
});
}
}
}
for (let i = 0; i < blocks.length; i++) {
const block = blocks[i];
switch (block.type) {
case 'thinking':
elements.push(
<Think
key={`think-${i}`}
loading={streaming}
blink={streaming}
title={streaming ? '思考中...' : '思考过程'}
defaultExpanded={false}
>
<div style={{ whiteSpace: 'pre-wrap', fontSize: 13, opacity: 0.85 }}>
{block.thinking}
</div>
</Think>
);
break;
case 'text':
elements.push(
<XMarkdown
key={`text-${i}`}
className={mdClassName}
components={xMarkdownComponents}
config={xMarkdownConfig}
paragraphTag="div"
openLinksInNewTab
streaming={{
hasNextChunk: !!streaming,
enableAnimation: !!streaming,
tail: false,
animationConfig: { fadeDuration: 400 },
}}
>
{block.text}
</XMarkdown>
);
break;
case 'redacted_thinking':
elements.push(
<Think key={`redacted-${i}`} title="已编辑的思考" defaultExpanded={false}>
            <span style={{ opacity: 0.5 }}>[内容已编辑]</span>
</Think>
);
break;
case 'tool_use':
if (i === firstToolIndex) {
elements.push(
<ToolChain key="tool-chain" tools={Array.from(toolCalls.values())} />
);
}
break;
case 'tool_result':
        // Rendered collectively by ToolChain
break;
}
}
return <div style={{ display: 'flex', flexDirection: 'column', gap: 12 }}>{elements}</div>;
};
export default ChatView;


@ -0,0 +1,116 @@
import React, { useEffect, useState } from 'react';
import { Conversations } from '@ant-design/x';
import { DeleteOutlined, PlusOutlined, BulbOutlined, BulbFilled } from '@ant-design/icons';
import type { SessionSummary } from '../types';
import * as api from '../api';
interface SessionSidebarProps {
activeSessionId: string | null;
onSessionChange: (id: string) => void;
onNewSession: () => void;
onDeleteSession: (id: string) => void;
isDark: boolean;
onToggleTheme: () => void;
}
const SessionSidebar: React.FC<SessionSidebarProps> = ({
activeSessionId,
onSessionChange,
onNewSession,
onDeleteSession,
isDark,
onToggleTheme,
}) => {
const [sessions, setSessions] = useState<SessionSummary[]>([]);
const fetchSessions = async () => {
try {
const res = await api.listSessions();
setSessions(res.sessions);
} catch {
      // Ignore errors
}
};
useEffect(() => {
fetchSessions();
}, [activeSessionId]);
const items = sessions.map((s) => ({
key: s.id,
label: `会话 ${s.id.replace('session-', '')}`,
timestamp: s.created_at,
}));
return (
<div style={{
width: 280,
height: '100%',
display: 'flex',
flexDirection: 'column',
padding: '0 12px',
boxSizing: 'border-box',
background: isDark ? 'rgba(255,255,255,0.04)' : 'rgba(0,0,0,0.02)',
}}>
{/* Logo */}
<div style={{
display: 'flex',
alignItems: 'center',
gap: 8,
padding: '24px 24px',
boxSizing: 'border-box',
}}>
<span style={{ fontSize: 24 }}>🐾</span>
<span style={{ fontWeight: 'bold', fontSize: 16 }}>Claw Code</span>
</div>
      {/* Session list */}
<div style={{ flex: 1, overflow: 'auto', marginTop: 12, padding: 0 }}>
<Conversations
items={items}
activeKey={activeSessionId || undefined}
onActiveChange={(key) => onSessionChange(key)}
menu={(conversation) => ({
items: [
{
key: 'delete',
label: '删除',
icon: <DeleteOutlined />,
danger: true,
},
],
onClick: (info) => {
if (info.key === 'delete') {
onDeleteSession(conversation.key);
}
},
})}
creation={{
onClick: onNewSession,
label: '新建会话',
icon: <PlusOutlined />,
}}
/>
</div>
      {/* Theme toggle */}
<div style={{
padding: '8px 16px',
borderTop: '1px solid rgba(0,0,0,0.06)',
display: 'flex',
alignItems: 'center',
gap: 8,
cursor: 'pointer',
fontSize: 14,
userSelect: 'none',
}}
onClick={onToggleTheme}
>
{isDark ? <BulbFilled /> : <BulbOutlined />}
<span>{isDark ? '浅色模式' : '深色模式'}</span>
</div>
</div>
);
};
export default SessionSidebar;


@ -0,0 +1,85 @@
import React from 'react';
import { ThoughtChain } from '@ant-design/x';
interface ToolCall {
id: string;
name: string;
input: string;
output?: string;
isError?: boolean;
}
interface ToolChainProps {
tools: ToolCall[];
}
const ToolChain: React.FC<ToolChainProps> = ({ tools }) => {
const items = tools.map((tool) => {
const hasResult = tool.output !== undefined;
let status: 'loading' | 'success' | 'error' = 'loading';
if (hasResult) {
status = tool.isError ? 'error' : 'success';
}
return {
key: tool.id,
status,
title: tool.name,
description: hasResult ? (tool.isError ? '执行出错' : '执行完成') : '执行中...',
collapsible: true,
content: (
<div style={{ fontSize: 13 }}>
{tool.input && (
<div style={{ marginBottom: 8 }}>
              <div style={{ fontWeight: 500, marginBottom: 4 }}>输入</div>
<pre style={{
margin: 0,
padding: 8,
borderRadius: 6,
background: 'rgba(0,0,0,0.04)',
overflow: 'auto',
maxHeight: 200,
fontSize: 12,
}}>
{tryFormatJSON(tool.input)}
</pre>
</div>
)}
{tool.output !== undefined && (
<div>
<div style={{ fontWeight: 500, marginBottom: 4 }}>
{tool.isError ? '错误' : '输出'}
</div>
<pre style={{
margin: 0,
padding: 8,
borderRadius: 6,
background: tool.isError ? 'rgba(255,0,0,0.04)' : 'rgba(0,0,0,0.04)',
overflow: 'auto',
maxHeight: 300,
fontSize: 12,
}}>
{tool.output}
</pre>
</div>
)}
</div>
),
};
});
if (items.length === 0) return null;
return <ThoughtChain items={items} />;
};
function tryFormatJSON(str: string): string {
try {
return JSON.stringify(JSON.parse(str), null, 2);
} catch {
return str;
}
}
export default ToolChain;
export type { ToolCall };


@ -0,0 +1,41 @@
import React from 'react';
import { Welcome, Prompts } from '@ant-design/x';
const examplePrompts = [
{ key: '1', label: '总结当前工作区', description: '分析项目结构和代码' },
{ key: '2', label: '帮我写一个测试', description: '为指定模块生成测试用例' },
{ key: '3', label: '查找 Bug', description: '检查代码中的潜在问题' },
];
interface WelcomeScreenProps {
onSelect: (prompt: string) => void;
}
const WelcomeScreen: React.FC<WelcomeScreenProps> = ({ onSelect }) => {
return (
<div style={{
flex: 1,
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
justifyContent: 'center',
padding: 48,
}}>
<Welcome
icon="🐾"
title="Claw Code"
description="本地编码助手,连接你的 Claw Server"
style={{ marginBottom: 32 }}
/>
<Prompts
title="试试这些"
items={examplePrompts}
onItemClick={(info) => {
onSelect(String(info.data.label));
}}
/>
</div>
);
};
export default WelcomeScreen;


@ -0,0 +1,50 @@
import { useEffect, useRef, useCallback } from 'react';
import type { SessionEvent } from '../types';
export function useSSE(
sessionId: string | null,
onEvent: (event: SessionEvent) => void,
): void {
const onEventRef = useRef(onEvent);
onEventRef.current = onEvent;
const stableCallback = useCallback((event: SessionEvent) => {
onEventRef.current(event);
}, []);
useEffect(() => {
if (!sessionId) return;
const es = new EventSource(`/sessions/${sessionId}/events`);
const eventTypes: SessionEvent['type'][] = [
'snapshot',
'message',
'message_delta',
'thinking_delta',
'tool_use_start',
'tool_result',
'usage',
'turn_complete',
];
for (const type of eventTypes) {
es.addEventListener(type, (e: MessageEvent) => {
try {
const data = JSON.parse(e.data) as SessionEvent;
stableCallback(data);
} catch {
          // Ignore parse errors
}
});
}
es.onerror = () => {
      // EventSource reconnects automatically
};
return () => {
es.close();
};
}, [sessionId, stableCallback]);
}
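
For reference, each SSE frame the hook consumes pairs an `event:` name (one of the `eventTypes` above) with a `data:` JSON payload; `EventSource` does this parsing internally. A minimal standalone sketch of that wire format (illustrative only, not used by the hook):

```typescript
// Minimal sketch of the SSE wire format; EventSource performs this internally.
function parseFrame(frame: string): { event: string; data: unknown } | null {
  let event = 'message'; // SSE default event name when no event: line is present
  let data = '';
  for (const line of frame.split('\n')) {
    if (line.startsWith('event:')) event = line.slice(6).trim();
    else if (line.startsWith('data:')) data += line.slice(5).trim();
  }
  if (!data) return null;
  try {
    return { event, data: JSON.parse(data) };
  } catch {
    return null; // malformed payload, mirror the hook's silent-ignore behavior
  }
}

const frame = 'event: message_delta\ndata: {"type":"message_delta","text":"hi"}';
// parseFrame(frame) → { event: 'message_delta', data: { type: 'message_delta', text: 'hi' } }
```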

frontend/src/main.tsx Normal file

@ -0,0 +1,15 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
// Global styles: remove the default margin/padding so 100vh does not overflow
// and create a page scrollbar
document.body.style.margin = '0';
document.body.style.padding = '0';
document.body.style.overflow = 'hidden';
document.documentElement.style.overflow = 'hidden';
ReactDOM.createRoot(document.getElementById('root')!).render(
<React.StrictMode>
<App />
</React.StrictMode>,
);

frontend/src/types.ts Normal file

@ -0,0 +1,160 @@
// Mirrors the types in crates/runtime/src/session.rs and crates/server/src/lib.rs
// ── ContentBlock ──────────────────────────────────────────────────────
export interface TextBlock {
type: 'text';
text: string;
}
export interface ThinkingBlock {
type: 'thinking';
thinking: string;
signature?: string;
}
export interface RedactedThinkingBlock {
type: 'redacted_thinking';
data: unknown;
}
export interface ToolUseBlock {
type: 'tool_use';
id: string;
name: string;
input: string;
}
export interface ToolResultBlock {
type: 'tool_result';
tool_use_id: string;
tool_name: string;
output: string;
is_error: boolean;
}
export type ContentBlock =
| TextBlock
| ThinkingBlock
| RedactedThinkingBlock
| ToolUseBlock
| ToolResultBlock;
// ── MessageRole ───────────────────────────────────────────────────────
export type MessageRole = 'system' | 'user' | 'assistant' | 'tool';
// ── ConversationMessage ───────────────────────────────────────────────
export interface TokenUsage {
input_tokens: number;
output_tokens: number;
cache_creation_input_tokens: number;
cache_read_input_tokens: number;
}
export interface ConversationMessage {
role: MessageRole;
blocks: ContentBlock[];
usage?: TokenUsage;
}
// ── SSE SessionEvent ──────────────────────────────────────────────────
export interface SnapshotEvent {
type: 'snapshot';
session_id: string;
messages: ConversationMessage[];
}
export interface MessageEvent {
type: 'message';
session_id: string;
message: ConversationMessage;
}
export interface MessageDeltaEvent {
type: 'message_delta';
session_id: string;
text: string;
}
export interface ToolUseStartEvent {
type: 'tool_use_start';
session_id: string;
tool_use_id: string;
tool_name: string;
input: string;
}
export interface ToolResultEvent {
type: 'tool_result';
session_id: string;
tool_use_id: string;
tool_name: string;
output: string;
is_error: boolean;
}
export interface ThinkingDeltaEvent {
type: 'thinking_delta';
session_id: string;
thinking: string;
}
export interface UsageEvent {
type: 'usage';
session_id: string;
usage: TokenUsage;
}
export interface TurnCompleteEvent {
type: 'turn_complete';
session_id: string;
usage: TokenUsage;
iterations: number;
}
export type SessionEvent =
| SnapshotEvent
| MessageEvent
| MessageDeltaEvent
| ToolUseStartEvent
| ToolResultEvent
| ThinkingDeltaEvent
| UsageEvent
| TurnCompleteEvent;
// ── REST API 响应类型 ─────────────────────────────────────────────────
export interface SessionSummary {
id: string;
created_at: number;
message_count: number;
}
export interface CreateSessionResponse {
session_id: string;
}
export interface SessionDetailsResponse {
id: string;
created_at: number;
messages: ConversationMessage[];
}
export interface UsageResponse {
session_id: string;
usage: TokenUsage;
turns: number;
}
export interface CompactResponse {
session_id: string;
summary: string;
removed_message_count: number;
}
export interface ListSessionsResponse {
sessions: SessionSummary[];
}
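
The delta events above are accumulated client-side into a buffer (see the `AssistantBuffer` handling in App.tsx). A minimal standalone reducer sketch of that accumulation, with types redeclared locally so the snippet is self-contained:

```typescript
// Minimal sketch of delta accumulation, mirroring the buffer logic in App.tsx.
interface Buf {
  text: string;
  thinking: string;
}

type DeltaEvent =
  | { type: 'message_delta'; text: string }
  | { type: 'thinking_delta'; thinking: string };

function applyDelta(buf: Buf, ev: DeltaEvent): Buf {
  switch (ev.type) {
    case 'message_delta':
      // Append streamed assistant text
      return { ...buf, text: buf.text + ev.text };
    case 'thinking_delta':
      // Append streamed thinking content
      return { ...buf, thinking: buf.thinking + ev.thinking };
  }
}

let buf: Buf = { text: '', thinking: '' };
buf = applyDelta(buf, { type: 'thinking_delta', thinking: 'plan...' });
buf = applyDelta(buf, { type: 'message_delta', text: 'Hello' });
buf = applyDelta(buf, { type: 'message_delta', text: ', world' });
// buf → { text: 'Hello, world', thinking: 'plan...' }
```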

frontend/src/vite-env.d.ts vendored Normal file

@ -0,0 +1 @@
/// <reference types="vite/client" />

frontend/tsconfig.json Normal file

@ -0,0 +1,18 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "ESNext",
"lib": ["ES2023", "DOM", "DOM.Iterable"],
"moduleResolution": "bundler",
"strict": true,
"noUnusedLocals": true,
"noUnusedParameters": true,
"noFallthroughCasesInSwitch": true,
"isolatedModules": true,
"moduleDetection": "force",
"jsx": "react-jsx",
"skipLibCheck": true,
"noEmit": true
},
"include": ["src"]
}

frontend/vite.config.ts Normal file

@ -0,0 +1,14 @@
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
export default defineConfig({
plugins: [react()],
server: {
proxy: {
'/sessions': {
target: 'http://localhost:3000',
changeOrigin: true,
},
},
},
});