feat: rename to Claw Code and upgrade architecture

Rebranding:
- Claude Code → Claw Code
- `.claude` → `.claw` config directory
- `CLAUDE_*` → `CLAW_*` environment variables

New features:
- Multi-provider architecture (ClawApi/Xai/OpenAI)
- Plugin system (lifecycle/hooks/tool extensions)
- LSP integration (diagnostics/code intelligence)
- Hook system (PreToolUse/PostToolUse)
- Standalone CLI (claw-cli)
- HTTP server (Axum/SSE)
- Slash command extensions (branch/worktree/commit/pr/plugin, etc.)

Improvements:
- Incremental compaction support
- Global tool registry
- Config files unified into `.claw.json`
Parent: ef48f58ef1
Commit: 05a778bad9
.gitignore (vendored, 3 changes)
@@ -2,3 +2,6 @@ __pycache__/
archive/
.omx/
.clawd-agents/
# Claude Code local artifacts
.claude/settings.local.json
.claude/sessions/
CLAW.md (new file, 21 lines)
@@ -0,0 +1,21 @@
# CLAW.md

This file provides guidance to Claw Code when it works with code in this repository.

## Detected stack

- Language: Rust.
- Frameworks: none detected from the supported bootstrap markers.

## Verification

- Run the Rust verification from the `rust/` directory: `cargo fmt`, `cargo clippy --workspace --all-targets -- -D warnings`, `cargo test --workspace`
- Both `src/` and `tests/` exist; update both when behavior changes.

## Repository structure

- `rust/` contains the Rust workspace and the active CLI/runtime implementation.
- `src/` contains source files that should stay aligned with the generated guidance and tests.
- `tests/` contains the verification that should be reviewed alongside code changes.

## Working agreements

- Prefer small, reviewable changes and keep generated bootstrap files aligned with the actual repository workflow.
- Keep shared defaults in `.claw.json`; `.claw/settings.local.json` is reserved for machine-local overrides.
- Do not overwrite existing `CLAW.md` content automatically; update it deliberately when the repository workflow changes.
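The working agreement above splits shared defaults (`.claw.json`) from machine-local overrides (`.claw/settings.local.json`). A hypothetical sketch of what a shared `.claw.json` might hold; every key below is an illustrative assumption, not the documented Claw Code schema:

```json
{
  "model": "default",
  "permissions": { "mode": "ask" },
  "hooks": {
    "PreToolUse": [],
    "PostToolUse": []
  }
}
```

A machine-local `.claw/settings.local.json` would then override only the keys that differ on that machine.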
PARITY.md (new file, 214 lines)
@@ -0,0 +1,214 @@
# PARITY GAP ANALYSIS

Scope: a read-only comparison of the original TypeScript source at `/home/bellman/Workspace/claw-code/src/` against the Rust port under `rust/crates/`.

Method: only feature surfaces, registries, entry points, and runtime pipelines were compared. No TypeScript source was copied.

## Executive summary

The Rust port has a solid foundation in:

- the Anthropic API/OAuth base
- local conversation/session state
- the core tool loop
- MCP stdio/bootstrap support
- CLAW.md discovery
- a small but usable set of built-in tools

It is **not yet at feature parity** with the TypeScript CLI.

The biggest gaps:

- **Plugins**: effectively nonexistent in Rust
- **Hooks**: parsed but not executed in Rust
- **CLI breadth**: far narrower in Rust
- **Skills**: Rust supports local files only, without the TS registry/bundling pipeline
- **Assistant orchestration**: missing the hook-aware orchestration and remote/structured transports of TS
- **Services**: beyond the core API/OAuth/MCP, most services are missing in Rust

---
## tools/

### Present in TS

Evidence:

- `src/tools/` contains a broad tool family, including `AgentTool`, `AskUserQuestionTool`, `BashTool`, `ConfigTool`, `FileReadTool`, `FileWriteTool`, `GlobTool`, `GrepTool`, `LSPTool`, `ListMcpResourcesTool`, `MCPTool`, `McpAuthTool`, `ReadMcpResourceTool`, `RemoteTriggerTool`, `ScheduleCronTool`, `SkillTool`, `Task*`, `Team*`, `TodoWriteTool`, `ToolSearchTool`, `WebFetchTool`, `WebSearchTool`.
- Tool execution/orchestration is split across `src/services/tools/StreamingToolExecutor.ts`, `src/services/tools/toolExecution.ts`, `src/services/tools/toolHooks.ts`, and `src/services/tools/toolOrchestration.ts`.

### Present in Rust

Evidence:

- The tool registry is centralized in `rust/crates/tools/src/lib.rs` via `mvp_tool_specs()`.
- Current built-in tools cover shell/file/search/web/todo/skill/agent/config/notebook/REPL/PowerShell primitives.
- Runtime execution is wired through `rust/crates/tools/src/lib.rs` and `rust/crates/runtime/src/conversation.rs`.

### Missing or broken in Rust

- Major TS tools have no Rust equivalent, e.g. `AskUserQuestionTool`, `LSPTool`, `ListMcpResourcesTool`, `MCPTool`, `McpAuthTool`, `ReadMcpResourceTool`, `RemoteTriggerTool`, `ScheduleCronTool`, `Task*`, `Team*`, and several workflow/system tools.
- The Rust tool surface is still explicitly an MVP registry, not a parity registry.
- Rust lacks the layered tool-orchestration split of TS.

**Status:** partial core functionality only.
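The centralized-registry shape described for `mvp_tool_specs()` can be sketched as follows; this is a minimal illustration, with names and fields that are assumptions rather than the actual API of `rust/crates/tools`:

```rust
// Hypothetical sketch of a centralized tool-spec registry in the spirit of
// mvp_tool_specs(); the real crate's types and tool set will differ.

#[derive(Debug, Clone, PartialEq)]
pub struct ToolSpec {
    pub name: &'static str,
    pub description: &'static str,
}

pub fn mvp_tool_specs() -> Vec<ToolSpec> {
    vec![
        ToolSpec { name: "bash", description: "Run a shell command" },
        ToolSpec { name: "file_read", description: "Read a file from disk" },
        ToolSpec { name: "file_write", description: "Write a file to disk" },
        ToolSpec { name: "grep", description: "Search file contents" },
    ]
}

fn main() {
    let specs = mvp_tool_specs();
    // Look a tool up by name, as a runtime dispatcher would.
    let bash = specs.iter().find(|s| s.name == "bash").expect("bash registered");
    println!("{} tools; first: {} ({})", specs.len(), bash.name, bash.description);
}
```

Keeping one function as the single source of tool specs is what makes the surface easy to audit for parity gaps.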
---

## hooks/

### Present in TS

Evidence:

- A hook command surface under `src/commands/hooks/`.
- Runtime hook machinery in `src/services/tools/toolHooks.ts` and `src/services/tools/toolExecution.ts`.
- TS supports `PreToolUse`, `PostToolUse`, and broader hook-driven behavior configured through settings and documented in `src/skills/bundled/updateConfig.ts`.

### Present in Rust

Evidence:

- Hook configuration is parsed and merged in `rust/crates/runtime/src/config.rs`.
- Hook configuration can be inspected through the Rust config reports in `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.
- The prompt guidance in `rust/crates/runtime/src/prompt.rs` mentions hooks.

### Missing or broken in Rust

- There is no actual hook-execution pipeline in `rust/crates/runtime/src/conversation.rs`.
- No `PreToolUse`/`PostToolUse` mutate/deny/rewrite/result hook behavior.
- No Rust equivalent of the `/hooks` command.

**Status:** configuration support only; runtime behavior is missing.
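The mutate/deny/rewrite/result semantics named above can be illustrated with a small sketch; the types, function names, and rules below are hypothetical and only show the shape of a `PreToolUse`/`PostToolUse` pipeline, not the TS implementation:

```rust
// Illustrative hook pipeline: a PreToolUse hook may allow, deny, or rewrite the
// tool input; a PostToolUse hook may rewrite the result the model sees.
#[allow(dead_code)]
enum PreToolDecision {
    Allow,
    Deny(String),         // reject with a reason
    RewriteInput(String), // mutate the tool input before execution
}

fn pre_tool_use(tool: &str, input: &str) -> PreToolDecision {
    match tool {
        "bash" if input.contains("rm -rf") => {
            PreToolDecision::Deny("blocked destructive command".into())
        }
        "bash" => PreToolDecision::RewriteInput(format!("set -e; {input}")),
        _ => PreToolDecision::Allow,
    }
}

fn post_tool_use(tool: &str, result: &str) -> String {
    // Tag the result so downstream consumers can attribute it to the tool.
    format!("[{tool}] {result}")
}

fn main() {
    if let PreToolDecision::RewriteInput(cmd) = pre_tool_use("bash", "echo hi") {
        println!("{}", post_tool_use("bash", &cmd));
    }
}
```

A real pipeline would run every matching hook from configuration in order and stop at the first deny.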
---

## plugins/

### Present in TS

Evidence:

- Built-in plugin scaffolding in `src/plugins/builtinPlugins.ts` and `src/plugins/bundled/index.ts`.
- Plugin lifecycle/services in `src/services/plugins/PluginInstallationManager.ts` and `src/services/plugins/pluginOperations.ts`.
- A CLI/plugin command surface under `src/commands/plugin/` and `src/commands/reload-plugins/`.

### Present in Rust

Evidence:

- No dedicated plugin subsystem appears under `rust/crates/`.
- Repo-wide Rust references to plugins are effectively absent beyond text/help mentions.

### Missing or broken in Rust

- No plugin loader.
- No marketplace install/update/enable/disable flow.
- No `/plugin` or `/reload-plugins` equivalents.
- No path for plugin-provided hooks/tools/commands/MCP extensions.

**Status:** missing.
---

## skills/ and CLAW.md discovery

### Present in TS

Evidence:

- A skill loading/registration pipeline in `src/skills/loadSkillsDir.ts`, `src/skills/bundledSkills.ts`, and `src/skills/mcpSkillBuilders.ts`.
- Bundled skills under `src/skills/bundled/`.
- A skills command surface under `src/commands/skills/`.

### Present in Rust

Evidence:

- The `Skill` tool in `rust/crates/tools/src/lib.rs` can resolve and read local `SKILL.md` files.
- `CLAW.md` discovery is implemented in `rust/crates/runtime/src/prompt.rs`.
- Rust supports `/memory` and `/init` through `rust/crates/commands/src/lib.rs` and `rust/crates/claw-cli/src/main.rs`.

### Missing or broken in Rust

- No equivalent bundled-skill registry.
- No `/skills` command.
- No MCP skill-builder pipeline.
- No TS-like live skill discovery/reload/change handling.
- No session-memory / team-memory integration around skills.

**Status:** basic local skill loading only.
---

## cli/

### Present in TS

Evidence:

- A large command surface under `src/commands/`, including `agents`, `hooks`, `mcp`, `memory`, `model`, `permissions`, `plan`, `plugin`, `resume`, `review`, `skills`, `tasks`, and more.
- A structured/remote transport stack in `src/cli/structuredIO.ts`, `src/cli/remoteIO.ts`, and `src/cli/transports/*`.
- A CLI handler split in `src/cli/handlers/*`.

### Present in Rust

Evidence:

- A shared slash-command registry in `rust/crates/commands/src/lib.rs`.
- Rust slash commands currently cover `help`, `status`, `compact`, `model`, `permissions`, `clear`, `cost`, `resume`, `config`, `memory`, `init`, `diff`, `version`, `export`, `session`.
- The main CLI/REPL/prompt handling lives in `rust/crates/claw-cli/src/main.rs`.

### Missing or broken in Rust

- Major TS command families are missing: `/agents`, `/hooks`, `/mcp`, `/plugin`, `/skills`, `/plan`, `/review`, `/tasks`, and more.
- Rust has no equivalent of the TS structured-IO / remote transport layer.
- No TS-style handler decomposition for auth/plugins/MCP/agents.
- JSON prompt mode has improved on this branch but is still short of clean transport parity: empirical verification shows that tool-enabled JSON output can emit human-readable tool-result lines before the final JSON object.

**Status:** a functional local CLI core with a much narrower scope than TS.
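A shared slash-command registry of the kind described can be sketched in a few lines; the handler names and dispatch rule below are illustrative assumptions, not the actual contents of `rust/crates/commands/src/lib.rs`:

```rust
// Minimal sketch of a slash-command registry with name-based dispatch.
use std::collections::BTreeMap;

type Handler = fn(&str) -> String;

fn cmd_help(_args: &str) -> String {
    "available commands: /help /status".into()
}

fn cmd_status(_args: &str) -> String {
    "session ok".into()
}

fn registry() -> BTreeMap<&'static str, Handler> {
    let mut m: BTreeMap<&'static str, Handler> = BTreeMap::new();
    m.insert("help", cmd_help);
    m.insert("status", cmd_status);
    m
}

fn dispatch(line: &str) -> Option<String> {
    // Accept input like "/help extra args": strip the slash, split off the name.
    let rest = line.strip_prefix('/')?;
    let (name, args) = rest.split_once(' ').unwrap_or((rest, ""));
    registry().get(name).map(|h| h(args))
}

fn main() {
    println!("{}", dispatch("/status").unwrap());
}
```

Sharing one registry between the REPL and any server front end keeps the command surface consistent as new families are added.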
---

## assistant/ (agent loop, streaming, tool calls)

### Present in TS

Evidence:

- The assistant/session surface in `src/assistant/sessionHistory.ts`.
- Tool orchestration in `src/services/tools/StreamingToolExecutor.ts`, `src/services/tools/toolExecution.ts`, and `src/services/tools/toolOrchestration.ts`.
- Remote/structured streaming layers in `src/cli/structuredIO.ts` and `src/cli/remoteIO.ts`.

### Present in Rust

Evidence:

- The core loop in `rust/crates/runtime/src/conversation.rs`.
- Streaming/tool event translation in `rust/crates/claw-cli/src/main.rs`.
- Session persistence in `rust/crates/runtime/src/session.rs`.

### Missing or broken in Rust

- No TS-like hook-aware orchestration layer.
- No TS structured/remote assistant transport stack.
- None of the richer TS assistant/session-history/background-task integration.
- The JSON output path is no longer single-turn-only on this branch, but output cleanliness still lags the TS transport expectations.

**Status:** a strong core loop, missing the orchestration layers.
---

## services/ (API clients, auth, models, MCP)

### Present in TS

Evidence:

- API services under `src/services/api/*`.
- OAuth services under `src/services/oauth/*`.
- MCP services under `src/services/mcp/*`.
- Additional service layers under `src/services/*`: analytics, prompt suggestions, session memory, plugin operations, settings sync, policy limits, team-memory sync, notifiers, voice, and more.

### Present in Rust

Evidence:

- The core Anthropic API client in `rust/crates/api/src/{client,error,sse,types}.rs`.
- OAuth support in `rust/crates/runtime/src/oauth.rs`.
- MCP config/bootstrap/client support in `rust/crates/runtime/src/{config,mcp,mcp_client,mcp_stdio}.rs`.
- Usage accounting in `rust/crates/runtime/src/usage.rs`.
- Remote upstream proxy support in `rust/crates/runtime/src/remote.rs`.

### Missing or broken in Rust

- Most of the TS service ecosystem beyond core messaging/auth/MCP is missing.
- No equivalent of the TS plugin service layer.
- No equivalents of the TS analytics/settings-sync/policy-limit/team-memory subsystems.
- No TS-style MCP connector/UI layer.
- Model/provider ergonomics remain thinner than in TS.

**Status:** the core foundation exists; the broader service ecosystem is missing.
---

## Status of key defects on the branch in progress

### Fixed

- **Prompt-mode tools enabled**
  - `rust/crates/claw-cli/src/main.rs` now constructs prompt mode with `LiveCli::new(model, true, ...)`.
- **Default permission mode = DangerFullAccess**
  - The runtime default in `rust/crates/claw-cli/src/main.rs` now resolves to `DangerFullAccess`.
  - The Clap default in `rust/crates/claw-cli/src/args.rs` also uses `DangerFullAccess`.
  - The init template in `rust/crates/claw-cli/src/init.rs` writes `dontAsk`.
- **Streaming `{}` tool-input prefix bug**
  - `rust/crates/claw-cli/src/main.rs` now strips the initial empty object only for streamed tool input, while preserving a legitimate `{}` in non-streamed responses.
- **Unbounded max_iterations**
  - Verified via `usize::MAX` in `rust/crates/runtime/src/conversation.rs`.

### Remaining notable parity issues

- **JSON prompt output cleanliness**
  - Tool-enabled JSON mode can now loop, but empirical verification shows human-readable tool-result output still appearing before the JSON when tools fire.
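The streamed `{}` prefix fix described above amounts to a conditional strip; this sketch shows the shape of that rule, with a hypothetical function name rather than the actual code in `rust/crates/claw-cli/src/main.rs`:

```rust
// Strip a spurious leading "{}" only when the tool input arrived via streaming;
// a legitimate empty object in a non-streamed response must survive intact.
fn normalize_streamed_tool_input(raw: &str, streamed: bool) -> &str {
    if streamed {
        if let Some(rest) = raw.strip_prefix("{}") {
            return rest;
        }
    }
    raw
}

fn main() {
    // Streamed input with the bogus prefix gets cleaned.
    assert_eq!(
        normalize_streamed_tool_input("{}{\"cmd\":\"ls\"}", true),
        "{\"cmd\":\"ls\"}"
    );
    // A non-streamed empty object is preserved.
    assert_eq!(normalize_streamed_tool_input("{}", false), "{}");
    println!("ok");
}
```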
README.md (157 changes)

@@ -1,7 +1,7 @@
# Rewriting Project Claw Code

<p align="center">
  <strong>⭐ The fastest repo in history to surpass 50K stars, reaching the milestone in just 2 hours after publication ⭐</strong>
</p>

<p align="center">
@@ -19,7 +19,7 @@
</p>

<p align="center">
  <strong>Better Harness Tools, not merely storing the archive of leaked Claw Code</strong>
</p>

<p align="center">
@@ -27,63 +27,86 @@
</p>

> [!IMPORTANT]
> **Rust port is now in progress** on the [`dev/rust`](https://github.com/instructkr/claw-code/tree/dev/rust) branch and is expected to be merged into main today. The Rust implementation aims to deliver a faster, memory-safe harness runtime. Stay tuned — this will be the definitive version of the project.

> If you find this work useful, consider [sponsoring @instructkr on GitHub](https://github.com/sponsors/instructkr) to support continued open-source harness engineering research.

---

## Rust Port

The Rust workspace under `rust/` is the current systems-language port of this project.

It currently includes:

- `crates/api-client` — API client with provider abstraction, OAuth, and streaming support
- `crates/runtime` — session state, compaction, MCP orchestration, and prompt construction
- `crates/tools` — tool manifest definitions and the execution framework
- `crates/commands` — slash commands, skill discovery, and config inspection
- `crates/plugins` — plugin model, hook pipeline, and built-in plugins
- `crates/compat-harness` — compatibility layer for upstream editor integrations
- `crates/claw-cli` — interactive REPL, Markdown rendering, and the project bootstrap/init flow

Run the Rust build:

```bash
cd rust
cargo build --release
```

## Backstory

At 4 AM on March 31, 2026, I woke up to my phone blowing up with notifications. The Claw Code source had been exposed, and the entire dev community was in a frenzy. My girlfriend in Korea was genuinely worried I might face legal action from the original authors just for having the code on my machine — so I did what any engineer would do under pressure: I sat down, ported the core features to Python from scratch, and pushed it before the sun came up.

The whole thing was orchestrated end-to-end using [oh-my-codex (OmX)](https://github.com/Yeachan-Heo/oh-my-codex) by [@bellman_ych](https://x.com/bellman_ych) — a workflow layer built on top of OpenAI's Codex ([@OpenAIDevs](https://x.com/OpenAIDevs)). I used `$team` mode for parallel code review and `$ralph` mode for persistent execution loops with architect-level verification. The entire porting session — from reading the original harness structure to producing a working Python tree with tests — was driven through OmX orchestration.

The result is a clean-room Python rewrite that captures the architectural patterns of Claw Code's agent harness without copying any proprietary source. I'm now actively collaborating with [@bellman_ych](https://x.com/bellman_ych) — the creator of OmX himself — to push this further. The basic Python foundation is already in place and functional, but we're just getting started. **Stay tuned — a much more capable version is on the way.**

The Rust port was developed with both [oh-my-codex (OmX)](https://github.com/Yeachan-Heo/oh-my-codex) and [oh-my-opencode (OmO)](https://github.com/code-yeongyu/oh-my-openagent): OmX drove the scaffolding, orchestration, and architectural direction, while OmO was used for later-stage implementation acceleration and verification support.

https://github.com/instructkr/claw-code


## The Creators Featured in Wall Street Journal For Avid Claude Code Fans

I've been deeply interested in **harness engineering** — studying how agent systems wire tools, orchestrate tasks, and manage runtime context. This isn't a sudden thing. The Wall Street Journal featured my work earlier this month, documenting how I've been one of the most active power users exploring these systems:

> AI startup worker Sigrid Jin, who attended the Seoul dinner, single-handedly used 25 billion Claude Code tokens last year. At the time, usage limits were looser, allowing early enthusiasts to reach tens of billions of tokens at a very low cost.
>
> Despite his countless hours with Claude Code, Jin isn't faithful to any one AI lab. The tools available have different strengths and weaknesses, he said. Codex is better at reasoning, while Claude Code generates cleaner, more shareable code.
>
> Jin flew to San Francisco in February for Claude Code's first birthday party, where attendees waited in line to compare notes with Cherny. The crowd included a practicing cardiologist from Belgium who had built an app to help patients navigate care, and a California lawyer who made a tool for automating building permit approvals using Claude Code.
>
> "It was basically like a sharing party," Jin said. "There were lawyers, there were doctors, there were dentists. They did not have software engineering backgrounds."
>
> — *The Wall Street Journal*, March 21, 2026, [*"The Trillion Dollar Race to Automate Our Entire Lives"*](https://lnkd.in/gs9td3qd)



---

## Porting Status

The main source tree is now Python-first.

- `src/` contains the active Python porting workspace
- `tests/` verifies the current Python workspace
- the exposed snapshot is no longer part of the tracked repository state

The current Python workspace is not yet a complete one-to-one replacement for the original system, but the primary implementation surface is now Python.

## Why this rewrite exists

I originally studied the exposed codebase to understand its harness, tool wiring, and agent workflow. After spending more time with the legal and ethical questions—and after reading the essay linked below—I did not want the exposed snapshot itself to remain the main tracked source tree.

This repository now focuses on Python porting work instead.

## Repository Layout

```text
.
├── src/                 # Python porting workspace
│   ├── __init__.py
│   ├── commands.py
│   ├── main.py
@@ -92,100 +115,114 @@ This repository now focuses on Python porting work instead.
│   ├── query_engine.py
│   ├── task.py
│   └── tools.py
├── rust/                # Rust port (claw CLI)
│   ├── crates/api/      # API client + streaming
│   ├── crates/runtime/  # sessions, tools, MCP, config
│   ├── crates/claw-cli/ # interactive CLI binary
│   ├── crates/plugins/  # plugin system
│   ├── crates/commands/ # slash commands
│   ├── crates/server/   # HTTP/SSE server (axum)
│   ├── crates/lsp/      # LSP client integration
│   └── crates/tools/    # tool specifications
├── tests/               # Python verification
├── assets/omx/          # OmX workflow screenshots
├── 2026-03-09-is-legal-the-same-as-legitimate-ai-reimplementation-and-the-erosion-of-copyleft.md
└── README.md
```

## Python Workspace Overview

The new Python `src/` tree currently provides:

- **`port_manifest.py`** — summarizes the current Python workspace structure
- **`models.py`** — dataclasses for subsystems, modules, and backlog state
- **`commands.py`** — Python-side command port metadata
- **`tools.py`** — Python-side tool port metadata
- **`query_engine.py`** — renders a Python porting summary from the active workspace
- **`main.py`** — a CLI entrypoint for manifest and summary output

## Quickstart

Render the Python porting summary:

```bash
python3 -m src.main summary
```

Print the current Python workspace manifest:

```bash
python3 -m src.main manifest
```

List the current Python modules:

```bash
python3 -m src.main subsystems --limit 16
```

Run verification:

```bash
python3 -m unittest discover -s tests -v
```

Run the parity audit against the local ignored archive (when present):

```bash
python3 -m src.main parity-audit
```

Inspect mirrored command/tool inventories:

```bash
python3 -m src.main commands --limit 10
python3 -m src.main tools --limit 10
```

## Current Parity Checkpoint

The port now mirrors the archived root-entry file surface, top-level subsystem names, and command/tool inventories much more closely than before. However, it is **not yet** a full runtime-equivalent replacement for the original TypeScript system; the Python tree still contains fewer executable runtime slices than the archived source.

## Built with `oh-my-codex` and `oh-my-opencode`

The porting, clean-room hardening, and verification workflows in this repository were AI-assisted with Yeachan Heo's tool stack, with **oh-my-codex (OmX)** as the primary scaffolding and orchestration layer.

- [**oh-my-codex (OmX)**](https://github.com/Yeachan-Heo/oh-my-codex) — scaffolding, orchestration, architectural direction, and the core porting workflow
- [**oh-my-opencode (OmO)**](https://github.com/code-yeongyu/oh-my-openagent) — implementation acceleration, cleanup, and verification support

Key workflow modes used during the port:

- **`$team` mode:** coordinated parallel review and architectural feedback
- **`$ralph` mode:** persistent execution, verification, and completion discipline
- **Clean-room passes:** naming/branding cleanup, QA, and release verification across the Rust workspace
- **Manual and live verification:** builds, tests, manual QA, and real API-path verification before release

### OmX workflow screenshots



*Ralph/team orchestration view while the README and essay context were being reviewed in terminal panes.*



*Split-pane review and verification flow during the final README wording pass.*

## Community

<p align="center">
  <a href="https://instruct.kr/"><img src="assets/instructkr.png" alt="instructkr" width="400" /></a>
</p>

Join the [**instructkr Discord**](https://instruct.kr/) — the best Korean language model community. Come chat about LLMs, harness engineering, agent workflows, and everything in between.

[](https://instruct.kr/)

## Star History

See the chart at the top of this README.

## Ownership / Affiliation Disclaimer

- This repository does **not** claim ownership of the original Claw Code source material.
- This repository is **not affiliated with, endorsed by, or maintained by the original authors**.

@@ -1 +0,0 @@
{"messages":[],"version":1}
@@ -1 +0,0 @@
{"messages":[],"version":1}
@@ -1 +0,0 @@
{"messages":[],"version":1}
@@ -1 +0,0 @@
{"messages":[],"version":1}
@@ -1 +0,0 @@
{"messages":[],"version":1}
@@ -1 +0,0 @@
{"messages":[],"version":1}
rust/.github/workflows/ci.yml (new vendored file, 36 lines)

@@ -0,0 +1,36 @@
name: CI

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  rust:
    name: ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os:
          - ubuntu-latest
          - macos-latest

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Install Rust toolchain
        uses: dtolnay/rust-toolchain@stable

      - name: Run cargo check
        run: cargo check --workspace

      - name: Run cargo test
        run: cargo test --workspace

      - name: Run release build
        run: cargo build --release
rust/CONTRIBUTING.md (new file, 43 lines)

@@ -0,0 +1,43 @@
# Contributing Guide

Thank you for contributing to Claw Code.

## Development setup

- Install a stable Rust toolchain.
- Develop from the root of this Rust workspace. If you start from the parent repository root, run `cd rust/` first.

## Building

```bash
cargo build
cargo build --release
```

## Testing and verification

Run the full Rust verification set before opening a pull request:

```bash
cargo fmt --all --check
cargo clippy --workspace --all-targets -- -D warnings
cargo check --workspace
cargo test --workspace
```

If you change behavior, add or update the relevant tests in the same pull request.

## Code style

- Follow the existing patterns in the crate you modify rather than introducing new styles.
- Format code with `rustfmt`.
- Keep `clippy` passing for the workspace targets you touch.
- Prefer targeted diffs over drive-by refactors.

## Pull requests

- Branch from `main`.
- Scope each pull request to one clear change.
- Describe the motivation, a summary of the implementation, and the verification you ran.
- Make sure local checks pass before requesting review.
- If review feedback changes behavior, re-run the relevant verification commands.
rust/Cargo.lock (generated, 394 changes)

@@ -28,12 +28,86 @@ dependencies = [
 "tokio",
]

[[package]]
name = "async-stream"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b5a71a6f37880a80d1d7f19efd781e4b5de42c88f0722cc13bcb6cc2cfe8476"
dependencies = [
 "async-stream-impl",
 "futures-core",
 "pin-project-lite",
]

[[package]]
name = "async-stream-impl"
version = "0.3.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7c24de15d275a1ecfd47a380fb4d5ec9bfe0933f309ed5e705b775596a3574d"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "atomic-waker"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1505bd5d3d116872e7271a6d4e16d81d0c8570876c8de68093a09ac269d8aac0"

[[package]]
name = "axum"
version = "0.8.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b52af3cb4058c895d37317bb27508dccc8e5f2d39454016b297bf4a400597b8"
dependencies = [
 "axum-core",
 "bytes",
 "form_urlencoded",
 "futures-util",
 "http",
 "http-body",
 "http-body-util",
 "hyper",
 "hyper-util",
 "itoa",
 "matchit",
 "memchr",
 "mime",
 "percent-encoding",
 "pin-project-lite",
 "serde_core",
 "serde_json",
 "serde_path_to_error",
 "serde_urlencoded",
 "sync_wrapper",
 "tokio",
 "tower",
 "tower-layer",
 "tower-service",
 "tracing",
]

[[package]]
name = "axum-core"
version = "0.5.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08c78f31d7b1291f7ee735c1c6780ccde7785daae9a9206026862dab7d8792d1"
dependencies = [
 "bytes",
 "futures-core",
 "http",
 "http-body",
 "http-body-util",
 "mime",
 "pin-project-lite",
 "sync_wrapper",
 "tower-layer",
 "tower-service",
 "tracing",
]

[[package]]
name = "base64"
version = "0.22.1"
@@ -49,6 +123,12 @@ dependencies = [
 "serde",
]

[[package]]
name = "bitflags"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"

[[package]]
name = "bitflags"
version = "2.11.0"
@@ -98,11 +178,40 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613afe47fcd5fac7ccf1db93babcb082c5994d996f20b8b159f2ad1658eb5724"

[[package]]
name = "claw-cli"
version = "0.1.0"
dependencies = [
 "api",
 "commands",
 "compat-harness",
 "crossterm",
 "plugins",
 "pulldown-cmark",
 "runtime",
 "rustyline",
 "serde_json",
 "syntect",
 "tokio",
 "tools",
]

[[package]]
name = "clipboard-win"
version = "5.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bde03770d3df201d4fb868f2c9c59e66a3e4e2bd06692a0fe701e7103c7e84d4"
dependencies = [
 "error-code",
]

[[package]]
name = "commands"
version = "0.1.0"
dependencies = [
 "plugins",
 "runtime",
 "serde_json",
]

[[package]]
@@ -138,11 +247,11 @@ version = "0.28.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "829d955a0bb380ef178a640b91779e3987da38c9aea133b20614cfed8cdea9c6"
dependencies = [
 "bitflags",
 "bitflags 2.11.0",
 "crossterm_winapi",
 "mio",
 "parking_lot",
 "rustix",
 "rustix 0.38.44",
 "signal-hook",
 "signal-hook-mio",
 "winapi",
@@ -197,6 +306,12 @@ dependencies = [
 "syn",
]

[[package]]
name = "endian-type"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c34f04666d835ff5d62e058c3995147c06f42fe86ff053337632bca83e42702d"

[[package]]
name = "equivalent"
version = "1.0.2"
@@ -213,6 +328,23 @@ dependencies = [
 "windows-sys 0.61.2",
]

[[package]]
name = "error-code"
version = "3.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dea2df4cf52843e0452895c455a1a2cfbb842a1e7329671acf418fdc53ed4c59"

[[package]]
name = "fd-lock"
version = "4.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ce92ff622d6dadf7349484f42c93271a0d49b7cc4d466a936405bacbe10aa78"
dependencies = [
 "cfg-if",
 "rustix 1.1.4",
 "windows-sys 0.59.0",
]

[[package]]
name = "find-msvc-tools"
version = "0.1.9"
@@ -229,6 +361,15 @@ dependencies = [
 "miniz_oxide",
]

[[package]]
name = "fluent-uri"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "17c704e9dbe1ddd863da1e6ff3567795087b1eb201ce80d8fa81162e1516500d"
dependencies = [
 "bitflags 1.3.2",
]

[[package]]
name = "fnv"
version = "1.0.7"
@@ -266,6 +407,17 @@ version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cecba35d7ad927e23624b22ad55235f2239cfa44fd10428eecbeba6d6a717718"

[[package]]
name = "futures-macro"
version = "0.3.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e835b70203e41293343137df5c0664546da5745f82ec9b84d40be8336958447b"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "futures-sink"
version = "0.3.32"
@@ -286,6 +438,7 @@ checksum = "389ca41296e6190b48053de0321d02a77f32f8a5d2461dd38762c0593805c6d6"
dependencies = [
 "futures-core",
 "futures-io",
 "futures-macro",
 "futures-sink",
 "futures-task",
 "memchr",
@@ -351,6 +504,15 @@ version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"

[[package]]
name = "home"
version = "0.5.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc627f471c528ff0c4a49e1d5e60450c8f6461dd6d10ba9dcd3a61d3dff7728d"
dependencies = [
 "windows-sys 0.61.2",
]

[[package]]
name = "http"
version = "1.4.0"
@@ -390,6 +552,12 @@ version = "1.10.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6dbf3de79e51f3d586ab4cb9d5c3e2c14aa28ed23d180cf89b4df0454a69cc87"

[[package]]
name = "httpdate"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df3b46402a9d5adb4c86a0cf463f42e19994e3ee891101b1841f30a545cb49a9"

[[package]]
name = "hyper"
version = "1.9.0"
@@ -403,6 +571,7 @@ dependencies = [
 "http",
 "http-body",
 "httparse",
 "httpdate",
 "itoa",
 "pin-project-lite",
 "smallvec",
@@ -614,6 +783,12 @@ version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d26c52dbd32dccf2d10cac7725f8eae5296885fb5703b261f7d0a0739ec807ab"

[[package]]
name = "linux-raw-sys"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "32a66949e030da00e8c7d4434b251670a91556f4144941d37452769c25d58a53"
|
||||
|
||||
[[package]]
|
||||
name = "litemap"
|
||||
version = "0.8.1"
|
||||
@ -641,12 +816,48 @@ version = "0.1.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "112b39cec0b298b6c1999fee3e31427f74f676e4cb9879ed1a121b43661a4154"
|
||||
|
||||
[[package]]
|
||||
name = "lsp"
|
||||
version = "0.1.0"
|
||||
dependencies = [
|
||||
"lsp-types",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"tokio",
|
||||
"url",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "lsp-types"
|
||||
version = "0.97.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "53353550a17c04ac46c585feb189c2db82154fc84b79c7a66c96c2c644f66071"
|
||||
dependencies = [
|
||||
"bitflags 1.3.2",
|
||||
"fluent-uri",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"serde_repr",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "matchit"
|
||||
version = "0.8.4"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "47e1ffaa40ddd1f3ed91f717a33c8c0ee23fff369e3aa8772b9605cc1d22f4c3"
|
||||
|
||||
[[package]]
|
||||
name = "memchr"
|
||||
version = "2.8.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f8ca58f447f06ed17d5fc4043ce1b10dd205e060fb3ce5b979b8ed8e59ff3f79"
|
||||
|
||||
[[package]]
|
||||
name = "mime"
|
||||
version = "0.3.17"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "6877bb514081ee2a7ff5ef9de3281f14a4dd4bceac4c09388074a6b5df8a139a"
|
||||
|
||||
[[package]]
|
||||
name = "miniz_oxide"
|
||||
version = "0.8.9"
|
||||
@ -669,6 +880,27 @@ dependencies = [
|
||||
"windows-sys 0.61.2",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nibble_vec"
|
||||
version = "0.1.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "77a5d83df9f36fe23f0c3648c6bbb8b0298bb5f1939c8f2704431371f4b84d43"
|
||||
dependencies = [
|
||||
"smallvec",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nix"
|
||||
version = "0.29.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "71e2746dc3a24dd78b3cfcb7be93368c6de9963d30f43a6a73998a9cf4b17b46"
|
||||
dependencies = [
|
||||
"bitflags 2.11.0",
|
||||
"cfg-if",
|
||||
"cfg_aliases",
|
||||
"libc",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "num-conv"
|
||||
version = "0.2.1"
|
||||
@ -687,7 +919,7 @@ version = "6.5.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "336b9c63443aceef14bea841b899035ae3abe89b7c486aaf4c5bd8aafedac3f0"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
"bitflags 2.11.0",
|
||||
"libc",
|
||||
"once_cell",
|
||||
"onig_sys",
|
||||
@ -757,6 +989,14 @@ dependencies = [
|
||||
"time",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "plugins"
|
||||
version = "0.1.0"
|
||||
dependencies = [
|
||||
"serde",
|
||||
"serde_json",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "potential_utf"
|
||||
version = "0.1.4"
|
||||
@ -796,7 +1036,7 @@ version = "0.13.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "7c3a14896dfa883796f1cb410461aef38810ea05f2b2c33c5aded3649095fdad"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
"bitflags 2.11.0",
|
||||
"getopts",
|
||||
"memchr",
|
||||
"pulldown-cmark-escape",
|
||||
@ -888,6 +1128,16 @@ version = "5.3.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
|
||||
|
||||
[[package]]
|
||||
name = "radix_trie"
|
||||
version = "0.2.1"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "c069c179fcdc6a2fe24d8d18305cf085fdbd4f922c041943e203685d6a1c58fd"
|
||||
dependencies = [
|
||||
"endian-type",
|
||||
"nibble_vec",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "rand"
|
||||
version = "0.9.2"
|
||||
@ -923,7 +1173,7 @@ version = "0.5.18"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ed2bf2547551a7053d6fdfafda3f938979645c44812fbfcda098faae3f1a362d"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
"bitflags 2.11.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@ -985,12 +1235,14 @@ dependencies = [
|
||||
"sync_wrapper",
|
||||
"tokio",
|
||||
"tokio-rustls",
|
||||
"tokio-util",
|
||||
"tower",
|
||||
"tower-http",
|
||||
"tower-service",
|
||||
"url",
|
||||
"wasm-bindgen",
|
||||
"wasm-bindgen-futures",
|
||||
"wasm-streams",
|
||||
"web-sys",
|
||||
"webpki-roots",
|
||||
]
|
||||
@ -1014,6 +1266,8 @@ name = "runtime"
|
||||
version = "0.1.0"
|
||||
dependencies = [
|
||||
"glob",
|
||||
"lsp",
|
||||
"plugins",
|
||||
"regex",
|
||||
"serde",
|
||||
"serde_json",
|
||||
@ -1034,11 +1288,24 @@ version = "0.38.44"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "fdb5bc1ae2baa591800df16c9ca78619bf65c0488b41b96ccec5d11220d8c154"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
"bitflags 2.11.0",
|
||||
"errno",
|
||||
"libc",
|
||||
"linux-raw-sys",
|
||||
"windows-sys 0.52.0",
|
||||
"linux-raw-sys 0.4.15",
|
||||
"windows-sys 0.59.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "rustix"
|
||||
version = "1.1.4"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b6fe4565b9518b83ef4f91bb47ce29620ca828bd32cb7e408f0062e9930ba190"
|
||||
dependencies = [
|
||||
"bitflags 2.11.0",
|
||||
"errno",
|
||||
"libc",
|
||||
"linux-raw-sys 0.12.1",
|
||||
"windows-sys 0.61.2",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@ -1098,6 +1365,28 @@ dependencies = [
|
||||
"tools",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "rustyline"
|
||||
version = "15.0.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "2ee1e066dc922e513bda599c6ccb5f3bb2b0ea5870a579448f2622993f0a9a2f"
|
||||
dependencies = [
|
||||
"bitflags 2.11.0",
|
||||
"cfg-if",
|
||||
"clipboard-win",
|
||||
"fd-lock",
|
||||
"home",
|
||||
"libc",
|
||||
"log",
|
||||
"memchr",
|
||||
"nix",
|
||||
"radix_trie",
|
||||
"unicode-segmentation",
|
||||
"unicode-width",
|
||||
"utf8parse",
|
||||
"windows-sys 0.59.0",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "ryu"
|
||||
version = "1.0.23"
|
||||
@ -1162,6 +1451,28 @@ dependencies = [
|
||||
"zmij",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "serde_path_to_error"
|
||||
version = "0.1.20"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "10a9ff822e371bb5403e391ecd83e182e0e77ba7f6fe0160b795797109d1b457"
|
||||
dependencies = [
|
||||
"itoa",
|
||||
"serde",
|
||||
"serde_core",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "serde_repr"
|
||||
version = "0.1.20"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "175ee3e80ae9982737ca543e96133087cbd9a485eecc3bc4de9c1a37b47ea59c"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
"syn",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "serde_urlencoded"
|
||||
version = "0.7.1"
|
||||
@ -1174,6 +1485,19 @@ dependencies = [
|
||||
"serde",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "server"
|
||||
version = "0.1.0"
|
||||
dependencies = [
|
||||
"async-stream",
|
||||
"axum",
|
||||
"reqwest",
|
||||
"runtime",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"tokio",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "sha2"
|
||||
version = "0.10.9"
|
||||
@ -1427,14 +1751,30 @@ dependencies = [
|
||||
"tokio",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "tokio-util"
|
||||
version = "0.7.18"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9ae9cec805b01e8fc3fd2fe289f89149a9b66dd16786abd8b19cfa7b48cb0098"
|
||||
dependencies = [
|
||||
"bytes",
|
||||
"futures-core",
|
||||
"futures-sink",
|
||||
"pin-project-lite",
|
||||
"tokio",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "tools"
|
||||
version = "0.1.0"
|
||||
dependencies = [
|
||||
"api",
|
||||
"plugins",
|
||||
"reqwest",
|
||||
"runtime",
|
||||
"serde",
|
||||
"serde_json",
|
||||
"tokio",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@ -1450,6 +1790,7 @@ dependencies = [
|
||||
"tokio",
|
||||
"tower-layer",
|
||||
"tower-service",
|
||||
"tracing",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@ -1458,7 +1799,7 @@ version = "0.6.8"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
|
||||
dependencies = [
|
||||
"bitflags",
|
||||
"bitflags 2.11.0",
|
||||
"bytes",
|
||||
"futures-util",
|
||||
"http",
|
||||
@ -1488,6 +1829,7 @@ version = "0.1.44"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "63e71662fa4b2a2c3a26f570f037eb95bb1f85397f3cd8076caed2f026a6d100"
|
||||
dependencies = [
|
||||
"log",
|
||||
"pin-project-lite",
|
||||
"tracing-core",
|
||||
]
|
||||
@ -1525,6 +1867,12 @@ version = "1.0.24"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "e6e4313cd5fcd3dad5cafa179702e2b244f760991f45397d14d4ebf38247da75"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-segmentation"
|
||||
version = "1.13.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "9629274872b2bfaf8d66f5f15725007f635594914870f65218920345aa11aa8c"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-width"
|
||||
version = "0.2.2"
|
||||
@ -1555,6 +1903,12 @@ version = "1.0.4"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "b6c140620e7ffbb22c2dee59cafe6084a59b5ffc27a8859a5f0d494b5d52b6be"
|
||||
|
||||
[[package]]
|
||||
name = "utf8parse"
|
||||
version = "0.2.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "06abde3611657adf66d383f00b093d7faecc7fa57071cce2578660c9f1010821"
|
||||
|
||||
[[package]]
|
||||
name = "version_check"
|
||||
version = "0.9.5"
|
||||
@ -1650,6 +2004,19 @@ dependencies = [
|
||||
"unicode-ident",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "wasm-streams"
|
||||
version = "0.4.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "15053d8d85c7eccdbefef60f06769760a563c7f0a9d6902a13d35c7800b0ad65"
|
||||
dependencies = [
|
||||
"futures-util",
|
||||
"js-sys",
|
||||
"wasm-bindgen",
|
||||
"wasm-bindgen-futures",
|
||||
"web-sys",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "web-sys"
|
||||
version = "0.3.93"
|
||||
@ -1725,6 +2092,15 @@ dependencies = [
|
||||
"windows-targets 0.52.6",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "windows-sys"
|
||||
version = "0.59.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "1e38bc4d79ed67fd075bcc251a1c39b32a1776bbe92e5bef1f0bf1f8c531853b"
|
||||
dependencies = [
|
||||
"windows-targets 0.52.6",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "windows-sys"
|
||||
version = "0.60.2"
|
||||
|
||||
@@ -8,6 +8,10 @@ edition = "2021"
license = "MIT"
publish = false

[workspace.dependencies]
lsp-types = "0.97"
serde_json = "1"

[workspace.lints.rust]
unsafe_code = "forbid"
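The new `[workspace.dependencies]` table pins shared crate versions once at the workspace root; member crates then opt in per dependency instead of repeating version strings. A minimal sketch of a hypothetical member manifest using that inheritance:

```toml
# crates/<member>/Cargo.toml — inherit versions pinned in the workspace root.
# The member name is illustrative; the inheritance syntax is standard Cargo.
[dependencies]
serde_json.workspace = true
lsp-types.workspace = true
```

Cargo resolves `serde_json.workspace = true` against the root's `serde_json = "1"`, so bumping the version in one place updates every member that inherits it.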
256 rust/README.md
@@ -1,230 +1,122 @@
# Rusty Claude CLI
# Claw Code

`rust/` contains the Rust workspace for the integrated `rusty-claude-cli` deliverable.
It is intended to be something you can clone, build, and run directly.
Claw Code is a local coding-agent command-line tool implemented in safe Rust. It is inspired by **Claude Code** and developed as a **clean-room implementation**: it aims to deliver a capable local agent experience, but it is **not** a direct port or copy of Claude Code.

## Workspace layout
The Rust workspace is the current primary product surface. The `claw` binary provides interactive sessions, one-shot prompts, workspace-aware tools, local agent workflows, and plugin-enabled operation within a single workspace.

```text
rust/
├── Cargo.toml
├── Cargo.lock
├── README.md
└── crates/
    ├── api/              # Anthropic API client + SSE streaming support
    ├── commands/         # Shared slash-command metadata/help surfaces
    ├── compat-harness/   # Upstream TS manifest extraction harness
    ├── runtime/          # Session/runtime/config/prompt orchestration
    ├── rusty-claude-cli/ # Main CLI binary
    └── tools/            # Built-in tool implementations
```
## Current status

## Prerequisites
- **Version:** `0.1.0`
- **Release stage:** initial public release, distributed as source builds
- **Primary implementation:** the Rust workspace in this repository
- **Platform focus:** macOS and Linux development workstations

- Rust toolchain installed (`rustup`, stable toolchain)
- Network access and Anthropic credentials for live prompt/REPL usage
## Install, build, and run

## Build
### Prerequisites

From the repository root:
- Stable Rust toolchain
- Cargo
- Provider credentials for the models you want to use

### Authentication

Anthropic-compatible models:

```bash
cd rust
cargo build --release -p rusty-claude-cli
export ANTHROPIC_API_KEY="..."
# Optional when using a compatible endpoint
export ANTHROPIC_BASE_URL="https://api.anthropic.com"
```

The optimized binary will be written to:
Grok models:

```bash
./target/release/rusty-claude-cli
export XAI_API_KEY="..."
# Optional when using a compatible endpoint
export XAI_BASE_URL="https://api.x.ai"
```

## Test

Run the verified workspace test suite used for release-readiness:
OAuth login is also available:

```bash
cd rust
cargo test --workspace --exclude compat-harness
cargo run --bin claw -- login
```

## Quick start

### Show help
### Local install

```bash
cd rust
cargo run -p rusty-claude-cli -- --help
cargo install --path crates/claw-cli --locked
```

### Print version
### Build from source

```bash
cd rust
cargo run -p rusty-claude-cli -- --version
cargo build --release -p claw-cli
```

### Login with OAuth
### Run

Configure `settings.json` with an `oauth` block containing `clientId`, `authorizeUrl`, `tokenUrl`, optional `callbackPort`, and optional `scopes`, then run:
Run inside the workspace:

```bash
cd rust
cargo run -p rusty-claude-cli -- login
cargo run --bin claw -- --help
cargo run --bin claw --
cargo run --bin claw -- prompt "Summarize this workspace"
cargo run --bin claw -- --model sonnet "Review the latest changes"
```

This opens the browser, listens on the configured localhost callback, exchanges the auth code for tokens, and stores OAuth credentials in `~/.claude/credentials.json` (or `$CLAUDE_CONFIG_HOME/credentials.json`).

### Logout
Run the release build:

```bash
cd rust
cargo run -p rusty-claude-cli -- logout
./target/release/claw
./target/release/claw prompt "Explain crates/runtime"
```

This removes only the stored OAuth credentials and preserves unrelated JSON fields in `credentials.json`.
## Supported features

### Self-update
- Interactive REPL and one-shot prompt execution
- Inspection and resume flows for saved sessions
- Built-in workspace tools: shell, file read/write/edit, search, web fetch/search, todos, and notebook updates
- Slash commands: status, compaction, config inspection, diff, export, session management, and version reporting
- Local agent and skill discovery via `claw agents` and `claw skills`
- Plugin discovery and management through the CLI and slash-command surfaces
- OAuth login/logout, plus model/provider selection from the command line
- Workspace-aware instruction/config loading (`CLAW.md`, config files, permissions, plugin settings)

```bash
cd rust
cargo run -p rusty-claude-cli -- self-update
```
## Current limitations

The command checks the latest GitHub release for `instructkr/clawd-code`, compares it to the current binary version, downloads the matching binary asset plus checksum manifest, verifies SHA-256, replaces the current executable, and prints the release changelog. If no published release or matching asset exists, it exits safely with an explanatory message.
- The public release is currently **source-build only**; crates.io publishing is not yet set up for this workspace
- GitHub CI verifies `cargo check`, `cargo test`, and release builds, but automated release packaging is not yet provided
- CI currently targets Ubuntu and macOS; Windows release readiness is still to be established
- Some live provider integration coverage is optional because it requires external credentials and network access
- The command surface may continue to evolve during the `0.x` series

## Usage examples
## Implementation status

### 1) Prompt mode
The Rust workspace is the current product implementation. It currently contains these crates:

Send one prompt, stream the answer, then exit:
- `claw-cli` — the user-facing binary
- `api` — provider clients and streaming
- `runtime` — sessions, config, permissions, prompts, and the runtime loop
- `tools` — built-in tool implementations
- `commands` — slash-command registration and handlers
- `plugins` — plugin discovery, registration, and lifecycle support
- `lsp` — Language Server Protocol support types and process helpers
- `server` and `compat-harness` — supporting services and compatibility tooling

```bash
cd rust
cargo run -p rusty-claude-cli -- prompt "Summarize the architecture of this repository"
```
## Roadmap

Use a specific model:
- Publish packaged artifacts for public installation
- Add a repeatable release workflow and a long-lived changelog convention
- Extend platform validation beyond the current CI matrix
- Add more task-centered examples and operator documentation
- Continue hardening feature coverage in the Rust implementation and polishing the UX

```bash
cd rust
cargo run -p rusty-claude-cli -- --model claude-sonnet-4-20250514 prompt "List the key crates in this workspace"
```
## Release notes

Restrict enabled tools in an interactive session:
- Draft 0.1.0 release notes: [`docs/releases/0.1.0.md`](docs/releases/0.1.0.md)

```bash
cd rust
cargo run -p rusty-claude-cli -- --allowedTools read,glob
```
## License

Bootstrap Claude project files for the current repo:

```bash
cd rust
cargo run -p rusty-claude-cli -- init
```

### 2) REPL mode

Start the interactive shell:

```bash
cd rust
cargo run -p rusty-claude-cli --
```

Inside the REPL, useful commands include:

```text
/help
/status
/model claude-sonnet-4-20250514
/permissions workspace-write
/cost
/compact
/memory
/config
/init
/diff
/version
/export notes.txt
/sessions
/session list
/exit
```

### 3) Resume an existing session

Inspect or maintain a saved session file without entering the REPL:

```bash
cd rust
cargo run -p rusty-claude-cli -- --resume session-123456 /status /compact /cost
```

You can also inspect memory/config state for a restored session:

```bash
cd rust
cargo run -p rusty-claude-cli -- --resume ~/.claude/sessions/session-123456.json /memory /config
```

## Available commands

### Top-level CLI commands

- `prompt <text...>` — run one prompt non-interactively
- `--resume <session-id-or-path> [/commands...]` — inspect or maintain a saved session stored under `~/.claude/sessions/`
- `dump-manifests` — print extracted upstream manifest counts
- `bootstrap-plan` — print the current bootstrap skeleton
- `system-prompt [--cwd PATH] [--date YYYY-MM-DD]` — render the synthesized system prompt
- `self-update` — update the installed binary from the latest GitHub release when a matching asset is available
- `--help` / `-h` — show CLI help
- `--version` / `-V` — print the CLI version and build info locally (no API call)
- `--output-format text|json` — choose non-interactive prompt output rendering
- `--allowedTools <tool[,tool...]>` — restrict enabled tools for interactive sessions and prompt-mode tool use

### Interactive slash commands

- `/help` — show command help
- `/status` — show current session status
- `/compact` — compact local session history
- `/model [model]` — inspect or switch the active model
- `/permissions [read-only|workspace-write|danger-full-access]` — inspect or switch permissions
- `/clear [--confirm]` — clear the current local session
- `/cost` — show token usage totals
- `/resume <session-id-or-path>` — load a saved session into the REPL
- `/config [env|hooks|model]` — inspect discovered Claude config
- `/memory` — inspect loaded instruction memory files
- `/init` — bootstrap `.claude.json`, `.claude/`, `CLAUDE.md`, and local ignore rules
- `/diff` — show the current git diff for the workspace
- `/version` — print version and build metadata locally
- `/export [file]` — export the current conversation transcript
- `/sessions` — list recent managed local sessions from `~/.claude/sessions/`
- `/session [list|switch <session-id>]` — inspect or switch managed local sessions
- `/exit` — leave the REPL

## Environment variables

### Anthropic/API

- `ANTHROPIC_API_KEY` — highest-precedence API credential
- `ANTHROPIC_AUTH_TOKEN` — bearer-token override used when no API key is set
- Persisted OAuth credentials in `~/.claude/credentials.json` — used when neither env var is set
- `ANTHROPIC_BASE_URL` — override the Anthropic API base URL
- `ANTHROPIC_MODEL` — default model used by selected live integration tests

### CLI/runtime

- `RUSTY_CLAUDE_PERMISSION_MODE` — default REPL permission mode (`read-only`, `workspace-write`, or `danger-full-access`)
- `CLAUDE_CONFIG_HOME` — override Claude config discovery root
- `CLAUDE_CODE_REMOTE` — enable remote-session bootstrap handling when supported
- `CLAUDE_CODE_REMOTE_SESSION_ID` — remote session identifier when using remote mode
- `CLAUDE_CODE_UPSTREAM` — override the upstream TS source path for compat-harness extraction
- `CLAWD_WEB_SEARCH_BASE_URL` — override the built-in web search service endpoint used by tooling

## Notes

- `compat-harness` exists to compare the Rust port against the upstream TypeScript codebase and is intentionally excluded from the requested release test run.
- The CLI currently focuses on a practical integrated workflow: prompt execution, REPL operation, session inspection/resume, config discovery, and tool/runtime plumbing.
See the repository root for license details.
@@ -9,7 +9,7 @@ publish.workspace = true
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls"] }
runtime = { path = "../runtime" }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_json.workspace = true
tokio = { version = "1", features = ["io-util", "macros", "net", "rt-multi-thread", "time"] }

[lints]
File diff suppressed because it is too large
@@ -4,7 +4,10 @@ use std::time::Duration;

#[derive(Debug)]
pub enum ApiError {
    MissingApiKey,
    MissingCredentials {
        provider: &'static str,
        env_vars: &'static [&'static str],
    },
    ExpiredOAuthToken,
    Auth(String),
    InvalidApiKeyEnv(VarError),
@@ -30,13 +33,21 @@ pub enum ApiError {
}

impl ApiError {
    #[must_use]
    pub const fn missing_credentials(
        provider: &'static str,
        env_vars: &'static [&'static str],
    ) -> Self {
        Self::MissingCredentials { provider, env_vars }
    }

    #[must_use]
    pub fn is_retryable(&self) -> bool {
        match self {
            Self::Http(error) => error.is_connect() || error.is_timeout() || error.is_request(),
            Self::Api { retryable, .. } => *retryable,
            Self::RetriesExhausted { last_error, .. } => last_error.is_retryable(),
            Self::MissingApiKey
            Self::MissingCredentials { .. }
                | Self::ExpiredOAuthToken
                | Self::Auth(_)
                | Self::InvalidApiKeyEnv(_)
@@ -51,12 +62,11 @@ impl ApiError {
impl Display for ApiError {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::MissingApiKey => {
                write!(
                    f,
                    "ANTHROPIC_AUTH_TOKEN or ANTHROPIC_API_KEY is not set; export one before calling the Anthropic API"
                )
            }
            Self::MissingCredentials { provider, env_vars } => write!(
                f,
                "missing {provider} credentials; export {} before calling the {provider} API",
                env_vars.join(" or ")
            ),
            Self::ExpiredOAuthToken => {
                write!(
                    f,
@@ -65,10 +75,7 @@ impl Display for ApiError {
            }
            Self::Auth(message) => write!(f, "auth error: {message}"),
            Self::InvalidApiKeyEnv(error) => {
                write!(
                    f,
                    "failed to read ANTHROPIC_AUTH_TOKEN / ANTHROPIC_API_KEY: {error}"
                )
                write!(f, "failed to read credential environment variable: {error}")
            }
            Self::Http(error) => write!(f, "http error: {error}"),
            Self::Io(error) => write!(f, "io error: {error}"),
@@ -81,20 +88,14 @@ impl Display for ApiError {
                ..
            } => match (error_type, message) {
                (Some(error_type), Some(message)) => {
                    write!(
                        f,
                        "anthropic api returned {status} ({error_type}): {message}"
                    )
                    write!(f, "api returned {status} ({error_type}): {message}")
                }
                _ => write!(f, "anthropic api returned {status}: {body}"),
                _ => write!(f, "api returned {status}: {body}"),
            },
            Self::RetriesExhausted {
                attempts,
                last_error,
            } => write!(
                f,
                "anthropic api failed after {attempts} attempts: {last_error}"
            ),
            } => write!(f, "api failed after {attempts} attempts: {last_error}"),
            Self::InvalidSseFrame(message) => write!(f, "invalid sse frame: {message}"),
            Self::BackoffOverflow {
                attempt,
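The `is_retryable` split above is what lets a retry loop distinguish transient failures (HTTP timeouts, retryable API statuses, exhausted-retry wrappers) from permanent ones (missing credentials, auth errors). A self-contained sketch of that pattern, with a deliberately simplified error type standing in for the real `ApiError`:

```rust
// Sketch: retry only errors flagged as transient, up to a capped attempt count.
// `FetchError` is a hypothetical stand-in; the real ApiError has more variants.
#[derive(Debug)]
enum FetchError {
    Auth(String),                      // permanent: never retried
    Api { status: u16, retryable: bool }, // transient only when the API says so
}

fn is_retryable(err: &FetchError) -> bool {
    match err {
        FetchError::Auth(_) => false,
        FetchError::Api { retryable, .. } => *retryable,
    }
}

fn call_with_retry(
    mut op: impl FnMut(u32) -> Result<String, FetchError>,
    max_attempts: u32,
) -> Result<String, FetchError> {
    let mut attempt = 0;
    loop {
        attempt += 1;
        match op(attempt) {
            Ok(value) => return Ok(value),
            Err(e) if is_retryable(&e) && attempt < max_attempts => continue,
            Err(e) => return Err(e),
        }
    }
}

fn main() {
    // Succeeds on the third attempt after two retryable 529 responses.
    let result = call_with_retry(
        |attempt| {
            if attempt < 3 {
                Err(FetchError::Api { status: 529, retryable: true })
            } else {
                Ok("done".to_string())
            }
        },
        5,
    );
    assert_eq!(result.unwrap(), "done");

    // Auth errors fail immediately, with no retries.
    let auth = call_with_retry(|_| Err(FetchError::Auth("bad token".into())), 5);
    assert!(matches!(auth, Err(FetchError::Auth(_))));
    println!("ok");
}
```

The production client would additionally apply backoff between attempts (hence the `BackoffOverflow` variant above), but the classification-then-loop shape is the same.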
@@ -1,13 +1,19 @@
mod client;
mod error;
mod providers;
mod sse;
mod types;

pub use client::{
    oauth_token_is_expired, read_base_url, resolve_saved_oauth_token,
    resolve_startup_auth_source, AnthropicClient, AuthSource, MessageStream, OAuthTokenSet,
    oauth_token_is_expired, read_base_url, read_xai_base_url, resolve_saved_oauth_token,
    resolve_startup_auth_source, MessageStream, OAuthTokenSet, ProviderClient,
};
pub use error::ApiError;
pub use providers::claw_provider::{AuthSource, ClawApiClient, ClawApiClient as ApiClient};
pub use providers::openai_compat::{OpenAiCompatClient, OpenAiCompatConfig};
pub use providers::{
    detect_provider_kind, max_tokens_for_model, resolve_model_alias, ProviderKind,
};
pub use sse::{parse_frame, SseParser};
pub use types::{
    ContentBlockDelta, ContentBlockDeltaEvent, ContentBlockStartEvent, ContentBlockStopEvent,
1046 rust/crates/api/src/providers/claw_provider.rs (new file)
File diff suppressed because it is too large
239 rust/crates/api/src/providers/mod.rs (new file)
@ -0,0 +1,239 @@
|
||||
use std::future::Future;
|
||||
use std::pin::Pin;
|
||||
|
||||
use crate::error::ApiError;
|
||||
use crate::types::{MessageRequest, MessageResponse};
|
||||
|
||||
pub mod claw_provider;
|
||||
pub mod openai_compat;
|
||||
|
||||
pub type ProviderFuture<'a, T> = Pin<Box<dyn Future<Output = Result<T, ApiError>> + Send + 'a>>;
|
||||
|
||||
pub trait Provider {
|
||||
type Stream;
|
||||
|
||||
fn send_message<'a>(
|
||||
&'a self,
|
||||
request: &'a MessageRequest,
|
||||
) -> ProviderFuture<'a, MessageResponse>;
|
||||
|
||||
fn stream_message<'a>(
|
||||
&'a self,
|
||||
request: &'a MessageRequest,
|
||||
) -> ProviderFuture<'a, Self::Stream>;
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
||||
pub enum ProviderKind {
|
||||
ClawApi,
|
||||
Xai,
|
||||
OpenAi,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
||||
pub struct ProviderMetadata {
|
||||
pub provider: ProviderKind,
|
||||
pub auth_env: &'static str,
|
||||
pub base_url_env: &'static str,
|
||||
pub default_base_url: &'static str,
|
||||
}
|
||||
|
||||
const MODEL_REGISTRY: &[(&str, ProviderMetadata)] = &[
    (
        "opus",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "sonnet",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "haiku",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "claude-opus-4-6",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "claude-sonnet-4-6",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "claude-haiku-4-5-20251213",
        ProviderMetadata {
            provider: ProviderKind::ClawApi,
            auth_env: "ANTHROPIC_API_KEY",
            base_url_env: "ANTHROPIC_BASE_URL",
            default_base_url: claw_provider::DEFAULT_BASE_URL,
        },
    ),
    (
        "grok",
        ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        },
    ),
    (
        "grok-3",
        ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        },
    ),
    (
        "grok-mini",
        ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        },
    ),
    (
        "grok-3-mini",
        ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        },
    ),
    (
        "grok-2",
        ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        },
    ),
];

#[must_use]
pub fn resolve_model_alias(model: &str) -> String {
    let trimmed = model.trim();
    let lower = trimmed.to_ascii_lowercase();
    MODEL_REGISTRY
        .iter()
        .find_map(|(alias, metadata)| {
            (*alias == lower).then_some(match metadata.provider {
                ProviderKind::ClawApi => match *alias {
                    "opus" => "claude-opus-4-6",
                    "sonnet" => "claude-sonnet-4-6",
                    "haiku" => "claude-haiku-4-5-20251213",
                    _ => trimmed,
                },
                ProviderKind::Xai => match *alias {
                    "grok" | "grok-3" => "grok-3",
                    "grok-mini" | "grok-3-mini" => "grok-3-mini",
                    "grok-2" => "grok-2",
                    _ => trimmed,
                },
                ProviderKind::OpenAi => trimmed,
            })
        })
        .map_or_else(|| trimmed.to_string(), ToOwned::to_owned)
}

#[must_use]
pub fn metadata_for_model(model: &str) -> Option<ProviderMetadata> {
    let canonical = resolve_model_alias(model);
    let lower = canonical.to_ascii_lowercase();
    if let Some((_, metadata)) = MODEL_REGISTRY.iter().find(|(alias, _)| *alias == lower) {
        return Some(*metadata);
    }
    if lower.starts_with("grok") {
        return Some(ProviderMetadata {
            provider: ProviderKind::Xai,
            auth_env: "XAI_API_KEY",
            base_url_env: "XAI_BASE_URL",
            default_base_url: openai_compat::DEFAULT_XAI_BASE_URL,
        });
    }
    None
}

#[must_use]
pub fn detect_provider_kind(model: &str) -> ProviderKind {
    if let Some(metadata) = metadata_for_model(model) {
        return metadata.provider;
    }
    if claw_provider::has_auth_from_env_or_saved().unwrap_or(false) {
        return ProviderKind::ClawApi;
    }
    if openai_compat::has_api_key("OPENAI_API_KEY") {
        return ProviderKind::OpenAi;
    }
    if openai_compat::has_api_key("XAI_API_KEY") {
        return ProviderKind::Xai;
    }
    ProviderKind::ClawApi
}

#[must_use]
pub fn max_tokens_for_model(model: &str) -> u32 {
    let canonical = resolve_model_alias(model);
    if canonical.contains("opus") {
        32_000
    } else {
        64_000
    }
}

#[cfg(test)]
mod tests {
    use super::{detect_provider_kind, max_tokens_for_model, resolve_model_alias, ProviderKind};

    #[test]
    fn resolves_grok_aliases() {
        assert_eq!(resolve_model_alias("grok"), "grok-3");
        assert_eq!(resolve_model_alias("grok-mini"), "grok-3-mini");
        assert_eq!(resolve_model_alias("grok-2"), "grok-2");
    }

    #[test]
    fn detects_provider_from_model_name_first() {
        assert_eq!(detect_provider_kind("grok"), ProviderKind::Xai);
        assert_eq!(
            detect_provider_kind("claude-sonnet-4-6"),
            ProviderKind::ClawApi
        );
    }

    #[test]
    fn keeps_existing_max_token_heuristic() {
        assert_eq!(max_tokens_for_model("opus"), 32_000);
        assert_eq!(max_tokens_for_model("grok-3"), 64_000);
    }
}
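The registry above is a plain const slice scanned linearly, with short aliases canonicalized before lookup and unknown names passed through unchanged. A simplified, self-contained sketch of that alias resolution (a hypothetical mini-table, not the crate's actual API):

```rust
// Simplified mirror of the alias table above: each entry maps a short
// alias to its canonical model id. The real registry also carries
// provider metadata (auth env var, base URL env var, default base URL).
const ALIASES: &[(&str, &str)] = &[
    ("opus", "claude-opus-4-6"),
    ("sonnet", "claude-sonnet-4-6"),
    ("haiku", "claude-haiku-4-5-20251213"),
    ("grok", "grok-3"),
    ("grok-mini", "grok-3-mini"),
];

// Unknown names fall through unchanged, matching the fallback branch
// of `resolve_model_alias`.
fn resolve(model: &str) -> String {
    let lower = model.trim().to_ascii_lowercase();
    ALIASES
        .iter()
        .find(|(alias, _)| *alias == lower)
        .map_or(lower, |(_, canonical)| (*canonical).to_string())
}

fn main() {
    assert_eq!(resolve(" Grok "), "grok-3");
    assert_eq!(resolve("sonnet"), "claude-sonnet-4-6");
    assert_eq!(resolve("gpt-4o"), "gpt-4o");
    println!("ok");
}
```

A linear scan keeps the table trivially auditable; with roughly a dozen entries there is no need for a `HashMap`.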
rust/crates/api/src/providers/openai_compat.rs | 1050 (new file)
File diff suppressed because it is too large.
@@ -216,4 +216,64 @@ mod tests {
            ))
        );
    }

    #[test]
    fn parses_thinking_content_block_start() {
        let frame = concat!(
            "event: content_block_start\n",
            "data: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"thinking\",\"thinking\":\"\",\"signature\":null}}\n\n"
        );

        let event = parse_frame(frame).expect("frame should parse");
        assert_eq!(
            event,
            Some(StreamEvent::ContentBlockStart(
                crate::types::ContentBlockStartEvent {
                    index: 0,
                    content_block: OutputContentBlock::Thinking {
                        thinking: String::new(),
                        signature: None,
                    },
                },
            ))
        );
    }

    #[test]
    fn parses_thinking_related_deltas() {
        let thinking = concat!(
            "event: content_block_delta\n",
            "data: {\"type\":\"content_block_delta\",\"index\":0,\"delta\":{\"type\":\"thinking_delta\",\"thinking\":\"step 1\"}}\n\n"
        );
        let signature = concat!(
            "event: content_block_delta\n",
            "data: {\"type\":\"content_block_delta\",\"index\":0,\"delta\":{\"type\":\"signature_delta\",\"signature\":\"sig_123\"}}\n\n"
        );

        let thinking_event = parse_frame(thinking).expect("thinking delta should parse");
        let signature_event = parse_frame(signature).expect("signature delta should parse");

        assert_eq!(
            thinking_event,
            Some(StreamEvent::ContentBlockDelta(
                crate::types::ContentBlockDeltaEvent {
                    index: 0,
                    delta: ContentBlockDelta::ThinkingDelta {
                        thinking: "step 1".to_string(),
                    },
                }
            ))
        );
        assert_eq!(
            signature_event,
            Some(StreamEvent::ContentBlockDelta(
                crate::types::ContentBlockDeltaEvent {
                    index: 0,
                    delta: ContentBlockDelta::SignatureDelta {
                        signature: "sig_123".to_string(),
                    },
                }
            ))
        );
    }
}
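The tests above feed `parse_frame` raw SSE text of the form `event: <name>\ndata: <json>\n\n`. A minimal, dependency-free sketch of how such a frame splits into its event name and data payload (the real parser then deserializes the JSON with serde; `split_frame` here is a hypothetical helper that only does line framing):

```rust
// Split one SSE frame into (event_name, data_payload).
// Returns None if either the `event:` or `data:` field is missing.
fn split_frame(frame: &str) -> Option<(String, String)> {
    let mut event = None;
    let mut data = None;
    for line in frame.lines() {
        if let Some(rest) = line.strip_prefix("event:") {
            event = Some(rest.trim().to_string());
        } else if let Some(rest) = line.strip_prefix("data:") {
            data = Some(rest.trim().to_string());
        }
    }
    Some((event?, data?))
}

fn main() {
    let frame = "event: content_block_delta\ndata: {\"type\":\"thinking_delta\"}\n\n";
    let (event, data) = split_frame(frame).expect("well-formed frame");
    assert_eq!(event, "content_block_delta");
    assert_eq!(data, "{\"type\":\"thinking_delta\"}");
    println!("ok");
}
```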
@@ -135,6 +135,15 @@ pub enum OutputContentBlock {
        name: String,
        input: Value,
    },
    Thinking {
        #[serde(default)]
        thinking: String,
        #[serde(default, skip_serializing_if = "Option::is_none")]
        signature: Option<String>,
    },
    RedactedThinking {
        data: Value,
    },
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
@@ -190,6 +199,8 @@ pub struct ContentBlockDeltaEvent {
pub enum ContentBlockDelta {
    TextDelta { text: String },
    InputJsonDelta { partial_json: String },
    ThinkingDelta { thinking: String },
    SignatureDelta { signature: String },
}

#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]

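`ContentBlockDelta` is dispatched by a `type` tag in the wire JSON (`text_delta`, `thinking_delta`, `signature_delta`, ...). A hand-rolled, std-only sketch of the same tag-to-variant dispatch, to illustrate what the serde derive is doing under the hood (`delta_from` and the `Delta` enum are hypothetical stand-ins, and the snake_case tag names are assumptions):

```rust
#[derive(Debug, PartialEq)]
enum Delta {
    Text(String),
    InputJson(String),
    Thinking(String),
    Signature(String),
}

// Map a `type` tag plus its payload onto the matching variant,
// mirroring serde's tagged-enum dispatch for ContentBlockDelta.
fn delta_from(tag: &str, payload: &str) -> Option<Delta> {
    match tag {
        "text_delta" => Some(Delta::Text(payload.to_string())),
        "input_json_delta" => Some(Delta::InputJson(payload.to_string())),
        "thinking_delta" => Some(Delta::Thinking(payload.to_string())),
        "signature_delta" => Some(Delta::Signature(payload.to_string())),
        _ => None,
    }
}

fn main() {
    assert_eq!(
        delta_from("thinking_delta", "step 1"),
        Some(Delta::Thinking("step 1".to_string()))
    );
    assert_eq!(delta_from("unknown", "x"), None);
    println!("ok");
}
```

Returning `None` for unknown tags is the sketch's choice; the real deserializer's behavior on unknown variants depends on its serde configuration.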
@@ -3,9 +3,9 @@ use std::sync::Arc;
use std::time::Duration;

use api::{
-    AnthropicClient, ApiError, ContentBlockDelta, ContentBlockDeltaEvent, ContentBlockStartEvent,
-    InputContentBlock, InputMessage, MessageDeltaEvent, MessageRequest, OutputContentBlock,
-    StreamEvent, ToolChoice, ToolDefinition,
+    ApiClient, ApiError, AuthSource, ContentBlockDelta, ContentBlockDeltaEvent,
+    ContentBlockStartEvent, InputContentBlock, InputMessage, MessageDeltaEvent, MessageRequest,
+    OutputContentBlock, ProviderClient, StreamEvent, ToolChoice, ToolDefinition,
};
use serde_json::json;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
@@ -20,8 +20,8 @@ async fn send_message_posts_json_and_parses_response() {
        "\"id\":\"msg_test\",",
        "\"type\":\"message\",",
        "\"role\":\"assistant\",",
-        "\"content\":[{\"type\":\"text\",\"text\":\"Hello from Claude\"}],",
-        "\"model\":\"claude-3-7-sonnet-latest\",",
+        "\"content\":[{\"type\":\"text\",\"text\":\"Hello from Claw\"}],",
+        "\"model\":\"claude-sonnet-4-6\",",
        "\"stop_reason\":\"end_turn\",",
        "\"stop_sequence\":null,",
        "\"usage\":{\"input_tokens\":12,\"output_tokens\":4},",
@@ -34,7 +34,7 @@ async fn send_message_posts_json_and_parses_response() {
    )
    .await;

-    let client = AnthropicClient::new("test-key")
+    let client = ApiClient::new("test-key")
        .with_auth_token(Some("proxy-token".to_string()))
        .with_base_url(server.base_url());
    let response = client
@@ -48,7 +48,7 @@ async fn send_message_posts_json_and_parses_response() {
    assert_eq!(
        response.content,
        vec![OutputContentBlock::Text {
-            text: "Hello from Claude".to_string(),
+            text: "Hello from Claw".to_string(),
        }]
    );

@@ -68,7 +68,7 @@ async fn send_message_posts_json_and_parses_response() {
        serde_json::from_str(&request.body).expect("request body should be json");
    assert_eq!(
        body.get("model").and_then(serde_json::Value::as_str),
-        Some("claude-3-7-sonnet-latest")
+        Some("claude-sonnet-4-6")
    );
    assert!(body.get("stream").is_none());
    assert_eq!(body["tools"][0]["name"], json!("get_weather"));
@@ -80,7 +80,7 @@ async fn stream_message_parses_sse_events_with_tool_use() {
    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
    let sse = concat!(
        "event: message_start\n",
-        "data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_stream\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-3-7-sonnet-latest\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"input_tokens\":8,\"output_tokens\":0}}}\n\n",
+        "data: {\"type\":\"message_start\",\"message\":{\"id\":\"msg_stream\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[],\"model\":\"claude-sonnet-4-6\",\"stop_reason\":null,\"stop_sequence\":null,\"usage\":{\"input_tokens\":8,\"output_tokens\":0}}}\n\n",
        "event: content_block_start\n",
        "data: {\"type\":\"content_block_start\",\"index\":0,\"content_block\":{\"type\":\"tool_use\",\"id\":\"toolu_123\",\"name\":\"get_weather\",\"input\":{}}}\n\n",
        "event: content_block_delta\n",
@@ -104,7 +104,7 @@ async fn stream_message_parses_sse_events_with_tool_use() {
    )
    .await;

-    let client = AnthropicClient::new("test-key")
+    let client = ApiClient::new("test-key")
        .with_auth_token(Some("proxy-token".to_string()))
        .with_base_url(server.base_url());
    let mut stream = client
@@ -176,13 +176,13 @@ async fn retries_retryable_failures_before_succeeding() {
            http_response(
                "200 OK",
                "application/json",
-                "{\"id\":\"msg_retry\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Recovered\"}],\"model\":\"claude-3-7-sonnet-latest\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"input_tokens\":3,\"output_tokens\":2}}",
+                "{\"id\":\"msg_retry\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Recovered\"}],\"model\":\"claude-sonnet-4-6\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"input_tokens\":3,\"output_tokens\":2}}",
            ),
        ],
    )
    .await;

-    let client = AnthropicClient::new("test-key")
+    let client = ApiClient::new("test-key")
        .with_base_url(server.base_url())
        .with_retry_policy(2, Duration::from_millis(1), Duration::from_millis(2));

@@ -195,6 +195,47 @@ async fn retries_retryable_failures_before_succeeding() {
    assert_eq!(state.lock().await.len(), 2);
}

+#[tokio::test]
+async fn provider_client_dispatches_api_requests() {
+    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
+    let server = spawn_server(
+        state.clone(),
+        vec![http_response(
+            "200 OK",
+            "application/json",
+            "{\"id\":\"msg_provider\",\"type\":\"message\",\"role\":\"assistant\",\"content\":[{\"type\":\"text\",\"text\":\"Dispatched\"}],\"model\":\"claude-sonnet-4-6\",\"stop_reason\":\"end_turn\",\"stop_sequence\":null,\"usage\":{\"input_tokens\":3,\"output_tokens\":2}}",
+        )],
+    )
+    .await;
+
+    let client = ProviderClient::from_model_with_default_auth(
+        "claude-sonnet-4-6",
+        Some(AuthSource::ApiKey("test-key".to_string())),
+    )
+    .expect("api provider client should be constructed");
+    let client = match client {
+        ProviderClient::ClawApi(client) => {
+            ProviderClient::ClawApi(client.with_base_url(server.base_url()))
+        }
+        other => panic!("expected default provider, got {other:?}"),
+    };
+
+    let response = client
+        .send_message(&sample_request(false))
+        .await
+        .expect("provider-dispatched request should succeed");
+
+    assert_eq!(response.total_tokens(), 5);
+
+    let captured = state.lock().await;
+    let request = captured.first().expect("server should capture request");
+    assert_eq!(request.path, "/v1/messages");
+    assert_eq!(
+        request.headers.get("x-api-key").map(String::as_str),
+        Some("test-key")
+    );
+}
+
#[tokio::test]
async fn surfaces_retry_exhaustion_for_persistent_retryable_errors() {
    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
@@ -215,7 +256,7 @@ async fn surfaces_retry_exhaustion_for_persistent_retryable_errors() {
    )
    .await;

-    let client = AnthropicClient::new("test-key")
+    let client = ApiClient::new("test-key")
        .with_base_url(server.base_url())
        .with_retry_policy(1, Duration::from_millis(1), Duration::from_millis(2));

@@ -246,11 +287,10 @@ async fn surfaces_retry_exhaustion_for_persistent_retryable_errors() {
#[tokio::test]
#[ignore = "requires ANTHROPIC_API_KEY and network access"]
async fn live_stream_smoke_test() {
-    let client = AnthropicClient::from_env().expect("ANTHROPIC_API_KEY must be set");
+    let client = ApiClient::from_env().expect("ANTHROPIC_API_KEY must be set");
    let mut stream = client
        .stream_message(&MessageRequest {
-            model: std::env::var("ANTHROPIC_MODEL")
-                .unwrap_or_else(|_| "claude-3-7-sonnet-latest".to_string()),
+            model: std::env::var("CLAW_MODEL").unwrap_or_else(|_| "claude-sonnet-4-6".to_string()),
            max_tokens: 32,
            messages: vec![InputMessage::user_text(
                "Reply with exactly: hello from rust",
@@ -410,7 +450,7 @@ fn http_response_with_headers(

fn sample_request(stream: bool) -> MessageRequest {
    MessageRequest {
-        model: "claude-3-7-sonnet-latest".to_string(),
+        model: "claude-sonnet-4-6".to_string(),
        max_tokens: 64,
        messages: vec![InputMessage {
            role: "user".to_string(),

rust/crates/api/tests/openai_compat_integration.rs | 415 (new file)
@@ -0,0 +1,415 @@
use std::collections::HashMap;
use std::ffi::OsString;
use std::sync::Arc;
use std::sync::{Mutex as StdMutex, OnceLock};

use api::{
    ContentBlockDelta, ContentBlockDeltaEvent, ContentBlockStartEvent, ContentBlockStopEvent,
    InputContentBlock, InputMessage, MessageRequest, OpenAiCompatClient, OpenAiCompatConfig,
    OutputContentBlock, ProviderClient, StreamEvent, ToolChoice, ToolDefinition,
};
use serde_json::json;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
use tokio::sync::Mutex;

#[tokio::test]
async fn send_message_uses_openai_compatible_endpoint_and_auth() {
    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
    let body = concat!(
        "{",
        "\"id\":\"chatcmpl_test\",",
        "\"model\":\"grok-3\",",
        "\"choices\":[{",
        "\"message\":{\"role\":\"assistant\",\"content\":\"Hello from Grok\",\"tool_calls\":[]},",
        "\"finish_reason\":\"stop\"",
        "}],",
        "\"usage\":{\"prompt_tokens\":11,\"completion_tokens\":5}",
        "}"
    );
    let server = spawn_server(
        state.clone(),
        vec![http_response("200 OK", "application/json", body)],
    )
    .await;

    let client = OpenAiCompatClient::new("xai-test-key", OpenAiCompatConfig::xai())
        .with_base_url(server.base_url());
    let response = client
        .send_message(&sample_request(false))
        .await
        .expect("request should succeed");

    assert_eq!(response.model, "grok-3");
    assert_eq!(response.total_tokens(), 16);
    assert_eq!(
        response.content,
        vec![OutputContentBlock::Text {
            text: "Hello from Grok".to_string(),
        }]
    );

    let captured = state.lock().await;
    let request = captured.first().expect("server should capture request");
    assert_eq!(request.path, "/chat/completions");
    assert_eq!(
        request.headers.get("authorization").map(String::as_str),
        Some("Bearer xai-test-key")
    );
    let body: serde_json::Value = serde_json::from_str(&request.body).expect("json body");
    assert_eq!(body["model"], json!("grok-3"));
    assert_eq!(body["messages"][0]["role"], json!("system"));
    assert_eq!(body["tools"][0]["type"], json!("function"));
}

#[tokio::test]
async fn send_message_accepts_full_chat_completions_endpoint_override() {
    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
    let body = concat!(
        "{",
        "\"id\":\"chatcmpl_full_endpoint\",",
        "\"model\":\"grok-3\",",
        "\"choices\":[{",
        "\"message\":{\"role\":\"assistant\",\"content\":\"Endpoint override works\",\"tool_calls\":[]},",
        "\"finish_reason\":\"stop\"",
        "}],",
        "\"usage\":{\"prompt_tokens\":7,\"completion_tokens\":3}",
        "}"
    );
    let server = spawn_server(
        state.clone(),
        vec![http_response("200 OK", "application/json", body)],
    )
    .await;

    let endpoint_url = format!("{}/chat/completions", server.base_url());
    let client = OpenAiCompatClient::new("xai-test-key", OpenAiCompatConfig::xai())
        .with_base_url(endpoint_url);
    let response = client
        .send_message(&sample_request(false))
        .await
        .expect("request should succeed");

    assert_eq!(response.total_tokens(), 10);

    let captured = state.lock().await;
    let request = captured.first().expect("server should capture request");
    assert_eq!(request.path, "/chat/completions");
}

#[tokio::test]
async fn stream_message_normalizes_text_and_multiple_tool_calls() {
    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
    let sse = concat!(
        "data: {\"id\":\"chatcmpl_stream\",\"model\":\"grok-3\",\"choices\":[{\"delta\":{\"content\":\"Hello\"}}]}\n\n",
        "data: {\"id\":\"chatcmpl_stream\",\"choices\":[{\"delta\":{\"tool_calls\":[{\"index\":0,\"id\":\"call_1\",\"function\":{\"name\":\"weather\",\"arguments\":\"{\\\"city\\\":\\\"Paris\\\"}\"}},{\"index\":1,\"id\":\"call_2\",\"function\":{\"name\":\"clock\",\"arguments\":\"{\\\"zone\\\":\\\"UTC\\\"}\"}}]}}]}\n\n",
        "data: {\"id\":\"chatcmpl_stream\",\"choices\":[{\"delta\":{},\"finish_reason\":\"tool_calls\"}]}\n\n",
        "data: [DONE]\n\n"
    );
    let server = spawn_server(
        state.clone(),
        vec![http_response_with_headers(
            "200 OK",
            "text/event-stream",
            sse,
            &[("x-request-id", "req_grok_stream")],
        )],
    )
    .await;

    let client = OpenAiCompatClient::new("xai-test-key", OpenAiCompatConfig::xai())
        .with_base_url(server.base_url());
    let mut stream = client
        .stream_message(&sample_request(false))
        .await
        .expect("stream should start");

    assert_eq!(stream.request_id(), Some("req_grok_stream"));

    let mut events = Vec::new();
    while let Some(event) = stream.next_event().await.expect("event should parse") {
        events.push(event);
    }

    assert!(matches!(events[0], StreamEvent::MessageStart(_)));
    assert!(matches!(
        events[1],
        StreamEvent::ContentBlockStart(ContentBlockStartEvent {
            content_block: OutputContentBlock::Text { .. },
            ..
        })
    ));
    assert!(matches!(
        events[2],
        StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
            delta: ContentBlockDelta::TextDelta { .. },
            ..
        })
    ));
    assert!(matches!(
        events[3],
        StreamEvent::ContentBlockStart(ContentBlockStartEvent {
            index: 1,
            content_block: OutputContentBlock::ToolUse { .. },
        })
    ));
    assert!(matches!(
        events[4],
        StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
            index: 1,
            delta: ContentBlockDelta::InputJsonDelta { .. },
        })
    ));
    assert!(matches!(
        events[5],
        StreamEvent::ContentBlockStart(ContentBlockStartEvent {
            index: 2,
            content_block: OutputContentBlock::ToolUse { .. },
        })
    ));
    assert!(matches!(
        events[6],
        StreamEvent::ContentBlockDelta(ContentBlockDeltaEvent {
            index: 2,
            delta: ContentBlockDelta::InputJsonDelta { .. },
        })
    ));
    assert!(matches!(
        events[7],
        StreamEvent::ContentBlockStop(ContentBlockStopEvent { index: 1 })
    ));
    assert!(matches!(
        events[8],
        StreamEvent::ContentBlockStop(ContentBlockStopEvent { index: 2 })
    ));
    assert!(matches!(
        events[9],
        StreamEvent::ContentBlockStop(ContentBlockStopEvent { index: 0 })
    ));
    assert!(matches!(events[10], StreamEvent::MessageDelta(_)));
    assert!(matches!(events[11], StreamEvent::MessageStop(_)));

    let captured = state.lock().await;
    let request = captured.first().expect("captured request");
    assert_eq!(request.path, "/chat/completions");
    assert!(request.body.contains("\"stream\":true"));
}

#[tokio::test]
async fn provider_client_dispatches_xai_requests_from_env() {
    let _lock = env_lock();
    let _api_key = ScopedEnvVar::set("XAI_API_KEY", "xai-test-key");

    let state = Arc::new(Mutex::new(Vec::<CapturedRequest>::new()));
    let server = spawn_server(
        state.clone(),
        vec![http_response(
            "200 OK",
            "application/json",
            "{\"id\":\"chatcmpl_provider\",\"model\":\"grok-3\",\"choices\":[{\"message\":{\"role\":\"assistant\",\"content\":\"Through provider client\",\"tool_calls\":[]},\"finish_reason\":\"stop\"}],\"usage\":{\"prompt_tokens\":9,\"completion_tokens\":4}}",
        )],
    )
    .await;
    let _base_url = ScopedEnvVar::set("XAI_BASE_URL", server.base_url());

    let client =
        ProviderClient::from_model("grok").expect("xAI provider client should be constructed");
    assert!(matches!(client, ProviderClient::Xai(_)));

    let response = client
        .send_message(&sample_request(false))
        .await
        .expect("provider-dispatched request should succeed");

    assert_eq!(response.total_tokens(), 13);

    let captured = state.lock().await;
    let request = captured.first().expect("captured request");
    assert_eq!(request.path, "/chat/completions");
    assert_eq!(
        request.headers.get("authorization").map(String::as_str),
        Some("Bearer xai-test-key")
    );
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct CapturedRequest {
    path: String,
    headers: HashMap<String, String>,
    body: String,
}

struct TestServer {
    base_url: String,
    join_handle: tokio::task::JoinHandle<()>,
}

impl TestServer {
    fn base_url(&self) -> String {
        self.base_url.clone()
    }
}

impl Drop for TestServer {
    fn drop(&mut self) {
        self.join_handle.abort();
    }
}

async fn spawn_server(
    state: Arc<Mutex<Vec<CapturedRequest>>>,
    responses: Vec<String>,
) -> TestServer {
    let listener = TcpListener::bind("127.0.0.1:0")
        .await
        .expect("listener should bind");
    let address = listener.local_addr().expect("listener addr");
    let join_handle = tokio::spawn(async move {
        for response in responses {
            let (mut socket, _) = listener.accept().await.expect("accept");
            let mut buffer = Vec::new();
            let mut header_end = None;
            loop {
                let mut chunk = [0_u8; 1024];
                let read = socket.read(&mut chunk).await.expect("read request");
                if read == 0 {
                    break;
                }
                buffer.extend_from_slice(&chunk[..read]);
                if let Some(position) = find_header_end(&buffer) {
                    header_end = Some(position);
                    break;
                }
            }

            let header_end = header_end.expect("headers should exist");
            let (header_bytes, remaining) = buffer.split_at(header_end);
            let header_text = String::from_utf8(header_bytes.to_vec()).expect("utf8 headers");
            let mut lines = header_text.split("\r\n");
            let request_line = lines.next().expect("request line");
            let path = request_line
                .split_whitespace()
                .nth(1)
                .expect("path")
                .to_string();
            let mut headers = HashMap::new();
            let mut content_length = 0_usize;
            for line in lines {
                if line.is_empty() {
                    continue;
                }
                let (name, value) = line.split_once(':').expect("header");
                let value = value.trim().to_string();
                if name.eq_ignore_ascii_case("content-length") {
                    content_length = value.parse().expect("content length");
                }
                headers.insert(name.to_ascii_lowercase(), value);
            }

            let mut body = remaining[4..].to_vec();
            while body.len() < content_length {
                let mut chunk = vec![0_u8; content_length - body.len()];
                let read = socket.read(&mut chunk).await.expect("read body");
                if read == 0 {
                    break;
                }
                body.extend_from_slice(&chunk[..read]);
            }

            state.lock().await.push(CapturedRequest {
                path,
                headers,
                body: String::from_utf8(body).expect("utf8 body"),
            });

            socket
                .write_all(response.as_bytes())
                .await
                .expect("write response");
        }
    });

    TestServer {
        base_url: format!("http://{address}"),
        join_handle,
    }
}

fn find_header_end(bytes: &[u8]) -> Option<usize> {
    bytes.windows(4).position(|window| window == b"\r\n\r\n")
}

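`find_header_end` locates where the HTTP header block ends by scanning 4-byte windows for the blank-line terminator `\r\n\r\n`; the body then starts 4 bytes past the returned position. A self-contained demonstration of the same scan:

```rust
// Position of the first "\r\n\r\n", i.e. where the header block ends.
fn find_header_end(bytes: &[u8]) -> Option<usize> {
    bytes.windows(4).position(|window| window == b"\r\n\r\n")
}

fn main() {
    let raw = b"POST /chat HTTP/1.1\r\ncontent-length: 2\r\n\r\nhi";
    let end = find_header_end(raw).expect("headers terminated");
    // The body begins 4 bytes past the terminator's start.
    assert_eq!(&raw[end + 4..], b"hi");
    assert_eq!(find_header_end(b"no terminator"), None);
    println!("ok");
}
```

Because the server reads in chunks, the terminator may straddle a chunk boundary; scanning the accumulated buffer after every read, as the loop above does, handles that case.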
fn http_response(status: &str, content_type: &str, body: &str) -> String {
    http_response_with_headers(status, content_type, body, &[])
}

fn http_response_with_headers(
    status: &str,
    content_type: &str,
    body: &str,
    headers: &[(&str, &str)],
) -> String {
    let mut extra_headers = String::new();
    for (name, value) in headers {
        use std::fmt::Write as _;
        write!(&mut extra_headers, "{name}: {value}\r\n").expect("header write");
    }
    format!(
        "HTTP/1.1 {status}\r\ncontent-type: {content_type}\r\n{extra_headers}content-length: {}\r\nconnection: close\r\n\r\n{body}",
        body.len()
    )
}

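These helpers hand-assemble a raw HTTP/1.1 response, so `content-length` must equal the body's byte length exactly or the client will over- or under-read. A trimmed, std-only sketch of the same framing:

```rust
use std::fmt::Write as _;

// Build a minimal HTTP/1.1 response with a correct content-length,
// mirroring the framing used by the test helper above.
fn http_response(status: &str, content_type: &str, body: &str) -> String {
    let mut response = String::new();
    write!(
        &mut response,
        "HTTP/1.1 {status}\r\ncontent-type: {content_type}\r\ncontent-length: {}\r\nconnection: close\r\n\r\n{body}",
        body.len()
    )
    .expect("writing to a String cannot fail");
    response
}

fn main() {
    let response = http_response("200 OK", "application/json", "{\"ok\":true}");
    assert!(response.starts_with("HTTP/1.1 200 OK\r\n"));
    assert!(response.contains("content-length: 11\r\n"));
    assert!(response.ends_with("\r\n\r\n{\"ok\":true}"));
    println!("ok");
}
```

`connection: close` lets the single-shot test server drop the socket after one exchange instead of implementing keep-alive.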
fn sample_request(stream: bool) -> MessageRequest {
    MessageRequest {
        model: "grok-3".to_string(),
        max_tokens: 64,
        messages: vec![InputMessage {
            role: "user".to_string(),
            content: vec![InputContentBlock::Text {
                text: "Say hello".to_string(),
            }],
        }],
        system: Some("Use tools when needed".to_string()),
        tools: Some(vec![ToolDefinition {
            name: "weather".to_string(),
            description: Some("Fetches weather".to_string()),
            input_schema: json!({
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"]
            }),
        }]),
        tool_choice: Some(ToolChoice::Auto),
        stream,
    }
}

fn env_lock() -> std::sync::MutexGuard<'static, ()> {
    static LOCK: OnceLock<StdMutex<()>> = OnceLock::new();
    LOCK.get_or_init(|| StdMutex::new(()))
        .lock()
        .unwrap_or_else(|poisoned| poisoned.into_inner())
}

struct ScopedEnvVar {
    key: &'static str,
    previous: Option<OsString>,
}

impl ScopedEnvVar {
    fn set(key: &'static str, value: impl AsRef<std::ffi::OsStr>) -> Self {
        let previous = std::env::var_os(key);
        std::env::set_var(key, value);
        Self { key, previous }
    }
}

impl Drop for ScopedEnvVar {
    fn drop(&mut self) {
        match &self.previous {
            Some(value) => std::env::set_var(self.key, value),
            None => std::env::remove_var(self.key),
        }
    }
}

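`ScopedEnvVar` is an RAII guard: `set` records the variable's previous value, and `Drop` restores it, so each test leaves the process environment as it found it. A self-contained sketch of the pattern (note that process-global env mutation races across threads, which is why the tests also serialize through `env_lock`; `DEMO_VAR` is a hypothetical name):

```rust
use std::ffi::OsString;

// RAII guard: setting the variable remembers the old value,
// and dropping the guard restores it.
struct ScopedEnvVar {
    key: &'static str,
    previous: Option<OsString>,
}

impl ScopedEnvVar {
    fn set(key: &'static str, value: &str) -> Self {
        let previous = std::env::var_os(key);
        std::env::set_var(key, value);
        Self { key, previous }
    }
}

impl Drop for ScopedEnvVar {
    fn drop(&mut self) {
        match &self.previous {
            Some(value) => std::env::set_var(self.key, value),
            None => std::env::remove_var(self.key),
        }
    }
}

fn main() {
    std::env::set_var("DEMO_VAR", "before");
    {
        let _guard = ScopedEnvVar::set("DEMO_VAR", "during");
        assert_eq!(std::env::var("DEMO_VAR").as_deref(), Ok("during"));
    } // guard drops here, restoring the previous value
    assert_eq!(std::env::var("DEMO_VAR").as_deref(), Ok("before"));
    println!("ok");
}
```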
rust/crates/api/tests/provider_client_integration.rs | 86 (new file)
@@ -0,0 +1,86 @@
use std::ffi::OsString;
use std::sync::{Mutex, OnceLock};

use api::{read_xai_base_url, ApiError, AuthSource, ProviderClient, ProviderKind};

#[test]
fn provider_client_routes_grok_aliases_through_xai() {
    let _lock = env_lock();
    let _xai_api_key = EnvVarGuard::set("XAI_API_KEY", Some("xai-test-key"));

    let client = ProviderClient::from_model("grok-mini").expect("grok alias should resolve");

    assert_eq!(client.provider_kind(), ProviderKind::Xai);
}

#[test]
fn provider_client_reports_missing_xai_credentials_for_grok_models() {
    let _lock = env_lock();
    let _xai_api_key = EnvVarGuard::set("XAI_API_KEY", None);

    let error = ProviderClient::from_model("grok-3")
        .expect_err("grok requests without XAI_API_KEY should fail fast");

    match error {
        ApiError::MissingCredentials { provider, env_vars } => {
            assert_eq!(provider, "xAI");
            assert_eq!(env_vars, &["XAI_API_KEY"]);
        }
        other => panic!("expected missing xAI credentials, got {other:?}"),
    }
}

#[test]
fn provider_client_uses_explicit_auth_without_env_lookup() {
    let _lock = env_lock();
    let _api_key = EnvVarGuard::set("ANTHROPIC_API_KEY", None);
    let _auth_token = EnvVarGuard::set("ANTHROPIC_AUTH_TOKEN", None);

    let client = ProviderClient::from_model_with_default_auth(
        "claude-sonnet-4-6",
        Some(AuthSource::ApiKey("claw-test-key".to_string())),
    )
    .expect("explicit auth should avoid env lookup");

    assert_eq!(client.provider_kind(), ProviderKind::ClawApi);
}

#[test]
fn read_xai_base_url_prefers_env_override() {
    let _lock = env_lock();
    let _xai_base_url = EnvVarGuard::set("XAI_BASE_URL", Some("https://example.xai.test/v1"));

    assert_eq!(read_xai_base_url(), "https://example.xai.test/v1");
}

fn env_lock() -> std::sync::MutexGuard<'static, ()> {
|
||||
static LOCK: OnceLock<Mutex<()>> = OnceLock::new();
|
||||
LOCK.get_or_init(|| Mutex::new(()))
|
||||
.lock()
|
||||
.unwrap_or_else(|poisoned| poisoned.into_inner())
|
||||
}
|
||||
|
||||
struct EnvVarGuard {
|
||||
key: &'static str,
|
||||
original: Option<OsString>,
|
||||
}
|
||||
|
||||
impl EnvVarGuard {
|
||||
fn set(key: &'static str, value: Option<&str>) -> Self {
|
||||
let original = std::env::var_os(key);
|
||||
match value {
|
||||
Some(value) => std::env::set_var(key, value),
|
||||
None => std::env::remove_var(key),
|
||||
}
|
||||
Self { key, original }
|
||||
}
|
||||
}
|
||||
|
||||
impl Drop for EnvVarGuard {
|
||||
fn drop(&mut self) {
|
||||
match &self.original {
|
||||
Some(value) => std::env::set_var(self.key, value),
|
||||
None => std::env::remove_var(self.key),
|
||||
}
|
||||
}
|
||||
}
|
||||
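The `EnvVarGuard` pattern used by these tests can be exercised on its own. The sketch below is a minimal, self-contained reproduction of the guard (names match the tests above; `DEMO_KEY` is an arbitrary example variable): the original value is captured on construction and restored on drop, so a test cannot leak environment changes even if it panics mid-way.

```rust
use std::ffi::OsString;

// Minimal sketch of the RAII env-var guard: remember the prior value,
// mutate the environment, and restore the prior value on drop.
struct EnvVarGuard {
    key: &'static str,
    original: Option<OsString>,
}

impl EnvVarGuard {
    fn set(key: &'static str, value: Option<&str>) -> Self {
        let original = std::env::var_os(key);
        match value {
            Some(value) => std::env::set_var(key, value),
            None => std::env::remove_var(key),
        }
        Self { key, original }
    }
}

impl Drop for EnvVarGuard {
    fn drop(&mut self) {
        match &self.original {
            Some(value) => std::env::set_var(self.key, value),
            None => std::env::remove_var(self.key),
        }
    }
}

fn main() {
    std::env::set_var("DEMO_KEY", "before");
    {
        let _guard = EnvVarGuard::set("DEMO_KEY", Some("during"));
        assert_eq!(std::env::var("DEMO_KEY").unwrap(), "during");
    }
    // Guard dropped here: the original value is restored.
    assert_eq!(std::env::var("DEMO_KEY").unwrap(), "before");
}
```

This is also why the tests take `env_lock()` first: the process environment is global state, so guard-based mutation only stays deterministic when tests serialize access to it.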
27  rust/crates/claw-cli/Cargo.toml  Normal file
@@ -0,0 +1,27 @@
[package]
name = "claw-cli"
version.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true

[[bin]]
name = "claw"
path = "src/main.rs"

[dependencies]
api = { path = "../api" }
commands = { path = "../commands" }
compat-harness = { path = "../compat-harness" }
crossterm = "0.28"
pulldown-cmark = "0.13"
rustyline = "15"
runtime = { path = "../runtime" }
plugins = { path = "../plugins" }
serde_json.workspace = true
syntect = "5"
tokio = { version = "1", features = ["rt-multi-thread", "time"] }
tools = { path = "../tools" }

[lints]
workspace = true
402  rust/crates/claw-cli/src/app.rs  Normal file
@@ -0,0 +1,402 @@
use std::io::{self, Write};
use std::path::PathBuf;

use crate::args::{OutputFormat, PermissionMode};
use crate::input::{LineEditor, ReadOutcome};
use crate::render::{Spinner, TerminalRenderer};
use runtime::{ConversationClient, ConversationMessage, RuntimeError, StreamEvent, UsageSummary};

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SessionConfig {
    pub model: String,
    pub permission_mode: PermissionMode,
    pub config: Option<PathBuf>,
    pub output_format: OutputFormat,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SessionState {
    pub turns: usize,
    pub compacted_messages: usize,
    pub last_model: String,
    pub last_usage: UsageSummary,
}

impl SessionState {
    #[must_use]
    pub fn new(model: impl Into<String>) -> Self {
        Self {
            turns: 0,
            compacted_messages: 0,
            last_model: model.into(),
            last_usage: UsageSummary::default(),
        }
    }
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum CommandResult {
    Continue,
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum SlashCommand {
    Help,
    Status,
    Compact,
    Unknown(String),
}

impl SlashCommand {
    #[must_use]
    pub fn parse(input: &str) -> Option<Self> {
        let trimmed = input.trim();
        if !trimmed.starts_with('/') {
            return None;
        }

        let command = trimmed
            .trim_start_matches('/')
            .split_whitespace()
            .next()
            .unwrap_or_default();
        Some(match command {
            "help" => Self::Help,
            "status" => Self::Status,
            "compact" => Self::Compact,
            other => Self::Unknown(other.to_string()),
        })
    }
}

struct SlashCommandHandler {
    command: SlashCommand,
    summary: &'static str,
}

const SLASH_COMMAND_HANDLERS: &[SlashCommandHandler] = &[
    SlashCommandHandler {
        command: SlashCommand::Help,
        summary: "Show command help",
    },
    SlashCommandHandler {
        command: SlashCommand::Status,
        summary: "Show current session status",
    },
    SlashCommandHandler {
        command: SlashCommand::Compact,
        summary: "Compact local session history",
    },
];

pub struct CliApp {
    config: SessionConfig,
    renderer: TerminalRenderer,
    state: SessionState,
    conversation_client: ConversationClient,
    conversation_history: Vec<ConversationMessage>,
}

impl CliApp {
    pub fn new(config: SessionConfig) -> Result<Self, RuntimeError> {
        let state = SessionState::new(config.model.clone());
        let conversation_client = ConversationClient::from_env(config.model.clone())?;
        Ok(Self {
            config,
            renderer: TerminalRenderer::new(),
            state,
            conversation_client,
            conversation_history: Vec::new(),
        })
    }

    pub fn run_repl(&mut self) -> io::Result<()> {
        let mut editor = LineEditor::new("› ", Vec::new());
        println!("Claw Code interactive mode");
        println!("Type /help for commands. Shift+Enter or Ctrl+J inserts a newline.");

        loop {
            match editor.read_line()? {
                ReadOutcome::Submit(input) => {
                    if input.trim().is_empty() {
                        continue;
                    }
                    self.handle_submission(&input, &mut io::stdout())?;
                }
                ReadOutcome::Cancel => continue,
                ReadOutcome::Exit => break,
            }
        }

        Ok(())
    }

    pub fn run_prompt(&mut self, prompt: &str, out: &mut impl Write) -> io::Result<()> {
        self.render_response(prompt, out)
    }

    pub fn handle_submission(
        &mut self,
        input: &str,
        out: &mut impl Write,
    ) -> io::Result<CommandResult> {
        if let Some(command) = SlashCommand::parse(input) {
            return self.dispatch_slash_command(command, out);
        }

        self.state.turns += 1;
        self.render_response(input, out)?;
        Ok(CommandResult::Continue)
    }

    fn dispatch_slash_command(
        &mut self,
        command: SlashCommand,
        out: &mut impl Write,
    ) -> io::Result<CommandResult> {
        // Note: the four arms below cover every `SlashCommand` variant; a
        // trailing `_` arm would be an unreachable pattern and fail the
        // workspace's `-D warnings` lint.
        match command {
            SlashCommand::Help => Self::handle_help(out),
            SlashCommand::Status => self.handle_status(out),
            SlashCommand::Compact => self.handle_compact(out),
            SlashCommand::Unknown(name) => {
                writeln!(out, "Unknown slash command: /{name}")?;
                Ok(CommandResult::Continue)
            }
        }
    }

    fn handle_help(out: &mut impl Write) -> io::Result<CommandResult> {
        writeln!(out, "Available commands:")?;
        for handler in SLASH_COMMAND_HANDLERS {
            let name = match handler.command {
                SlashCommand::Help => "/help",
                SlashCommand::Status => "/status",
                SlashCommand::Compact => "/compact",
                _ => continue,
            };
            writeln!(out, " {name:<9} {}", handler.summary)?;
        }
        Ok(CommandResult::Continue)
    }

    fn handle_status(&mut self, out: &mut impl Write) -> io::Result<CommandResult> {
        writeln!(
            out,
            "status: turns={} model={} permission-mode={:?} output-format={:?} last-usage={} in/{} out config={}",
            self.state.turns,
            self.state.last_model,
            self.config.permission_mode,
            self.config.output_format,
            self.state.last_usage.input_tokens,
            self.state.last_usage.output_tokens,
            self.config
                .config
                .as_ref()
                .map_or_else(|| String::from("<none>"), |path| path.display().to_string())
        )?;
        Ok(CommandResult::Continue)
    }

    fn handle_compact(&mut self, out: &mut impl Write) -> io::Result<CommandResult> {
        self.state.compacted_messages += self.state.turns;
        self.state.turns = 0;
        self.conversation_history.clear();
        writeln!(
            out,
            "Compacted session history into a local summary ({} messages total compacted).",
            self.state.compacted_messages
        )?;
        Ok(CommandResult::Continue)
    }

    fn handle_stream_event(
        renderer: &TerminalRenderer,
        event: StreamEvent,
        stream_spinner: &mut Spinner,
        tool_spinner: &mut Spinner,
        saw_text: &mut bool,
        turn_usage: &mut UsageSummary,
        out: &mut impl Write,
    ) {
        match event {
            StreamEvent::TextDelta(delta) => {
                if !*saw_text {
                    let _ =
                        stream_spinner.finish("Streaming response", renderer.color_theme(), out);
                    *saw_text = true;
                }
                let _ = write!(out, "{delta}");
                let _ = out.flush();
            }
            StreamEvent::ToolCallStart { name, input } => {
                if *saw_text {
                    let _ = writeln!(out);
                }
                let _ = tool_spinner.tick(
                    &format!("Running tool `{name}` with {input}"),
                    renderer.color_theme(),
                    out,
                );
            }
            StreamEvent::ToolCallResult {
                name,
                output,
                is_error,
            } => {
                let label = if is_error {
                    format!("Tool `{name}` failed")
                } else {
                    format!("Tool `{name}` completed")
                };
                let _ = tool_spinner.finish(&label, renderer.color_theme(), out);
                let rendered_output = format!("### Tool `{name}`\n\n```text\n{output}\n```\n");
                let _ = renderer.stream_markdown(&rendered_output, out);
            }
            StreamEvent::Usage(usage) => {
                *turn_usage = usage;
            }
        }
    }

    fn write_turn_output(
        &self,
        summary: &runtime::TurnSummary,
        out: &mut impl Write,
    ) -> io::Result<()> {
        match self.config.output_format {
            OutputFormat::Text => {
                writeln!(
                    out,
                    "\nToken usage: {} input / {} output",
                    self.state.last_usage.input_tokens, self.state.last_usage.output_tokens
                )?;
            }
            OutputFormat::Json => {
                writeln!(
                    out,
                    "{}",
                    serde_json::json!({
                        "message": summary.assistant_text,
                        "usage": {
                            "input_tokens": self.state.last_usage.input_tokens,
                            "output_tokens": self.state.last_usage.output_tokens,
                        }
                    })
                )?;
            }
            OutputFormat::Ndjson => {
                writeln!(
                    out,
                    "{}",
                    serde_json::json!({
                        "type": "message",
                        "text": summary.assistant_text,
                        "usage": {
                            "input_tokens": self.state.last_usage.input_tokens,
                            "output_tokens": self.state.last_usage.output_tokens,
                        }
                    })
                )?;
            }
        }
        Ok(())
    }

    fn render_response(&mut self, input: &str, out: &mut impl Write) -> io::Result<()> {
        let mut stream_spinner = Spinner::new();
        stream_spinner.tick(
            "Opening conversation stream",
            self.renderer.color_theme(),
            out,
        )?;

        let mut turn_usage = UsageSummary::default();
        let mut tool_spinner = Spinner::new();
        let mut saw_text = false;
        let renderer = &self.renderer;

        let result =
            self.conversation_client
                .run_turn(&mut self.conversation_history, input, |event| {
                    Self::handle_stream_event(
                        renderer,
                        event,
                        &mut stream_spinner,
                        &mut tool_spinner,
                        &mut saw_text,
                        &mut turn_usage,
                        out,
                    );
                });

        let summary = match result {
            Ok(summary) => summary,
            Err(error) => {
                stream_spinner.fail(
                    "Streaming response failed",
                    self.renderer.color_theme(),
                    out,
                )?;
                return Err(io::Error::other(error));
            }
        };
        self.state.last_usage = summary.usage.clone();
        if saw_text {
            writeln!(out)?;
        } else {
            stream_spinner.finish("Streaming response", self.renderer.color_theme(), out)?;
        }

        self.write_turn_output(&summary, out)?;
        let _ = turn_usage;
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use std::path::PathBuf;

    use crate::args::{OutputFormat, PermissionMode};

    use super::{CommandResult, SessionConfig, SlashCommand};

    #[test]
    fn parses_required_slash_commands() {
        assert_eq!(SlashCommand::parse("/help"), Some(SlashCommand::Help));
        assert_eq!(SlashCommand::parse(" /status "), Some(SlashCommand::Status));
        assert_eq!(
            SlashCommand::parse("/compact now"),
            Some(SlashCommand::Compact)
        );
    }

    #[test]
    fn help_output_lists_commands() {
        let mut out = Vec::new();
        let result = super::CliApp::handle_help(&mut out).expect("help succeeds");
        assert_eq!(result, CommandResult::Continue);
        let output = String::from_utf8_lossy(&out);
        assert!(output.contains("/help"));
        assert!(output.contains("/status"));
        assert!(output.contains("/compact"));
    }

    #[test]
    fn session_state_tracks_config_values() {
        let config = SessionConfig {
            model: "sonnet".into(),
            permission_mode: PermissionMode::DangerFullAccess,
            config: Some(PathBuf::from("settings.toml")),
            output_format: OutputFormat::Text,
        };

        assert_eq!(config.model, "sonnet");
        assert_eq!(config.permission_mode, PermissionMode::DangerFullAccess);
        assert_eq!(config.config, Some(PathBuf::from("settings.toml")));
    }
}
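The slash-command dispatch in `app.rs` hinges on `SlashCommand::parse`: only the first whitespace-separated token after the leading `/` selects the command, and anything without a leading `/` falls through to the model. A standalone, std-only sketch of that parsing rule (reproducing the logic above, outside the crate):

```rust
// Standalone sketch of the slash-command tokenization used by the CLI:
// trim, require a leading '/', then match on the first token only.
#[derive(Debug, PartialEq)]
enum SlashCommand {
    Help,
    Status,
    Compact,
    Unknown(String),
}

fn parse(input: &str) -> Option<SlashCommand> {
    let trimmed = input.trim();
    if !trimmed.starts_with('/') {
        return None; // plain prompts are not commands
    }
    let command = trimmed
        .trim_start_matches('/')
        .split_whitespace()
        .next()
        .unwrap_or_default();
    Some(match command {
        "help" => SlashCommand::Help,
        "status" => SlashCommand::Status,
        "compact" => SlashCommand::Compact,
        other => SlashCommand::Unknown(other.to_string()),
    })
}

fn main() {
    assert_eq!(parse("plain text"), None);
    assert_eq!(parse("/compact now"), Some(SlashCommand::Compact));
    assert_eq!(
        parse("/wat"),
        Some(SlashCommand::Unknown("wat".to_string()))
    );
}
```

Because `Unknown` carries the token, the REPL can echo `Unknown slash command: /wat` instead of silently sending a mistyped command to the model.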
104  rust/crates/claw-cli/src/args.rs  Normal file
@@ -0,0 +1,104 @@
use std::path::PathBuf;

use clap::{Parser, Subcommand, ValueEnum};

#[derive(Debug, Clone, Parser, PartialEq, Eq)]
#[command(name = "claw-cli", version, about = "Claw Code CLI")]
pub struct Cli {
    #[arg(long, default_value = "claude-opus-4-6")]
    pub model: String,

    #[arg(long, value_enum, default_value_t = PermissionMode::DangerFullAccess)]
    pub permission_mode: PermissionMode,

    #[arg(long)]
    pub config: Option<PathBuf>,

    #[arg(long, value_enum, default_value_t = OutputFormat::Text)]
    pub output_format: OutputFormat,

    #[command(subcommand)]
    pub command: Option<Command>,
}

#[derive(Debug, Clone, Subcommand, PartialEq, Eq)]
pub enum Command {
    /// Read upstream TS sources and print extracted counts
    DumpManifests,
    /// Print the current bootstrap phase skeleton
    BootstrapPlan,
    /// Start the OAuth login flow
    Login,
    /// Clear saved OAuth credentials
    Logout,
    /// Run a non-interactive prompt and exit
    Prompt { prompt: Vec<String> },
}

#[derive(Debug, Clone, Copy, ValueEnum, PartialEq, Eq)]
pub enum PermissionMode {
    ReadOnly,
    WorkspaceWrite,
    DangerFullAccess,
}

#[derive(Debug, Clone, Copy, ValueEnum, PartialEq, Eq)]
pub enum OutputFormat {
    Text,
    Json,
    Ndjson,
}

#[cfg(test)]
mod tests {
    use clap::Parser;

    use super::{Cli, Command, OutputFormat, PermissionMode};

    #[test]
    fn parses_requested_flags() {
        let cli = Cli::parse_from([
            "claw-cli",
            "--model",
            "claude-haiku-4-5-20251213",
            "--permission-mode",
            "read-only",
            "--config",
            "/tmp/config.toml",
            "--output-format",
            "ndjson",
            "prompt",
            "hello",
            "world",
        ]);

        assert_eq!(cli.model, "claude-haiku-4-5-20251213");
        assert_eq!(cli.permission_mode, PermissionMode::ReadOnly);
        assert_eq!(
            cli.config.as_deref(),
            Some(std::path::Path::new("/tmp/config.toml"))
        );
        assert_eq!(cli.output_format, OutputFormat::Ndjson);
        assert_eq!(
            cli.command,
            Some(Command::Prompt {
                prompt: vec!["hello".into(), "world".into()]
            })
        );
    }

    #[test]
    fn parses_login_and_logout_commands() {
        let login = Cli::parse_from(["claw-cli", "login"]);
        assert_eq!(login.command, Some(Command::Login));

        let logout = Cli::parse_from(["claw-cli", "logout"]);
        assert_eq!(logout.command, Some(Command::Logout));
    }

    #[test]
    fn defaults_to_danger_full_access_permission_mode() {
        let cli = Cli::parse_from(["claw-cli"]);
        assert_eq!(cli.permission_mode, PermissionMode::DangerFullAccess);
    }
}
432  rust/crates/claw-cli/src/init.rs  Normal file
@@ -0,0 +1,432 @@
|
||||
use std::fs;
|
||||
use std::path::{Path, PathBuf};
|
||||
|
||||
const STARTER_CLAW_JSON: &str = concat!(
|
||||
"{\n",
|
||||
" \"permissions\": {\n",
|
||||
" \"defaultMode\": \"dontAsk\"\n",
|
||||
" }\n",
|
||||
"}\n",
|
||||
);
|
||||
const GITIGNORE_COMMENT: &str = "# Claw Code local artifacts";
|
||||
const GITIGNORE_ENTRIES: [&str; 2] = [".claw/settings.local.json", ".claw/sessions/"];
|
||||
|
||||
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
|
||||
pub(crate) enum InitStatus {
|
||||
Created,
|
||||
Updated,
|
||||
Skipped,
|
||||
}
|
||||
|
||||
impl InitStatus {
|
||||
#[must_use]
|
||||
pub(crate) fn label(self) -> &'static str {
|
||||
match self {
|
||||
Self::Created => "created",
|
||||
Self::Updated => "updated",
|
||||
Self::Skipped => "skipped (already exists)",
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
pub(crate) struct InitArtifact {
|
||||
pub(crate) name: &'static str,
|
||||
pub(crate) status: InitStatus,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
pub(crate) struct InitReport {
|
||||
pub(crate) project_root: PathBuf,
|
||||
pub(crate) artifacts: Vec<InitArtifact>,
|
||||
}
|
||||
|
||||
impl InitReport {
|
||||
#[must_use]
|
||||
pub(crate) fn render(&self) -> String {
|
||||
let mut lines = vec![
|
||||
"Init".to_string(),
|
||||
format!(" Project {}", self.project_root.display()),
|
||||
];
|
||||
for artifact in &self.artifacts {
|
||||
lines.push(format!(
|
||||
" {:<16} {}",
|
||||
artifact.name,
|
||||
artifact.status.label()
|
||||
));
|
||||
}
|
||||
lines.push(" Next step Review and tailor the generated guidance".to_string());
|
||||
lines.join("\n")
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Default, PartialEq, Eq)]
|
||||
#[allow(clippy::struct_excessive_bools)]
|
||||
struct RepoDetection {
|
||||
rust_workspace: bool,
|
||||
rust_root: bool,
|
||||
python: bool,
|
||||
package_json: bool,
|
||||
typescript: bool,
|
||||
nextjs: bool,
|
||||
react: bool,
|
||||
vite: bool,
|
||||
nest: bool,
|
||||
src_dir: bool,
|
||||
tests_dir: bool,
|
||||
rust_dir: bool,
|
||||
}
|
||||
|
||||
pub(crate) fn initialize_repo(cwd: &Path) -> Result<InitReport, Box<dyn std::error::Error>> {
|
||||
let mut artifacts = Vec::new();
|
||||
|
||||
let claw_dir = cwd.join(".claw");
|
||||
artifacts.push(InitArtifact {
|
||||
name: ".claw/",
|
||||
status: ensure_dir(&claw_dir)?,
|
||||
});
|
||||
|
||||
let claw_json = cwd.join(".claw.json");
|
||||
artifacts.push(InitArtifact {
|
||||
name: ".claw.json",
|
||||
status: write_file_if_missing(&claw_json, STARTER_CLAW_JSON)?,
|
||||
});
|
||||
|
||||
let gitignore = cwd.join(".gitignore");
|
||||
artifacts.push(InitArtifact {
|
||||
name: ".gitignore",
|
||||
status: ensure_gitignore_entries(&gitignore)?,
|
||||
});
|
||||
|
||||
let claw_md = cwd.join("CLAW.md");
|
||||
let content = render_init_claw_md(cwd);
|
||||
artifacts.push(InitArtifact {
|
||||
name: "CLAW.md",
|
||||
status: write_file_if_missing(&claw_md, &content)?,
|
||||
});
|
||||
|
||||
Ok(InitReport {
|
||||
project_root: cwd.to_path_buf(),
|
||||
artifacts,
|
||||
})
|
||||
}
|
||||
|
||||
fn ensure_dir(path: &Path) -> Result<InitStatus, std::io::Error> {
|
||||
if path.is_dir() {
|
||||
return Ok(InitStatus::Skipped);
|
||||
}
|
||||
fs::create_dir_all(path)?;
|
||||
Ok(InitStatus::Created)
|
||||
}
|
||||
|
||||
fn write_file_if_missing(path: &Path, content: &str) -> Result<InitStatus, std::io::Error> {
|
||||
if path.exists() {
|
||||
return Ok(InitStatus::Skipped);
|
||||
}
|
||||
fs::write(path, content)?;
|
||||
Ok(InitStatus::Created)
|
||||
}
|
||||
|
||||
fn ensure_gitignore_entries(path: &Path) -> Result<InitStatus, std::io::Error> {
|
||||
if !path.exists() {
|
||||
let mut lines = vec![GITIGNORE_COMMENT.to_string()];
|
||||
lines.extend(GITIGNORE_ENTRIES.iter().map(|entry| (*entry).to_string()));
|
||||
fs::write(path, format!("{}\n", lines.join("\n")))?;
|
||||
return Ok(InitStatus::Created);
|
||||
}
|
||||
|
||||
let existing = fs::read_to_string(path)?;
|
||||
let mut lines = existing.lines().map(ToOwned::to_owned).collect::<Vec<_>>();
|
||||
let mut changed = false;
|
||||
|
||||
if !lines.iter().any(|line| line == GITIGNORE_COMMENT) {
|
||||
lines.push(GITIGNORE_COMMENT.to_string());
|
||||
changed = true;
|
||||
}
|
||||
|
||||
for entry in GITIGNORE_ENTRIES {
|
||||
if !lines.iter().any(|line| line == entry) {
|
||||
lines.push(entry.to_string());
|
||||
changed = true;
|
||||
}
|
||||
}
|
||||
|
||||
if !changed {
|
||||
return Ok(InitStatus::Skipped);
|
||||
}
|
||||
|
||||
fs::write(path, format!("{}\n", lines.join("\n")))?;
|
||||
Ok(InitStatus::Updated)
|
||||
}
|
||||
|
||||
pub(crate) fn render_init_claw_md(cwd: &Path) -> String {
|
||||
let detection = detect_repo(cwd);
|
||||
let mut lines = vec![
|
||||
"# CLAW.md".to_string(),
|
||||
String::new(),
|
||||
"This file provides guidance to Claw Code (clawcode.dev) when working with code in this repository.".to_string(),
|
||||
String::new(),
|
||||
];
|
||||
|
||||
let detected_languages = detected_languages(&detection);
|
||||
let detected_frameworks = detected_frameworks(&detection);
|
||||
lines.push("## Detected stack".to_string());
|
||||
if detected_languages.is_empty() {
|
||||
lines.push("- No specific language markers were detected yet; document the primary language and verification commands once the project structure settles.".to_string());
|
||||
} else {
|
||||
lines.push(format!("- Languages: {}.", detected_languages.join(", ")));
|
||||
}
|
||||
if detected_frameworks.is_empty() {
|
||||
lines.push("- Frameworks: none detected from the supported starter markers.".to_string());
|
||||
} else {
|
||||
lines.push(format!(
|
||||
"- Frameworks/tooling markers: {}.",
|
||||
detected_frameworks.join(", ")
|
||||
));
|
||||
}
|
||||
lines.push(String::new());
|
||||
|
||||
let verification_lines = verification_lines(cwd, &detection);
|
||||
if !verification_lines.is_empty() {
|
||||
lines.push("## Verification".to_string());
|
||||
lines.extend(verification_lines);
|
||||
lines.push(String::new());
|
||||
}
|
||||
|
||||
let structure_lines = repository_shape_lines(&detection);
|
||||
if !structure_lines.is_empty() {
|
||||
lines.push("## Repository shape".to_string());
|
||||
lines.extend(structure_lines);
|
||||
lines.push(String::new());
|
||||
}
|
||||
|
||||
let framework_lines = framework_notes(&detection);
|
||||
if !framework_lines.is_empty() {
|
||||
lines.push("## Framework notes".to_string());
|
||||
lines.extend(framework_lines);
|
||||
lines.push(String::new());
|
||||
}
|
||||
|
||||
lines.push("## Working agreement".to_string());
|
||||
lines.push("- Prefer small, reviewable changes and keep generated bootstrap files aligned with actual repo workflows.".to_string());
|
||||
lines.push("- Keep shared defaults in `.claw.json`; reserve `.claw/settings.local.json` for machine-local overrides.".to_string());
|
||||
lines.push("- Do not overwrite existing `CLAW.md` content automatically; update it intentionally when repo workflows change.".to_string());
|
||||
lines.push(String::new());
|
||||
|
||||
lines.join("\n")
|
||||
}
|
||||
|
||||
fn detect_repo(cwd: &Path) -> RepoDetection {
|
||||
let package_json_contents = fs::read_to_string(cwd.join("package.json"))
|
||||
.unwrap_or_default()
|
||||
.to_ascii_lowercase();
|
||||
RepoDetection {
|
||||
rust_workspace: cwd.join("rust").join("Cargo.toml").is_file(),
|
||||
rust_root: cwd.join("Cargo.toml").is_file(),
|
||||
python: cwd.join("pyproject.toml").is_file()
|
||||
|| cwd.join("requirements.txt").is_file()
|
||||
|| cwd.join("setup.py").is_file(),
|
||||
package_json: cwd.join("package.json").is_file(),
|
||||
typescript: cwd.join("tsconfig.json").is_file()
|
||||
|| package_json_contents.contains("typescript"),
|
||||
nextjs: package_json_contents.contains("\"next\""),
|
||||
react: package_json_contents.contains("\"react\""),
|
||||
vite: package_json_contents.contains("\"vite\""),
|
||||
nest: package_json_contents.contains("@nestjs"),
|
||||
src_dir: cwd.join("src").is_dir(),
|
||||
tests_dir: cwd.join("tests").is_dir(),
|
||||
rust_dir: cwd.join("rust").is_dir(),
|
||||
}
|
||||
}
|
||||
|
||||
fn detected_languages(detection: &RepoDetection) -> Vec<&'static str> {
|
||||
let mut languages = Vec::new();
|
||||
if detection.rust_workspace || detection.rust_root {
|
||||
languages.push("Rust");
|
||||
}
|
||||
if detection.python {
|
||||
languages.push("Python");
|
||||
}
|
||||
if detection.typescript {
|
||||
languages.push("TypeScript");
|
||||
} else if detection.package_json {
|
||||
languages.push("JavaScript/Node.js");
|
||||
}
|
||||
languages
|
||||
}
|
||||
|
||||
fn detected_frameworks(detection: &RepoDetection) -> Vec<&'static str> {
|
||||
let mut frameworks = Vec::new();
|
||||
if detection.nextjs {
|
||||
frameworks.push("Next.js");
|
||||
}
|
||||
if detection.react {
|
||||
frameworks.push("React");
|
||||
}
|
||||
if detection.vite {
|
||||
frameworks.push("Vite");
|
||||
}
|
||||
if detection.nest {
|
||||
frameworks.push("NestJS");
|
||||
}
|
||||
frameworks
|
||||
}
|
||||
|
||||
fn verification_lines(cwd: &Path, detection: &RepoDetection) -> Vec<String> {
|
||||
let mut lines = Vec::new();
|
||||
if detection.rust_workspace {
|
||||
lines.push("- Run Rust verification from `rust/`: `cargo fmt`, `cargo clippy --workspace --all-targets -- -D warnings`, `cargo test --workspace`".to_string());
|
||||
} else if detection.rust_root {
|
||||
lines.push("- Run Rust verification from the repo root: `cargo fmt`, `cargo clippy --workspace --all-targets -- -D warnings`, `cargo test --workspace`".to_string());
|
||||
}
|
||||
if detection.python {
|
||||
if cwd.join("pyproject.toml").is_file() {
|
||||
lines.push("- Run the Python project checks declared in `pyproject.toml` (for example: `pytest`, `ruff check`, and `mypy` when configured).".to_string());
|
||||
} else {
|
||||
lines.push(
|
||||
"- Run the repo's Python test/lint commands before shipping changes.".to_string(),
|
||||
);
|
||||
}
|
||||
}
|
||||
if detection.package_json {
|
||||
lines.push("- Run the JavaScript/TypeScript checks from `package.json` before shipping changes (`npm test`, `npm run lint`, `npm run build`, or the repo equivalent).".to_string());
|
||||
}
|
||||
if detection.tests_dir && detection.src_dir {
|
||||
lines.push("- `src/` and `tests/` are both present; update both surfaces together when behavior changes.".to_string());
|
||||
}
|
||||
lines
|
||||
}
|
||||
|
||||
fn repository_shape_lines(detection: &RepoDetection) -> Vec<String> {
|
||||
let mut lines = Vec::new();
|
||||
if detection.rust_dir {
|
||||
lines.push(
|
||||
"- `rust/` contains the Rust workspace and active CLI/runtime implementation."
|
||||
.to_string(),
|
||||
);
|
||||
}
|
||||
if detection.src_dir {
|
||||
lines.push("- `src/` contains source files that should stay consistent with generated guidance and tests.".to_string());
|
||||
}
|
||||
if detection.tests_dir {
|
||||
lines.push("- `tests/` contains validation surfaces that should be reviewed alongside code changes.".to_string());
|
||||
}
|
||||
lines
|
||||
}
|
||||
|
||||
fn framework_notes(detection: &RepoDetection) -> Vec<String> {
|
||||
let mut lines = Vec::new();
|
||||
if detection.nextjs {
|
||||
lines.push("- Next.js detected: preserve routing/data-fetching conventions and verify production builds after changing app structure.".to_string());
|
||||
}
|
||||
if detection.react && !detection.nextjs {
|
||||
lines.push("- React detected: keep component behavior covered with focused tests and avoid unnecessary prop/API churn.".to_string());
|
||||
}
|
||||
if detection.vite {
|
||||
lines.push("- Vite detected: validate the production bundle after changing build-sensitive configuration or imports.".to_string());
|
||||
}
|
||||
if detection.nest {
|
||||
lines.push("- NestJS detected: keep module/provider boundaries explicit and verify controller/service wiring after refactors.".to_string());
|
||||
}
|
||||
lines
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use super::{initialize_repo, render_init_claw_md};
|
||||
use std::fs;
|
||||
use std::path::Path;
|
||||
use std::time::{SystemTime, UNIX_EPOCH};
|
||||
|
||||
fn temp_dir() -> std::path::PathBuf {
|
||||
let nanos = SystemTime::now()
|
||||
.duration_since(UNIX_EPOCH)
|
||||
.expect("time should be after epoch")
|
||||
.as_nanos();
|
||||
std::env::temp_dir().join(format!("claw-init-{nanos}"))
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn initialize_repo_creates_expected_files_and_gitignore_entries() {
|
||||
let root = temp_dir();
|
||||
fs::create_dir_all(root.join("rust")).expect("create rust dir");
|
||||
fs::write(root.join("rust").join("Cargo.toml"), "[workspace]\n").expect("write cargo");
|
||||
|
||||
let report = initialize_repo(&root).expect("init should succeed");
|
||||
let rendered = report.render();
|
||||
assert!(rendered.contains(".claw/ created"));
|
||||
assert!(rendered.contains(".claw.json created"));
|
||||
assert!(rendered.contains(".gitignore created"));
|
||||
assert!(rendered.contains("CLAW.md created"));
|
||||
        assert!(root.join(".claw").is_dir());
        assert!(root.join(".claw.json").is_file());
        assert!(root.join("CLAW.md").is_file());
        assert_eq!(
            fs::read_to_string(root.join(".claw.json")).expect("read claw json"),
            concat!(
                "{\n",
                "  \"permissions\": {\n",
                "    \"defaultMode\": \"dontAsk\"\n",
                "  }\n",
                "}\n",
            )
        );
        let gitignore = fs::read_to_string(root.join(".gitignore")).expect("read gitignore");
        assert!(gitignore.contains(".claw/settings.local.json"));
        assert!(gitignore.contains(".claw/sessions/"));
        let claw_md = fs::read_to_string(root.join("CLAW.md")).expect("read claw md");
        assert!(claw_md.contains("Languages: Rust."));
        assert!(claw_md.contains("cargo clippy --workspace --all-targets -- -D warnings"));

        fs::remove_dir_all(root).expect("cleanup temp dir");
    }

    #[test]
    fn initialize_repo_is_idempotent_and_preserves_existing_files() {
        let root = temp_dir();
        fs::create_dir_all(&root).expect("create root");
        fs::write(root.join("CLAW.md"), "custom guidance\n").expect("write existing claw md");
        fs::write(root.join(".gitignore"), ".claw/settings.local.json\n").expect("write gitignore");

        let first = initialize_repo(&root).expect("first init should succeed");
        assert!(first
            .render()
            .contains("CLAW.md skipped (already exists)"));
        let second = initialize_repo(&root).expect("second init should succeed");
        let second_rendered = second.render();
        assert!(second_rendered.contains(".claw/ skipped (already exists)"));
        assert!(second_rendered.contains(".claw.json skipped (already exists)"));
        assert!(second_rendered.contains(".gitignore skipped (already exists)"));
        assert!(second_rendered.contains("CLAW.md skipped (already exists)"));
        assert_eq!(
            fs::read_to_string(root.join("CLAW.md")).expect("read existing claw md"),
            "custom guidance\n"
        );
        let gitignore = fs::read_to_string(root.join(".gitignore")).expect("read gitignore");
        assert_eq!(gitignore.matches(".claw/settings.local.json").count(), 1);
        assert_eq!(gitignore.matches(".claw/sessions/").count(), 1);

        fs::remove_dir_all(root).expect("cleanup temp dir");
    }

    #[test]
    fn render_init_template_mentions_detected_python_and_nextjs_markers() {
        let root = temp_dir();
        fs::create_dir_all(&root).expect("create root");
        fs::write(root.join("pyproject.toml"), "[project]\nname = \"demo\"\n")
            .expect("write pyproject");
        fs::write(
            root.join("package.json"),
            r#"{"dependencies":{"next":"14.0.0","react":"18.0.0"},"devDependencies":{"typescript":"5.0.0"}}"#,
        )
        .expect("write package json");

        let rendered = render_init_claw_md(Path::new(&root));
        assert!(rendered.contains("Languages: Python, TypeScript."));
        assert!(rendered.contains("Frameworks/tooling markers: Next.js, React."));
        assert!(rendered.contains("pyproject.toml"));
        assert!(rendered.contains("Next.js detected"));

        fs::remove_dir_all(root).expect("cleanup temp dir");
    }
}
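The idempotency exercised by these tests hinges on one rule: a `.gitignore` entry is appended only when it is not already listed, so a second `initialize_repo` run adds nothing. A minimal standalone sketch of that merge rule (the `merge_gitignore` helper name is hypothetical, not this crate's API):

```rust
// Append each entry to the gitignore text only if no existing line already
// matches it exactly (ignoring surrounding whitespace).
fn merge_gitignore(existing: &str, entries: &[&str]) -> String {
    let mut out = existing.to_string();
    for entry in entries {
        if !existing.lines().any(|line| line.trim() == *entry) {
            if !out.is_empty() && !out.ends_with('\n') {
                out.push('\n');
            }
            out.push_str(entry);
            out.push('\n');
        }
    }
    out
}

fn main() {
    let once = merge_gitignore("", &[".claw/sessions/"]);
    // A second merge with the same entry is a no-op.
    let twice = merge_gitignore(&once, &[".claw/sessions/"]);
    assert_eq!(once, twice);
    assert_eq!(once.matches(".claw/sessions/").count(), 1);
}
```

Because the check runs per line, a pre-existing `.gitignore` keeps its contents untouched, which is exactly what the duplicate-count assertions above verify.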
1195 rust/crates/claw-cli/src/input.rs (new file; diff suppressed because it is too large)
5090 rust/crates/claw-cli/src/main.rs (new file; diff suppressed because it is too large)
797 rust/crates/claw-cli/src/render.rs (new file)
@@ -0,0 +1,797 @@
use std::fmt::Write as FmtWrite;
use std::io::{self, Write};

use crossterm::cursor::{MoveToColumn, RestorePosition, SavePosition};
use crossterm::style::{Color, Print, ResetColor, SetForegroundColor, Stylize};
use crossterm::terminal::{Clear, ClearType};
use crossterm::{execute, queue};
use pulldown_cmark::{CodeBlockKind, Event, Options, Parser, Tag, TagEnd};
use syntect::easy::HighlightLines;
use syntect::highlighting::{Theme, ThemeSet};
use syntect::parsing::SyntaxSet;
use syntect::util::{as_24_bit_terminal_escaped, LinesWithEndings};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ColorTheme {
    heading: Color,
    emphasis: Color,
    strong: Color,
    inline_code: Color,
    link: Color,
    quote: Color,
    table_border: Color,
    code_block_border: Color,
    spinner_active: Color,
    spinner_done: Color,
    spinner_failed: Color,
}

impl Default for ColorTheme {
    fn default() -> Self {
        Self {
            heading: Color::Cyan,
            emphasis: Color::Magenta,
            strong: Color::Yellow,
            inline_code: Color::Green,
            link: Color::Blue,
            quote: Color::DarkGrey,
            table_border: Color::DarkCyan,
            code_block_border: Color::DarkGrey,
            spinner_active: Color::Blue,
            spinner_done: Color::Green,
            spinner_failed: Color::Red,
        }
    }
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct Spinner {
    frame_index: usize,
}

impl Spinner {
    const FRAMES: [&str; 10] = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"];

    #[must_use]
    pub fn new() -> Self {
        Self::default()
    }

    pub fn tick(
        &mut self,
        label: &str,
        theme: &ColorTheme,
        out: &mut impl Write,
    ) -> io::Result<()> {
        let frame = Self::FRAMES[self.frame_index % Self::FRAMES.len()];
        self.frame_index += 1;
        queue!(
            out,
            SavePosition,
            MoveToColumn(0),
            Clear(ClearType::CurrentLine),
            SetForegroundColor(theme.spinner_active),
            Print(format!("{frame} {label}")),
            ResetColor,
            RestorePosition
        )?;
        out.flush()
    }

    pub fn finish(
        &mut self,
        label: &str,
        theme: &ColorTheme,
        out: &mut impl Write,
    ) -> io::Result<()> {
        self.frame_index = 0;
        execute!(
            out,
            MoveToColumn(0),
            Clear(ClearType::CurrentLine),
            SetForegroundColor(theme.spinner_done),
            Print(format!("✔ {label}\n")),
            ResetColor
        )?;
        out.flush()
    }

    pub fn fail(
        &mut self,
        label: &str,
        theme: &ColorTheme,
        out: &mut impl Write,
    ) -> io::Result<()> {
        self.frame_index = 0;
        execute!(
            out,
            MoveToColumn(0),
            Clear(ClearType::CurrentLine),
            SetForegroundColor(theme.spinner_failed),
            Print(format!("✘ {label}\n")),
            ResetColor
        )?;
        out.flush()
    }
}
#[derive(Debug, Clone, PartialEq, Eq)]
enum ListKind {
    Unordered,
    Ordered { next_index: u64 },
}

#[derive(Debug, Default, Clone, PartialEq, Eq)]
struct TableState {
    headers: Vec<String>,
    rows: Vec<Vec<String>>,
    current_row: Vec<String>,
    current_cell: String,
    in_head: bool,
}

impl TableState {
    fn push_cell(&mut self) {
        let cell = self.current_cell.trim().to_string();
        self.current_row.push(cell);
        self.current_cell.clear();
    }

    fn finish_row(&mut self) {
        if self.current_row.is_empty() {
            return;
        }
        let row = std::mem::take(&mut self.current_row);
        if self.in_head {
            self.headers = row;
        } else {
            self.rows.push(row);
        }
    }
}

#[derive(Debug, Default, Clone, PartialEq, Eq)]
struct RenderState {
    emphasis: usize,
    strong: usize,
    heading_level: Option<u8>,
    quote: usize,
    list_stack: Vec<ListKind>,
    link_stack: Vec<LinkState>,
    table: Option<TableState>,
}

#[derive(Debug, Clone, PartialEq, Eq)]
struct LinkState {
    destination: String,
    text: String,
}

impl RenderState {
    fn style_text(&self, text: &str, theme: &ColorTheme) -> String {
        let mut style = text.stylize();

        if matches!(self.heading_level, Some(1 | 2)) || self.strong > 0 {
            style = style.bold();
        }
        if self.emphasis > 0 {
            style = style.italic();
        }

        if let Some(level) = self.heading_level {
            style = match level {
                1 => style.with(theme.heading),
                2 => style.white(),
                3 => style.with(Color::Blue),
                _ => style.with(Color::Grey),
            };
        } else if self.strong > 0 {
            style = style.with(theme.strong);
        } else if self.emphasis > 0 {
            style = style.with(theme.emphasis);
        }

        if self.quote > 0 {
            style = style.with(theme.quote);
        }

        format!("{style}")
    }

    fn append_raw(&mut self, output: &mut String, text: &str) {
        if let Some(link) = self.link_stack.last_mut() {
            link.text.push_str(text);
        } else if let Some(table) = self.table.as_mut() {
            table.current_cell.push_str(text);
        } else {
            output.push_str(text);
        }
    }

    fn append_styled(&mut self, output: &mut String, text: &str, theme: &ColorTheme) {
        let styled = self.style_text(text, theme);
        self.append_raw(output, &styled);
    }
}
#[derive(Debug)]
pub struct TerminalRenderer {
    syntax_set: SyntaxSet,
    syntax_theme: Theme,
    color_theme: ColorTheme,
}

impl Default for TerminalRenderer {
    fn default() -> Self {
        let syntax_set = SyntaxSet::load_defaults_newlines();
        let syntax_theme = ThemeSet::load_defaults()
            .themes
            .remove("base16-ocean.dark")
            .unwrap_or_default();
        Self {
            syntax_set,
            syntax_theme,
            color_theme: ColorTheme::default(),
        }
    }
}

impl TerminalRenderer {
    #[must_use]
    pub fn new() -> Self {
        Self::default()
    }

    #[must_use]
    pub fn color_theme(&self) -> &ColorTheme {
        &self.color_theme
    }

    #[must_use]
    pub fn render_markdown(&self, markdown: &str) -> String {
        let mut output = String::new();
        let mut state = RenderState::default();
        let mut code_language = String::new();
        let mut code_buffer = String::new();
        let mut in_code_block = false;

        for event in Parser::new_ext(markdown, Options::all()) {
            self.render_event(
                event,
                &mut state,
                &mut output,
                &mut code_buffer,
                &mut code_language,
                &mut in_code_block,
            );
        }

        output.trim_end().to_string()
    }

    #[must_use]
    pub fn markdown_to_ansi(&self, markdown: &str) -> String {
        self.render_markdown(markdown)
    }

    #[allow(clippy::too_many_lines)]
    fn render_event(
        &self,
        event: Event<'_>,
        state: &mut RenderState,
        output: &mut String,
        code_buffer: &mut String,
        code_language: &mut String,
        in_code_block: &mut bool,
    ) {
        match event {
            Event::Start(Tag::Heading { level, .. }) => {
                self.start_heading(state, level as u8, output);
            }
            Event::End(TagEnd::Paragraph) => output.push_str("\n\n"),
            Event::Start(Tag::BlockQuote(..)) => self.start_quote(state, output),
            Event::End(TagEnd::BlockQuote(..)) => {
                state.quote = state.quote.saturating_sub(1);
                output.push('\n');
            }
            Event::End(TagEnd::Heading(..)) => {
                state.heading_level = None;
                output.push_str("\n\n");
            }
            Event::End(TagEnd::Item) | Event::SoftBreak | Event::HardBreak => {
                state.append_raw(output, "\n");
            }
            Event::Start(Tag::List(first_item)) => {
                let kind = match first_item {
                    Some(index) => ListKind::Ordered { next_index: index },
                    None => ListKind::Unordered,
                };
                state.list_stack.push(kind);
            }
            Event::End(TagEnd::List(..)) => {
                state.list_stack.pop();
                output.push('\n');
            }
            Event::Start(Tag::Item) => Self::start_item(state, output),
            Event::Start(Tag::CodeBlock(kind)) => {
                *in_code_block = true;
                *code_language = match kind {
                    CodeBlockKind::Indented => String::from("text"),
                    CodeBlockKind::Fenced(lang) => lang.to_string(),
                };
                code_buffer.clear();
                self.start_code_block(code_language, output);
            }
            Event::End(TagEnd::CodeBlock) => {
                self.finish_code_block(code_buffer, code_language, output);
                *in_code_block = false;
                code_language.clear();
                code_buffer.clear();
            }
            Event::Start(Tag::Emphasis) => state.emphasis += 1,
            Event::End(TagEnd::Emphasis) => state.emphasis = state.emphasis.saturating_sub(1),
            Event::Start(Tag::Strong) => state.strong += 1,
            Event::End(TagEnd::Strong) => state.strong = state.strong.saturating_sub(1),
            Event::Code(code) => {
                let rendered =
                    format!("{}", format!("`{code}`").with(self.color_theme.inline_code));
                state.append_raw(output, &rendered);
            }
            Event::Rule => output.push_str("---\n"),
            Event::Text(text) => {
                self.push_text(text.as_ref(), state, output, code_buffer, *in_code_block);
            }
            Event::Html(html) | Event::InlineHtml(html) => {
                state.append_raw(output, &html);
            }
            Event::FootnoteReference(reference) => {
                state.append_raw(output, &format!("[{reference}]"));
            }
            Event::TaskListMarker(done) => {
                state.append_raw(output, if done { "[x] " } else { "[ ] " });
            }
            Event::InlineMath(math) | Event::DisplayMath(math) => {
                state.append_raw(output, &math);
            }
            Event::Start(Tag::Link { dest_url, .. }) => {
                state.link_stack.push(LinkState {
                    destination: dest_url.to_string(),
                    text: String::new(),
                });
            }
            Event::End(TagEnd::Link) => {
                if let Some(link) = state.link_stack.pop() {
                    let label = if link.text.is_empty() {
                        link.destination.clone()
                    } else {
                        link.text
                    };
                    let rendered = format!(
                        "{}",
                        format!("[{label}]({})", link.destination)
                            .underlined()
                            .with(self.color_theme.link)
                    );
                    state.append_raw(output, &rendered);
                }
            }
            Event::Start(Tag::Image { dest_url, .. }) => {
                let rendered = format!(
                    "{}",
                    format!("[image:{dest_url}]").with(self.color_theme.link)
                );
                state.append_raw(output, &rendered);
            }
            Event::Start(Tag::Table(..)) => state.table = Some(TableState::default()),
            Event::End(TagEnd::Table) => {
                if let Some(table) = state.table.take() {
                    output.push_str(&self.render_table(&table));
                    output.push_str("\n\n");
                }
            }
            Event::Start(Tag::TableHead) => {
                if let Some(table) = state.table.as_mut() {
                    table.in_head = true;
                }
            }
            Event::End(TagEnd::TableHead) => {
                if let Some(table) = state.table.as_mut() {
                    table.finish_row();
                    table.in_head = false;
                }
            }
            Event::Start(Tag::TableRow) => {
                if let Some(table) = state.table.as_mut() {
                    table.current_row.clear();
                    table.current_cell.clear();
                }
            }
            Event::End(TagEnd::TableRow) => {
                if let Some(table) = state.table.as_mut() {
                    table.finish_row();
                }
            }
            Event::Start(Tag::TableCell) => {
                if let Some(table) = state.table.as_mut() {
                    table.current_cell.clear();
                }
            }
            Event::End(TagEnd::TableCell) => {
                if let Some(table) = state.table.as_mut() {
                    table.push_cell();
                }
            }
            Event::Start(Tag::Paragraph | Tag::MetadataBlock(..) | _)
            | Event::End(TagEnd::Image | TagEnd::MetadataBlock(..) | _) => {}
        }
    }

    #[allow(clippy::unused_self)]
    fn start_heading(&self, state: &mut RenderState, level: u8, output: &mut String) {
        state.heading_level = Some(level);
        if !output.is_empty() {
            output.push('\n');
        }
    }

    fn start_quote(&self, state: &mut RenderState, output: &mut String) {
        state.quote += 1;
        let _ = write!(output, "{}", "│ ".with(self.color_theme.quote));
    }

    fn start_item(state: &mut RenderState, output: &mut String) {
        let depth = state.list_stack.len().saturating_sub(1);
        output.push_str(&" ".repeat(depth));

        let marker = match state.list_stack.last_mut() {
            Some(ListKind::Ordered { next_index }) => {
                let value = *next_index;
                *next_index += 1;
                format!("{value}. ")
            }
            _ => "• ".to_string(),
        };
        output.push_str(&marker);
    }

    fn start_code_block(&self, code_language: &str, output: &mut String) {
        let label = if code_language.is_empty() {
            "code".to_string()
        } else {
            code_language.to_string()
        };
        let _ = writeln!(
            output,
            "{}",
            format!("╭─ {label}")
                .bold()
                .with(self.color_theme.code_block_border)
        );
    }

    fn finish_code_block(&self, code_buffer: &str, code_language: &str, output: &mut String) {
        output.push_str(&self.highlight_code(code_buffer, code_language));
        let _ = write!(
            output,
            "{}",
            "╰─".bold().with(self.color_theme.code_block_border)
        );
        output.push_str("\n\n");
    }

    fn push_text(
        &self,
        text: &str,
        state: &mut RenderState,
        output: &mut String,
        code_buffer: &mut String,
        in_code_block: bool,
    ) {
        if in_code_block {
            code_buffer.push_str(text);
        } else {
            state.append_styled(output, text, &self.color_theme);
        }
    }

    fn render_table(&self, table: &TableState) -> String {
        let mut rows = Vec::new();
        if !table.headers.is_empty() {
            rows.push(table.headers.clone());
        }
        rows.extend(table.rows.iter().cloned());

        if rows.is_empty() {
            return String::new();
        }

        let column_count = rows.iter().map(Vec::len).max().unwrap_or(0);
        let widths = (0..column_count)
            .map(|column| {
                rows.iter()
                    .filter_map(|row| row.get(column))
                    .map(|cell| visible_width(cell))
                    .max()
                    .unwrap_or(0)
            })
            .collect::<Vec<_>>();

        let border = format!("{}", "│".with(self.color_theme.table_border));
        let separator = widths
            .iter()
            .map(|width| "─".repeat(*width + 2))
            .collect::<Vec<_>>()
            .join(&format!("{}", "┼".with(self.color_theme.table_border)));
        let separator = format!("{border}{separator}{border}");

        let mut output = String::new();
        if !table.headers.is_empty() {
            output.push_str(&self.render_table_row(&table.headers, &widths, true));
            output.push('\n');
            output.push_str(&separator);
            if !table.rows.is_empty() {
                output.push('\n');
            }
        }

        for (index, row) in table.rows.iter().enumerate() {
            output.push_str(&self.render_table_row(row, &widths, false));
            if index + 1 < table.rows.len() {
                output.push('\n');
            }
        }

        output
    }

    fn render_table_row(&self, row: &[String], widths: &[usize], is_header: bool) -> String {
        let border = format!("{}", "│".with(self.color_theme.table_border));
        let mut line = String::new();
        line.push_str(&border);

        for (index, width) in widths.iter().enumerate() {
            let cell = row.get(index).map_or("", String::as_str);
            line.push(' ');
            if is_header {
                let _ = write!(line, "{}", cell.bold().with(self.color_theme.heading));
            } else {
                line.push_str(cell);
            }
            let padding = width.saturating_sub(visible_width(cell));
            line.push_str(&" ".repeat(padding + 1));
            line.push_str(&border);
        }

        line
    }

    #[must_use]
    pub fn highlight_code(&self, code: &str, language: &str) -> String {
        let syntax = self
            .syntax_set
            .find_syntax_by_token(language)
            .unwrap_or_else(|| self.syntax_set.find_syntax_plain_text());
        let mut syntax_highlighter = HighlightLines::new(syntax, &self.syntax_theme);
        let mut colored_output = String::new();

        for line in LinesWithEndings::from(code) {
            match syntax_highlighter.highlight_line(line, &self.syntax_set) {
                Ok(ranges) => {
                    let escaped = as_24_bit_terminal_escaped(&ranges[..], false);
                    colored_output.push_str(&apply_code_block_background(&escaped));
                }
                Err(_) => colored_output.push_str(&apply_code_block_background(line)),
            }
        }

        colored_output
    }

    pub fn stream_markdown(&self, markdown: &str, out: &mut impl Write) -> io::Result<()> {
        let rendered_markdown = self.markdown_to_ansi(markdown);
        write!(out, "{rendered_markdown}")?;
        if !rendered_markdown.ends_with('\n') {
            writeln!(out)?;
        }
        out.flush()
    }
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct MarkdownStreamState {
    pending: String,
}

impl MarkdownStreamState {
    #[must_use]
    pub fn push(&mut self, renderer: &TerminalRenderer, delta: &str) -> Option<String> {
        self.pending.push_str(delta);
        let split = find_stream_safe_boundary(&self.pending)?;
        let ready = self.pending[..split].to_string();
        self.pending.drain(..split);
        Some(renderer.markdown_to_ansi(&ready))
    }

    #[must_use]
    pub fn flush(&mut self, renderer: &TerminalRenderer) -> Option<String> {
        if self.pending.trim().is_empty() {
            self.pending.clear();
            None
        } else {
            let pending = std::mem::take(&mut self.pending);
            Some(renderer.markdown_to_ansi(&pending))
        }
    }
}

fn apply_code_block_background(line: &str) -> String {
    let trimmed = line.trim_end_matches('\n');
    let trailing_newline = if trimmed.len() == line.len() {
        ""
    } else {
        "\n"
    };
    let with_background = trimmed.replace("\u{1b}[0m", "\u{1b}[0;48;5;236m");
    format!("\u{1b}[48;5;236m{with_background}\u{1b}[0m{trailing_newline}")
}

fn find_stream_safe_boundary(markdown: &str) -> Option<usize> {
    let mut in_fence = false;
    let mut last_boundary = None;

    for (offset, line) in markdown.split_inclusive('\n').scan(0usize, |cursor, line| {
        let start = *cursor;
        *cursor += line.len();
        Some((start, line))
    }) {
        let trimmed = line.trim_start();
        if trimmed.starts_with("```") || trimmed.starts_with("~~~") {
            in_fence = !in_fence;
            if !in_fence {
                last_boundary = Some(offset + line.len());
            }
            continue;
        }

        if in_fence {
            continue;
        }

        if trimmed.is_empty() {
            last_boundary = Some(offset + line.len());
        }
    }

    last_boundary
}

fn visible_width(input: &str) -> usize {
    strip_ansi(input).chars().count()
}

fn strip_ansi(input: &str) -> String {
    let mut output = String::new();
    let mut chars = input.chars().peekable();

    while let Some(ch) = chars.next() {
        if ch == '\u{1b}' {
            if chars.peek() == Some(&'[') {
                chars.next();
                for next in chars.by_ref() {
                    if next.is_ascii_alphabetic() {
                        break;
                    }
                }
            }
        } else {
            output.push(ch);
        }
    }

    output
}
#[cfg(test)]
mod tests {
    use super::{strip_ansi, MarkdownStreamState, Spinner, TerminalRenderer};

    #[test]
    fn renders_markdown_with_styling_and_lists() {
        let terminal_renderer = TerminalRenderer::new();
        let markdown_output = terminal_renderer
            .render_markdown("# Heading\n\nThis is **bold** and *italic*.\n\n- item\n\n`code`");

        assert!(markdown_output.contains("Heading"))
        ;
        assert!(markdown_output.contains("• item"));
        assert!(markdown_output.contains("code"));
        assert!(markdown_output.contains('\u{1b}'));
    }

    #[test]
    fn renders_links_as_colored_markdown_labels() {
        let terminal_renderer = TerminalRenderer::new();
        let markdown_output =
            terminal_renderer.render_markdown("See [Claw](https://example.com/docs) now.");
        let plain_text = strip_ansi(&markdown_output);

        assert!(plain_text.contains("[Claw](https://example.com/docs)"));
        assert!(markdown_output.contains('\u{1b}'));
    }

    #[test]
    fn highlights_fenced_code_blocks() {
        let terminal_renderer = TerminalRenderer::new();
        let markdown_output =
            terminal_renderer.markdown_to_ansi("```rust\nfn hi() { println!(\"hi\"); }\n```");
        let plain_text = strip_ansi(&markdown_output);

        assert!(plain_text.contains("╭─ rust"));
        assert!(plain_text.contains("fn hi"));
        assert!(markdown_output.contains('\u{1b}'));
        assert!(markdown_output.contains("[48;5;236m"));
    }

    #[test]
    fn renders_ordered_and_nested_lists() {
        let terminal_renderer = TerminalRenderer::new();
        let markdown_output =
            terminal_renderer.render_markdown("1. first\n2. second\n   - nested\n   - child");
        let plain_text = strip_ansi(&markdown_output);

        assert!(plain_text.contains("1. first"));
        assert!(plain_text.contains("2. second"));
        assert!(plain_text.contains(" • nested"));
        assert!(plain_text.contains(" • child"));
    }

    #[test]
    fn renders_tables_with_alignment() {
        let terminal_renderer = TerminalRenderer::new();
        let markdown_output = terminal_renderer
            .render_markdown("| Name | Value |\n| ---- | ----- |\n| alpha | 1 |\n| beta | 22 |");
        let plain_text = strip_ansi(&markdown_output);
        let lines = plain_text.lines().collect::<Vec<_>>();

        assert_eq!(lines[0], "│ Name  │ Value │");
        assert_eq!(lines[1], "│───────┼───────│");
        assert_eq!(lines[2], "│ alpha │ 1     │");
        assert_eq!(lines[3], "│ beta  │ 22    │");
        assert!(markdown_output.contains('\u{1b}'));
    }

    #[test]
    fn streaming_state_waits_for_complete_blocks() {
        let renderer = TerminalRenderer::new();
        let mut state = MarkdownStreamState::default();

        assert_eq!(state.push(&renderer, "# Heading"), None);
        let flushed = state
            .push(&renderer, "\n\nParagraph\n\n")
            .expect("completed block");
        let plain_text = strip_ansi(&flushed);
        assert!(plain_text.contains("Heading"));
        assert!(plain_text.contains("Paragraph"));

        assert_eq!(state.push(&renderer, "```rust\nfn main() {}\n"), None);
        let code = state
            .push(&renderer, "```\n")
            .expect("closed code fence flushes");
        assert!(strip_ansi(&code).contains("fn main()"));
    }

    #[test]
    fn spinner_advances_frames() {
        let terminal_renderer = TerminalRenderer::new();
        let mut spinner = Spinner::new();
        let mut out = Vec::new();
        spinner
            .tick("Working", terminal_renderer.color_theme(), &mut out)
            .expect("tick succeeds");
        spinner
            .tick("Working", terminal_renderer.color_theme(), &mut out)
            .expect("tick succeeds");

        let output = String::from_utf8_lossy(&out);
        assert!(output.contains("Working"));
    }
}
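The streaming behaviour exercised by `streaming_state_waits_for_complete_blocks` rests on `find_stream_safe_boundary`: markdown is only flushed up to the last blank line outside a code fence, so partially streamed constructs are never rendered. A minimal standalone sketch of that boundary rule (the `safe_boundary` name is hypothetical, not the crate's API):

```rust
// Only flush markdown up to the last blank line (or the end of a closed
// code fence); anything after that point may still be mid-construct.
fn safe_boundary(markdown: &str) -> Option<usize> {
    let mut in_fence = false;
    let mut last = None;
    let mut cursor = 0usize;
    for line in markdown.split_inclusive('\n') {
        let start = cursor;
        cursor += line.len();
        let trimmed = line.trim_start();
        if trimmed.starts_with("```") || trimmed.starts_with("~~~") {
            in_fence = !in_fence;
            if !in_fence {
                // A closing fence completes a block, so it is safe to flush.
                last = Some(start + line.len());
            }
            continue;
        }
        if !in_fence && trimmed.is_empty() {
            last = Some(start + line.len());
        }
    }
    last
}

fn main() {
    // A heading with no trailing blank line is held back…
    assert_eq!(safe_boundary("# Heading"), None);
    // …but a blank line marks everything before it as safe to render.
    assert_eq!(safe_boundary("# Heading\n\ntail"), Some(11));
    // An open code fence suppresses flushing until it closes.
    assert_eq!(safe_boundary("```rust\nfn x() {}\n"), None);
}
```

Holding back at these boundaries is why the test sees `None` for the unterminated fence and only gets rendered output once the closing <code>```</code> arrives.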
@@ -9,4 +9,6 @@ publish.workspace = true
workspace = true

[dependencies]
plugins = { path = "../plugins" }
runtime = { path = "../runtime" }
serde_json.workspace = true

(next file's diff suppressed because it is too large)
16 rust/crates/lsp/Cargo.toml (new file)
@@ -0,0 +1,16 @@
[package]
name = "lsp"
version.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true

[dependencies]
lsp-types.workspace = true
serde = { version = "1", features = ["derive"] }
serde_json.workspace = true
tokio = { version = "1", features = ["io-util", "macros", "process", "rt", "rt-multi-thread", "sync", "time"] }
url = "2"

[lints]
workspace = true
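The `lsp` crate's client drives a language server over stdio; messages on that channel use the LSP base-protocol framing of a `Content-Length` header followed by a JSON-RPC body. A minimal standalone sketch of that framing (the `frame` helper is hypothetical, not this crate's API):

```rust
// Wrap a JSON-RPC payload in the LSP base-protocol envelope:
// a Content-Length header, a blank line, then the body.
fn frame(payload: &str) -> String {
    format!("Content-Length: {}\r\n\r\n{}", payload.len(), payload)
}

fn main() {
    let body = r#"{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}"#;
    let framed = frame(body);
    // The header states the exact byte length of the body.
    assert!(framed.starts_with(&format!("Content-Length: {}\r\n\r\n", body.len())));
    assert!(framed.ends_with(body));
}
```

The reverse direction works the same way: the reader task parses the header, then reads exactly that many bytes before handing the JSON to the dispatch table.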
463 rust/crates/lsp/src/client.rs (new file)
@@ -0,0 +1,463 @@
use std::collections::BTreeMap;
use std::path::{Path, PathBuf};
use std::process::Stdio;
use std::sync::Arc;
use std::sync::atomic::{AtomicI64, Ordering};

use lsp_types::{
    Diagnostic, GotoDefinitionResponse, Location, LocationLink, Position, PublishDiagnosticsParams,
};
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, AsyncRead, AsyncReadExt, AsyncWriteExt, BufReader, BufWriter};
use tokio::process::{Child, ChildStdin, ChildStdout, Command};
use tokio::sync::{oneshot, Mutex};

use crate::error::LspError;
use crate::types::{LspServerConfig, SymbolLocation};

pub(crate) struct LspClient {
    config: LspServerConfig,
    writer: Mutex<BufWriter<ChildStdin>>,
    child: Mutex<Child>,
    pending_requests: Arc<Mutex<BTreeMap<i64, oneshot::Sender<Result<Value, LspError>>>>>,
    diagnostics: Arc<Mutex<BTreeMap<String, Vec<Diagnostic>>>>,
    open_documents: Mutex<BTreeMap<PathBuf, i32>>,
    next_request_id: AtomicI64,
}

impl LspClient {
    pub(crate) async fn connect(config: LspServerConfig) -> Result<Self, LspError> {
        let mut command = Command::new(&config.command);
        command
            .args(&config.args)
            .current_dir(&config.workspace_root)
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .envs(config.env.clone());

        let mut child = command.spawn()?;
        let stdin = child
            .stdin
            .take()
            .ok_or_else(|| LspError::Protocol("missing LSP stdin pipe".to_string()))?;
        let stdout = child
            .stdout
            .take()
            .ok_or_else(|| LspError::Protocol("missing LSP stdout pipe".to_string()))?;
        let stderr = child.stderr.take();

        let client = Self {
            config,
            writer: Mutex::new(BufWriter::new(stdin)),
            child: Mutex::new(child),
            pending_requests: Arc::new(Mutex::new(BTreeMap::new())),
            diagnostics: Arc::new(Mutex::new(BTreeMap::new())),
            open_documents: Mutex::new(BTreeMap::new()),
            next_request_id: AtomicI64::new(1),
        };

        client.spawn_reader(stdout);
        if let Some(stderr) = stderr {
            client.spawn_stderr_drain(stderr);
        }
        client.initialize().await?;
        Ok(client)
    }

    pub(crate) async fn ensure_document_open(&self, path: &Path) -> Result<(), LspError> {
        if self.is_document_open(path).await {
            return Ok(());
        }

        let contents = std::fs::read_to_string(path)?;
        self.open_document(path, &contents).await
    }

    pub(crate) async fn open_document(&self, path: &Path, text: &str) -> Result<(), LspError> {
        let uri = file_url(path)?;
        let language_id = self
            .config
            .language_id_for(path)
            .ok_or_else(|| LspError::UnsupportedDocument(path.to_path_buf()))?;

        self.notify(
            "textDocument/didOpen",
            json!({
                "textDocument": {
                    "uri": uri,
                    "languageId": language_id,
                    "version": 1,
                    "text": text,
                }
            }),
        )
        .await?;

        self.open_documents
            .lock()
            .await
            .insert(path.to_path_buf(), 1);
        Ok(())
    }

    pub(crate) async fn change_document(&self, path: &Path, text: &str) -> Result<(), LspError> {
        if !self.is_document_open(path).await {
            return self.open_document(path, text).await;
        }

        let uri = file_url(path)?;
        let next_version = {
            let mut open_documents = self.open_documents.lock().await;
            let version = open_documents
                .entry(path.to_path_buf())
                .and_modify(|value| *value += 1)
                .or_insert(1);
            *version
        };

        self.notify(
            "textDocument/didChange",
            json!({
                "textDocument": {
                    "uri": uri,
                    "version": next_version,
                },
                "contentChanges": [{
                    "text": text,
                }],
            }),
        )
        .await
    }

    pub(crate) async fn save_document(&self, path: &Path) -> Result<(), LspError> {
        if !self.is_document_open(path).await {
            return Ok(());
        }

        self.notify(
            "textDocument/didSave",
            json!({
                "textDocument": {
                    "uri": file_url(path)?,
                }
            }),
        )
        .await
    }

    pub(crate) async fn close_document(&self, path: &Path) -> Result<(), LspError> {
        if !self.is_document_open(path).await {
            return Ok(());
        }

        self.notify(
            "textDocument/didClose",
            json!({
                "textDocument": {
                    "uri": file_url(path)?,
                }
            }),
        )
        .await?;

        self.open_documents.lock().await.remove(path);
        Ok(())
    }

    pub(crate) async fn is_document_open(&self, path: &Path) -> bool {
        self.open_documents.lock().await.contains_key(path)
    }

    pub(crate) async fn go_to_definition(
        &self,
        path: &Path,
        position: Position,
    ) -> Result<Vec<SymbolLocation>, LspError> {
        self.ensure_document_open(path).await?;
        let response = self
            .request::<Option<GotoDefinitionResponse>>(
                "textDocument/definition",
                json!({
                    "textDocument": { "uri": file_url(path)? },
                    "position": position,
                }),
            )
            .await?;

        Ok(match response {
            Some(GotoDefinitionResponse::Scalar(location)) => {
                location_to_symbol_locations(vec![location])
            }
            Some(GotoDefinitionResponse::Array(locations)) => location_to_symbol_locations(locations),
            Some(GotoDefinitionResponse::Link(links)) => location_links_to_symbol_locations(links),
            None => Vec::new(),
        })
    }

    pub(crate) async fn find_references(
        &self,
        path: &Path,
        position: Position,
        include_declaration: bool,
    ) -> Result<Vec<SymbolLocation>, LspError> {
        self.ensure_document_open(path).await?;
        let response = self
            .request::<Option<Vec<Location>>>(
                "textDocument/references",
                json!({
                    "textDocument": { "uri": file_url(path)? },
                    "position": position,
                    "context": {
                        "includeDeclaration": include_declaration,
                    },
                }),
            )
            .await?;

        Ok(location_to_symbol_locations(response.unwrap_or_default()))
    }

    pub(crate) async fn diagnostics_snapshot(&self) -> BTreeMap<String, Vec<Diagnostic>> {
        self.diagnostics.lock().await.clone()
    }
pub(crate) async fn shutdown(&self) -> Result<(), LspError> {
|
||||
let _ = self.request::<Value>("shutdown", json!({})).await;
|
||||
let _ = self.notify("exit", Value::Null).await;
|
||||
|
||||
let mut child = self.child.lock().await;
|
||||
if child.kill().await.is_err() {
|
||||
let _ = child.wait().await;
|
||||
return Ok(());
|
||||
}
|
||||
let _ = child.wait().await;
|
||||
Ok(())
|
||||
}
|
||||
|
||||
fn spawn_reader(&self, stdout: ChildStdout) {
|
||||
let diagnostics = &self.diagnostics;
|
||||
let pending_requests = &self.pending_requests;
|
||||
|
||||
let diagnostics = diagnostics.clone();
|
||||
let pending_requests = pending_requests.clone();
|
||||
tokio::spawn(async move {
|
||||
let mut reader = BufReader::new(stdout);
|
||||
let result = async {
|
||||
while let Some(message) = read_message(&mut reader).await? {
|
||||
if let Some(id) = message.get("id").and_then(Value::as_i64) {
|
||||
let response = if let Some(error) = message.get("error") {
|
||||
Err(LspError::Protocol(error.to_string()))
|
||||
} else {
|
||||
Ok(message.get("result").cloned().unwrap_or(Value::Null))
|
||||
};
|
||||
|
||||
if let Some(sender) = pending_requests.lock().await.remove(&id) {
|
||||
let _ = sender.send(response);
|
||||
}
|
||||
continue;
|
||||
}
|
||||
|
||||
let Some(method) = message.get("method").and_then(Value::as_str) else {
|
||||
continue;
|
||||
};
|
||||
if method != "textDocument/publishDiagnostics" {
|
||||
continue;
|
||||
}
|
||||
|
||||
let params = message.get("params").cloned().unwrap_or(Value::Null);
|
||||
let notification = serde_json::from_value::<PublishDiagnosticsParams>(params)?;
|
||||
let mut diagnostics_map = diagnostics.lock().await;
|
||||
if notification.diagnostics.is_empty() {
|
||||
diagnostics_map.remove(¬ification.uri.to_string());
|
||||
} else {
|
||||
diagnostics_map.insert(notification.uri.to_string(), notification.diagnostics);
|
||||
}
|
||||
}
|
||||
Ok::<(), LspError>(())
|
||||
}
|
||||
.await;
|
||||
|
||||
if let Err(error) = result {
|
||||
let mut pending = pending_requests.lock().await;
|
||||
let drained = pending
|
||||
.iter()
|
||||
.map(|(id, _)| *id)
|
||||
.collect::<Vec<_>>();
|
||||
for id in drained {
|
||||
if let Some(sender) = pending.remove(&id) {
|
||||
let _ = sender.send(Err(LspError::Protocol(error.to_string())));
|
||||
}
|
||||
}
|
||||
}
|
||||
});
|
||||
}
|
||||
|
||||
fn spawn_stderr_drain<R>(&self, stderr: R)
|
||||
where
|
||||
R: AsyncRead + Unpin + Send + 'static,
|
||||
{
|
||||
tokio::spawn(async move {
|
||||
let mut reader = BufReader::new(stderr);
|
||||
let mut sink = Vec::new();
|
||||
let _ = reader.read_to_end(&mut sink).await;
|
||||
});
|
||||
}
|
||||
|
||||
async fn initialize(&self) -> Result<(), LspError> {
|
||||
let workspace_uri = file_url(&self.config.workspace_root)?;
|
||||
let _ = self
|
||||
.request::<Value>(
|
||||
"initialize",
|
||||
json!({
|
||||
"processId": std::process::id(),
|
||||
"rootUri": workspace_uri,
|
||||
"rootPath": self.config.workspace_root,
|
||||
"workspaceFolders": [{
|
||||
"uri": workspace_uri,
|
||||
"name": self.config.name,
|
||||
}],
|
||||
"initializationOptions": self.config.initialization_options.clone().unwrap_or(Value::Null),
|
||||
"capabilities": {
|
||||
"textDocument": {
|
||||
"publishDiagnostics": {
|
||||
"relatedInformation": true,
|
||||
},
|
||||
"definition": {
|
||||
"linkSupport": true,
|
||||
},
|
||||
"references": {}
|
||||
},
|
||||
"workspace": {
|
||||
"configuration": false,
|
||||
"workspaceFolders": true,
|
||||
},
|
||||
"general": {
|
||||
"positionEncodings": ["utf-16"],
|
||||
}
|
||||
}
|
||||
}),
|
||||
)
|
||||
.await?;
|
||||
self.notify("initialized", json!({})).await
|
||||
}
|
||||
|
||||
async fn request<T>(&self, method: &str, params: Value) -> Result<T, LspError>
|
||||
where
|
||||
T: for<'de> serde::Deserialize<'de>,
|
||||
{
|
||||
let id = self.next_request_id.fetch_add(1, Ordering::Relaxed);
|
||||
let (sender, receiver) = oneshot::channel();
|
||||
self.pending_requests.lock().await.insert(id, sender);
|
||||
|
||||
if let Err(error) = self
|
||||
.send_message(&json!({
|
||||
"jsonrpc": "2.0",
|
||||
"id": id,
|
||||
"method": method,
|
||||
"params": params,
|
||||
}))
|
||||
.await
|
||||
{
|
||||
self.pending_requests.lock().await.remove(&id);
|
||||
return Err(error);
|
||||
}
|
||||
|
||||
let response = receiver
|
||||
.await
|
||||
.map_err(|_| LspError::Protocol(format!("request channel closed for {method}")))??;
|
||||
Ok(serde_json::from_value(response)?)
|
||||
}
|
||||
|
||||
async fn notify(&self, method: &str, params: Value) -> Result<(), LspError> {
|
||||
self.send_message(&json!({
|
||||
"jsonrpc": "2.0",
|
||||
"method": method,
|
||||
"params": params,
|
||||
}))
|
||||
.await
|
||||
}
|
||||
|
||||
async fn send_message(&self, payload: &Value) -> Result<(), LspError> {
|
||||
let body = serde_json::to_vec(payload)?;
|
||||
let mut writer = self.writer.lock().await;
|
||||
writer
|
||||
.write_all(format!("Content-Length: {}\r\n\r\n", body.len()).as_bytes())
|
||||
.await?;
|
||||
writer.write_all(&body).await?;
|
||||
writer.flush().await?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
async fn read_message<R>(reader: &mut BufReader<R>) -> Result<Option<Value>, LspError>
|
||||
where
|
||||
R: AsyncRead + Unpin,
|
||||
{
|
||||
let mut content_length = None;
|
||||
|
||||
loop {
|
||||
let mut line = String::new();
|
||||
let read = reader.read_line(&mut line).await?;
|
||||
if read == 0 {
|
||||
return Ok(None);
|
||||
}
|
||||
|
||||
if line == "\r\n" {
|
||||
break;
|
||||
}
|
||||
|
||||
let trimmed = line.trim_end_matches(['\r', '\n']);
|
||||
if let Some((name, value)) = trimmed.split_once(':') {
|
||||
if name.eq_ignore_ascii_case("Content-Length") {
|
||||
let value = value.trim().to_string();
|
||||
content_length = Some(
|
||||
value
|
||||
.parse::<usize>()
|
||||
.map_err(|_| LspError::InvalidContentLength(value.clone()))?,
|
||||
);
|
||||
}
|
||||
} else {
|
||||
return Err(LspError::InvalidHeader(trimmed.to_string()));
|
||||
}
|
||||
}
|
||||
|
||||
let content_length = content_length.ok_or(LspError::MissingContentLength)?;
|
||||
let mut body = vec![0_u8; content_length];
|
||||
reader.read_exact(&mut body).await?;
|
||||
Ok(Some(serde_json::from_slice(&body)?))
|
||||
}
|
||||
|
||||
fn file_url(path: &Path) -> Result<String, LspError> {
|
||||
url::Url::from_file_path(path)
|
||||
.map(|url| url.to_string())
|
||||
.map_err(|()| LspError::PathToUrl(path.to_path_buf()))
|
||||
}
|
||||
|
||||
fn location_to_symbol_locations(locations: Vec<Location>) -> Vec<SymbolLocation> {
|
||||
locations
|
||||
.into_iter()
|
||||
.filter_map(|location| {
|
||||
uri_to_path(&location.uri.to_string()).map(|path| SymbolLocation {
|
||||
path,
|
||||
range: location.range,
|
||||
})
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn location_links_to_symbol_locations(links: Vec<LocationLink>) -> Vec<SymbolLocation> {
|
||||
links.into_iter()
|
||||
.filter_map(|link| {
|
||||
uri_to_path(&link.target_uri.to_string()).map(|path| SymbolLocation {
|
||||
path,
|
||||
range: link.target_selection_range,
|
||||
})
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
fn uri_to_path(uri: &str) -> Option<PathBuf> {
|
||||
url::Url::parse(uri).ok()?.to_file_path().ok()
|
||||
}
|
||||
62
rust/crates/lsp/src/error.rs
Normal file
@ -0,0 +1,62 @@
use std::fmt::{Display, Formatter};
use std::path::PathBuf;

#[derive(Debug)]
pub enum LspError {
    Io(std::io::Error),
    Json(serde_json::Error),
    InvalidHeader(String),
    MissingContentLength,
    InvalidContentLength(String),
    UnsupportedDocument(PathBuf),
    UnknownServer(String),
    DuplicateExtension {
        extension: String,
        existing_server: String,
        new_server: String,
    },
    PathToUrl(PathBuf),
    Protocol(String),
}

impl Display for LspError {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            Self::Io(error) => write!(f, "{error}"),
            Self::Json(error) => write!(f, "{error}"),
            Self::InvalidHeader(header) => write!(f, "invalid LSP header: {header}"),
            Self::MissingContentLength => write!(f, "missing LSP Content-Length header"),
            Self::InvalidContentLength(value) => {
                write!(f, "invalid LSP Content-Length value: {value}")
            }
            Self::UnsupportedDocument(path) => {
                write!(f, "no LSP server configured for {}", path.display())
            }
            Self::UnknownServer(name) => write!(f, "unknown LSP server: {name}"),
            Self::DuplicateExtension {
                extension,
                existing_server,
                new_server,
            } => write!(
                f,
                "duplicate LSP extension mapping for {extension}: {existing_server} and {new_server}"
            ),
            Self::PathToUrl(path) => {
                write!(f, "failed to convert path to file URL: {}", path.display())
            }
            Self::Protocol(message) => write!(f, "LSP protocol error: {message}"),
        }
    }
}

impl std::error::Error for LspError {}

impl From<std::io::Error> for LspError {
    fn from(value: std::io::Error) -> Self {
        Self::Io(value)
    }
}

impl From<serde_json::Error> for LspError {
    fn from(value: serde_json::Error) -> Self {
        Self::Json(value)
    }
}
283
rust/crates/lsp/src/lib.rs
Normal file
@ -0,0 +1,283 @@
mod client;
mod error;
mod manager;
mod types;

pub use error::LspError;
pub use manager::LspManager;
pub use types::{
    FileDiagnostics, LspContextEnrichment, LspServerConfig, SymbolLocation, WorkspaceDiagnostics,
};

#[cfg(test)]
mod tests {
    use std::collections::BTreeMap;
    use std::fs;
    use std::path::PathBuf;
    use std::process::Command;
    use std::time::{Duration, SystemTime, UNIX_EPOCH};

    use lsp_types::{DiagnosticSeverity, Position};

    use crate::{LspManager, LspServerConfig};

    fn temp_dir(label: &str) -> PathBuf {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("time should be after epoch")
            .as_nanos();
        std::env::temp_dir().join(format!("lsp-{label}-{nanos}"))
    }

    fn python3_path() -> Option<String> {
        let candidates = ["python3", "/usr/bin/python3"];
        candidates.iter().find_map(|candidate| {
            Command::new(candidate)
                .arg("--version")
                .output()
                .ok()
                .filter(|output| output.status.success())
                .map(|_| (*candidate).to_string())
        })
    }

    fn write_mock_server_script(root: &std::path::Path) -> PathBuf {
        let script_path = root.join("mock_lsp_server.py");
        fs::write(
            &script_path,
            r#"import json
import sys


def read_message():
    headers = {}
    while True:
        line = sys.stdin.buffer.readline()
        if not line:
            return None
        if line == b"\r\n":
            break
        key, value = line.decode("utf-8").split(":", 1)
        headers[key.lower()] = value.strip()
    length = int(headers["content-length"])
    body = sys.stdin.buffer.read(length)
    return json.loads(body)


def write_message(payload):
    raw = json.dumps(payload).encode("utf-8")
    sys.stdout.buffer.write(f"Content-Length: {len(raw)}\r\n\r\n".encode("utf-8"))
    sys.stdout.buffer.write(raw)
    sys.stdout.buffer.flush()


while True:
    message = read_message()
    if message is None:
        break

    method = message.get("method")
    if method == "initialize":
        write_message({
            "jsonrpc": "2.0",
            "id": message["id"],
            "result": {
                "capabilities": {
                    "definitionProvider": True,
                    "referencesProvider": True,
                    "textDocumentSync": 1,
                }
            },
        })
    elif method == "initialized":
        continue
    elif method == "textDocument/didOpen":
        document = message["params"]["textDocument"]
        write_message({
            "jsonrpc": "2.0",
            "method": "textDocument/publishDiagnostics",
            "params": {
                "uri": document["uri"],
                "diagnostics": [
                    {
                        "range": {
                            "start": {"line": 0, "character": 0},
                            "end": {"line": 0, "character": 3},
                        },
                        "severity": 1,
                        "source": "mock-server",
                        "message": "mock error",
                    }
                ],
            },
        })
    elif method == "textDocument/didChange":
        continue
    elif method == "textDocument/didSave":
        continue
    elif method == "textDocument/definition":
        uri = message["params"]["textDocument"]["uri"]
        write_message({
            "jsonrpc": "2.0",
            "id": message["id"],
            "result": [
                {
                    "uri": uri,
                    "range": {
                        "start": {"line": 0, "character": 0},
                        "end": {"line": 0, "character": 3},
                    },
                }
            ],
        })
    elif method == "textDocument/references":
        uri = message["params"]["textDocument"]["uri"]
        write_message({
            "jsonrpc": "2.0",
            "id": message["id"],
            "result": [
                {
                    "uri": uri,
                    "range": {
                        "start": {"line": 0, "character": 0},
                        "end": {"line": 0, "character": 3},
                    },
                },
                {
                    "uri": uri,
                    "range": {
                        "start": {"line": 1, "character": 4},
                        "end": {"line": 1, "character": 7},
                    },
                },
            ],
        })
    elif method == "shutdown":
        write_message({"jsonrpc": "2.0", "id": message["id"], "result": None})
    elif method == "exit":
        break
"#,
        )
        .expect("mock server should be written");
        script_path
    }

    async fn wait_for_diagnostics(manager: &LspManager) {
        tokio::time::timeout(Duration::from_secs(2), async {
            loop {
                if manager
                    .collect_workspace_diagnostics()
                    .await
                    .expect("diagnostics snapshot should load")
                    .total_diagnostics()
                    > 0
                {
                    break;
                }
                tokio::time::sleep(Duration::from_millis(10)).await;
            }
        })
        .await
        .expect("diagnostics should arrive from mock server");
    }

    #[tokio::test(flavor = "current_thread")]
    async fn collects_diagnostics_and_symbol_navigation_from_mock_server() {
        let Some(python) = python3_path() else {
            return;
        };

        // given
        let root = temp_dir("manager");
        fs::create_dir_all(root.join("src")).expect("workspace root should exist");
        let script_path = write_mock_server_script(&root);
        let source_path = root.join("src").join("main.rs");
        fs::write(&source_path, "fn main() {}\nlet value = 1;\n")
            .expect("source file should exist");
        let manager = LspManager::new(vec![LspServerConfig {
            name: "rust-analyzer".to_string(),
            command: python,
            args: vec![script_path.display().to_string()],
            env: BTreeMap::new(),
            workspace_root: root.clone(),
            initialization_options: None,
            extension_to_language: BTreeMap::from([(".rs".to_string(), "rust".to_string())]),
        }])
        .expect("manager should build");
        manager
            .open_document(
                &source_path,
                &fs::read_to_string(&source_path).expect("source read should succeed"),
            )
            .await
            .expect("document should open");
        wait_for_diagnostics(&manager).await;

        // when
        let diagnostics = manager
            .collect_workspace_diagnostics()
            .await
            .expect("diagnostics should be available");
        let definitions = manager
            .go_to_definition(&source_path, Position::new(0, 0))
            .await
            .expect("definition request should succeed");
        let references = manager
            .find_references(&source_path, Position::new(0, 0), true)
            .await
            .expect("references request should succeed");

        // then
        assert_eq!(diagnostics.files.len(), 1);
        assert_eq!(diagnostics.total_diagnostics(), 1);
        assert_eq!(
            diagnostics.files[0].diagnostics[0].severity,
            Some(DiagnosticSeverity::ERROR)
        );
        assert_eq!(definitions.len(), 1);
        assert_eq!(definitions[0].start_line(), 1);
        assert_eq!(references.len(), 2);

        manager.shutdown().await.expect("shutdown should succeed");
        fs::remove_dir_all(root).expect("temp workspace should be removed");
    }

    #[tokio::test(flavor = "current_thread")]
    async fn renders_runtime_context_enrichment_for_prompt_usage() {
        let Some(python) = python3_path() else {
            return;
        };

        // given
        let root = temp_dir("prompt");
        fs::create_dir_all(root.join("src")).expect("workspace root should exist");
        let script_path = write_mock_server_script(&root);
        let source_path = root.join("src").join("lib.rs");
        fs::write(&source_path, "pub fn answer() -> i32 { 42 }\n")
            .expect("source file should exist");
        let manager = LspManager::new(vec![LspServerConfig {
            name: "rust-analyzer".to_string(),
            command: python,
            args: vec![script_path.display().to_string()],
            env: BTreeMap::new(),
            workspace_root: root.clone(),
            initialization_options: None,
            extension_to_language: BTreeMap::from([(".rs".to_string(), "rust".to_string())]),
        }])
        .expect("manager should build");
        manager
            .open_document(
                &source_path,
                &fs::read_to_string(&source_path).expect("source read should succeed"),
            )
            .await
            .expect("document should open");
        wait_for_diagnostics(&manager).await;

        // when
        let enrichment = manager
            .context_enrichment(&source_path, Position::new(0, 0))
            .await
            .expect("context enrichment should succeed");
        let rendered = enrichment.render_prompt_section();

        // then
        assert!(rendered.contains("# LSP context"));
        assert!(rendered.contains("Workspace diagnostics: 1 across 1 file(s)"));
        assert!(rendered.contains("Definitions:"));
        assert!(rendered.contains("References:"));
        assert!(rendered.contains("mock error"));

        manager.shutdown().await.expect("shutdown should succeed");
        fs::remove_dir_all(root).expect("temp workspace should be removed");
    }
}
191
rust/crates/lsp/src/manager.rs
Normal file
@ -0,0 +1,191 @@
use std::collections::{BTreeMap, BTreeSet};
use std::path::Path;
use std::sync::Arc;

use lsp_types::Position;
use tokio::sync::Mutex;

use crate::client::LspClient;
use crate::error::LspError;
use crate::types::{
    normalize_extension, FileDiagnostics, LspContextEnrichment, LspServerConfig, SymbolLocation,
    WorkspaceDiagnostics,
};

pub struct LspManager {
    server_configs: BTreeMap<String, LspServerConfig>,
    extension_map: BTreeMap<String, String>,
    clients: Mutex<BTreeMap<String, Arc<LspClient>>>,
}

impl LspManager {
    pub fn new(server_configs: Vec<LspServerConfig>) -> Result<Self, LspError> {
        let mut configs_by_name = BTreeMap::new();
        let mut extension_map = BTreeMap::new();

        for config in server_configs {
            for extension in config.extension_to_language.keys() {
                let normalized = normalize_extension(extension);
                if let Some(existing_server) =
                    extension_map.insert(normalized.clone(), config.name.clone())
                {
                    return Err(LspError::DuplicateExtension {
                        extension: normalized,
                        existing_server,
                        new_server: config.name.clone(),
                    });
                }
            }
            configs_by_name.insert(config.name.clone(), config);
        }

        Ok(Self {
            server_configs: configs_by_name,
            extension_map,
            clients: Mutex::new(BTreeMap::new()),
        })
    }

    #[must_use]
    pub fn supports_path(&self, path: &Path) -> bool {
        path.extension().is_some_and(|extension| {
            let normalized = normalize_extension(extension.to_string_lossy().as_ref());
            self.extension_map.contains_key(&normalized)
        })
    }

    pub async fn open_document(&self, path: &Path, text: &str) -> Result<(), LspError> {
        self.client_for_path(path).await?.open_document(path, text).await
    }

    pub async fn sync_document_from_disk(&self, path: &Path) -> Result<(), LspError> {
        let contents = std::fs::read_to_string(path)?;
        self.change_document(path, &contents).await?;
        self.save_document(path).await
    }

    pub async fn change_document(&self, path: &Path, text: &str) -> Result<(), LspError> {
        self.client_for_path(path).await?.change_document(path, text).await
    }

    pub async fn save_document(&self, path: &Path) -> Result<(), LspError> {
        self.client_for_path(path).await?.save_document(path).await
    }

    pub async fn close_document(&self, path: &Path) -> Result<(), LspError> {
        self.client_for_path(path).await?.close_document(path).await
    }

    pub async fn go_to_definition(
        &self,
        path: &Path,
        position: Position,
    ) -> Result<Vec<SymbolLocation>, LspError> {
        let mut locations = self
            .client_for_path(path)
            .await?
            .go_to_definition(path, position)
            .await?;
        dedupe_locations(&mut locations);
        Ok(locations)
    }

    pub async fn find_references(
        &self,
        path: &Path,
        position: Position,
        include_declaration: bool,
    ) -> Result<Vec<SymbolLocation>, LspError> {
        let mut locations = self
            .client_for_path(path)
            .await?
            .find_references(path, position, include_declaration)
            .await?;
        dedupe_locations(&mut locations);
        Ok(locations)
    }

    pub async fn collect_workspace_diagnostics(&self) -> Result<WorkspaceDiagnostics, LspError> {
        let clients = self.clients.lock().await.values().cloned().collect::<Vec<_>>();
        let mut files = Vec::new();

        for client in clients {
            for (uri, diagnostics) in client.diagnostics_snapshot().await {
                let Ok(path) = url::Url::parse(&uri).and_then(|url| {
                    url.to_file_path()
                        .map_err(|()| url::ParseError::RelativeUrlWithoutBase)
                }) else {
                    continue;
                };
                if diagnostics.is_empty() {
                    continue;
                }
                files.push(FileDiagnostics {
                    path,
                    uri,
                    diagnostics,
                });
            }
        }

        files.sort_by(|left, right| left.path.cmp(&right.path));
        Ok(WorkspaceDiagnostics { files })
    }

    pub async fn context_enrichment(
        &self,
        path: &Path,
        position: Position,
    ) -> Result<LspContextEnrichment, LspError> {
        Ok(LspContextEnrichment {
            file_path: path.to_path_buf(),
            diagnostics: self.collect_workspace_diagnostics().await?,
            definitions: self.go_to_definition(path, position).await?,
            references: self.find_references(path, position, true).await?,
        })
    }

    pub async fn shutdown(&self) -> Result<(), LspError> {
        let mut clients = self.clients.lock().await;
        let drained = clients.values().cloned().collect::<Vec<_>>();
        clients.clear();
        drop(clients);

        for client in drained {
            client.shutdown().await?;
        }
        Ok(())
    }

    async fn client_for_path(&self, path: &Path) -> Result<Arc<LspClient>, LspError> {
        let extension = path
            .extension()
            .map(|extension| normalize_extension(extension.to_string_lossy().as_ref()))
            .ok_or_else(|| LspError::UnsupportedDocument(path.to_path_buf()))?;
        let server_name = self
            .extension_map
            .get(&extension)
            .cloned()
            .ok_or_else(|| LspError::UnsupportedDocument(path.to_path_buf()))?;

        let mut clients = self.clients.lock().await;
        if let Some(client) = clients.get(&server_name) {
            return Ok(client.clone());
        }

        let config = self
            .server_configs
            .get(&server_name)
            .cloned()
            .ok_or_else(|| LspError::UnknownServer(server_name.clone()))?;
        let client = Arc::new(LspClient::connect(config).await?);
        clients.insert(server_name, client.clone());
        Ok(client)
    }
}

fn dedupe_locations(locations: &mut Vec<SymbolLocation>) {
    let mut seen = BTreeSet::new();
    locations.retain(|location| {
        seen.insert((
            location.path.clone(),
            location.range.start.line,
            location.range.start.character,
            location.range.end.line,
            location.range.end.character,
        ))
    });
}
186
rust/crates/lsp/src/types.rs
Normal file
@ -0,0 +1,186 @@
use std::collections::BTreeMap;
|
||||
use std::fmt::{Display, Formatter};
|
||||
use std::path::{Path, PathBuf};
|
||||
|
||||
use lsp_types::{Diagnostic, Range};
|
||||
use serde_json::Value;
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
pub struct LspServerConfig {
|
||||
pub name: String,
|
||||
pub command: String,
|
||||
pub args: Vec<String>,
|
||||
pub env: BTreeMap<String, String>,
|
||||
pub workspace_root: PathBuf,
|
||||
pub initialization_options: Option<Value>,
|
||||
pub extension_to_language: BTreeMap<String, String>,
|
||||
}
|
||||
|
||||
impl LspServerConfig {
|
||||
#[must_use]
|
||||
pub fn language_id_for(&self, path: &Path) -> Option<&str> {
|
||||
let extension = normalize_extension(path.extension()?.to_string_lossy().as_ref());
|
||||
self.extension_to_language
|
||||
.get(&extension)
|
||||
.map(String::as_str)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq)]
|
||||
pub struct FileDiagnostics {
|
||||
pub path: PathBuf,
|
||||
pub uri: String,
|
||||
pub diagnostics: Vec<Diagnostic>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Default, PartialEq)]
|
||||
pub struct WorkspaceDiagnostics {
|
||||
pub files: Vec<FileDiagnostics>,
|
||||
}
|
||||
|
||||
impl WorkspaceDiagnostics {
|
||||
#[must_use]
|
||||
pub fn is_empty(&self) -> bool {
|
||||
self.files.is_empty()
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn total_diagnostics(&self) -> usize {
|
||||
self.files.iter().map(|file| file.diagnostics.len()).sum()
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, PartialEq, Eq)]
|
||||
pub struct SymbolLocation {
|
||||
pub path: PathBuf,
|
||||
pub range: Range,
|
||||
}
|
||||
|
||||
impl SymbolLocation {
|
||||
#[must_use]
|
||||
pub fn start_line(&self) -> u32 {
|
||||
self.range.start.line + 1
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn start_character(&self) -> u32 {
|
||||
self.range.start.character + 1
|
||||
}
|
||||
}
|
||||
|
||||
impl Display for SymbolLocation {
|
||||
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
|
||||
write!(
|
||||
f,
|
||||
"{}:{}:{}",
|
||||
self.path.display(),
|
||||
self.start_line(),
|
||||
self.start_character()
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Default, PartialEq)]
|
||||
pub struct LspContextEnrichment {
|
||||
pub file_path: PathBuf,
|
||||
pub diagnostics: WorkspaceDiagnostics,
|
||||
pub definitions: Vec<SymbolLocation>,
|
||||
pub references: Vec<SymbolLocation>,
|
||||
}
|
||||
|
||||
impl LspContextEnrichment {
|
||||
#[must_use]
|
||||
pub fn is_empty(&self) -> bool {
|
||||
self.diagnostics.is_empty() && self.definitions.is_empty() && self.references.is_empty()
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
    pub fn render_prompt_section(&self) -> String {
        const MAX_RENDERED_DIAGNOSTICS: usize = 12;
        const MAX_RENDERED_LOCATIONS: usize = 12;

        let mut lines = vec!["# LSP context".to_string()];
        lines.push(format!(" - Focus file: {}", self.file_path.display()));
        lines.push(format!(
            " - Workspace diagnostics: {} across {} file(s)",
            self.diagnostics.total_diagnostics(),
            self.diagnostics.files.len()
        ));

        if !self.diagnostics.files.is_empty() {
            lines.push(String::new());
            lines.push("Diagnostics:".to_string());
            let mut rendered = 0usize;
            for file in &self.diagnostics.files {
                for diagnostic in &file.diagnostics {
                    if rendered == MAX_RENDERED_DIAGNOSTICS {
                        lines.push(" - Additional diagnostics omitted for brevity.".to_string());
                        break;
                    }
                    let severity = diagnostic_severity_label(diagnostic.severity);
                    lines.push(format!(
                        " - {}:{}:{} [{}] {}",
                        file.path.display(),
                        diagnostic.range.start.line + 1,
                        diagnostic.range.start.character + 1,
                        severity,
                        diagnostic.message.replace('\n', " ")
                    ));
                    rendered += 1;
                }
                if rendered == MAX_RENDERED_DIAGNOSTICS {
                    break;
                }
            }
        }

        if !self.definitions.is_empty() {
            lines.push(String::new());
            lines.push("Definitions:".to_string());
            lines.extend(
                self.definitions
                    .iter()
                    .take(MAX_RENDERED_LOCATIONS)
                    .map(|location| format!(" - {location}")),
            );
            if self.definitions.len() > MAX_RENDERED_LOCATIONS {
                lines.push(" - Additional definitions omitted for brevity.".to_string());
            }
        }

        if !self.references.is_empty() {
            lines.push(String::new());
            lines.push("References:".to_string());
            lines.extend(
                self.references
                    .iter()
                    .take(MAX_RENDERED_LOCATIONS)
                    .map(|location| format!(" - {location}")),
            );
            if self.references.len() > MAX_RENDERED_LOCATIONS {
                lines.push(" - Additional references omitted for brevity.".to_string());
            }
        }

        lines.join("\n")
    }
}

#[must_use]
pub(crate) fn normalize_extension(extension: &str) -> String {
    if extension.starts_with('.') {
        extension.to_ascii_lowercase()
    } else {
        format!(".{}", extension.to_ascii_lowercase())
    }
}
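As a quick illustration of the normalization above, here is a standalone sketch; the function body is copied from the diff, while the `main` harness is added for demonstration:

```rust
// Copied from the diff above: lowercases an extension and ensures a leading dot.
fn normalize_extension(extension: &str) -> String {
    if extension.starts_with('.') {
        extension.to_ascii_lowercase()
    } else {
        format!(".{}", extension.to_ascii_lowercase())
    }
}

fn main() {
    // Both spellings normalize to the same key, so callers can index
    // language tables by ".rs" regardless of how the extension was written.
    assert_eq!(normalize_extension("RS"), ".rs");
    assert_eq!(normalize_extension(".Rs"), ".rs");
    assert_eq!(normalize_extension("tar.GZ"), ".tar.gz");
}
```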

fn diagnostic_severity_label(severity: Option<lsp_types::DiagnosticSeverity>) -> &'static str {
    match severity {
        Some(lsp_types::DiagnosticSeverity::ERROR) => "error",
        Some(lsp_types::DiagnosticSeverity::WARNING) => "warning",
        Some(lsp_types::DiagnosticSeverity::INFORMATION) => "info",
        Some(lsp_types::DiagnosticSeverity::HINT) => "hint",
        _ => "unknown",
    }
}
13 rust/crates/plugins/Cargo.toml Normal file
@@ -0,0 +1,13 @@
[package]
name = "plugins"
version.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true

[dependencies]
serde = { version = "1", features = ["derive"] }
serde_json.workspace = true

[lints]
workspace = true
@@ -0,0 +1,10 @@
{
  "name": "example-bundled",
  "version": "0.1.0",
  "description": "Example bundled plugin scaffold for the Rust plugin system",
  "defaultEnabled": false,
  "hooks": {
    "PreToolUse": ["./hooks/pre.sh"],
    "PostToolUse": ["./hooks/post.sh"]
  }
}
@@ -0,0 +1,2 @@
#!/bin/sh
printf '%s\n' 'example bundled post hook'
2 rust/crates/plugins/bundled/example-bundled/hooks/pre.sh Normal file
@@ -0,0 +1,2 @@
#!/bin/sh
printf '%s\n' 'example bundled pre hook'
@@ -0,0 +1,10 @@
{
  "name": "sample-hooks",
  "version": "0.1.0",
  "description": "Bundled sample plugin scaffold for hook integration tests.",
  "defaultEnabled": false,
  "hooks": {
    "PreToolUse": ["./hooks/pre.sh"],
    "PostToolUse": ["./hooks/post.sh"]
  }
}
2 rust/crates/plugins/bundled/sample-hooks/hooks/post.sh Normal file
@@ -0,0 +1,2 @@
#!/bin/sh
printf 'sample bundled post hook'
2 rust/crates/plugins/bundled/sample-hooks/hooks/pre.sh Normal file
@@ -0,0 +1,2 @@
#!/bin/sh
printf 'sample bundled pre hook'
395 rust/crates/plugins/src/hooks.rs Normal file
@@ -0,0 +1,395 @@
use std::ffi::OsStr;
use std::path::Path;
use std::process::Command;

use serde_json::json;

use crate::{PluginError, PluginHooks, PluginRegistry};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HookEvent {
    PreToolUse,
    PostToolUse,
}

impl HookEvent {
    fn as_str(self) -> &'static str {
        match self {
            Self::PreToolUse => "PreToolUse",
            Self::PostToolUse => "PostToolUse",
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HookRunResult {
    denied: bool,
    messages: Vec<String>,
}

impl HookRunResult {
    #[must_use]
    pub fn allow(messages: Vec<String>) -> Self {
        Self {
            denied: false,
            messages,
        }
    }

    #[must_use]
    pub fn is_denied(&self) -> bool {
        self.denied
    }

    #[must_use]
    pub fn messages(&self) -> &[String] {
        &self.messages
    }
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct HookRunner {
    hooks: PluginHooks,
}

impl HookRunner {
    #[must_use]
    pub fn new(hooks: PluginHooks) -> Self {
        Self { hooks }
    }

    pub fn from_registry(plugin_registry: &PluginRegistry) -> Result<Self, PluginError> {
        Ok(Self::new(plugin_registry.aggregated_hooks()?))
    }

    #[must_use]
    pub fn run_pre_tool_use(&self, tool_name: &str, tool_input: &str) -> HookRunResult {
        self.run_commands(
            HookEvent::PreToolUse,
            &self.hooks.pre_tool_use,
            tool_name,
            tool_input,
            None,
            false,
        )
    }

    #[must_use]
    pub fn run_post_tool_use(
        &self,
        tool_name: &str,
        tool_input: &str,
        tool_output: &str,
        is_error: bool,
    ) -> HookRunResult {
        self.run_commands(
            HookEvent::PostToolUse,
            &self.hooks.post_tool_use,
            tool_name,
            tool_input,
            Some(tool_output),
            is_error,
        )
    }

    fn run_commands(
        &self,
        event: HookEvent,
        commands: &[String],
        tool_name: &str,
        tool_input: &str,
        tool_output: Option<&str>,
        is_error: bool,
    ) -> HookRunResult {
        if commands.is_empty() {
            return HookRunResult::allow(Vec::new());
        }

        let payload = json!({
            "hook_event_name": event.as_str(),
            "tool_name": tool_name,
            "tool_input": parse_tool_input(tool_input),
            "tool_input_json": tool_input,
            "tool_output": tool_output,
            "tool_result_is_error": is_error,
        })
        .to_string();

        let mut messages = Vec::new();

        for command in commands {
            match self.run_command(
                command,
                event,
                tool_name,
                tool_input,
                tool_output,
                is_error,
                &payload,
            ) {
                HookCommandOutcome::Allow { message } => {
                    if let Some(message) = message {
                        messages.push(message);
                    }
                }
                HookCommandOutcome::Deny { message } => {
                    messages.push(message.unwrap_or_else(|| {
                        format!("{} hook denied tool `{tool_name}`", event.as_str())
                    }));
                    return HookRunResult {
                        denied: true,
                        messages,
                    };
                }
                HookCommandOutcome::Warn { message } => messages.push(message),
            }
        }

        HookRunResult::allow(messages)
    }

    #[allow(clippy::too_many_arguments, clippy::unused_self)]
    fn run_command(
        &self,
        command: &str,
        event: HookEvent,
        tool_name: &str,
        tool_input: &str,
        tool_output: Option<&str>,
        is_error: bool,
        payload: &str,
    ) -> HookCommandOutcome {
        let mut child = shell_command(command);
        child.stdin(std::process::Stdio::piped());
        child.stdout(std::process::Stdio::piped());
        child.stderr(std::process::Stdio::piped());
        child.env("HOOK_EVENT", event.as_str());
        child.env("HOOK_TOOL_NAME", tool_name);
        child.env("HOOK_TOOL_INPUT", tool_input);
        child.env("HOOK_TOOL_IS_ERROR", if is_error { "1" } else { "0" });
        if let Some(tool_output) = tool_output {
            child.env("HOOK_TOOL_OUTPUT", tool_output);
        }

        match child.output_with_stdin(payload.as_bytes()) {
            Ok(output) => {
                let stdout = String::from_utf8_lossy(&output.stdout).trim().to_string();
                let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
                let message = (!stdout.is_empty()).then_some(stdout);
                match output.status.code() {
                    Some(0) => HookCommandOutcome::Allow { message },
                    Some(2) => HookCommandOutcome::Deny { message },
                    Some(code) => HookCommandOutcome::Warn {
                        message: format_hook_warning(
                            command,
                            code,
                            message.as_deref(),
                            stderr.as_str(),
                        ),
                    },
                    None => HookCommandOutcome::Warn {
                        message: format!(
                            "{} hook `{command}` terminated by signal while handling `{tool_name}`",
                            event.as_str()
                        ),
                    },
                }
            }
            Err(error) => HookCommandOutcome::Warn {
                message: format!(
                    "{} hook `{command}` failed to start for `{tool_name}`: {error}",
                    event.as_str()
                ),
            },
        }
    }
}

enum HookCommandOutcome {
    Allow { message: Option<String> },
    Deny { message: Option<String> },
    Warn { message: String },
}

fn parse_tool_input(tool_input: &str) -> serde_json::Value {
    serde_json::from_str(tool_input).unwrap_or_else(|_| json!({ "raw": tool_input }))
}

fn format_hook_warning(command: &str, code: i32, stdout: Option<&str>, stderr: &str) -> String {
    let mut message =
        format!("Hook `{command}` exited with status {code}; allowing tool execution to continue");
    if let Some(stdout) = stdout.filter(|stdout| !stdout.is_empty()) {
        message.push_str(": ");
        message.push_str(stdout);
    } else if !stderr.is_empty() {
        message.push_str(": ");
        message.push_str(stderr);
    }
    message
}

fn shell_command(command: &str) -> CommandWithStdin {
    #[cfg(windows)]
    let command_builder = {
        let mut command_builder = Command::new("cmd");
        command_builder.arg("/C").arg(command);
        CommandWithStdin::new(command_builder)
    };

    #[cfg(not(windows))]
    let command_builder = if Path::new(command).exists() {
        let mut command_builder = Command::new("sh");
        command_builder.arg(command);
        CommandWithStdin::new(command_builder)
    } else {
        let mut command_builder = Command::new("sh");
        command_builder.arg("-lc").arg(command);
        CommandWithStdin::new(command_builder)
    };

    command_builder
}

struct CommandWithStdin {
    command: Command,
}

impl CommandWithStdin {
    fn new(command: Command) -> Self {
        Self { command }
    }

    fn stdin(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stdin(cfg);
        self
    }

    fn stdout(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stdout(cfg);
        self
    }

    fn stderr(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stderr(cfg);
        self
    }

    fn env<K, V>(&mut self, key: K, value: V) -> &mut Self
    where
        K: AsRef<OsStr>,
        V: AsRef<OsStr>,
    {
        self.command.env(key, value);
        self
    }

    fn output_with_stdin(&mut self, stdin: &[u8]) -> std::io::Result<std::process::Output> {
        let mut child = self.command.spawn()?;
        if let Some(mut child_stdin) = child.stdin.take() {
            use std::io::Write as _;
            child_stdin.write_all(stdin)?;
        }
        child.wait_with_output()
    }
}

#[cfg(test)]
mod tests {
    use super::{HookRunResult, HookRunner};
    use crate::{PluginManager, PluginManagerConfig};
    use std::fs;
    use std::path::{Path, PathBuf};
    use std::time::{SystemTime, UNIX_EPOCH};

    fn temp_dir(label: &str) -> PathBuf {
        let nanos = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("time should be after epoch")
            .as_nanos();
        std::env::temp_dir().join(format!("plugins-hook-runner-{label}-{nanos}"))
    }

    fn write_hook_plugin(root: &Path, name: &str, pre_message: &str, post_message: &str) {
        fs::create_dir_all(root.join(".claw-plugin")).expect("manifest dir");
        fs::create_dir_all(root.join("hooks")).expect("hooks dir");
        fs::write(
            root.join("hooks").join("pre.sh"),
            format!("#!/bin/sh\nprintf '%s\\n' '{pre_message}'\n"),
        )
        .expect("write pre hook");
        fs::write(
            root.join("hooks").join("post.sh"),
            format!("#!/bin/sh\nprintf '%s\\n' '{post_message}'\n"),
        )
        .expect("write post hook");
        fs::write(
            root.join(".claw-plugin").join("plugin.json"),
            format!(
                "{{\n \"name\": \"{name}\",\n \"version\": \"1.0.0\",\n \"description\": \"hook plugin\",\n \"hooks\": {{\n \"PreToolUse\": [\"./hooks/pre.sh\"],\n \"PostToolUse\": [\"./hooks/post.sh\"]\n }}\n}}"
            ),
        )
        .expect("write plugin manifest");
    }

    #[test]
    fn collects_and_runs_hooks_from_enabled_plugins() {
        let config_home = temp_dir("config");
        let first_source_root = temp_dir("source-a");
        let second_source_root = temp_dir("source-b");
        write_hook_plugin(
            &first_source_root,
            "first",
            "plugin pre one",
            "plugin post one",
        );
        write_hook_plugin(
            &second_source_root,
            "second",
            "plugin pre two",
            "plugin post two",
        );

        let mut manager = PluginManager::new(PluginManagerConfig::new(&config_home));
        manager
            .install(first_source_root.to_str().expect("utf8 path"))
            .expect("first plugin install should succeed");
        manager
            .install(second_source_root.to_str().expect("utf8 path"))
            .expect("second plugin install should succeed");
        let registry = manager.plugin_registry().expect("registry should build");

        let runner = HookRunner::from_registry(&registry).expect("plugin hooks should load");

        assert_eq!(
            runner.run_pre_tool_use("Read", r#"{"path":"README.md"}"#),
            HookRunResult::allow(vec![
                "plugin pre one".to_string(),
                "plugin pre two".to_string(),
            ])
        );
        assert_eq!(
            runner.run_post_tool_use("Read", r#"{"path":"README.md"}"#, "ok", false),
            HookRunResult::allow(vec![
                "plugin post one".to_string(),
                "plugin post two".to_string(),
            ])
        );

        let _ = fs::remove_dir_all(config_home);
        let _ = fs::remove_dir_all(first_source_root);
        let _ = fs::remove_dir_all(second_source_root);
    }

    #[test]
    fn pre_tool_use_denies_when_plugin_hook_exits_two() {
        let runner = HookRunner::new(crate::PluginHooks {
            pre_tool_use: vec!["printf 'blocked by plugin'; exit 2".to_string()],
            post_tool_use: Vec::new(),
        });

        let result = runner.run_pre_tool_use("Bash", r#"{"command":"pwd"}"#);

        assert!(result.is_denied());
        assert_eq!(result.messages(), &["blocked by plugin".to_string()]);
    }
}
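The exit-code contract implemented in `run_command` above (0 allows, 2 denies and short-circuits the remaining hooks, any other status warns and continues, no status means killed by a signal) can be summarized in a standalone sketch; the mapping mirrors the diff, while the function name `classify_exit` is an invention for illustration:

```rust
// Mirrors HookCommandOutcome selection in run_command: a hook communicates
// its verdict purely through the process exit status.
fn classify_exit(code: Option<i32>) -> &'static str {
    match code {
        Some(0) => "allow", // hook succeeded; its stdout becomes an advisory message
        Some(2) => "deny",  // hook vetoes the tool call; later hooks are skipped
        Some(_) => "warn",  // unexpected status; execution continues with a warning
        None => "warn",     // terminated by a signal; also treated as a warning
    }
}

fn main() {
    assert_eq!(classify_exit(Some(0)), "allow");
    assert_eq!(classify_exit(Some(2)), "deny");
    assert_eq!(classify_exit(Some(137)), "warn");
    assert_eq!(classify_exit(None), "warn");
}
```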
2943 rust/crates/plugins/src/lib.rs Normal file
File diff suppressed because it is too large Load Diff
@@ -8,9 +8,11 @@ publish.workspace = true
[dependencies]
sha2 = "0.10"
glob = "0.3"
lsp = { path = "../lsp" }
plugins = { path = "../plugins" }
regex = "1"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
serde_json.workspace = true
tokio = { version = "1", features = ["io-util", "macros", "process", "rt", "rt-multi-thread", "time"] }
walkdir = "2"
@@ -21,7 +21,7 @@ pub struct BootstrapPlan {

impl BootstrapPlan {
    #[must_use]
    pub fn claude_code_default() -> Self {
    pub fn claw_default() -> Self {
        Self::from_phases(vec![
            BootstrapPhase::CliEntry,
            BootstrapPhase::FastPathVersion,
@@ -1,5 +1,10 @@
use crate::session::{ContentBlock, ConversationMessage, MessageRole, Session};

const COMPACT_CONTINUATION_PREAMBLE: &str =
    "This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.\n\n";
const COMPACT_RECENT_MESSAGES_NOTE: &str = "Recent messages are preserved verbatim.";
const COMPACT_DIRECT_RESUME_INSTRUCTION: &str = "Continue the conversation from where it left off without asking the user any further questions. Resume directly — do not acknowledge the summary, do not recap what was happening, and do not preface with continuation text.";

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct CompactionConfig {
    pub preserve_recent_messages: usize,
@@ -30,8 +35,15 @@ pub fn estimate_session_tokens(session: &Session) -> usize {

#[must_use]
pub fn should_compact(session: &Session, config: CompactionConfig) -> bool {
    session.messages.len() > config.preserve_recent_messages
        && estimate_session_tokens(session) >= config.max_estimated_tokens
    let start = compacted_summary_prefix_len(session);
    let compactable = &session.messages[start..];

    compactable.len() > config.preserve_recent_messages
        && compactable
            .iter()
            .map(estimate_message_tokens)
            .sum::<usize>()
            >= config.max_estimated_tokens
}
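The incremental trigger above, which skips an already-compacted summary prefix and then compares the token estimate of the remaining messages against the budget, reduces to a small predicate. The sketch below models messages as pre-estimated token counts, a simplification: the real code walks `ConversationMessage` content blocks to estimate tokens.

```rust
// Simplified model of should_compact: `token_estimates` holds one estimated
// token count per message; `summary_prefix` is 1 when the session already
// starts with a compacted-summary system message, else 0.
fn should_compact(
    token_estimates: &[usize],
    summary_prefix: usize,
    preserve_recent: usize,
    max_tokens: usize,
) -> bool {
    let compactable = &token_estimates[summary_prefix..];
    compactable.len() > preserve_recent && compactable.iter().sum::<usize>() >= max_tokens
}

fn main() {
    // Four messages at ~300 tokens each, nothing compacted yet: over budget.
    assert!(should_compact(&[300, 300, 300, 300], 0, 2, 1000));
    // Same session after compaction: the summary prefix no longer counts,
    // so the remaining two messages stay below the preserve threshold.
    assert!(!should_compact(&[900, 300, 300], 1, 2, 1000));
}
```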

#[must_use]
@@ -56,16 +68,18 @@ pub fn get_compact_continuation_message(
    recent_messages_preserved: bool,
) -> String {
    let mut base = format!(
        "This session is being continued from a previous conversation that ran out of context. The summary below covers the earlier portion of the conversation.\n\n{}",
        "{COMPACT_CONTINUATION_PREAMBLE}{}",
        format_compact_summary(summary)
    );

    if recent_messages_preserved {
        base.push_str("\n\nRecent messages are preserved verbatim.");
        base.push_str("\n\n");
        base.push_str(COMPACT_RECENT_MESSAGES_NOTE);
    }

    if suppress_follow_up_questions {
        base.push_str("\nContinue the conversation from where it left off without asking the user any further questions. Resume directly — do not acknowledge the summary, do not recap what was happening, and do not preface with continuation text.");
        base.push('\n');
        base.push_str(COMPACT_DIRECT_RESUME_INSTRUCTION);
    }

    base
@@ -82,13 +96,19 @@ pub fn compact_session(session: &Session, config: CompactionConfig) -> Compactio
        };
    }

    let existing_summary = session
        .messages
        .first()
        .and_then(extract_existing_compacted_summary);
    let compacted_prefix_len = usize::from(existing_summary.is_some());
    let keep_from = session
        .messages
        .len()
        .saturating_sub(config.preserve_recent_messages);
    let removed = &session.messages[..keep_from];
    let removed = &session.messages[compacted_prefix_len..keep_from];
    let preserved = session.messages[keep_from..].to_vec();
    let summary = summarize_messages(removed);
    let summary =
        merge_compact_summaries(existing_summary.as_deref(), &summarize_messages(removed));
    let formatted_summary = format_compact_summary(&summary);
    let continuation = get_compact_continuation_message(&summary, true, !preserved.is_empty());
@@ -110,6 +130,16 @@ pub fn compact_session(session: &Session, config: CompactionConfig) -> Compactio
    }
}

fn compacted_summary_prefix_len(session: &Session) -> usize {
    usize::from(
        session
            .messages
            .first()
            .and_then(extract_existing_compacted_summary)
            .is_some(),
    )
}

fn summarize_messages(messages: &[ConversationMessage]) -> String {
    let user_messages = messages
        .iter()
@@ -197,6 +227,41 @@ fn summarize_messages(messages: &[ConversationMessage]) -> String {
    lines.join("\n")
}

fn merge_compact_summaries(existing_summary: Option<&str>, new_summary: &str) -> String {
    let Some(existing_summary) = existing_summary else {
        return new_summary.to_string();
    };

    let previous_highlights = extract_summary_highlights(existing_summary);
    let new_formatted_summary = format_compact_summary(new_summary);
    let new_highlights = extract_summary_highlights(&new_formatted_summary);
    let new_timeline = extract_summary_timeline(&new_formatted_summary);

    let mut lines = vec!["<summary>".to_string(), "Conversation summary:".to_string()];

    if !previous_highlights.is_empty() {
        lines.push("- Previously compacted context:".to_string());
        lines.extend(
            previous_highlights
                .into_iter()
                .map(|line| format!(" {line}")),
        );
    }

    if !new_highlights.is_empty() {
        lines.push("- Newly compacted context:".to_string());
        lines.extend(new_highlights.into_iter().map(|line| format!(" {line}")));
    }

    if !new_timeline.is_empty() {
        lines.push("- Key timeline:".to_string());
        lines.extend(new_timeline.into_iter().map(|line| format!(" {line}")));
    }

    lines.push("</summary>".to_string());
    lines.join("\n")
}

fn summarize_block(block: &ContentBlock) -> String {
    let raw = match block {
        ContentBlock::Text { text } => text.clone(),
@@ -374,11 +439,71 @@ fn collapse_blank_lines(content: &str) -> String {
    result
}

fn extract_existing_compacted_summary(message: &ConversationMessage) -> Option<String> {
    if message.role != MessageRole::System {
        return None;
    }

    let text = first_text_block(message)?;
    let summary = text.strip_prefix(COMPACT_CONTINUATION_PREAMBLE)?;
    let summary = summary
        .split_once(&format!("\n\n{COMPACT_RECENT_MESSAGES_NOTE}"))
        .map_or(summary, |(value, _)| value);
    let summary = summary
        .split_once(&format!("\n{COMPACT_DIRECT_RESUME_INSTRUCTION}"))
        .map_or(summary, |(value, _)| value);
    Some(summary.trim().to_string())
}

fn extract_summary_highlights(summary: &str) -> Vec<String> {
    let mut lines = Vec::new();
    let mut in_timeline = false;

    for line in format_compact_summary(summary).lines() {
        let trimmed = line.trim_end();
        if trimmed.is_empty() || trimmed == "Summary:" || trimmed == "Conversation summary:" {
            continue;
        }
        if trimmed == "- Key timeline:" {
            in_timeline = true;
            continue;
        }
        if in_timeline {
            continue;
        }
        lines.push(trimmed.to_string());
    }

    lines
}

fn extract_summary_timeline(summary: &str) -> Vec<String> {
    let mut lines = Vec::new();
    let mut in_timeline = false;

    for line in format_compact_summary(summary).lines() {
        let trimmed = line.trim_end();
        if trimmed == "- Key timeline:" {
            in_timeline = true;
            continue;
        }
        if !in_timeline {
            continue;
        }
        if trimmed.is_empty() {
            break;
        }
        lines.push(trimmed.to_string());
    }

    lines
}
#[cfg(test)]
mod tests {
    use super::{
        collect_key_files, compact_session, estimate_session_tokens, format_compact_summary,
        infer_pending_work, should_compact, CompactionConfig,
        get_compact_continuation_message, infer_pending_work, should_compact, CompactionConfig,
    };
    use crate::session::{ContentBlock, ConversationMessage, MessageRole, Session};

@@ -453,6 +578,98 @@ mod tests {
        );
    }

    #[test]
    fn keeps_previous_compacted_context_when_compacting_again() {
        let initial_session = Session {
            version: 1,
            messages: vec![
                ConversationMessage::user_text("Investigate rust/crates/runtime/src/compact.rs"),
                ConversationMessage::assistant(vec![ContentBlock::Text {
                    text: "I will inspect the compact flow.".to_string(),
                }]),
                ConversationMessage::user_text(
                    "Also update rust/crates/runtime/src/conversation.rs",
                ),
                ConversationMessage::assistant(vec![ContentBlock::Text {
                    text: "Next: preserve prior summary context during auto compact.".to_string(),
                }]),
            ],
        };
        let config = CompactionConfig {
            preserve_recent_messages: 2,
            max_estimated_tokens: 1,
        };

        let first = compact_session(&initial_session, config);
        let mut follow_up_messages = first.compacted_session.messages.clone();
        follow_up_messages.extend([
            ConversationMessage::user_text("Please add regression tests for compaction."),
            ConversationMessage::assistant(vec![ContentBlock::Text {
                text: "Working on regression coverage now.".to_string(),
            }]),
        ]);

        let second = compact_session(
            &Session {
                version: 1,
                messages: follow_up_messages,
            },
            config,
        );

        assert!(second
            .formatted_summary
            .contains("Previously compacted context:"));
        assert!(second
            .formatted_summary
            .contains("Scope: 2 earlier messages compacted"));
        assert!(second
            .formatted_summary
            .contains("Newly compacted context:"));
        assert!(second
            .formatted_summary
            .contains("Also update rust/crates/runtime/src/conversation.rs"));
        assert!(matches!(
            &second.compacted_session.messages[0].blocks[0],
            ContentBlock::Text { text }
                if text.contains("Previously compacted context:")
                    && text.contains("Newly compacted context:")
        ));
        assert!(matches!(
            &second.compacted_session.messages[1].blocks[0],
            ContentBlock::Text { text } if text.contains("Please add regression tests for compaction.")
        ));
    }

    #[test]
    fn ignores_existing_compacted_summary_when_deciding_to_recompact() {
        let summary = "<summary>Conversation summary:\n- Scope: earlier work preserved.\n- Key timeline:\n - user: large preserved context\n</summary>";
        let session = Session {
            version: 1,
            messages: vec![
                ConversationMessage {
                    role: MessageRole::System,
                    blocks: vec![ContentBlock::Text {
                        text: get_compact_continuation_message(summary, true, true),
                    }],
                    usage: None,
                },
                ConversationMessage::user_text("tiny"),
                ConversationMessage::assistant(vec![ContentBlock::Text {
                    text: "recent".to_string(),
                }]),
            ],
        };

        assert!(!should_compact(
            &session,
            CompactionConfig {
                preserve_recent_messages: 2,
                max_estimated_tokens: 1,
            }
        ));
    }

    #[test]
    fn truncates_long_blocks_in_summary() {
        let summary = super::summarize_block(&ContentBlock::Text {
@@ -465,10 +682,10 @@ mod tests {
    #[test]
    fn extracts_key_files_from_message_content() {
        let files = collect_key_files(&[ConversationMessage::user_text(
            "Update rust/crates/runtime/src/compact.rs and rust/crates/rusty-claude-cli/src/main.rs next.",
            "Update rust/crates/runtime/src/compact.rs and rust/crates/tools/src/lib.rs next.",
        )]);
        assert!(files.contains(&"rust/crates/runtime/src/compact.rs".to_string()));
        assert!(files.contains(&"rust/crates/rusty-claude-cli/src/main.rs".to_string()));
        assert!(files.contains(&"rust/crates/tools/src/lib.rs".to_string()));
    }

    #[test]
@@ -6,7 +6,7 @@ use std::path::{Path, PathBuf};
use crate::json::JsonValue;
use crate::sandbox::{FilesystemIsolationMode, SandboxConfig};

pub const CLAUDE_CODE_SETTINGS_SCHEMA_NAME: &str = "SettingsSchema";
pub const CLAW_SETTINGS_SCHEMA_NAME: &str = "SettingsSchema";

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum ConfigSource {
@@ -35,8 +35,19 @@ pub struct RuntimeConfig {
    feature_config: RuntimeFeatureConfig,
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct RuntimePluginConfig {
    enabled_plugins: BTreeMap<String, bool>,
    external_directories: Vec<String>,
    install_root: Option<String>,
    registry_path: Option<String>,
    bundled_root: Option<String>,
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct RuntimeFeatureConfig {
    hooks: RuntimeHookConfig,
    plugins: RuntimePluginConfig,
    mcp: McpConfigCollection,
    oauth: Option<OAuthConfig>,
    model: Option<String>,
@@ -44,6 +55,12 @@ pub struct RuntimeFeatureConfig {
    sandbox: SandboxConfig,
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct RuntimeHookConfig {
    pre_tool_use: Vec<String>,
    post_tool_use: Vec<String>,
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct McpConfigCollection {
    servers: BTreeMap<String, ScopedMcpServerConfig>,
@@ -62,7 +79,7 @@ pub enum McpTransport {
    Http,
    Ws,
    Sdk,
    ClaudeAiProxy,
    ManagedProxy,
}

#[derive(Debug, Clone, PartialEq, Eq)]
@@ -72,7 +89,7 @@ pub enum McpServerConfig {
    Http(McpRemoteServerConfig),
    Ws(McpWebSocketServerConfig),
    Sdk(McpSdkServerConfig),
    ClaudeAiProxy(McpClaudeAiProxyServerConfig),
    ManagedProxy(McpManagedProxyServerConfig),
}

#[derive(Debug, Clone, PartialEq, Eq)]
@@ -103,7 +120,7 @@ pub struct McpSdkServerConfig {
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct McpClaudeAiProxyServerConfig {
pub struct McpManagedProxyServerConfig {
    pub url: String,
    pub id: String,
}
@@ -167,25 +184,20 @@ impl ConfigLoader {
    #[must_use]
    pub fn default_for(cwd: impl Into<PathBuf>) -> Self {
        let cwd = cwd.into();
        let config_home = std::env::var_os("CLAUDE_CONFIG_HOME")
            .map(PathBuf::from)
            .or_else(|| std::env::var_os("HOME").map(|home| PathBuf::from(home).join(".claude")))
            .or_else(|| {
                if cfg!(target_os = "windows") {
                    std::env::var_os("USERPROFILE").map(|home| PathBuf::from(home).join(".claude"))
                } else {
                    None
                }
            })
            .unwrap_or_else(|| PathBuf::from(".claude"));
        let config_home = default_config_home();
        Self { cwd, config_home }
    }

    #[must_use]
    pub fn config_home(&self) -> &Path {
        &self.config_home
    }

    #[must_use]
    pub fn discover(&self) -> Vec<ConfigEntry> {
        let user_legacy_path = self.config_home.parent().map_or_else(
            || PathBuf::from(".claude.json"),
            |parent| parent.join(".claude.json"),
            || PathBuf::from(".claw.json"),
            |parent| parent.join(".claw.json"),
        );
        vec![
            ConfigEntry {
@@ -198,15 +210,15 @@
            },
            ConfigEntry {
                source: ConfigSource::Project,
                path: self.cwd.join(".claude.json"),
                path: self.cwd.join(".claw.json"),
            },
            ConfigEntry {
                source: ConfigSource::Project,
                path: self.cwd.join(".claude").join("settings.json"),
                path: self.cwd.join(".claw").join("settings.json"),
            },
            ConfigEntry {
                source: ConfigSource::Local,
                path: self.cwd.join(".claude").join("settings.local.json"),
                path: self.cwd.join(".claw").join("settings.local.json"),
            },
        ]
    }
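`discover` returns entries in ascending precedence (user config, then project `.claw.json` and `.claw/settings.json`, then `.claw/settings.local.json`), so when the layers are merged, a later layer overrides an earlier one key by key. A minimal sketch of that layered merge; the `merge_layers` helper is illustrative, not part of the crate:

```rust
use std::collections::BTreeMap;

// Later layers win: insert in discovery order so each layer overwrites
// the keys it sets, mirroring user < project < local precedence.
fn merge_layers<'a>(layers: &[BTreeMap<&'a str, &'a str>]) -> BTreeMap<&'a str, &'a str> {
    let mut merged = BTreeMap::new();
    for layer in layers {
        for (key, value) in layer {
            merged.insert(*key, *value);
        }
    }
    merged
}

fn main() {
    let user = BTreeMap::from([("model", "default"), ("theme", "dark")]);
    let project = BTreeMap::from([("model", "opus")]);
    let local = BTreeMap::from([("theme", "light")]);

    let merged = merge_layers(&[user, project, local]);
    assert_eq!(merged["model"], "opus"); // project overrides user
    assert_eq!(merged["theme"], "light"); // local overrides project/user
}
```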
@@ -228,6 +240,8 @@ impl ConfigLoader {
        let merged_value = JsonValue::Object(merged.clone());

        let feature_config = RuntimeFeatureConfig {
            hooks: parse_optional_hooks_config(&merged_value)?,
            plugins: parse_optional_plugin_config(&merged_value)?,
            mcp: McpConfigCollection {
                servers: mcp_servers,
            },
@@ -285,6 +299,16 @@ impl RuntimeConfig {
        &self.feature_config.mcp
    }

    #[must_use]
    pub fn hooks(&self) -> &RuntimeHookConfig {
        &self.feature_config.hooks
    }

    #[must_use]
    pub fn plugins(&self) -> &RuntimePluginConfig {
        &self.feature_config.plugins
    }

    #[must_use]
    pub fn oauth(&self) -> Option<&OAuthConfig> {
        self.feature_config.oauth.as_ref()
@@ -307,6 +331,28 @@ impl RuntimeConfig {
}

impl RuntimeFeatureConfig {
    #[must_use]
    pub fn with_hooks(mut self, hooks: RuntimeHookConfig) -> Self {
        self.hooks = hooks;
        self
    }

    #[must_use]
    pub fn with_plugins(mut self, plugins: RuntimePluginConfig) -> Self {
        self.plugins = plugins;
        self
    }

    #[must_use]
    pub fn hooks(&self) -> &RuntimeHookConfig {
        &self.hooks
    }

    #[must_use]
    pub fn plugins(&self) -> &RuntimePluginConfig {
        &self.plugins
    }

    #[must_use]
    pub fn mcp(&self) -> &McpConfigCollection {
        &self.mcp
@@ -333,6 +379,85 @@ impl RuntimeFeatureConfig {
    }
}

impl RuntimePluginConfig {
    #[must_use]
    pub fn enabled_plugins(&self) -> &BTreeMap<String, bool> {
        &self.enabled_plugins
    }

    #[must_use]
    pub fn external_directories(&self) -> &[String] {
        &self.external_directories
    }

    #[must_use]
    pub fn install_root(&self) -> Option<&str> {
        self.install_root.as_deref()
    }

    #[must_use]
    pub fn registry_path(&self) -> Option<&str> {
        self.registry_path.as_deref()
    }

    #[must_use]
    pub fn bundled_root(&self) -> Option<&str> {
        self.bundled_root.as_deref()
    }

    pub fn set_plugin_state(&mut self, plugin_id: String, enabled: bool) {
        self.enabled_plugins.insert(plugin_id, enabled);
    }
|
||||
#[must_use]
|
||||
pub fn state_for(&self, plugin_id: &str, default_enabled: bool) -> bool {
|
||||
self.enabled_plugins
|
||||
.get(plugin_id)
|
||||
.copied()
|
||||
.unwrap_or(default_enabled)
|
||||
}
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn default_config_home() -> PathBuf {
|
||||
std::env::var_os("CLAW_CONFIG_HOME")
|
||||
.map(PathBuf::from)
|
||||
.or_else(|| std::env::var_os("HOME").map(|home| PathBuf::from(home).join(".claw")))
|
||||
.unwrap_or_else(|| PathBuf::from(".claw"))
|
||||
}
|
||||
|
||||
impl RuntimeHookConfig {
|
||||
#[must_use]
|
||||
pub fn new(pre_tool_use: Vec<String>, post_tool_use: Vec<String>) -> Self {
|
||||
Self {
|
||||
pre_tool_use,
|
||||
post_tool_use,
|
||||
}
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn pre_tool_use(&self) -> &[String] {
|
||||
&self.pre_tool_use
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn post_tool_use(&self) -> &[String] {
|
||||
&self.post_tool_use
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn merged(&self, other: &Self) -> Self {
|
||||
let mut merged = self.clone();
|
||||
merged.extend(other);
|
||||
merged
|
||||
}
|
||||
|
||||
pub fn extend(&mut self, other: &Self) {
|
||||
extend_unique(&mut self.pre_tool_use, other.pre_tool_use());
|
||||
extend_unique(&mut self.post_tool_use, other.post_tool_use());
|
||||
}
|
||||
}
|
||||
|
||||
impl McpConfigCollection {
|
||||
#[must_use]
|
||||
pub fn servers(&self) -> &BTreeMap<String, ScopedMcpServerConfig> {
|
||||
@ -361,7 +486,7 @@ impl McpServerConfig {
|
||||
Self::Http(_) => McpTransport::Http,
|
||||
Self::Ws(_) => McpTransport::Ws,
|
||||
Self::Sdk(_) => McpTransport::Sdk,
|
||||
Self::ClaudeAiProxy(_) => McpTransport::ClaudeAiProxy,
|
||||
Self::ManagedProxy(_) => McpTransport::ManagedProxy,
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -369,7 +494,7 @@ impl McpServerConfig {
|
||||
fn read_optional_json_object(
|
||||
path: &Path,
|
||||
) -> Result<Option<BTreeMap<String, JsonValue>>, ConfigError> {
|
||||
let is_legacy_config = path.file_name().and_then(|name| name.to_str()) == Some(".claude.json");
|
||||
let is_legacy_config = path.file_name().and_then(|name| name.to_str()) == Some(".claw.json");
|
||||
let contents = match fs::read_to_string(path) {
|
||||
Ok(contents) => contents,
|
||||
Err(error) if error.kind() == std::io::ErrorKind::NotFound => return Ok(None),
|
||||
@ -431,6 +556,52 @@ fn parse_optional_model(root: &JsonValue) -> Option<String> {
|
||||
.map(ToOwned::to_owned)
|
||||
}
|
||||
|
||||
fn parse_optional_hooks_config(root: &JsonValue) -> Result<RuntimeHookConfig, ConfigError> {
|
||||
let Some(object) = root.as_object() else {
|
||||
return Ok(RuntimeHookConfig::default());
|
||||
};
|
||||
let Some(hooks_value) = object.get("hooks") else {
|
||||
return Ok(RuntimeHookConfig::default());
|
||||
};
|
||||
let hooks = expect_object(hooks_value, "merged settings.hooks")?;
|
||||
Ok(RuntimeHookConfig {
|
||||
pre_tool_use: optional_string_array(hooks, "PreToolUse", "merged settings.hooks")?
|
||||
.unwrap_or_default(),
|
||||
post_tool_use: optional_string_array(hooks, "PostToolUse", "merged settings.hooks")?
|
||||
.unwrap_or_default(),
|
||||
})
|
||||
}
|
||||
|
||||
fn parse_optional_plugin_config(root: &JsonValue) -> Result<RuntimePluginConfig, ConfigError> {
|
||||
let Some(object) = root.as_object() else {
|
||||
return Ok(RuntimePluginConfig::default());
|
||||
};
|
||||
|
||||
let mut config = RuntimePluginConfig::default();
|
||||
if let Some(enabled_plugins) = object.get("enabledPlugins") {
|
||||
config.enabled_plugins = parse_bool_map(enabled_plugins, "merged settings.enabledPlugins")?;
|
||||
}
|
||||
|
||||
let Some(plugins_value) = object.get("plugins") else {
|
||||
return Ok(config);
|
||||
};
|
||||
let plugins = expect_object(plugins_value, "merged settings.plugins")?;
|
||||
|
||||
if let Some(enabled_value) = plugins.get("enabled") {
|
||||
config.enabled_plugins = parse_bool_map(enabled_value, "merged settings.plugins.enabled")?;
|
||||
}
|
||||
config.external_directories =
|
||||
optional_string_array(plugins, "externalDirectories", "merged settings.plugins")?
|
||||
.unwrap_or_default();
|
||||
config.install_root =
|
||||
optional_string(plugins, "installRoot", "merged settings.plugins")?.map(str::to_string);
|
||||
config.registry_path =
|
||||
optional_string(plugins, "registryPath", "merged settings.plugins")?.map(str::to_string);
|
||||
config.bundled_root =
|
||||
optional_string(plugins, "bundledRoot", "merged settings.plugins")?.map(str::to_string);
|
||||
Ok(config)
|
||||
}
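Taken together, `parse_optional_hooks_config` and `parse_optional_plugin_config` accept a merged settings object of roughly the following shape (key names are taken from the parsers above; the paths and plugin IDs are illustrative only):

```json
{
  "hooks": {
    "PreToolUse": ["./scripts/pre-hook.sh"],
    "PostToolUse": ["./scripts/post-hook.sh"]
  },
  "enabledPlugins": {
    "tool-guard@builtin": true,
    "sample-plugin@external": false
  },
  "plugins": {
    "externalDirectories": ["./external-plugins"],
    "installRoot": "plugin-cache/installed",
    "registryPath": "plugin-cache/installed.json",
    "bundledRoot": "./bundled-plugins"
  }
}
```

Note that per the parser, a `plugins.enabled` object, when present, replaces whatever top-level `enabledPlugins` set.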

fn parse_optional_permission_mode(
    root: &JsonValue,
) -> Result<Option<ResolvedPermissionMode>, ConfigError> {
@@ -553,12 +724,10 @@ fn parse_mcp_server_config(
        "sdk" => Ok(McpServerConfig::Sdk(McpSdkServerConfig {
            name: expect_string(object, "name", context)?.to_string(),
        })),
        "claudeai-proxy" => Ok(McpServerConfig::ClaudeAiProxy(
            McpClaudeAiProxyServerConfig {
                url: expect_string(object, "url", context)?.to_string(),
                id: expect_string(object, "id", context)?.to_string(),
            },
        )),
        "claudeai-proxy" => Ok(McpServerConfig::ManagedProxy(McpManagedProxyServerConfig {
            url: expect_string(object, "url", context)?.to_string(),
            id: expect_string(object, "id", context)?.to_string(),
        })),
        other => Err(ConfigError::Parse(format!(
            "{context}: unsupported MCP server type for {server_name}: {other}"
        ))),
@@ -663,6 +832,24 @@ fn optional_u16(
    }
}

fn parse_bool_map(value: &JsonValue, context: &str) -> Result<BTreeMap<String, bool>, ConfigError> {
    let Some(map) = value.as_object() else {
        return Err(ConfigError::Parse(format!(
            "{context}: expected JSON object"
        )));
    };
    map.iter()
        .map(|(key, value)| {
            value
                .as_bool()
                .map(|enabled| (key.clone(), enabled))
                .ok_or_else(|| {
                    ConfigError::Parse(format!("{context}: field {key} must be a boolean"))
                })
        })
        .collect()
}

fn optional_string_array(
    object: &BTreeMap<String, JsonValue>,
    key: &str,
@@ -737,11 +924,23 @@ fn deep_merge_objects(
    }
}

fn extend_unique(target: &mut Vec<String>, values: &[String]) {
    for value in values {
        push_unique(target, value.clone());
    }
}

fn push_unique(target: &mut Vec<String>, value: String) {
    if !target.iter().any(|existing| existing == &value) {
        target.push(value);
    }
}

#[cfg(test)]
mod tests {
    use super::{
        ConfigLoader, ConfigSource, McpServerConfig, McpTransport, ResolvedPermissionMode,
        CLAUDE_CODE_SETTINGS_SCHEMA_NAME,
        CLAW_SETTINGS_SCHEMA_NAME,
    };
    use crate::json::JsonValue;
    use crate::sandbox::FilesystemIsolationMode;
@@ -760,7 +959,7 @@ mod tests {
    fn rejects_non_object_settings_files() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claude");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(&home).expect("home config dir");
        fs::create_dir_all(&cwd).expect("project dir");
        fs::write(home.join("settings.json"), "[]").expect("write bad settings");
@@ -776,15 +975,15 @@ mod tests {
    }

    #[test]
    fn loads_and_merges_claude_code_config_files_by_precedence() {
    fn loads_and_merges_claw_code_config_files_by_precedence() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claude");
        fs::create_dir_all(cwd.join(".claude")).expect("project config dir");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(cwd.join(".claw")).expect("project config dir");
        fs::create_dir_all(&home).expect("home config dir");

        fs::write(
            home.parent().expect("home parent").join(".claude.json"),
            home.parent().expect("home parent").join(".claw.json"),
            r#"{"model":"haiku","env":{"A":"1"},"mcpServers":{"home":{"command":"uvx","args":["home"]}}}"#,
        )
        .expect("write user compat config");
@@ -794,17 +993,17 @@ mod tests {
        )
        .expect("write user settings");
        fs::write(
            cwd.join(".claude.json"),
            cwd.join(".claw.json"),
            r#"{"model":"project-compat","env":{"B":"2"}}"#,
        )
        .expect("write project compat config");
        fs::write(
            cwd.join(".claude").join("settings.json"),
            cwd.join(".claw").join("settings.json"),
            r#"{"env":{"C":"3"},"hooks":{"PostToolUse":["project"]},"mcpServers":{"project":{"command":"uvx","args":["project"]}}}"#,
        )
        .expect("write project settings");
        fs::write(
            cwd.join(".claude").join("settings.local.json"),
            cwd.join(".claw").join("settings.local.json"),
            r#"{"model":"opus","permissionMode":"acceptEdits"}"#,
        )
        .expect("write local settings");
@@ -813,7 +1012,7 @@ mod tests {
            .load()
            .expect("config should load");

        assert_eq!(CLAUDE_CODE_SETTINGS_SCHEMA_NAME, "SettingsSchema");
        assert_eq!(CLAW_SETTINGS_SCHEMA_NAME, "SettingsSchema");
        assert_eq!(loaded.loaded_entries().len(), 5);
        assert_eq!(loaded.loaded_entries()[0].source, ConfigSource::User);
        assert_eq!(
@@ -843,6 +1042,8 @@ mod tests {
            .and_then(JsonValue::as_object)
            .expect("hooks object")
            .contains_key("PostToolUse"));
        assert_eq!(loaded.hooks().pre_tool_use(), &["base".to_string()]);
        assert_eq!(loaded.hooks().post_tool_use(), &["project".to_string()]);
        assert!(loaded.mcp().get("home").is_some());
        assert!(loaded.mcp().get("project").is_some());

@@ -853,12 +1054,12 @@ mod tests {
    fn parses_sandbox_config() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claude");
        fs::create_dir_all(cwd.join(".claude")).expect("project config dir");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(cwd.join(".claw")).expect("project config dir");
        fs::create_dir_all(&home).expect("home config dir");

        fs::write(
            cwd.join(".claude").join("settings.local.json"),
            cwd.join(".claw").join("settings.local.json"),
            r#"{
                "sandbox": {
                    "enabled": true,
@@ -891,8 +1092,8 @@ mod tests {
    fn parses_typed_mcp_and_oauth_config() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claude");
        fs::create_dir_all(cwd.join(".claude")).expect("project config dir");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(cwd.join(".claw")).expect("project config dir");
        fs::create_dir_all(&home).expect("home config dir");

        fs::write(
@@ -929,7 +1130,7 @@ mod tests {
        )
        .expect("write user settings");
        fs::write(
            cwd.join(".claude").join("settings.local.json"),
            cwd.join(".claw").join("settings.local.json"),
            r#"{
                "mcpServers": {
                    "remote-server": {
@@ -978,11 +1179,101 @@ mod tests {
        fs::remove_dir_all(root).expect("cleanup temp dir");
    }

    #[test]
    fn parses_plugin_config_from_enabled_plugins() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(cwd.join(".claw")).expect("project config dir");
        fs::create_dir_all(&home).expect("home config dir");

        fs::write(
            home.join("settings.json"),
            r#"{
                "enabledPlugins": {
                    "tool-guard@builtin": true,
                    "sample-plugin@external": false
                }
            }"#,
        )
        .expect("write user settings");

        let loaded = ConfigLoader::new(&cwd, &home)
            .load()
            .expect("config should load");

        assert_eq!(
            loaded.plugins().enabled_plugins().get("tool-guard@builtin"),
            Some(&true)
        );
        assert_eq!(
            loaded
                .plugins()
                .enabled_plugins()
                .get("sample-plugin@external"),
            Some(&false)
        );

        fs::remove_dir_all(root).expect("cleanup temp dir");
    }

    #[test]
    fn parses_plugin_config() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(cwd.join(".claw")).expect("project config dir");
        fs::create_dir_all(&home).expect("home config dir");

        fs::write(
            home.join("settings.json"),
            r#"{
                "enabledPlugins": {
                    "core-helpers@builtin": true
                },
                "plugins": {
                    "externalDirectories": ["./external-plugins"],
                    "installRoot": "plugin-cache/installed",
                    "registryPath": "plugin-cache/installed.json",
                    "bundledRoot": "./bundled-plugins"
                }
            }"#,
        )
        .expect("write plugin settings");

        let loaded = ConfigLoader::new(&cwd, &home)
            .load()
            .expect("config should load");

        assert_eq!(
            loaded
                .plugins()
                .enabled_plugins()
                .get("core-helpers@builtin"),
            Some(&true)
        );
        assert_eq!(
            loaded.plugins().external_directories(),
            &["./external-plugins".to_string()]
        );
        assert_eq!(
            loaded.plugins().install_root(),
            Some("plugin-cache/installed")
        );
        assert_eq!(
            loaded.plugins().registry_path(),
            Some("plugin-cache/installed.json")
        );
        assert_eq!(loaded.plugins().bundled_root(), Some("./bundled-plugins"));

        fs::remove_dir_all(root).expect("cleanup temp dir");
    }

    #[test]
    fn rejects_invalid_mcp_server_shapes() {
        let root = temp_dir();
        let cwd = root.join("project");
        let home = root.join("home").join(".claude");
        let home = root.join("home").join(".claw");
        fs::create_dir_all(&home).expect("home config dir");
        fs::create_dir_all(&cwd).expect("project dir");
        fs::write(

@@ -4,6 +4,8 @@ use std::fmt::{Display, Formatter};
use crate::compact::{
    compact_session, estimate_session_tokens, CompactionConfig, CompactionResult,
};
use crate::config::RuntimeFeatureConfig;
use crate::hooks::{HookRunResult, HookRunner};
use crate::permissions::{PermissionOutcome, PermissionPolicy, PermissionPrompter};
use crate::session::{ContentBlock, ConversationMessage, Session};
use crate::usage::{TokenUsage, UsageTracker};
@@ -94,6 +96,7 @@ pub struct ConversationRuntime<C, T> {
    system_prompt: Vec<String>,
    max_iterations: usize,
    usage_tracker: UsageTracker,
    hook_runner: HookRunner,
}

impl<C, T> ConversationRuntime<C, T>
@@ -108,6 +111,25 @@ where
        tool_executor: T,
        permission_policy: PermissionPolicy,
        system_prompt: Vec<String>,
    ) -> Self {
        Self::new_with_features(
            session,
            api_client,
            tool_executor,
            permission_policy,
            system_prompt,
            RuntimeFeatureConfig::default(),
        )
    }

    #[must_use]
    pub fn new_with_features(
        session: Session,
        api_client: C,
        tool_executor: T,
        permission_policy: PermissionPolicy,
        system_prompt: Vec<String>,
        feature_config: RuntimeFeatureConfig,
    ) -> Self {
        let usage_tracker = UsageTracker::from_session(&session);
        Self {
@@ -116,8 +138,9 @@ where
            tool_executor,
            permission_policy,
            system_prompt,
            max_iterations: 16,
            max_iterations: usize::MAX,
            usage_tracker,
            hook_runner: HookRunner::from_feature_config(&feature_config),
        }
    }

@@ -185,19 +208,41 @@ where

        let result_message = match permission_outcome {
            PermissionOutcome::Allow => {
                match self.tool_executor.execute(&tool_name, &input) {
                    Ok(output) => ConversationMessage::tool_result(
                let pre_hook_result = self.hook_runner.run_pre_tool_use(&tool_name, &input);
                if pre_hook_result.is_denied() {
                    let deny_message = format!("PreToolUse hook denied tool `{tool_name}`");
                    ConversationMessage::tool_result(
                        tool_use_id,
                        tool_name,
                        format_hook_message(&pre_hook_result, &deny_message),
                        true,
                    )
                } else {
                    let (mut output, mut is_error) =
                        match self.tool_executor.execute(&tool_name, &input) {
                            Ok(output) => (output, false),
                            Err(error) => (error.to_string(), true),
                        };
                    output = merge_hook_feedback(pre_hook_result.messages(), output, false);

                    let post_hook_result = self
                        .hook_runner
                        .run_post_tool_use(&tool_name, &input, &output, is_error);
                    if post_hook_result.is_denied() {
                        is_error = true;
                    }
                    output = merge_hook_feedback(
                        post_hook_result.messages(),
                        output,
                        post_hook_result.is_denied(),
                    );

                    ConversationMessage::tool_result(
                        tool_use_id,
                        tool_name,
                        output,
                        false,
                    ),
                    Err(error) => ConversationMessage::tool_result(
                        tool_use_id,
                        tool_name,
                        error.to_string(),
                        true,
                    ),
                        is_error,
                    )
                }
            }
            PermissionOutcome::Deny { reason } => {
@@ -290,6 +335,32 @@ fn flush_text_block(text: &mut String, blocks: &mut Vec<ContentBlock>) {
    }
}

fn format_hook_message(result: &HookRunResult, fallback: &str) -> String {
    if result.messages().is_empty() {
        fallback.to_string()
    } else {
        result.messages().join("\n")
    }
}

fn merge_hook_feedback(messages: &[String], output: String, denied: bool) -> String {
    if messages.is_empty() {
        return output;
    }

    let mut sections = Vec::new();
    if !output.trim().is_empty() {
        sections.push(output);
    }
    let label = if denied {
        "Hook feedback (denied)"
    } else {
        "Hook feedback"
    };
    sections.push(format!("{label}:\n{}", messages.join("\n")));
    sections.join("\n\n")
}
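As a standalone illustration of the feedback format produced above, `merge_hook_feedback` is reproduced here verbatim so the example compiles on its own:

```rust
// Mirror of merge_hook_feedback above, extracted so the example is self-contained.
fn merge_hook_feedback(messages: &[String], output: String, denied: bool) -> String {
    if messages.is_empty() {
        return output;
    }
    let mut sections = Vec::new();
    if !output.trim().is_empty() {
        sections.push(output);
    }
    let label = if denied {
        "Hook feedback (denied)"
    } else {
        "Hook feedback"
    };
    sections.push(format!("{label}:\n{}", messages.join("\n")));
    sections.join("\n\n")
}

fn main() {
    // A successful tool result plus one PostToolUse hook message:
    let merged = merge_hook_feedback(&["post hook ran".to_string()], "4".to_string(), false);
    assert_eq!(merged, "4\n\nHook feedback:\npost hook ran");

    // A denial with no tool output keeps only the labeled hook section:
    let denied = merge_hook_feedback(&["blocked".to_string()], String::new(), true);
    assert_eq!(denied, "Hook feedback (denied):\nblocked");
}
```

The tool output and hook feedback travel in one tool-result string, which is why the tests later assert that both `'4'` and the hook messages appear in the same block.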

type ToolHandler = Box<dyn FnMut(&str) -> Result<String, ToolError>>;

#[derive(Default)]
@@ -329,6 +400,7 @@ mod tests {
        StaticToolExecutor,
    };
    use crate::compact::CompactionConfig;
    use crate::config::{RuntimeFeatureConfig, RuntimeHookConfig};
    use crate::permissions::{
        PermissionMode, PermissionPolicy, PermissionPromptDecision, PermissionPrompter,
        PermissionRequest,
@@ -503,6 +575,141 @@ mod tests {
        ));
    }

    #[test]
    fn denies_tool_use_when_pre_tool_hook_blocks() {
        struct SingleCallApiClient;
        impl ApiClient for SingleCallApiClient {
            fn stream(&mut self, request: ApiRequest) -> Result<Vec<AssistantEvent>, RuntimeError> {
                if request
                    .messages
                    .iter()
                    .any(|message| message.role == MessageRole::Tool)
                {
                    return Ok(vec![
                        AssistantEvent::TextDelta("blocked".to_string()),
                        AssistantEvent::MessageStop,
                    ]);
                }
                Ok(vec![
                    AssistantEvent::ToolUse {
                        id: "tool-1".to_string(),
                        name: "blocked".to_string(),
                        input: r#"{"path":"secret.txt"}"#.to_string(),
                    },
                    AssistantEvent::MessageStop,
                ])
            }
        }

        let mut runtime = ConversationRuntime::new_with_features(
            Session::new(),
            SingleCallApiClient,
            StaticToolExecutor::new().register("blocked", |_input| {
                panic!("tool should not execute when hook denies")
            }),
            PermissionPolicy::new(PermissionMode::DangerFullAccess),
            vec!["system".to_string()],
            RuntimeFeatureConfig::default().with_hooks(RuntimeHookConfig::new(
                vec![shell_snippet("printf 'blocked by hook'; exit 2")],
                Vec::new(),
            )),
        );

        let summary = runtime
            .run_turn("use the tool", None)
            .expect("conversation should continue after hook denial");

        assert_eq!(summary.tool_results.len(), 1);
        let ContentBlock::ToolResult {
            is_error, output, ..
        } = &summary.tool_results[0].blocks[0]
        else {
            panic!("expected tool result block");
        };
        assert!(
            *is_error,
            "hook denial should produce an error result: {output}"
        );
        assert!(
            output.contains("denied tool") || output.contains("blocked by hook"),
            "unexpected hook denial output: {output:?}"
        );
    }

    #[test]
    fn appends_post_tool_hook_feedback_to_tool_result() {
        struct TwoCallApiClient {
            calls: usize,
        }

        impl ApiClient for TwoCallApiClient {
            fn stream(&mut self, request: ApiRequest) -> Result<Vec<AssistantEvent>, RuntimeError> {
                self.calls += 1;
                match self.calls {
                    1 => Ok(vec![
                        AssistantEvent::ToolUse {
                            id: "tool-1".to_string(),
                            name: "add".to_string(),
                            input: r#"{"lhs":2,"rhs":2}"#.to_string(),
                        },
                        AssistantEvent::MessageStop,
                    ]),
                    2 => {
                        assert!(request
                            .messages
                            .iter()
                            .any(|message| message.role == MessageRole::Tool));
                        Ok(vec![
                            AssistantEvent::TextDelta("done".to_string()),
                            AssistantEvent::MessageStop,
                        ])
                    }
                    _ => Err(RuntimeError::new("unexpected extra API call")),
                }
            }
        }

        let mut runtime = ConversationRuntime::new_with_features(
            Session::new(),
            TwoCallApiClient { calls: 0 },
            StaticToolExecutor::new().register("add", |_input| Ok("4".to_string())),
            PermissionPolicy::new(PermissionMode::DangerFullAccess),
            vec!["system".to_string()],
            RuntimeFeatureConfig::default().with_hooks(RuntimeHookConfig::new(
                vec![shell_snippet("printf 'pre hook ran'")],
                vec![shell_snippet("printf 'post hook ran'")],
            )),
        );

        let summary = runtime
            .run_turn("use add", None)
            .expect("tool loop succeeds");

        assert_eq!(summary.tool_results.len(), 1);
        let ContentBlock::ToolResult {
            is_error, output, ..
        } = &summary.tool_results[0].blocks[0]
        else {
            panic!("expected tool result block");
        };
        assert!(
            !*is_error,
            "post hook should preserve non-error result: {output:?}"
        );
        assert!(
            output.contains('4'),
            "tool output missing value: {output:?}"
        );
        assert!(
            output.contains("pre hook ran"),
            "tool output missing pre hook feedback: {output:?}"
        );
        assert!(
            output.contains("post hook ran"),
            "tool output missing post hook feedback: {output:?}"
        );
    }

    #[test]
    fn reconstructs_usage_tracker_from_restored_session() {
        struct SimpleApi;
@@ -581,4 +788,14 @@ mod tests {
            MessageRole::System
        );
    }

    #[cfg(windows)]
    fn shell_snippet(script: &str) -> String {
        script.replace('\'', "\"")
    }

    #[cfg(not(windows))]
    fn shell_snippet(script: &str) -> String {
        script.to_string()
    }
}

@@ -488,7 +488,7 @@ mod tests {
            .duration_since(UNIX_EPOCH)
            .expect("time should move forward")
            .as_nanos();
        std::env::temp_dir().join(format!("clawd-native-{name}-{unique}"))
        std::env::temp_dir().join(format!("claw-native-{name}-{unique}"))
    }

    #[test]

357
rust/crates/runtime/src/hooks.rs
Normal file
@@ -0,0 +1,357 @@
use std::ffi::OsStr;
use std::process::Command;

use serde_json::json;

use crate::config::{RuntimeFeatureConfig, RuntimeHookConfig};

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HookEvent {
    PreToolUse,
    PostToolUse,
}

impl HookEvent {
    fn as_str(self) -> &'static str {
        match self {
            Self::PreToolUse => "PreToolUse",
            Self::PostToolUse => "PostToolUse",
        }
    }
}

#[derive(Debug, Clone, PartialEq, Eq)]
pub struct HookRunResult {
    denied: bool,
    messages: Vec<String>,
}

impl HookRunResult {
    #[must_use]
    pub fn allow(messages: Vec<String>) -> Self {
        Self {
            denied: false,
            messages,
        }
    }

    #[must_use]
    pub fn is_denied(&self) -> bool {
        self.denied
    }

    #[must_use]
    pub fn messages(&self) -> &[String] {
        &self.messages
    }
}

#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct HookRunner {
    config: RuntimeHookConfig,
}

#[derive(Debug, Clone, Copy)]
struct HookCommandRequest<'a> {
    event: HookEvent,
    tool_name: &'a str,
    tool_input: &'a str,
    tool_output: Option<&'a str>,
    is_error: bool,
    payload: &'a str,
}

impl HookRunner {
    #[must_use]
    pub fn new(config: RuntimeHookConfig) -> Self {
        Self { config }
    }

    #[must_use]
    pub fn from_feature_config(feature_config: &RuntimeFeatureConfig) -> Self {
        Self::new(feature_config.hooks().clone())
    }

    #[must_use]
    pub fn run_pre_tool_use(&self, tool_name: &str, tool_input: &str) -> HookRunResult {
        self.run_commands(
            HookEvent::PreToolUse,
            self.config.pre_tool_use(),
            tool_name,
            tool_input,
            None,
            false,
        )
    }

    #[must_use]
    pub fn run_post_tool_use(
        &self,
        tool_name: &str,
        tool_input: &str,
        tool_output: &str,
        is_error: bool,
    ) -> HookRunResult {
        self.run_commands(
            HookEvent::PostToolUse,
            self.config.post_tool_use(),
            tool_name,
            tool_input,
            Some(tool_output),
            is_error,
        )
    }

    fn run_commands(
        &self,
        event: HookEvent,
        commands: &[String],
        tool_name: &str,
        tool_input: &str,
        tool_output: Option<&str>,
        is_error: bool,
    ) -> HookRunResult {
        if commands.is_empty() {
            return HookRunResult::allow(Vec::new());
        }

        let payload = json!({
            "hook_event_name": event.as_str(),
            "tool_name": tool_name,
            "tool_input": parse_tool_input(tool_input),
            "tool_input_json": tool_input,
            "tool_output": tool_output,
            "tool_result_is_error": is_error,
        })
        .to_string();

        let mut messages = Vec::new();

        for command in commands {
            match Self::run_command(
                command,
                HookCommandRequest {
                    event,
                    tool_name,
                    tool_input,
                    tool_output,
                    is_error,
                    payload: &payload,
                },
            ) {
                HookCommandOutcome::Allow { message } => {
                    if let Some(message) = message {
                        messages.push(message);
                    }
                }
                HookCommandOutcome::Deny { message } => {
                    let message = message.unwrap_or_else(|| {
                        format!("{} hook denied tool `{tool_name}`", event.as_str())
                    });
                    messages.push(message);
                    return HookRunResult {
                        denied: true,
                        messages,
                    };
                }
                HookCommandOutcome::Warn { message } => messages.push(message),
            }
        }

        HookRunResult::allow(messages)
    }

    fn run_command(command: &str, request: HookCommandRequest<'_>) -> HookCommandOutcome {
        let mut child = shell_command(command);
        child.stdin(std::process::Stdio::piped());
        child.stdout(std::process::Stdio::piped());
        child.stderr(std::process::Stdio::piped());
        child.env("HOOK_EVENT", request.event.as_str());
        child.env("HOOK_TOOL_NAME", request.tool_name);
        child.env("HOOK_TOOL_INPUT", request.tool_input);
        child.env(
            "HOOK_TOOL_IS_ERROR",
            if request.is_error { "1" } else { "0" },
        );
        if let Some(tool_output) = request.tool_output {
            child.env("HOOK_TOOL_OUTPUT", tool_output);
        }

        match child.output_with_stdin(request.payload.as_bytes()) {
            Ok(output) => {
                let stdout = String::from_utf8_lossy(&output.stdout).trim().to_string();
                let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string();
                let message = (!stdout.is_empty()).then_some(stdout);
                match output.status.code() {
                    Some(0) => HookCommandOutcome::Allow { message },
                    Some(2) => HookCommandOutcome::Deny { message },
                    Some(code) => HookCommandOutcome::Warn {
                        message: format_hook_warning(
                            command,
                            code,
                            message.as_deref(),
                            stderr.as_str(),
                        ),
                    },
                    None => HookCommandOutcome::Warn {
                        message: format!(
                            "{} hook `{command}` terminated by signal while handling `{}`",
                            request.event.as_str(),
                            request.tool_name
                        ),
                    },
                }
            }
            Err(error) => HookCommandOutcome::Warn {
                message: format!(
                    "{} hook `{command}` failed to start for `{}`: {error}",
                    request.event.as_str(),
                    request.tool_name
                ),
            },
        }
    }
}

enum HookCommandOutcome {
    Allow { message: Option<String> },
    Deny { message: Option<String> },
    Warn { message: String },
}
|
||||
|
||||
fn parse_tool_input(tool_input: &str) -> serde_json::Value {
|
||||
serde_json::from_str(tool_input).unwrap_or_else(|_| json!({ "raw": tool_input }))
|
||||
}
|
||||
|
||||
fn format_hook_warning(command: &str, code: i32, stdout: Option<&str>, stderr: &str) -> String {
|
||||
let mut message =
|
||||
format!("Hook `{command}` exited with status {code}; allowing tool execution to continue");
|
||||
if let Some(stdout) = stdout.filter(|stdout| !stdout.is_empty()) {
|
||||
message.push_str(": ");
|
||||
message.push_str(stdout);
|
||||
} else if !stderr.is_empty() {
|
||||
message.push_str(": ");
|
||||
message.push_str(stderr);
|
||||
}
|
||||
message
|
||||
}
|
||||
|
||||
fn shell_command(command: &str) -> CommandWithStdin {
|
||||
#[cfg(windows)]
|
||||
let mut command_builder = {
|
||||
let mut command_builder = Command::new("cmd");
|
||||
command_builder.arg("/C").arg(command);
|
||||
CommandWithStdin::new(command_builder)
|
||||
};
|
||||
|
||||
#[cfg(not(windows))]
|
||||
let command_builder = {
|
||||
let mut command_builder = Command::new("sh");
|
||||
command_builder.arg("-lc").arg(command);
|
||||
CommandWithStdin::new(command_builder)
|
||||
};
|
||||
|
||||
command_builder
|
||||
}
|
||||
|
||||
struct CommandWithStdin {
    command: Command,
}

impl CommandWithStdin {
    fn new(command: Command) -> Self {
        Self { command }
    }

    fn stdin(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stdin(cfg);
        self
    }

    fn stdout(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stdout(cfg);
        self
    }

    fn stderr(&mut self, cfg: std::process::Stdio) -> &mut Self {
        self.command.stderr(cfg);
        self
    }

    fn env<K, V>(&mut self, key: K, value: V) -> &mut Self
    where
        K: AsRef<OsStr>,
        V: AsRef<OsStr>,
    {
        self.command.env(key, value);
        self
    }

    fn output_with_stdin(&mut self, stdin: &[u8]) -> std::io::Result<std::process::Output> {
        let mut child = self.command.spawn()?;
        if let Some(mut child_stdin) = child.stdin.take() {
            use std::io::Write;
            child_stdin.write_all(stdin)?;
        }
        child.wait_with_output()
    }
}
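The `output_with_stdin` pattern above (pipe stdin, write the payload, let the handle drop so the child sees EOF, then wait) can be exercised directly against `std::process::Command`; this sketch assumes a Unix `cat` binary is available:

```rust
use std::io::Write;
use std::process::{Command, Output, Stdio};

// Spawn with a piped stdin, write the payload, then collect the output.
// Dropping the stdin handle closes the pipe so the child sees EOF.
fn run_with_stdin(command: &mut Command, payload: &[u8]) -> std::io::Result<Output> {
    let mut child = command
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    if let Some(mut stdin) = child.stdin.take() {
        stdin.write_all(payload)?;
    }
    child.wait_with_output()
}
```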
#[cfg(test)]
mod tests {
    use super::{HookRunResult, HookRunner};
    use crate::config::{RuntimeFeatureConfig, RuntimeHookConfig};

    #[test]
    fn allows_exit_code_zero_and_captures_stdout() {
        let runner = HookRunner::new(RuntimeHookConfig::new(
            vec![shell_snippet("printf 'pre ok'")],
            Vec::new(),
        ));

        let result = runner.run_pre_tool_use("Read", r#"{"path":"README.md"}"#);

        assert_eq!(result, HookRunResult::allow(vec!["pre ok".to_string()]));
    }

    #[test]
    fn denies_exit_code_two() {
        let runner = HookRunner::new(RuntimeHookConfig::new(
            vec![shell_snippet("printf 'blocked by hook'; exit 2")],
            Vec::new(),
        ));

        let result = runner.run_pre_tool_use("Bash", r#"{"command":"pwd"}"#);

        assert!(result.is_denied());
        assert_eq!(result.messages(), &["blocked by hook".to_string()]);
    }

    #[test]
    fn warns_for_other_non_zero_statuses() {
        let runner = HookRunner::from_feature_config(&RuntimeFeatureConfig::default().with_hooks(
            RuntimeHookConfig::new(
                vec![shell_snippet("printf 'warning hook'; exit 1")],
                Vec::new(),
            ),
        ));

        let result = runner.run_pre_tool_use("Edit", r#"{"file":"src/lib.rs"}"#);

        assert!(!result.is_denied());
        assert!(result
            .messages()
            .iter()
            .any(|message| message.contains("allowing tool execution to continue")));
    }

    #[cfg(windows)]
    fn shell_snippet(script: &str) -> String {
        script.replace('\'', "\"")
    }

    #[cfg(not(windows))]
    fn shell_snippet(script: &str) -> String {
        script.to_string()
    }
}
@@ -4,6 +4,7 @@ mod compact;
 mod config;
 mod conversation;
 mod file_ops;
+mod hooks;
 mod json;
 mod mcp;
 mod mcp_client;
@@ -16,6 +17,10 @@ pub mod sandbox;
 mod session;
 mod usage;
 
+pub use lsp::{
+    FileDiagnostics, LspContextEnrichment, LspError, LspManager, LspServerConfig,
+    SymbolLocation, WorkspaceDiagnostics,
+};
 pub use bash::{execute_bash, BashCommandInput, BashCommandOutput};
 pub use bootstrap::{BootstrapPhase, BootstrapPlan};
 pub use compact::{
@@ -23,11 +28,11 @@ pub use compact::{
     get_compact_continuation_message, should_compact, CompactionConfig, CompactionResult,
 };
 pub use config::{
-    ConfigEntry, ConfigError, ConfigLoader, ConfigSource, McpClaudeAiProxyServerConfig,
+    ConfigEntry, ConfigError, ConfigLoader, ConfigSource, McpManagedProxyServerConfig,
     McpConfigCollection, McpOAuthConfig, McpRemoteServerConfig, McpSdkServerConfig,
     McpServerConfig, McpStdioServerConfig, McpTransport, McpWebSocketServerConfig, OAuthConfig,
-    ResolvedPermissionMode, RuntimeConfig, RuntimeFeatureConfig, ScopedMcpServerConfig,
-    CLAUDE_CODE_SETTINGS_SCHEMA_NAME,
+    ResolvedPermissionMode, RuntimeConfig, RuntimeFeatureConfig, RuntimeHookConfig,
+    RuntimePluginConfig, ScopedMcpServerConfig, CLAW_SETTINGS_SCHEMA_NAME,
 };
 pub use conversation::{
     ApiClient, ApiRequest, AssistantEvent, ConversationRuntime, RuntimeError, StaticToolExecutor,
@@ -38,12 +43,13 @@ pub use file_ops::{
     GrepSearchInput, GrepSearchOutput, ReadFileOutput, StructuredPatchHunk, TextFilePayload,
     WriteFileOutput,
 };
+pub use hooks::{HookEvent, HookRunResult, HookRunner};
 pub use mcp::{
     mcp_server_signature, mcp_tool_name, mcp_tool_prefix, normalize_name_for_mcp,
     scoped_mcp_config_hash, unwrap_ccr_proxy_url,
 };
 pub use mcp_client::{
-    McpClaudeAiProxyTransport, McpClientAuth, McpClientBootstrap, McpClientTransport,
+    McpManagedProxyTransport, McpClientAuth, McpClientBootstrap, McpClientTransport,
     McpRemoteTransport, McpSdkTransport, McpStdioTransport,
 };
 pub use mcp_stdio::{
@@ -73,7 +73,7 @@ pub fn mcp_server_signature(config: &McpServerConfig) -> Option<String> {
             Some(format!("url:{}", unwrap_ccr_proxy_url(&config.url)))
         }
         McpServerConfig::Ws(config) => Some(format!("url:{}", unwrap_ccr_proxy_url(&config.url))),
-        McpServerConfig::ClaudeAiProxy(config) => {
+        McpServerConfig::ManagedProxy(config) => {
             Some(format!("url:{}", unwrap_ccr_proxy_url(&config.url)))
         }
         McpServerConfig::Sdk(_) => None,
@@ -110,7 +110,7 @@ pub fn scoped_mcp_config_hash(config: &ScopedMcpServerConfig) -> String {
             ws.headers_helper.as_deref().unwrap_or("")
         ),
         McpServerConfig::Sdk(sdk) => format!("sdk|{}", sdk.name),
-        McpServerConfig::ClaudeAiProxy(proxy) => {
+        McpServerConfig::ManagedProxy(proxy) => {
             format!("claudeai-proxy|{}|{}", proxy.url, proxy.id)
         }
     };
@@ -10,7 +10,7 @@ pub enum McpClientTransport {
     Http(McpRemoteTransport),
     WebSocket(McpRemoteTransport),
     Sdk(McpSdkTransport),
-    ClaudeAiProxy(McpClaudeAiProxyTransport),
+    ManagedProxy(McpManagedProxyTransport),
 }
 
 #[derive(Debug, Clone, PartialEq, Eq)]
@@ -34,7 +34,7 @@ pub struct McpSdkTransport {
 }
 
 #[derive(Debug, Clone, PartialEq, Eq)]
-pub struct McpClaudeAiProxyTransport {
+pub struct McpManagedProxyTransport {
     pub url: String,
     pub id: String,
 }
@@ -97,12 +97,10 @@ impl McpClientTransport {
             McpServerConfig::Sdk(config) => Self::Sdk(McpSdkTransport {
                 name: config.name.clone(),
             }),
-            McpServerConfig::ClaudeAiProxy(config) => {
-                Self::ClaudeAiProxy(McpClaudeAiProxyTransport {
-                    url: config.url.clone(),
-                    id: config.id.clone(),
-                })
-            }
+            McpServerConfig::ManagedProxy(config) => Self::ManagedProxy(McpManagedProxyTransport {
+                url: config.url.clone(),
+                id: config.id.clone(),
+            }),
         }
     }
 }
@@ -809,6 +809,7 @@ mod tests {
     use std::io::ErrorKind;
     use std::os::unix::fs::PermissionsExt;
    use std::path::{Path, PathBuf};
+    use std::process::Command;
     use std::time::{SystemTime, UNIX_EPOCH};
 
     use serde_json::json;
@@ -1137,15 +1138,37 @@ mod tests {
 
     fn script_transport(script_path: &Path) -> crate::mcp_client::McpStdioTransport {
         crate::mcp_client::McpStdioTransport {
-            command: "python3".to_string(),
+            command: python_command(),
             args: vec![script_path.to_string_lossy().into_owned()],
             env: BTreeMap::new(),
         }
     }
 
+    fn python_command() -> String {
+        for key in ["MCP_TEST_PYTHON", "PYTHON3", "PYTHON"] {
+            if let Ok(value) = std::env::var(key) {
+                if !value.trim().is_empty() {
+                    return value;
+                }
+            }
+        }
+
+        for candidate in ["python3", "python"] {
+            if Command::new(candidate).arg("--version").output().is_ok() {
+                return candidate.to_string();
+            }
+        }
+
+        panic!("expected a Python interpreter for MCP stdio tests")
+    }
+
     fn cleanup_script(script_path: &Path) {
-        fs::remove_file(script_path).expect("cleanup script");
-        fs::remove_dir_all(script_path.parent().expect("script parent")).expect("cleanup dir");
+        if let Err(error) = fs::remove_file(script_path) {
+            assert_eq!(error.kind(), std::io::ErrorKind::NotFound, "cleanup script");
+        }
+        if let Err(error) = fs::remove_dir_all(script_path.parent().expect("script parent")) {
+            assert_eq!(error.kind(), std::io::ErrorKind::NotFound, "cleanup dir");
+        }
     }
 
     fn manager_server_config(
@@ -1156,7 +1179,7 @@ mod tests {
         ScopedMcpServerConfig {
             scope: ConfigSource::Local,
             config: McpServerConfig::Stdio(McpStdioServerConfig {
-                command: "python3".to_string(),
+                command: python_command(),
                 args: vec![script_path.to_string_lossy().into_owned()],
                 env: BTreeMap::from([
                     ("MCP_SERVER_LABEL".to_string(), label.to_string()),
@@ -324,15 +324,15 @@ fn generate_random_token(bytes: usize) -> io::Result<String> {
 }
 
 fn credentials_home_dir() -> io::Result<PathBuf> {
-    if let Some(path) = std::env::var_os("CLAUDE_CONFIG_HOME") {
+    if let Some(path) = std::env::var_os("CLAW_CONFIG_HOME") {
         return Ok(PathBuf::from(path));
     }
     if let Some(path) = std::env::var_os("HOME") {
-        return Ok(PathBuf::from(path).join(".claude"));
+        return Ok(PathBuf::from(path).join(".claw"));
     }
     if cfg!(target_os = "windows") {
         if let Some(path) = std::env::var_os("USERPROFILE") {
-            return Ok(PathBuf::from(path).join(".claude"));
+            return Ok(PathBuf::from(path).join(".claw"));
         }
     }
     Err(io::Error::new(io::ErrorKind::NotFound, "HOME or USERPROFILE is not set"))
@@ -547,7 +547,7 @@ mod tests {
     fn oauth_credentials_round_trip_and_clear_preserves_other_fields() {
         let _guard = env_lock();
         let config_home = temp_config_home();
-        std::env::set_var("CLAUDE_CONFIG_HOME", &config_home);
+        std::env::set_var("CLAW_CONFIG_HOME", &config_home);
         let path = credentials_path().expect("credentials path");
         std::fs::create_dir_all(path.parent().expect("parent")).expect("create parent");
         std::fs::write(&path, "{\"other\":\"value\"}\n").expect("seed credentials");
@@ -573,7 +573,7 @@ mod tests {
         assert!(cleared.contains("\"other\": \"value\""));
         assert!(!cleared.contains("\"oauth\""));
 
-        std::env::remove_var("CLAUDE_CONFIG_HOME");
+        std::env::remove_var("CLAW_CONFIG_HOME");
         std::fs::remove_dir_all(config_home).expect("cleanup temp dir");
     }
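The lookup order in `credentials_home_dir` (explicit `CLAW_CONFIG_HOME`, then `$HOME/.claw`, then the Windows `USERPROFILE` fallback) can be sketched with an injected environment; the closure below stands in for `std::env::var_os`, and the `None` return replaces the `io::Error` path for brevity:

```rust
use std::path::PathBuf;

// Resolve the config home from an injected lookup, mirroring the
// precedence used above. Returns None when nothing relevant is set.
fn config_home(lookup: impl Fn(&str) -> Option<String>) -> Option<PathBuf> {
    if let Some(path) = lookup("CLAW_CONFIG_HOME") {
        return Some(PathBuf::from(path));
    }
    lookup("HOME")
        .or_else(|| lookup("USERPROFILE"))
        .map(|home| PathBuf::from(home).join(".claw"))
}
```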
@@ -4,6 +4,7 @@ use std::path::{Path, PathBuf};
 use std::process::Command;
 
 use crate::config::{ConfigError, ConfigLoader, RuntimeConfig};
+use lsp::LspContextEnrichment;
 
 #[derive(Debug)]
 pub enum PromptBuildError {
@@ -35,7 +36,7 @@ impl From<ConfigError> for PromptBuildError {
 }
 
 pub const SYSTEM_PROMPT_DYNAMIC_BOUNDARY: &str = "__SYSTEM_PROMPT_DYNAMIC_BOUNDARY__";
-pub const FRONTIER_MODEL_NAME: &str = "Claude Opus 4.6";
+pub const FRONTIER_MODEL_NAME: &str = "Opus 4.6";
 const MAX_INSTRUCTION_FILE_CHARS: usize = 4_000;
 const MAX_TOTAL_INSTRUCTION_CHARS: usize = 12_000;
 
@@ -130,6 +131,15 @@ impl SystemPromptBuilder {
         self
     }
 
+    #[must_use]
+    pub fn with_lsp_context(mut self, enrichment: &LspContextEnrichment) -> Self {
+        if !enrichment.is_empty() {
+            self.append_sections
+                .push(enrichment.render_prompt_section());
+        }
+        self
+    }
+
     #[must_use]
     pub fn build(&self) -> Vec<String> {
         let mut sections = Vec::new();
@@ -201,10 +211,10 @@ fn discover_instruction_files(cwd: &Path) -> std::io::Result<Vec<ContextFile>> {
     let mut files = Vec::new();
     for dir in directories {
         for candidate in [
-            dir.join("CLAUDE.md"),
-            dir.join("CLAUDE.local.md"),
-            dir.join(".claude").join("CLAUDE.md"),
-            dir.join(".claude").join("instructions.md"),
+            dir.join("CLAW.md"),
+            dir.join("CLAW.local.md"),
+            dir.join(".claw").join("CLAW.md"),
+            dir.join(".claw").join("instructions.md"),
         ] {
            push_context_file(&mut files, candidate)?;
        }
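For each ancestor directory, the discovery loop above probes the same fixed candidate list; the helper below is a standalone sketch of that list, not a function from the crate:

```rust
use std::path::{Path, PathBuf};

// The four instruction-file candidates checked per ancestor directory.
fn instruction_candidates(dir: &Path) -> Vec<PathBuf> {
    vec![
        dir.join("CLAW.md"),
        dir.join("CLAW.local.md"),
        dir.join(".claw").join("CLAW.md"),
        dir.join(".claw").join("instructions.md"),
    ]
}
```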
@@ -282,7 +292,7 @@ fn render_project_context(project_context: &ProjectContext) -> String {
     ];
     if !project_context.instruction_files.is_empty() {
         bullets.push(format!(
-            "Claude instruction files discovered: {}.",
+            "Claw instruction files discovered: {}.",
             project_context.instruction_files.len()
         ));
     }
@@ -301,7 +311,7 @@ fn render_project_context(project_context: &ProjectContext) -> String {
 }
 
 fn render_instruction_files(files: &[ContextFile]) -> String {
-    let mut sections = vec!["# Claude instructions".to_string()];
+    let mut sections = vec!["# Claw instructions".to_string()];
     let mut remaining_chars = MAX_TOTAL_INSTRUCTION_CHARS;
     for file in files {
         if remaining_chars == 0 {
@@ -421,7 +431,7 @@ fn render_config_section(config: &RuntimeConfig) -> String {
     let mut lines = vec!["# Runtime config".to_string()];
     if config.loaded_entries().is_empty() {
         lines.extend(prepend_bullets(vec![
-            "No Claude Code settings files loaded.".to_string(),
+            "No Claw Code settings files loaded.".to_string()
         ]));
         return lines.join("\n");
     }
@@ -517,23 +527,23 @@ mod tests {
     fn discovers_instruction_files_from_ancestor_chain() {
         let root = temp_dir();
         let nested = root.join("apps").join("api");
-        fs::create_dir_all(nested.join(".claude")).expect("nested claude dir");
-        fs::write(root.join("CLAUDE.md"), "root instructions").expect("write root instructions");
-        fs::write(root.join("CLAUDE.local.md"), "local instructions")
+        fs::create_dir_all(nested.join(".claw")).expect("nested claw dir");
+        fs::write(root.join("CLAW.md"), "root instructions").expect("write root instructions");
+        fs::write(root.join("CLAW.local.md"), "local instructions")
             .expect("write local instructions");
         fs::create_dir_all(root.join("apps")).expect("apps dir");
-        fs::create_dir_all(root.join("apps").join(".claude")).expect("apps claude dir");
-        fs::write(root.join("apps").join("CLAUDE.md"), "apps instructions")
+        fs::create_dir_all(root.join("apps").join(".claw")).expect("apps claw dir");
+        fs::write(root.join("apps").join("CLAW.md"), "apps instructions")
             .expect("write apps instructions");
         fs::write(
-            root.join("apps").join(".claude").join("instructions.md"),
-            "apps dot claude instructions",
+            root.join("apps").join(".claw").join("instructions.md"),
+            "apps dot claw instructions",
         )
-        .expect("write apps dot claude instructions");
-        fs::write(nested.join(".claude").join("CLAUDE.md"), "nested rules")
+        .expect("write apps dot claw instructions");
+        fs::write(nested.join(".claw").join("CLAW.md"), "nested rules")
             .expect("write nested rules");
         fs::write(
-            nested.join(".claude").join("instructions.md"),
+            nested.join(".claw").join("instructions.md"),
             "nested instructions",
         )
         .expect("write nested instructions");
@@ -551,7 +561,7 @@ mod tests {
             "root instructions",
             "local instructions",
             "apps instructions",
-            "apps dot claude instructions",
+            "apps dot claw instructions",
             "nested rules",
             "nested instructions"
         ]
@@ -564,8 +574,8 @@ mod tests {
         let root = temp_dir();
         let nested = root.join("apps").join("api");
         fs::create_dir_all(&nested).expect("nested dir");
-        fs::write(root.join("CLAUDE.md"), "same rules\n\n").expect("write root");
-        fs::write(nested.join("CLAUDE.md"), "same rules\n").expect("write nested");
+        fs::write(root.join("CLAW.md"), "same rules\n\n").expect("write root");
+        fs::write(nested.join("CLAW.md"), "same rules\n").expect("write nested");
 
         let context = ProjectContext::discover(&nested, "2026-03-31").expect("context should load");
         assert_eq!(context.instruction_files.len(), 1);
@@ -593,13 +603,14 @@ mod tests {
     #[test]
     fn displays_context_paths_compactly() {
         assert_eq!(
-            display_context_path(Path::new("/tmp/project/.claude/CLAUDE.md")),
-            "CLAUDE.md"
+            display_context_path(Path::new("/tmp/project/.claw/CLAW.md")),
+            "CLAW.md"
         );
     }
 
     #[test]
     fn discover_with_git_includes_status_snapshot() {
+        let _guard = env_lock();
         let root = temp_dir();
         fs::create_dir_all(&root).expect("root dir");
         std::process::Command::new("git")
@@ -607,7 +618,7 @@ mod tests {
             .current_dir(&root)
             .status()
             .expect("git init should run");
-        fs::write(root.join("CLAUDE.md"), "rules").expect("write instructions");
+        fs::write(root.join("CLAW.md"), "rules").expect("write instructions");
         fs::write(root.join("tracked.txt"), "hello").expect("write tracked file");
 
         let context =
@@ -615,7 +626,7 @@ mod tests {
 
         let status = context.git_status.expect("git status should be present");
         assert!(status.contains("## No commits yet on") || status.contains("## "));
-        assert!(status.contains("?? CLAUDE.md"));
+        assert!(status.contains("?? CLAW.md"));
         assert!(status.contains("?? tracked.txt"));
         assert!(context.git_diff.is_none());
 
@@ -624,6 +635,7 @@ mod tests {
 
     #[test]
     fn discover_with_git_includes_diff_snapshot_for_tracked_changes() {
+        let _guard = env_lock();
         let root = temp_dir();
         fs::create_dir_all(&root).expect("root dir");
         std::process::Command::new("git")
@@ -665,12 +677,12 @@ mod tests {
     }
 
     #[test]
-    fn load_system_prompt_reads_claude_files_and_config() {
+    fn load_system_prompt_reads_claw_files_and_config() {
         let root = temp_dir();
-        fs::create_dir_all(root.join(".claude")).expect("claude dir");
-        fs::write(root.join("CLAUDE.md"), "Project rules").expect("write instructions");
+        fs::create_dir_all(root.join(".claw")).expect("claw dir");
+        fs::write(root.join("CLAW.md"), "Project rules").expect("write instructions");
         fs::write(
-            root.join(".claude").join("settings.json"),
+            root.join(".claw").join("settings.json"),
             r#"{"permissionMode":"acceptEdits"}"#,
         )
         .expect("write settings");
@@ -678,9 +690,9 @@ mod tests {
         let _guard = env_lock();
         let previous = std::env::current_dir().expect("cwd");
         let original_home = std::env::var("HOME").ok();
-        let original_claude_home = std::env::var("CLAUDE_CONFIG_HOME").ok();
+        let original_claw_home = std::env::var("CLAW_CONFIG_HOME").ok();
         std::env::set_var("HOME", &root);
-        std::env::set_var("CLAUDE_CONFIG_HOME", root.join("missing-home"));
+        std::env::set_var("CLAW_CONFIG_HOME", root.join("missing-home"));
         std::env::set_current_dir(&root).expect("change cwd");
         let prompt = super::load_system_prompt(&root, "2026-03-31", "linux", "6.8")
             .expect("system prompt should load")
@@ -695,10 +707,10 @@ mod tests {
         } else {
             std::env::remove_var("HOME");
         }
-        if let Some(value) = original_claude_home {
-            std::env::set_var("CLAUDE_CONFIG_HOME", value);
+        if let Some(value) = original_claw_home {
+            std::env::set_var("CLAW_CONFIG_HOME", value);
         } else {
-            std::env::remove_var("CLAUDE_CONFIG_HOME");
+            std::env::remove_var("CLAW_CONFIG_HOME");
         }
 
         assert!(prompt.contains("Project rules"));
@@ -707,12 +719,12 @@ mod tests {
     }
 
     #[test]
-    fn renders_claude_code_style_sections_with_project_context() {
+    fn renders_claw_code_style_sections_with_project_context() {
         let root = temp_dir();
-        fs::create_dir_all(root.join(".claude")).expect("claude dir");
-        fs::write(root.join("CLAUDE.md"), "Project rules").expect("write CLAUDE.md");
+        fs::create_dir_all(root.join(".claw")).expect("claw dir");
+        fs::write(root.join("CLAW.md"), "Project rules").expect("write CLAW.md");
         fs::write(
-            root.join(".claude").join("settings.json"),
+            root.join(".claw").join("settings.json"),
             r#"{"permissionMode":"acceptEdits"}"#,
         )
         .expect("write settings");
@@ -731,7 +743,7 @@ mod tests {
 
         assert!(prompt.contains("# System"));
         assert!(prompt.contains("# Project context"));
-        assert!(prompt.contains("# Claude instructions"));
+        assert!(prompt.contains("# Claw instructions"));
         assert!(prompt.contains("Project rules"));
         assert!(prompt.contains("permissionMode"));
         assert!(prompt.contains(SYSTEM_PROMPT_DYNAMIC_BOUNDARY));
@@ -748,12 +760,12 @@ mod tests {
     }
 
     #[test]
-    fn discovers_dot_claude_instructions_markdown() {
+    fn discovers_dot_claw_instructions_markdown() {
         let root = temp_dir();
         let nested = root.join("apps").join("api");
-        fs::create_dir_all(nested.join(".claude")).expect("nested claude dir");
+        fs::create_dir_all(nested.join(".claw")).expect("nested claw dir");
         fs::write(
-            nested.join(".claude").join("instructions.md"),
+            nested.join(".claw").join("instructions.md"),
             "instruction markdown",
         )
         .expect("write instructions.md");
@@ -762,7 +774,7 @@ mod tests {
         assert!(context
             .instruction_files
             .iter()
-            .any(|file| file.path.ends_with(".claude/instructions.md")));
+            .any(|file| file.path.ends_with(".claw/instructions.md")));
         assert!(
             render_instruction_files(&context.instruction_files).contains("instruction markdown")
         );
@@ -773,10 +785,10 @@ mod tests {
     #[test]
     fn renders_instruction_file_metadata() {
         let rendered = render_instruction_files(&[ContextFile {
-            path: PathBuf::from("/tmp/project/CLAUDE.md"),
+            path: PathBuf::from("/tmp/project/CLAW.md"),
             content: "Project rules".to_string(),
         }]);
-        assert!(rendered.contains("# Claude instructions"));
+        assert!(rendered.contains("# Claw instructions"));
         assert!(rendered.contains("scope: /tmp/project"));
         assert!(rendered.contains("Project rules"));
     }
@@ -72,9 +72,9 @@ impl RemoteSessionContext {
     #[must_use]
     pub fn from_env_map(env_map: &BTreeMap<String, String>) -> Self {
         Self {
-            enabled: env_truthy(env_map.get("CLAUDE_CODE_REMOTE")),
+            enabled: env_truthy(env_map.get("CLAW_CODE_REMOTE")),
             session_id: env_map
-                .get("CLAUDE_CODE_REMOTE_SESSION_ID")
+                .get("CLAW_CODE_REMOTE_SESSION_ID")
                 .filter(|value| !value.is_empty())
                 .cloned(),
             base_url: env_map
@@ -272,9 +272,9 @@ mod tests {
     #[test]
     fn remote_context_reads_env_state() {
         let env = BTreeMap::from([
-            ("CLAUDE_CODE_REMOTE".to_string(), "true".to_string()),
+            ("CLAW_CODE_REMOTE".to_string(), "true".to_string()),
             (
-                "CLAUDE_CODE_REMOTE_SESSION_ID".to_string(),
+                "CLAW_CODE_REMOTE_SESSION_ID".to_string(),
                 "session-123".to_string(),
             ),
             (
@@ -291,7 +291,7 @@ mod tests {
     #[test]
     fn bootstrap_fails_open_when_token_or_session_is_missing() {
         let env = BTreeMap::from([
-            ("CLAUDE_CODE_REMOTE".to_string(), "1".to_string()),
+            ("CLAW_CODE_REMOTE".to_string(), "1".to_string()),
             ("CCR_UPSTREAM_PROXY_ENABLED".to_string(), "true".to_string()),
         ]);
         let bootstrap = UpstreamProxyBootstrap::from_env_map(&env);
@@ -307,10 +307,10 @@ mod tests {
         fs::write(&token_path, "secret-token\n").expect("write token");
 
         let env = BTreeMap::from([
-            ("CLAUDE_CODE_REMOTE".to_string(), "1".to_string()),
+            ("CLAW_CODE_REMOTE".to_string(), "1".to_string()),
             ("CCR_UPSTREAM_PROXY_ENABLED".to_string(), "true".to_string()),
             (
-                "CLAUDE_CODE_REMOTE_SESSION_ID".to_string(),
+                "CLAW_CODE_REMOTE_SESSION_ID".to_string(),
                 "session-123".to_string(),
             ),
             (
@@ -3,10 +3,13 @@ use std::fmt::{Display, Formatter};
 use std::fs;
 use std::path::Path;
 
+use serde::{Deserialize, Serialize};
+
 use crate::json::{JsonError, JsonValue};
 use crate::usage::TokenUsage;
 
-#[derive(Debug, Clone, Copy, PartialEq, Eq)]
+#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(rename_all = "snake_case")]
 pub enum MessageRole {
     System,
     User,
@@ -14,7 +17,8 @@ pub enum MessageRole {
     Tool,
 }
 
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
+#[serde(tag = "type", rename_all = "snake_case")]
 pub enum ContentBlock {
     Text {
         text: String,
@@ -32,14 +36,14 @@ pub enum ContentBlock {
     },
 }
 
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
 pub struct ConversationMessage {
     pub role: MessageRole,
     pub blocks: Vec<ContentBlock>,
     pub usage: Option<TokenUsage>,
 }
 
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
 pub struct Session {
     pub version: u32,
     pub messages: Vec<ConversationMessage>,
@@ -1,4 +1,5 @@
 use crate::session::Session;
+use serde::{Deserialize, Serialize};
 
 const DEFAULT_INPUT_COST_PER_MILLION: f64 = 15.0;
 const DEFAULT_OUTPUT_COST_PER_MILLION: f64 = 75.0;
@@ -25,7 +26,7 @@ impl ModelPricing {
     }
 }
 
-#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
+#[derive(Debug, Clone, Copy, Serialize, Deserialize, Default, PartialEq, Eq)]
 pub struct TokenUsage {
     pub input_tokens: u32,
     pub output_tokens: u32,
@@ -249,9 +250,9 @@ mod tests {
         let cost = usage.estimate_cost_usd();
         assert_eq!(format_usd(cost.input_cost_usd), "$15.0000");
         assert_eq!(format_usd(cost.output_cost_usd), "$37.5000");
-        let lines = usage.summary_lines_for_model("usage", Some("claude-sonnet-4-20250514"));
+        let lines = usage.summary_lines_for_model("usage", Some("claude-sonnet-4-6"));
         assert!(lines[0].contains("estimated_cost=$54.6750"));
-        assert!(lines[0].contains("model=claude-sonnet-4-20250514"));
+        assert!(lines[0].contains("model=claude-sonnet-4-6"));
         assert!(lines[1].contains("cache_read=$0.3000"));
     }
 
@@ -264,7 +265,7 @@ mod tests {
             cache_read_input_tokens: 0,
         };
 
-        let haiku = pricing_for_model("claude-haiku-4-5-20251001").expect("haiku pricing");
+        let haiku = pricing_for_model("claude-haiku-4-5-20251213").expect("haiku pricing");
         let opus = pricing_for_model("claude-opus-4-6").expect("opus pricing");
         let haiku_cost = usage.estimate_cost_usd_with_pricing(haiku);
         let opus_cost = usage.estimate_cost_usd_with_pricing(opus);
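The default rates above ($15 per million input tokens, $75 per million output tokens) directly produce the figures asserted in the test, ignoring cache costs; a standalone sketch of that arithmetic:

```rust
// Estimated cost from the default per-million rates quoted above
// (cache reads/writes are priced separately and ignored here).
fn estimate_cost_usd(input_tokens: u64, output_tokens: u64) -> f64 {
    const INPUT_PER_MILLION: f64 = 15.0;
    const OUTPUT_PER_MILLION: f64 = 75.0;
    input_tokens as f64 / 1_000_000.0 * INPUT_PER_MILLION
        + output_tokens as f64 / 1_000_000.0 * OUTPUT_PER_MILLION
}
```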
20
rust/crates/server/Cargo.toml
Normal file
@@ -0,0 +1,20 @@
[package]
name = "server"
version.workspace = true
edition.workspace = true
license.workspace = true
publish.workspace = true

[dependencies]
async-stream = "0.3"
axum = "0.8"
runtime = { path = "../runtime" }
serde = { version = "1", features = ["derive"] }
serde_json.workspace = true
tokio = { version = "1", features = ["macros", "rt-multi-thread", "sync", "net", "time"] }

[dev-dependencies]
reqwest = { version = "0.12", default-features = false, features = ["json", "rustls-tls", "stream"] }

[lints]
workspace = true
442
rust/crates/server/src/lib.rs
Normal file
@@ -0,0 +1,442 @@
use std::collections::HashMap;
use std::convert::Infallible;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

use async_stream::stream;
use axum::extract::{Path, State};
use axum::http::StatusCode;
use axum::response::sse::{Event, KeepAlive, Sse};
use axum::response::IntoResponse;
use axum::routing::{get, post};
use axum::{Json, Router};
use runtime::{ConversationMessage, Session as RuntimeSession};
use serde::{Deserialize, Serialize};
use tokio::sync::{broadcast, RwLock};

pub type SessionId = String;
pub type SessionStore = Arc<RwLock<HashMap<SessionId, Session>>>;

const BROADCAST_CAPACITY: usize = 64;

#[derive(Clone)]
pub struct AppState {
    sessions: SessionStore,
    next_session_id: Arc<AtomicU64>,
}

impl AppState {
    #[must_use]
    pub fn new() -> Self {
        Self {
            sessions: Arc::new(RwLock::new(HashMap::new())),
            next_session_id: Arc::new(AtomicU64::new(1)),
        }
    }

    fn allocate_session_id(&self) -> SessionId {
        let id = self.next_session_id.fetch_add(1, Ordering::Relaxed);
        format!("session-{id}")
    }
}

impl Default for AppState {
    fn default() -> Self {
        Self::new()
    }
}
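`allocate_session_id` above relies on `AtomicU64::fetch_add`, which returns the value before the increment, so concurrent callers each observe a distinct id without any locking; a standalone sketch:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// fetch_add returns the counter's previous value, so every caller
// gets a unique id even when called from multiple threads.
fn allocate_session_id(counter: &AtomicU64) -> String {
    let id = counter.fetch_add(1, Ordering::Relaxed);
    format!("session-{id}")
}
```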
#[derive(Clone)]
pub struct Session {
    pub id: SessionId,
    pub created_at: u64,
    pub conversation: RuntimeSession,
    events: broadcast::Sender<SessionEvent>,
}

impl Session {
    fn new(id: SessionId) -> Self {
        let (events, _) = broadcast::channel(BROADCAST_CAPACITY);
        Self {
            id,
            created_at: unix_timestamp_millis(),
            conversation: RuntimeSession::new(),
            events,
        }
    }

    fn subscribe(&self) -> broadcast::Receiver<SessionEvent> {
        self.events.subscribe()
    }
}

#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(tag = "type", rename_all = "snake_case")]
enum SessionEvent {
    Snapshot {
        session_id: SessionId,
        session: RuntimeSession,
    },
    Message {
        session_id: SessionId,
        message: ConversationMessage,
    },
}

impl SessionEvent {
    fn event_name(&self) -> &'static str {
        match self {
            Self::Snapshot { .. } => "snapshot",
            Self::Message { .. } => "message",
        }
    }

    fn to_sse_event(&self) -> Result<Event, serde_json::Error> {
        Ok(Event::default()
            .event(self.event_name())
            .data(serde_json::to_string(self)?))
    }
}
|
||||
#[derive(Debug, Serialize)]
|
||||
struct ErrorResponse {
|
||||
error: String,
|
||||
}
|
||||
|
||||
type ApiError = (StatusCode, Json<ErrorResponse>);
|
||||
type ApiResult<T> = Result<T, ApiError>;
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
|
||||
pub struct CreateSessionResponse {
|
||||
pub session_id: SessionId,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
|
||||
pub struct SessionSummary {
|
||||
pub id: SessionId,
|
||||
pub created_at: u64,
|
||||
pub message_count: usize,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
|
||||
pub struct ListSessionsResponse {
|
||||
pub sessions: Vec<SessionSummary>,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
|
||||
pub struct SessionDetailsResponse {
|
||||
pub id: SessionId,
|
||||
pub created_at: u64,
|
||||
pub session: RuntimeSession,
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
|
||||
pub struct SendMessageRequest {
|
||||
pub message: String,
|
||||
}
|
||||
|
||||
#[must_use]
|
||||
pub fn app(state: AppState) -> Router {
|
||||
Router::new()
|
||||
.route("/sessions", post(create_session).get(list_sessions))
|
||||
.route("/sessions/{id}", get(get_session))
|
||||
.route("/sessions/{id}/events", get(stream_session_events))
|
||||
.route("/sessions/{id}/message", post(send_message))
|
||||
.with_state(state)
|
||||
}
|
||||
async fn create_session(
    State(state): State<AppState>,
) -> (StatusCode, Json<CreateSessionResponse>) {
    let session_id = state.allocate_session_id();
    let session = Session::new(session_id.clone());

    state
        .sessions
        .write()
        .await
        .insert(session_id.clone(), session);

    (
        StatusCode::CREATED,
        Json(CreateSessionResponse { session_id }),
    )
}

async fn list_sessions(State(state): State<AppState>) -> Json<ListSessionsResponse> {
    let sessions = state.sessions.read().await;
    let mut summaries = sessions
        .values()
        .map(|session| SessionSummary {
            id: session.id.clone(),
            created_at: session.created_at,
            message_count: session.conversation.messages.len(),
        })
        .collect::<Vec<_>>();
    summaries.sort_by(|left, right| left.id.cmp(&right.id));

    Json(ListSessionsResponse {
        sessions: summaries,
    })
}

async fn get_session(
    State(state): State<AppState>,
    Path(id): Path<SessionId>,
) -> ApiResult<Json<SessionDetailsResponse>> {
    let sessions = state.sessions.read().await;
    let session = sessions
        .get(&id)
        .ok_or_else(|| not_found(format!("session `{id}` not found")))?;

    Ok(Json(SessionDetailsResponse {
        id: session.id.clone(),
        created_at: session.created_at,
        session: session.conversation.clone(),
    }))
}

async fn send_message(
    State(state): State<AppState>,
    Path(id): Path<SessionId>,
    Json(payload): Json<SendMessageRequest>,
) -> ApiResult<StatusCode> {
    let message = ConversationMessage::user_text(payload.message);
    let broadcaster = {
        let mut sessions = state.sessions.write().await;
        let session = sessions
            .get_mut(&id)
            .ok_or_else(|| not_found(format!("session `{id}` not found")))?;
        session.conversation.messages.push(message.clone());
        session.events.clone()
    };

    let _ = broadcaster.send(SessionEvent::Message {
        session_id: id,
        message,
    });

    Ok(StatusCode::NO_CONTENT)
}

async fn stream_session_events(
    State(state): State<AppState>,
    Path(id): Path<SessionId>,
) -> ApiResult<impl IntoResponse> {
    let (snapshot, mut receiver) = {
        let sessions = state.sessions.read().await;
        let session = sessions
            .get(&id)
            .ok_or_else(|| not_found(format!("session `{id}` not found")))?;
        (
            SessionEvent::Snapshot {
                session_id: session.id.clone(),
                session: session.conversation.clone(),
            },
            session.subscribe(),
        )
    };

    let stream = stream! {
        if let Ok(event) = snapshot.to_sse_event() {
            yield Ok::<Event, Infallible>(event);
        }

        loop {
            match receiver.recv().await {
                Ok(event) => {
                    if let Ok(sse_event) = event.to_sse_event() {
                        yield Ok::<Event, Infallible>(sse_event);
                    }
                }
                Err(broadcast::error::RecvError::Lagged(_)) => continue,
                Err(broadcast::error::RecvError::Closed) => break,
            }
        }
    };

    Ok(Sse::new(stream).keep_alive(KeepAlive::new().interval(Duration::from_secs(15))))
}

fn unix_timestamp_millis() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system time should be after epoch")
        .as_millis() as u64
}

fn not_found(message: String) -> ApiError {
    (
        StatusCode::NOT_FOUND,
        Json(ErrorResponse { error: message }),
    )
}
#[cfg(test)]
mod tests {
    use super::{
        app, AppState, CreateSessionResponse, ListSessionsResponse, SessionDetailsResponse,
    };
    use reqwest::Client;
    use std::net::SocketAddr;
    use std::time::Duration;
    use tokio::net::TcpListener;
    use tokio::task::JoinHandle;
    use tokio::time::timeout;

    struct TestServer {
        address: SocketAddr,
        handle: JoinHandle<()>,
    }

    impl TestServer {
        async fn spawn() -> Self {
            let listener = TcpListener::bind("127.0.0.1:0")
                .await
                .expect("test listener should bind");
            let address = listener
                .local_addr()
                .expect("listener should report local address");
            let handle = tokio::spawn(async move {
                axum::serve(listener, app(AppState::default()))
                    .await
                    .expect("server should run");
            });

            Self { address, handle }
        }

        fn url(&self, path: &str) -> String {
            format!("http://{}{}", self.address, path)
        }
    }

    impl Drop for TestServer {
        fn drop(&mut self) {
            self.handle.abort();
        }
    }

    async fn create_session(client: &Client, server: &TestServer) -> CreateSessionResponse {
        client
            .post(server.url("/sessions"))
            .send()
            .await
            .expect("create request should succeed")
            .error_for_status()
            .expect("create request should return success")
            .json::<CreateSessionResponse>()
            .await
            .expect("create response should parse")
    }

    async fn next_sse_frame(response: &mut reqwest::Response, buffer: &mut String) -> String {
        loop {
            if let Some(index) = buffer.find("\n\n") {
                let frame = buffer[..index].to_string();
                let remainder = buffer[index + 2..].to_string();
                *buffer = remainder;
                return frame;
            }

            let next_chunk = timeout(Duration::from_secs(5), response.chunk())
                .await
                .expect("SSE stream should yield within timeout")
                .expect("SSE stream should remain readable")
                .expect("SSE stream should stay open");
            buffer.push_str(&String::from_utf8_lossy(&next_chunk));
        }
    }

    #[tokio::test]
    async fn creates_and_lists_sessions() {
        let server = TestServer::spawn().await;
        let client = Client::new();

        // given
        let created = create_session(&client, &server).await;

        // when
        let sessions = client
            .get(server.url("/sessions"))
            .send()
            .await
            .expect("list request should succeed")
            .error_for_status()
            .expect("list request should return success")
            .json::<ListSessionsResponse>()
            .await
            .expect("list response should parse");
        let details = client
            .get(server.url(&format!("/sessions/{}", created.session_id)))
            .send()
            .await
            .expect("details request should succeed")
            .error_for_status()
            .expect("details request should return success")
            .json::<SessionDetailsResponse>()
            .await
            .expect("details response should parse");

        // then
        assert_eq!(created.session_id, "session-1");
        assert_eq!(sessions.sessions.len(), 1);
        assert_eq!(sessions.sessions[0].id, created.session_id);
        assert_eq!(sessions.sessions[0].message_count, 0);
        assert_eq!(details.id, "session-1");
        assert!(details.session.messages.is_empty());
    }

    #[tokio::test]
    async fn streams_message_events_and_persists_message_flow() {
        let server = TestServer::spawn().await;
        let client = Client::new();

        // given
        let created = create_session(&client, &server).await;
        let mut response = client
            .get(server.url(&format!("/sessions/{}/events", created.session_id)))
            .send()
            .await
            .expect("events request should succeed")
            .error_for_status()
            .expect("events request should return success");
        let mut buffer = String::new();
        let snapshot_frame = next_sse_frame(&mut response, &mut buffer).await;

        // when
        let send_status = client
            .post(server.url(&format!("/sessions/{}/message", created.session_id)))
            .json(&super::SendMessageRequest {
                message: "hello from test".to_string(),
            })
            .send()
            .await
            .expect("message request should succeed")
            .status();
        let message_frame = next_sse_frame(&mut response, &mut buffer).await;
        let details = client
            .get(server.url(&format!("/sessions/{}", created.session_id)))
            .send()
            .await
            .expect("details request should succeed")
            .error_for_status()
            .expect("details request should return success")
            .json::<SessionDetailsResponse>()
            .await
            .expect("details response should parse");

        // then
        assert_eq!(send_status, reqwest::StatusCode::NO_CONTENT);
        assert!(snapshot_frame.contains("event: snapshot"));
        assert!(snapshot_frame.contains("\"session_id\":\"session-1\""));
        assert!(message_frame.contains("event: message"));
        assert!(message_frame.contains("hello from test"));
        assert_eq!(details.session.messages.len(), 1);
        assert_eq!(
            details.session.messages[0],
            runtime::ConversationMessage::user_text("hello from test")
        );
    }
}
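The SSE endpoint above emits `snapshot`/`message` frames separated by a blank line, and the `next_sse_frame` test helper reassembles them by splitting a growing buffer on `"\n\n"` and keeping the unread remainder. A standalone sketch of that framing rule (a hypothetical helper for illustration, not an API of the crate):

```rust
// Pop the first complete SSE frame off a buffer, if one has arrived.
// Frames are delimited by a blank line ("\n\n"); a partial frame at the
// end of the buffer is kept for the next read.
fn take_frame(buffer: &mut String) -> Option<String> {
    let index = buffer.find("\n\n")?;
    let frame = buffer[..index].to_string();
    *buffer = buffer[index + 2..].to_string();
    Some(frame)
}

fn main() {
    let mut buffer = String::from("event: snapshot\ndata: {}\n\nevent: mess");
    // The first complete frame is returned; the partial tail stays buffered.
    assert_eq!(
        take_frame(&mut buffer).as_deref(),
        Some("event: snapshot\ndata: {}")
    );
    assert_eq!(take_frame(&mut buffer), None);
    assert_eq!(buffer, "event: mess");
}
```

The same chunk-then-split loop is what the integration test does against the live stream, with a timeout around each `response.chunk()` read.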
@@ -6,10 +6,13 @@ license.workspace = true
 publish.workspace = true
 
 [dependencies]
+api = { path = "../api" }
+plugins = { path = "../plugins" }
+runtime = { path = "../runtime" }
 reqwest = { version = "0.12", default-features = false, features = ["blocking", "rustls-tls"] }
 serde = { version = "1", features = ["derive"] }
-serde_json = "1"
+serde_json.workspace = true
 tokio = { version = "1", features = ["rt-multi-thread"] }
 
 [lints]
 workspace = true
File diff suppressed because it is too large
51  rust/docs/releases/0.1.0.md  Normal file
@@ -0,0 +1,51 @@
# Claw Code 0.1.0 Release Notes (Draft)

## Summary

Claw Code `0.1.0` is the first public release-readiness milestone of the current Rust implementation. Claw Code is inspired by Claude Code and built as a clean-room Rust implementation; it is not a direct port or copy. This release focuses on a usable local CLI experience: interactive sessions, non-interactive prompts, workspace tools, configuration loading, sessions, plugins, and local agent/skill discovery.

## Highlights

- First public `0.1.0` release candidate of Claw Code
- A safe Rust implementation serving as the current primary product surface
- The `claw` CLI for interactive and one-shot coding-agent workflows
- Built-in workspace tools for shell, file operations, search, web fetch/search, todo tracking, and notebook updates
- Slash-command surface for status, compaction, configuration inspection, sessions, diff/export, and version info
- Discovery/management surface for local plugins, agents, and skills
- OAuth login/logout and model/provider selection

## Install and Run

This release is currently intended to be built from source:

```bash
cargo install --path crates/claw-cli --locked
# or
cargo build --release -p claw-cli
```

Run:

```bash
claw
claw prompt "Summarize this repository"
```

## Known Limitations

- Source-only distribution; no packaged release artifacts are published yet
- CI currently covers release builds, checks, and tests on Ubuntu and macOS
- Windows release readiness has not been established
- Some integration coverage is optional because it requires live provider credentials and network access
- Public interfaces may continue to evolve during the `0.x` series

## Recommended Release Positioning

Position `0.1.0` as the first public release of the current Rust implementation of Claw Code, aimed at early adopters comfortable with building from source. The feature surface is broad enough for real use, while packaging and release automation can continue to improve in follow-up releases.

## Verification Used for This Draft

- Workspace version verified via `Cargo.toml`
- `claw` binary/package paths verified via `cargo metadata`
- CLI command surface verified via `cargo run --quiet --bin claw -- --help`
- CI coverage verified via `.github/workflows/ci.yml`
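The sessions these notes mention are identified server-side by a relaxed atomic counter (`allocate_session_id` in the server crate). A minimal standalone sketch of that scheme — the free function below is named for illustration only — shows why `Ordering::Relaxed` suffices: ids only need to be unique, not ordered relative to other memory operations.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Allocate the next "session-N" id from a shared counter.
// fetch_add returns the previous value, so a counter seeded with 1
// hands out session-1, session-2, ... in order.
fn allocate(counter: &AtomicU64) -> String {
    let id = counter.fetch_add(1, Ordering::Relaxed);
    format!("session-{id}")
}

fn main() {
    let counter = AtomicU64::new(1);
    assert_eq!(allocate(&counter), "session-1");
    assert_eq!(allocate(&counter), "session-2");
}
```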
@@ -1,4 +1,4 @@
-"""Python porting workspace for the Claude Code rewrite effort."""
+"""Python porting workspace for the Claw Code rewrite effort."""
 
 from .commands import PORTED_COMMANDS, build_command_backlog
 from .parity_audit import ParityAuditResult, run_parity_audit

@@ -21,7 +21,7 @@ def build_port_context(base: Path | None = None) -> PortContext:
     source_root = root / 'src'
     tests_root = root / 'tests'
     assets_root = root / 'assets'
-    archive_root = root / 'archive' / 'claude_code_ts_snapshot' / 'src'
+    archive_root = root / 'archive' / 'claw_code_ts_snapshot' / 'src'
     return PortContext(
         source_root=source_root,
         tests_root=tests_root,

@@ -19,7 +19,7 @@ from .tools import execute_tool, get_tool, get_tools, render_tool_index
 
 
 def build_parser() -> argparse.ArgumentParser:
-    parser = argparse.ArgumentParser(description='Python porting workspace for the Claude Code rewrite effort')
+    parser = argparse.ArgumentParser(description='Python porting workspace for the Claw Code rewrite effort')
     subparsers = parser.add_subparsers(dest='command', required=True)
     subparsers.add_parser('summary', help='render a Markdown summary of the Python porting workspace')
     subparsers.add_parser('manifest', help='print the current Python workspace manifest')

@@ -4,7 +4,7 @@ import json
 from dataclasses import dataclass
 from pathlib import Path
 
-ARCHIVE_ROOT = Path(__file__).resolve().parent.parent / 'archive' / 'claude_code_ts_snapshot' / 'src'
+ARCHIVE_ROOT = Path(__file__).resolve().parent.parent / 'archive' / 'claw_code_ts_snapshot' / 'src'
 CURRENT_ROOT = Path(__file__).resolve().parent
 REFERENCE_SURFACE_PATH = CURRENT_ROOT / 'reference_data' / 'archive_surface_snapshot.json'
 COMMAND_SNAPSHOT_PATH = CURRENT_ROOT / 'reference_data' / 'commands_snapshot.json'

@@ -1,5 +1,5 @@
 {
-  "archive_root": "archive/claude_code_ts_snapshot/src",
+  "archive_root": "archive/claw_code_ts_snapshot/src",
   "root_files": [
     "QueryEngine.ts",
     "Task.ts",

@@ -330,9 +330,9 @@
       "responsibility": "Command module mirrored from archived TypeScript path commands/files/index.ts"
     },
     {
-      "name": "good-claude",
-      "source_hint": "commands/good-claude/index.js",
-      "responsibility": "Command module mirrored from archived TypeScript path commands/good-claude/index.js"
+      "name": "good-claw",
+      "source_hint": "commands/good-claw/index.js",
+      "responsibility": "Command module mirrored from archived TypeScript path commands/good-claw/index.js"
     },
     {
       "name": "heapdump",

@@ -15,9 +15,9 @@
     "components/BridgeDialog.tsx",
     "components/BypassPermissionsModeDialog.tsx",
     "components/ChannelDowngradeDialog.tsx",
-    "components/ClaudeCodeHint/PluginHintMenu.tsx",
-    "components/ClaudeInChromeOnboarding.tsx",
-    "components/ClaudeMdExternalIncludesDialog.tsx",
+    "components/ClawCodeHint/PluginHintMenu.tsx",
+    "components/ClawInChromeOnboarding.tsx",
+    "components/ClawMdExternalIncludesDialog.tsx",
     "components/ClickableImageRef.tsx",
     "components/CompactSummary.tsx",
     "components/ConfigurableShortcutHint.tsx",

@@ -22,7 +22,7 @@
     "services/analytics/sinkKillswitch.ts",
     "services/api/adminRequests.ts",
     "services/api/bootstrap.ts",
-    "services/api/claude.ts",
+    "services/api/claw.ts",
     "services/api/client.ts",
     "services/api/dumpPrompts.ts",
     "services/api/emptyUsage.ts",

@@ -4,9 +4,9 @@
   "module_count": 20,
   "sample_files": [
     "skills/bundled/batch.ts",
-    "skills/bundled/claudeApi.ts",
-    "skills/bundled/claudeApiContent.ts",
-    "skills/bundled/claudeInChrome.ts",
+    "skills/bundled/clawApi.ts",
+    "skills/bundled/clawApiContent.ts",
+    "skills/bundled/clawInChrome.ts",
     "skills/bundled/debug.ts",
     "skills/bundled/index.ts",
     "skills/bundled/keybindings.ts",

@@ -4,7 +4,7 @@
   "module_count": 11,
   "sample_files": [
     "types/command.ts",
-    "types/generated/events_mono/claude_code/v1/claude_code_internal_event.ts",
+    "types/generated/events_mono/claw_code/v1/claw_code_internal_event.ts",
     "types/generated/events_mono/common/v1/auth.ts",
     "types/generated/events_mono/growthbook/v1/growthbook_experiment_event.ts",
     "types/generated/google/protobuf/timestamp.ts",

@@ -35,9 +35,9 @@
       "responsibility": "Tool module mirrored from archived TypeScript path tools/AgentTool/agentToolUtils.ts"
     },
     {
-      "name": "claudeCodeGuideAgent",
-      "source_hint": "tools/AgentTool/built-in/claudeCodeGuideAgent.ts",
-      "responsibility": "Tool module mirrored from archived TypeScript path tools/AgentTool/built-in/claudeCodeGuideAgent.ts"
+      "name": "clawCodeGuideAgent",
+      "source_hint": "tools/AgentTool/built-in/clawCodeGuideAgent.ts",
+      "responsibility": "Tool module mirrored from archived TypeScript path tools/AgentTool/built-in/clawCodeGuideAgent.ts"
     },
     {
       "name": "exploreAgent",
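The hunks above all apply the same case-preserving rename to identifiers and paths (claude → claw in lower, Pascal, and upper casing). A sketch of that mapping — this helper is illustrative only, not a tool in the repository:

```rust
// Rewrite all three casing variants of the old name. The replacements are
// case-sensitive and non-overlapping, so the order does not change the result.
fn rename(input: &str) -> String {
    input
        .replace("CLAUDE", "CLAW")
        .replace("Claude", "Claw")
        .replace("claude", "claw")
}

fn main() {
    assert_eq!(rename("claudeCodeGuideAgent.ts"), "clawCodeGuideAgent.ts");
    assert_eq!(rename("CLAUDE_API_KEY"), "CLAW_API_KEY");
    assert_eq!(
        rename("components/ClaudeInChromeOnboarding.tsx"),
        "components/ClawInChromeOnboarding.tsx"
    );
}
```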