README.md
Works **alongside** `memory-core` (OpenClaw's built-in memory) — doesn't replace it.

### Regex + LLM Hybrid (v0.2.0)

By default, Cortex uses fast regex patterns (zero cost, instant). Optionally, you can plug in **any OpenAI-compatible LLM** for deeper analysis:

- **Ollama** (local, free): `mistral:7b`, `qwen2.5:7b`, `llama3.1:8b`
- **OpenAI**: `gpt-4o-mini`, `gpt-4o`
- **OpenRouter / vLLM / any OpenAI-compatible API**

The LLM runs **on top of regex** — it enhances, never replaces. If the LLM is down, Cortex falls back silently to regex-only.
## 🎬 Demo

Try the interactive demo — it simulates a real bilingual dev conversation and shows every Cortex feature in action:

```bash
git clone https://github.com/alberthild/openclaw-cortex.git
cd openclaw-cortex && npm install
npx tsx demo/demo.ts
```

### What the demo shows

A 13-message conversation between a developer (Albert) and an AI assistant (Claudia) covering 3 topics in English and German. Cortex processes every message in real-time:
```
👤 Albert: Let's get back to the auth migration. We need to switch from JWT to OAuth2.
🤖 Claudia: I'll start with the token validation layer. The plan is to keep backward compatibility for 2 weeks.
👤 Albert: Agreed. We decided to use Auth0 as the provider.
👤 Albert: Also, jetzt zu dem Performance-Bug. Die API braucht 3 Sekunden für simple Queries.
🤖 Claudia: Ich hab den Profiler laufen lassen. Das Problem ist der N+1 Query im User-Resolver.
👤 Albert: Mist, das ist nervig. Wir brauchen das bis Freitag gefixt.
🤖 Claudia: Wir machen Batched DataLoader.
👤 Albert: Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen.
🤖 Claudia: Auth migration is done ✅ All tests green, backward compat verified.
👤 Albert: Nice! Perfekt gelaufen. 🚀
👤 Albert: Now about the Kubernetes cluster — we need to plan the migration from Docker Compose.
🤖 Claudia: I'll draft an architecture doc. Waiting for the cost estimate from Hetzner first.
👤 Albert: Guter Fortschritt heute. Lass uns morgen mit dem K8s-Plan weitermachen.
```
<details>
<summary><b>🧵 Thread Tracking</b> — 3 threads detected, 1 auto-closed</summary>

```
Found 3 threads (2 open, 1 closed)

○ 🟠 the auth migration
    Status: closed ← detected "done ✅" as closure signal
    Priority: high
    Mood: neutral

● 🟡 dem Performance-Bug
    Status: open
    Priority: medium
    Mood: neutral

● 🟡 the Kubernetes cluster
    Status: open
    Priority: medium
    Mood: neutral
    Waiting for: cost estimate from Hetzner
```

</details>
<details>
<summary><b>🎯 Decision Extraction</b> — 4 decisions found across 2 languages</summary>

```
🎯 The plan is to keep backward compatibility for 2 weeks
    Impact: medium | Who: claudia

🎯 We decided to use Auth0 as the provider
    Impact: medium | Who: albert

🎯 Wir machen Batched DataLoader
    Impact: medium | Who: claudia

🎯 Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen.
    Impact: high | Who: albert
```

Trigger patterns: `"the plan is"`, `"we decided"`, `"wir machen"`, `"beschlossen"`

</details>
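The trigger-based extraction can be sketched as a small bilingual matcher. This is a minimal illustration only — the pattern list below is an assumption, and the plugin's real logic lives in `src/decision-tracker.ts`:

```typescript
// Hypothetical sketch of bilingual decision-trigger matching.
// The actual pattern list in src/decision-tracker.ts may differ.
const DECISION_TRIGGERS: RegExp[] = [
  /\bthe plan is\b/i,  // EN
  /\bwe decided\b/i,   // EN
  /\bwir machen\b/i,   // DE: "we'll do"
  /\bbeschlossen\b/i,  // DE: "decided"
];

function isDecision(message: string): boolean {
  return DECISION_TRIGGERS.some((re) => re.test(message));
}

console.log(isDecision("We decided to use Auth0 as the provider.")); // true
console.log(isDecision("Die API braucht 3 Sekunden."));              // false
```

Matching is case-insensitive and word-bounded, so "Beschlossen." at the start of a sentence still fires while substrings inside longer words do not.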
<details>
<summary><b>🔥 Mood Detection</b> — session mood tracked from patterns</summary>

```
Session mood: 🔥 excited
(Detected from "Nice!", "Perfekt gelaufen", "🚀")
```

Supported moods: `frustrated` 😤 · `excited` 🔥 · `tense` ⚡ · `productive` 🔧 · `exploratory` 🔬 · `neutral` 😐

</details>
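The demo later notes that the "last mood match wins". That behavior can be sketched like this — the pattern table is purely illustrative, not the plugin's actual list:

```typescript
// Last-match-wins mood detection sketch (hypothetical pattern table).
type Mood = "frustrated" | "excited" | "neutral";

const MOOD_PATTERNS: Array<[Mood, RegExp]> = [
  ["frustrated", /\b(mist|nervig|ugh)\b/i],
  ["excited", /(nice!|perfekt|🚀|🔥)/i],
];

function sessionMood(messages: string[]): Mood {
  let mood: Mood = "neutral";
  for (const m of messages) {
    for (const [name, re] of MOOD_PATTERNS) {
      if (re.test(m)) mood = name; // later matches overwrite earlier ones
    }
  }
  return mood;
}

console.log(sessionMood([
  "Mist, das ist nervig.",      // would set "frustrated"...
  "Nice! Perfekt gelaufen. 🚀", // ...but the last match wins
])); // → "excited"
```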
<details>
<summary><b>📸 Pre-Compaction Snapshot</b> — saves state before memory loss</summary>

```
Success: yes
Messages snapshotted: 13
Warnings: none

Hot Snapshot (memory/reboot/hot-snapshot.md):
# Hot Snapshot — 2026-02-17
## Last conversation before compaction

**Recent messages:**
- [user] Let's get back to the auth migration...
- [assistant] I'll start with the token validation layer...
- [user] Agreed. We decided to use Auth0 as the provider.
- [user] Also, jetzt zu dem Performance-Bug...
- ...
```

</details>
<details>
<summary><b>📋 Boot Context (BOOTSTRAP.md)</b> — ~786 tokens, ready for next session</summary>

```markdown
# Context Briefing
Generated: 2026-02-17 | Local: 12:30

## ⚡ State
Mode: Afternoon — execution mode
Last session mood: excited 🔥

## 📖 Narrative (last 24h)
**Completed:**
- ✅ the auth migration: Topic detected from albert

**Open:**
- 🟡 dem Performance-Bug: Topic detected from albert
- 🟡 the Kubernetes cluster: Topic detected from albert

**Decisions:**
- 🎯 The plan is to keep backward compatibility for 2 weeks (claudia)
- 🎯 We decided to use Auth0 as the provider (albert)
- 🎯 Wir machen Batched DataLoader (claudia)
- 🎯 Beschlossen. Warten auf Review von Alexey (albert)
```

Total: 3,143 chars · ~786 tokens · regenerated every session start

</details>
<details>
<summary><b>📁 Generated Files</b></summary>

```
{workspace}/
├── BOOTSTRAP.md              3,143 bytes
└── memory/reboot/
    ├── threads.json          1,354 bytes
    ├── decisions.json        1,619 bytes
    ├── narrative.md            866 bytes
    └── hot-snapshot.md       1,199 bytes
```

All plain JSON + Markdown. No database, no external dependencies.

</details>
> 📝 Full raw output: [`demo/SAMPLE-OUTPUT.md`](demo/SAMPLE-OUTPUT.md)

## Install

```bash
npm install @vainplex/openclaw-cortex
```

Add to your OpenClaw config:

```json
{
  "plugins": {
    "openclaw-cortex": {
      "enabled": true
    }
  }
}
```
### LLM Enhancement (optional)

Add an `llm` section to enable AI-powered analysis on top of regex:

```json
{
  "plugins": {
    "openclaw-cortex": {
      "enabled": true,
      "llm": {
        "enabled": true,
        "endpoint": "http://localhost:11434/v1",
        "model": "mistral:7b",
        "apiKey": "",
        "timeoutMs": 15000,
        "batchSize": 3
      }
    }
  }
}
```
| Setting | Default | Description |
|---------|---------|-------------|
| `enabled` | `false` | Enable LLM enhancement |
| `endpoint` | `http://localhost:11434/v1` | Any OpenAI-compatible API endpoint |
| `model` | `mistral:7b` | Model identifier |
| `apiKey` | `""` | API key (optional, for cloud providers) |
| `timeoutMs` | `15000` | Timeout per LLM call |
| `batchSize` | `3` | Messages to buffer before calling the LLM |

**Examples:**

```jsonc
// Ollama (local, free)
{ "endpoint": "http://localhost:11434/v1", "model": "mistral:7b" }

// OpenAI
{ "endpoint": "https://api.openai.com/v1", "model": "gpt-4o-mini", "apiKey": "sk-..." }

// OpenRouter
{ "endpoint": "https://openrouter.ai/api/v1", "model": "meta-llama/llama-3.1-8b-instruct", "apiKey": "sk-or-..." }
```
The LLM receives batches of messages and returns structured JSON: detected threads, decisions, closures, and mood. Results are merged with regex findings — the LLM can catch things regex misses (nuance, implicit decisions, context-dependent closures).

Restart OpenClaw after configuring.

## How It Works

Thread and decision detection supports English, German, or both:

- **Topic patterns**: "back to", "now about", "jetzt zu", "bzgl."
- **Mood detection**: frustrated, excited, tense, productive, exploratory
### LLM Enhancement Flow

When `llm.enabled: true`:

```
message_received → regex analysis (instant, always)
                 → buffer message
                 → batch full? → LLM call (async, fire-and-forget)
                 → merge LLM results into threads + decisions
                 → LLM down? → silent fallback to regex-only
```

The LLM sees a conversation snippet (configurable batch size) and returns:

- **Threads**: title, status (open/closed), summary
- **Decisions**: what was decided, who, impact level
- **Closures**: which threads were resolved
- **Mood**: overall conversation mood
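The buffer-then-batch flow can be sketched roughly as follows. All helper names here (`callLlm`, `mergeFindings`) are hypothetical stand-ins for the plugin's internals, not its real API:

```typescript
// Sketch of the buffer → batch → LLM → merge flow with silent fallback.
type Findings = { threads: string[]; decisions: string[] };

const buffer: string[] = [];
const batchSize = 3;

async function callLlm(batch: string[]): Promise<Findings> {
  // Stand-in: a real implementation would POST the batch to an
  // OpenAI-compatible endpoint and parse the structured JSON reply.
  return { threads: [], decisions: [] };
}

function mergeFindings(llm: Findings): void {
  // Stand-in: merge LLM-detected threads/decisions into regex-derived state.
}

function onMessage(text: string): void {
  // 1. Regex analysis always runs first (instant) — omitted here.
  buffer.push(text); // 2. Buffer the message.
  if (buffer.length >= batchSize) {
    const batch = buffer.splice(0, buffer.length);
    // 3. Fire-and-forget: never block the hook, never surface LLM errors.
    callLlm(batch).then(mergeFindings).catch(() => {
      /* LLM down → silent fallback to regex-only */
    });
  }
}
```

The key property is in step 3: the hook returns immediately, and an LLM failure only skips the merge — the regex results are already committed.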
### Graceful Degradation

- Read-only workspace → runs in-memory, skips writes
- Corrupt JSON → starts fresh, next write recovers
- Missing directories → creates them automatically
- Hook errors → caught and logged, never crashes the gateway
- LLM timeout/error → falls back to regex-only, no data loss
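The "caught and logged, never crashes" guarantee amounts to wrapping every hook handler. A minimal sketch (not the plugin's actual code — the wrapper name and logger shape are assumptions):

```typescript
// Sketch: wrap a hook handler so failures are logged, never thrown.
type Logger = { error: (msg: string) => void };

function safeHook<T>(name: string, handler: (arg: T) => void, logger: Logger) {
  return (arg: T): void => {
    try {
      handler(arg);
    } catch (err) {
      // Never crash the gateway — log and continue.
      logger.error(`[cortex] hook ${name} failed: ${String(err)}`);
    }
  };
}

const log: Logger = { error: console.error };
const onMessage = safeHook("message_received", () => {
  throw new Error("corrupt JSON"); // simulated handler failure
}, log);

onMessage(undefined); // logs the error instead of throwing
```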
## Development

```bash
npm install
npm test          # 288 tests
npm run typecheck # TypeScript strict mode
npm run build     # Compile to dist/
```
## Performance

- Zero runtime dependencies (Node built-ins only — even LLM calls use `node:http`)
- Regex analysis: instant, runs on every message
- All hook handlers are non-blocking (fire-and-forget)
- LLM enhancement: async, batched, fire-and-forget (never blocks hooks)
- Atomic file writes via `.tmp` + rename
- Noise filter prevents garbage threads from polluting state
- Tested with 288 unit + integration tests
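The `.tmp` + rename pattern looks roughly like this (a sketch — the plugin's actual helper may differ in naming and error handling):

```typescript
// Atomic write sketch: write to a temp file, then rename over the target.
// rename() within one filesystem is atomic, so readers never observe a
// half-written file — they see either the old content or the new content.
import { mkdtempSync, readFileSync, renameSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

function writeAtomic(path: string, data: string): void {
  const tmp = `${path}.tmp`;
  writeFileSync(tmp, data, "utf-8"); // may be slow or interrupted — target untouched
  renameSync(tmp, path);             // atomic swap into place
}

const dir = mkdtempSync(join(tmpdir(), "cortex-atomic-"));
const target = join(dir, "threads.json");
writeAtomic(target, JSON.stringify({ threads: [] }));
console.log(readFileSync(target, "utf-8")); // → {"threads":[]}
```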
## Architecture

See [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md) for the full design document.

## Part of the Vainplex Plugin Suite

| # | Plugin | Status | Description |
|---|--------|--------|-------------|
| 1 | [@vainplex/nats-eventstore](https://github.com/alberthild/openclaw-nats-eventstore) | ✅ Published | NATS JetStream event persistence |
| 2 | **@vainplex/openclaw-cortex** | ✅ Published | Conversation intelligence — threads, decisions, boot context (this plugin) |
| 3 | [@vainplex/openclaw-knowledge-engine](https://github.com/alberthild/openclaw-knowledge-engine) | ✅ Published | Real-time knowledge extraction |
| 4 | @vainplex/openclaw-governance | 📋 Planned | Policy enforcement + guardrails |
| 5 | @vainplex/openclaw-memory-engine | 📋 Planned | Unified memory layer |
| 6 | @vainplex/openclaw-health-monitor | 📋 Planned | System health + auto-healing |

## License

MIT — see [LICENSE](LICENSE)
╔══════════════════════════════════════════════════════════════╗
║                                                              ║
║   🧠 @vainplex/openclaw-cortex — Interactive Demo            ║
║                                                              ║
║   Conversation Intelligence for OpenClaw                     ║
║   Thread Tracking · Decision Extraction · Boot Context       ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝

Workspace: /tmp/cortex-demo-O6qhjF

━━━ Phase 1: Live Conversation Processing ━━━

Cortex listens to every message via OpenClaw hooks.
Here we simulate a bilingual dev conversation (EN/DE).

👤 Albert: Let's get back to the auth migration. We need to switch from JWT to OAuth2.
🤖 Claudia: I'll start with the token validation layer. The plan is to keep backward compatibility for 2 weeks.
👤 Albert: Agreed. We decided to use Auth0 as the provider.
👤 Albert: Also, jetzt zu dem Performance-Bug. Die API braucht 3 Sekunden für simple Queries.
🤖 Claudia: Ich hab den Profiler laufen lassen. Das Problem ist der N+1 Query im User-Resolver.
👤 Albert: Mist, das ist nervig. Wir brauchen das bis Freitag gefixt.
🤖 Claudia: Wir machen Batched DataLoader. Der plan ist erst den User-Resolver zu fixen, dann die restlichen.
👤 Albert: Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen.
🤖 Claudia: Auth migration is done ✅ All tests green, backward compat verified.
👤 Albert: Nice! Perfekt gelaufen. 🚀
👤 Albert: Now about the Kubernetes cluster — we need to plan the migration from Docker Compose.
🤖 Claudia: I'll draft an architecture doc. Waiting for the cost estimate from Hetzner first.
👤 Albert: Guter Fortschritt heute. Lass uns morgen mit dem K8s-Plan weitermachen.
━━━ Phase 2: Thread Tracking Results ━━━

Found 3 threads (2 open, 1 closed)

○ 🟠 the auth migration
    Status: closed
    Priority: high
    Mood: neutral

● 🟡 dem Performance-Bug
    Status: open
    Priority: medium
    Mood: neutral

● 🟡 the Kubernetes cluster
    Status: open
    Priority: medium
    Mood: neutral
━━━ Phase 3: Decision Extraction ━━━

Extracted 4 decisions from the conversation:

🎯 I'll start with the token validation layer. The plan is to keep backward compati
    Impact: medium
    Who: claudia
    Date: 2026-02-17

🎯 Agreed. We decided to use Auth0 as the provider.
    Impact: medium
    Who: albert
    Date: 2026-02-17

🎯 Wir machen Batched DataLoader. Der plan ist erst den User-Resolver zu fixen, dan
    Impact: medium
    Who: claudia
    Date: 2026-02-17

🎯 Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen.
    Impact: high
    Who: albert
    Date: 2026-02-17
━━━ Phase 4: Mood Detection ━━━

Session mood: 🔥 excited
(Detected from conversation patterns — last mood match wins)
━━━ Phase 5: Pre-Compaction Snapshot ━━━

When OpenClaw compacts the session, Cortex saves everything first.

Success: yes
Messages snapshotted: 13
Warnings: none

▸ Hot Snapshot (memory/reboot/hot-snapshot.md):
# Hot Snapshot — 2026-02-17T11:30:02Z
## Last conversation before compaction

**Recent messages:**
- [user] Let's get back to the auth migration. We need to switch from JWT to OAuth2.
- [assistant] I'll start with the token validation layer. The plan is to keep backward compatibility for 2 weeks.
- [user] Agreed. We decided to use Auth0 as the provider.
- [user] Also, jetzt zu dem Performance-Bug. Die API braucht 3 Sekunden für simple Queries.
- [assistant] Ich hab den Profiler laufen lassen. Das Problem ist der N+1 Query im User-Resolver.
- [user] Mist, das ist nervig. Wir brauchen das bis Freitag gefixt.
━━━ Phase 6: Boot Context (BOOTSTRAP.md) ━━━

On next session start, Cortex assembles a dense briefing from all state.

│ # Context Briefing
│ Generated: 2026-02-17T11:30:02Z | Local: 12:30
│
│ ## ⚡ State
│ Mode: Afternoon — execution mode
│ Last session mood: excited 🔥
│
│ ## 🔥 Last Session Snapshot
│ # Hot Snapshot — 2026-02-17T11:30:02Z
│ ## Last conversation before compaction
│
│ **Recent messages:**
│ - [user] Let's get back to the auth migration. We need to switch from JWT to OAuth2.
│ - [assistant] I'll start with the token validation layer. The plan is to keep backward compatibility for 2 weeks.
│ - [user] Agreed. We decided to use Auth0 as the provider.
│ - [user] Also, jetzt zu dem Performance-Bug. Die API braucht 3 Sekunden für simple Queries.
│ - [assistant] Ich hab den Profiler laufen lassen. Das Problem ist der N+1 Query im User-Resolver.
│ - [user] Mist, das ist nervig. Wir brauchen das bis Freitag gefixt.
│ - [assistant] Wir machen Batched DataLoader. Der plan ist erst den User-Resolver zu fixen, dann die restlichen.
│ - [user] Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen.
│ - [assistant] Auth migration is done ✅ All tests green, backward compat verified.
│ - [user] Nice! Perfekt gelaufen. 🚀
│ - [user] Now about the Kubernetes cluster — we need to plan the migration
│
│ ## 📖 Narrative (last 24h)
│ *Tuesday, 17. February 2026 — Narrative*
│
│ **Completed:**
│ - ✅ the auth migration: Topic detected from albert
│
│ **Open:**
│ - 🟡 dem Performance-Bug: Topic detected from albert
│ - 🟡 the Kubernetes cluster: Topic detected from albert
│
│ **Decisions:**
│ ... (27 more lines)

Total chars: 3143
Approx tokens: 786
━━━ Phase 7: Generated Files ━━━

All output lives in {workspace}/memory/reboot/ — plain JSON + Markdown.

memory/reboot/threads.json: 1354 bytes
memory/reboot/decisions.json: 1619 bytes
memory/reboot/narrative.md: 866 bytes
memory/reboot/hot-snapshot.md: 1199 bytes
BOOTSTRAP.md: 3143 bytes

━━━ Demo Complete ━━━

All files written to: /tmp/cortex-demo-O6qhjF
Explore them: ls -la /tmp/cortex-demo-O6qhjF/memory/reboot/

Install: npm install @vainplex/openclaw-cortex
GitHub: https://github.com/alberthild/openclaw-cortex
Docs: docs/ARCHITECTURE.md
demo/demo.ts

#!/usr/bin/env npx tsx
/**
 * @vainplex/openclaw-cortex — Interactive Demo
 *
 * Simulates a realistic conversation between a developer (Albert) and an AI assistant (Claudia).
 * Shows how Cortex automatically tracks threads, extracts decisions, detects mood,
 * and generates boot context — all from plain conversation text.
 *
 * Run: npx tsx demo/demo.ts
 */
import { mkdtempSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";
import { ThreadTracker } from "../src/thread-tracker.js";
import { DecisionTracker } from "../src/decision-tracker.js";
import { BootContextGenerator } from "../src/boot-context.js";
import { NarrativeGenerator } from "../src/narrative-generator.js";
import { PreCompaction } from "../src/pre-compaction.js";
import { resolveConfig } from "../src/config.js";

// ── Setup ──

const workspace = mkdtempSync(join(tmpdir(), "cortex-demo-"));
const config = resolveConfig({ workspace });

const logger = {
  info: () => {},
  warn: () => {},
  error: () => {},
  debug: () => {},
};

const threadTracker = new ThreadTracker(workspace, config.threadTracker, "both", logger);
const decisionTracker = new DecisionTracker(workspace, config.decisionTracker, "both", logger);
// ── Colors ──

const RESET = "\x1b[0m";
const BOLD = "\x1b[1m";
const DIM = "\x1b[2m";
const CYAN = "\x1b[36m";
const GREEN = "\x1b[32m";
const YELLOW = "\x1b[33m";
const MAGENTA = "\x1b[35m";
const BLUE = "\x1b[34m";
const RED = "\x1b[31m";

function heading(text: string) {
  console.log(`\n${BOLD}${CYAN}━━━ ${text} ━━━${RESET}\n`);
}

function subheading(text: string) {
  console.log(`  ${BOLD}${YELLOW}▸ ${text}${RESET}`);
}

function msg(sender: string, text: string) {
  const color = sender === "albert" ? GREEN : MAGENTA;
  const label = sender === "albert" ? "👤 Albert" : "🤖 Claudia";
  console.log(`  ${color}${label}:${RESET} ${DIM}${text}${RESET}`);
}

function stat(label: string, value: string) {
  console.log(`    ${BLUE}${label}:${RESET} ${value}`);
}

function pause(ms: number): Promise<void> {
  return new Promise(r => setTimeout(r, ms));
}
// ── Conversation ──

const CONVERSATION: Array<{ sender: string; text: string }> = [
  // Thread 1: Auth Migration
  { sender: "albert", text: "Let's get back to the auth migration. We need to switch from JWT to OAuth2." },
  { sender: "claudia", text: "I'll start with the token validation layer. The plan is to keep backward compatibility for 2 weeks." },
  { sender: "albert", text: "Agreed. We decided to use Auth0 as the provider." },

  // Thread 2: Performance Bug
  { sender: "albert", text: "Also, jetzt zu dem Performance-Bug. Die API braucht 3 Sekunden für simple Queries." },
  { sender: "claudia", text: "Ich hab den Profiler laufen lassen. Das Problem ist der N+1 Query im User-Resolver." },
  { sender: "albert", text: "Mist, das ist nervig. Wir brauchen das bis Freitag gefixt." },

  // Decision on Performance
  { sender: "claudia", text: "Wir machen Batched DataLoader. Der plan ist erst den User-Resolver zu fixen, dann die restlichen." },
  { sender: "albert", text: "Beschlossen. Und wir warten auf den Review von Alexey bevor wir deployen." },

  // Thread 1: Closure
  { sender: "claudia", text: "Auth migration is done ✅ All tests green, backward compat verified." },
  { sender: "albert", text: "Nice! Perfekt gelaufen. 🚀" },

  // Thread 3: New topic
  { sender: "albert", text: "Now about the Kubernetes cluster — we need to plan the migration from Docker Compose." },
  { sender: "claudia", text: "I'll draft an architecture doc. Waiting for the cost estimate from Hetzner first." },

  // Pre-compaction simulation
  { sender: "albert", text: "Guter Fortschritt heute. Lass uns morgen mit dem K8s-Plan weitermachen." },
];
// ── Main ──

async function run() {
  console.log(`
${BOLD}${CYAN}╔══════════════════════════════════════════════════════════════╗
║                                                              ║
║   🧠 @vainplex/openclaw-cortex — Interactive Demo            ║
║                                                              ║
║   Conversation Intelligence for OpenClaw                     ║
║   Thread Tracking · Decision Extraction · Boot Context       ║
║                                                              ║
╚══════════════════════════════════════════════════════════════╝${RESET}

${DIM}Workspace: ${workspace}${RESET}
`);

  // ── Phase 1: Simulate Conversation ──

  heading("Phase 1: Live Conversation Processing");
  console.log(`${DIM}  Cortex listens to every message via OpenClaw hooks.${RESET}`);
  console.log(`${DIM}  Here we simulate a bilingual dev conversation (EN/DE).${RESET}\n`);

  for (const { sender, text } of CONVERSATION) {
    msg(sender, text);
    threadTracker.processMessage(text, sender);
    decisionTracker.processMessage(text, sender);
    await pause(150);
  }
  // ── Phase 2: Thread State ──

  heading("Phase 2: Thread Tracking Results");

  const threads = threadTracker.getThreads();
  const openThreads = threads.filter(t => t.status === "open");
  const closedThreads = threads.filter(t => t.status === "closed");

  console.log(`  Found ${BOLD}${threads.length} threads${RESET} (${GREEN}${openThreads.length} open${RESET}, ${DIM}${closedThreads.length} closed${RESET})\n`);

  for (const t of threads) {
    const statusIcon = t.status === "open" ? `${GREEN}●${RESET}` : `${DIM}○${RESET}`;
    const prioEmoji: Record<string, string> = { critical: "🔴", high: "🟠", medium: "🟡", low: "🔵" };
    console.log(`  ${statusIcon} ${prioEmoji[t.priority] ?? "⚪"} ${BOLD}${t.title}${RESET}`);
    stat("Status", t.status);
    stat("Priority", t.priority);
    stat("Mood", t.mood);
    if (t.decisions.length > 0) stat("Decisions", t.decisions.join(" | "));
    if (t.waiting_for) stat("Waiting for", t.waiting_for);
    console.log();
  }
  // ── Phase 3: Decision Log ──

  heading("Phase 3: Decision Extraction");

  const decisions = decisionTracker.getDecisions();
  console.log(`  Extracted ${BOLD}${decisions.length} decisions${RESET} from the conversation:\n`);

  for (const d of decisions) {
    const impactColor = d.impact === "high" ? RED : YELLOW;
    console.log(`  🎯 ${BOLD}${d.what.slice(0, 80)}${RESET}`);
    stat("Impact", `${impactColor}${d.impact}${RESET}`);
    stat("Who", d.who);
    stat("Date", d.date);
    console.log();
  }
  // ── Phase 4: Mood Detection ──

  heading("Phase 4: Mood Detection");

  const sessionMood = threadTracker.getSessionMood();
  const moodEmoji: Record<string, string> = {
    frustrated: "😤", excited: "🔥", tense: "⚡",
    productive: "🔧", exploratory: "🔬", neutral: "😐",
  };
  console.log(`  Session mood: ${BOLD}${moodEmoji[sessionMood] ?? "😐"} ${sessionMood}${RESET}`);
  console.log(`${DIM}  (Detected from conversation patterns — last mood match wins)${RESET}\n`);
  // ── Phase 5: Pre-Compaction Snapshot ──

  heading("Phase 5: Pre-Compaction Snapshot");
  console.log(`${DIM}  When OpenClaw compacts the session, Cortex saves everything first.${RESET}\n`);

  const pipeline = new PreCompaction(workspace, config, logger, threadTracker);
  const compactingMessages = CONVERSATION.map(c => ({
    role: c.sender === "albert" ? "user" : "assistant",
    content: c.text,
  }));
  const result = pipeline.run(compactingMessages);

  stat("Success", result.success ? `${GREEN}yes${RESET}` : `${RED}no${RESET}`);
  stat("Messages snapshotted", String(result.messagesSnapshotted));
  stat("Warnings", result.warnings.length === 0 ? "none" : result.warnings.join(", "));
  console.log();

  // Show hot snapshot
  const snapshotPath = join(workspace, "memory", "reboot", "hot-snapshot.md");
  if (existsSync(snapshotPath)) {
    subheading("Hot Snapshot (memory/reboot/hot-snapshot.md):");
    const snapshot = readFileSync(snapshotPath, "utf-8");
    for (const line of snapshot.split("\n").slice(0, 10)) {
      console.log(`  ${DIM}${line}${RESET}`);
    }
    console.log();
  }
  // ── Phase 6: Boot Context Generation ──

  heading("Phase 6: Boot Context (BOOTSTRAP.md)");
  console.log(`${DIM}  On next session start, Cortex assembles a dense briefing from all state.${RESET}\n`);

  const bootContext = new BootContextGenerator(workspace, config.bootContext, logger);
  const bootstrap = bootContext.generate();
  bootContext.write();

  // Show first 35 lines
  const lines = bootstrap.split("\n");
  for (const line of lines.slice(0, 35)) {
    console.log(`  ${DIM}│${RESET} ${line}`);
  }
  if (lines.length > 35) {
    console.log(`  ${DIM}│ ... (${lines.length - 35} more lines)${RESET}`);
  }
  console.log();
  stat("Total chars", String(bootstrap.length));
  stat("Approx tokens", String(Math.round(bootstrap.length / 4)));
  // ── Phase 7: Generated Files ──

  heading("Phase 7: Generated Files");
  console.log(`${DIM}  All output lives in {workspace}/memory/reboot/ — plain JSON + Markdown.${RESET}\n`);

  const files = [
    "memory/reboot/threads.json",
    "memory/reboot/decisions.json",
    "memory/reboot/narrative.md",
    "memory/reboot/hot-snapshot.md",
    "BOOTSTRAP.md",
  ];

  for (const file of files) {
    const fullPath = join(workspace, file);
    if (existsSync(fullPath)) {
      const content = readFileSync(fullPath, "utf-8");
      stat(file, `${content.length} bytes`);
    }
  }

  // ── Footer ──

  console.log(`
${BOLD}${CYAN}━━━ Demo Complete ━━━${RESET}

${DIM}All files written to: ${workspace}
Explore them: ls -la ${workspace}/memory/reboot/${RESET}

${BOLD}Install:${RESET} npm install @vainplex/openclaw-cortex
${BOLD}GitHub:${RESET} https://github.com/alberthild/openclaw-cortex
${BOLD}Docs:${RESET} docs/ARCHITECTURE.md
`);
}

run().catch(console.error);
@@ -148,47 +148,6 @@
          "description": "Language for regex pattern matching: English, German, or both"
        }
      }
-    },
-    "llm": {
-      "type": "object",
-      "additionalProperties": false,
-      "description": "Optional LLM enhancement — any OpenAI-compatible API (Ollama, OpenAI, OpenRouter, vLLM, etc.)",
-      "properties": {
-        "enabled": {
-          "type": "boolean",
-          "default": false,
-          "description": "Enable LLM-powered analysis on top of regex patterns"
-        },
-        "endpoint": {
-          "type": "string",
-          "default": "http://localhost:11434/v1",
-          "description": "OpenAI-compatible API endpoint"
-        },
-        "model": {
-          "type": "string",
-          "default": "mistral:7b",
-          "description": "Model identifier (e.g. mistral:7b, gpt-4o-mini)"
-        },
-        "apiKey": {
-          "type": "string",
-          "default": "",
-          "description": "API key (optional, for cloud providers)"
-        },
-        "timeoutMs": {
-          "type": "integer",
-          "minimum": 1000,
-          "maximum": 60000,
-          "default": 15000,
-          "description": "Timeout per LLM call in milliseconds"
-        },
-        "batchSize": {
-          "type": "integer",
-          "minimum": 1,
-          "maximum": 20,
-          "default": 3,
-          "description": "Number of messages to buffer before calling the LLM"
-        }
-      }
-    }
    }
  }
}
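The removed `llm` block above maps one-to-one onto plugin config. A minimal fragment that would validate against it (illustrative only; the surrounding config file layout is assumed, and the endpoint assumes a local Ollama):

```json
{
  "llm": {
    "enabled": true,
    "endpoint": "http://localhost:11434/v1",
    "model": "mistral:7b",
    "apiKey": "",
    "timeoutMs": 15000,
    "batchSize": 3
  }
}
```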
package-lock.json (generated, 4 changes)
@@ -1,12 +1,12 @@
{
  "name": "@vainplex/openclaw-cortex",
-  "version": "0.2.1",
+  "version": "0.1.0",
  "lockfileVersion": 3,
  "requires": true,
  "packages": {
    "": {
      "name": "@vainplex/openclaw-cortex",
-      "version": "0.2.1",
+      "version": "0.1.0",
      "license": "MIT",
      "devDependencies": {
        "@types/node": "^22.0.0",
@@ -1,6 +1,6 @@
{
  "name": "@vainplex/openclaw-cortex",
-  "version": "0.2.1",
+  "version": "0.1.0",
  "description": "OpenClaw plugin: conversation intelligence — thread tracking, decision extraction, boot context, pre-compaction snapshots",
  "type": "module",
  "main": "dist/index.js",
@@ -27,8 +27,7 @@
  "openclaw": {
    "extensions": [
      "./dist/index.js"
-    ],
-    "id": "openclaw-cortex"
+    ]
  },
  "keywords": [
    "openclaw",
@@ -31,14 +31,6 @@ export const DEFAULTS: CortexConfig = {
  patterns: {
    language: "both",
  },
-  llm: {
-    enabled: false,
-    endpoint: "http://localhost:11434/v1",
-    model: "mistral:7b",
-    apiKey: "",
-    timeoutMs: 15000,
-    batchSize: 3,
-  },
};

function bool(value: unknown, fallback: boolean): boolean {
@@ -67,7 +59,6 @@ export function resolveConfig(pluginConfig?: Record<string, unknown>): CortexConfig {
  const pc = (raw.preCompaction ?? {}) as Record<string, unknown>;
  const nr = (raw.narrative ?? {}) as Record<string, unknown>;
  const pt = (raw.patterns ?? {}) as Record<string, unknown>;
-  const lm = (raw.llm ?? {}) as Record<string, unknown>;

  return {
    enabled: bool(raw.enabled, DEFAULTS.enabled),
@@ -100,14 +91,6 @@ export function resolveConfig(pluginConfig?: Record<string, unknown>): CortexConfig {
    patterns: {
      language: lang(pt.language),
    },
-    llm: {
-      enabled: bool(lm.enabled, DEFAULTS.llm.enabled),
-      endpoint: str(lm.endpoint, DEFAULTS.llm.endpoint),
-      model: str(lm.model, DEFAULTS.llm.model),
-      apiKey: str(lm.apiKey, DEFAULTS.llm.apiKey),
-      timeoutMs: int(lm.timeoutMs, DEFAULTS.llm.timeoutMs),
-      batchSize: int(lm.batchSize, DEFAULTS.llm.batchSize),
-    },
  };
}
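`resolveConfig` leans on small coercion helpers. Only `bool`'s signature survives in the hunks above; `str` and `int` below are hypothetical reconstructions in the same defensive style (keep the raw value only when it has the expected type, otherwise fall back to the default):

```typescript
// Hypothetical coercion helpers in the style of resolveConfig.
function bool(value: unknown, fallback: boolean): boolean {
  return typeof value === "boolean" ? value : fallback;
}

function str(value: unknown, fallback: string): string {
  return typeof value === "string" ? value : fallback;
}

function int(value: unknown, fallback: number): number {
  // Rejects non-numbers and non-integers rather than rounding silently.
  return typeof value === "number" && Number.isInteger(value) ? value : fallback;
}
```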
@@ -156,29 +156,6 @@ export class DecisionTracker {
    }
  }

-  /**
-   * Add a decision directly (from LLM analysis). Deduplicates and persists.
-   */
-  addDecision(what: string, who: string, impact: ImpactLevel | string): void {
-    const now = new Date();
-    if (this.isDuplicate(what, now)) return;
-
-    const validImpact = (["critical", "high", "medium", "low"].includes(impact) ? impact : "medium") as ImpactLevel;
-
-    this.decisions.push({
-      id: randomUUID(),
-      what: what.slice(0, 200),
-      date: now.toISOString().slice(0, 10),
-      why: `LLM-detected decision (${who})`,
-      impact: validImpact,
-      who,
-      extracted_at: now.toISOString(),
-    });
-
-    this.enforceMax();
-    this.persist();
-  }
-
  /**
   * Get all decisions (in-memory).
   */
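The impact guard inside `addDecision` can be lifted out and exercised on its own. A standalone sketch (`ImpactLevel` is re-declared here just for the example):

```typescript
// Sketch of addDecision's impact validation: unknown strings (e.g. an LLM
// answering "severe") degrade to "medium" instead of polluting stored state.
type ImpactLevel = "critical" | "high" | "medium" | "low";

function coerceImpact(impact: ImpactLevel | string): ImpactLevel {
  return (["critical", "high", "medium", "low"].includes(impact)
    ? impact
    : "medium") as ImpactLevel;
}
```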
src/hooks.ts (28 changes)
@@ -9,7 +9,6 @@ import { ThreadTracker } from "./thread-tracker.js";
import { DecisionTracker } from "./decision-tracker.js";
import { BootContextGenerator } from "./boot-context.js";
import { PreCompaction } from "./pre-compaction.js";
-import { LlmEnhancer, resolveLlmConfig } from "./llm-enhance.js";

/**
 * Extract message content from a hook event using the fallback chain.
@@ -30,7 +29,6 @@ type HookState = {
  workspace: string | null;
  threadTracker: ThreadTracker | null;
  decisionTracker: DecisionTracker | null;
-  llmEnhancer: LlmEnhancer | null;
};

function ensureInit(state: HookState, config: CortexConfig, logger: OpenClawPluginApi["logger"], ctx?: HookContext): void {
@@ -43,40 +41,20 @@
  if (!state.decisionTracker && config.decisionTracker.enabled) {
    state.decisionTracker = new DecisionTracker(state.workspace, config.decisionTracker, config.patterns.language, logger);
  }
-  if (!state.llmEnhancer && config.llm.enabled) {
-    state.llmEnhancer = new LlmEnhancer(config.llm, logger);
-  }
}

/** Register message hooks (message_received + message_sent). */
function registerMessageHooks(api: OpenClawPluginApi, config: CortexConfig, state: HookState): void {
  if (!config.threadTracker.enabled && !config.decisionTracker.enabled) return;

-  const handler = async (event: HookEvent, ctx: HookContext, senderOverride?: string) => {
+  const handler = (event: HookEvent, ctx: HookContext, senderOverride?: string) => {
    try {
      ensureInit(state, config, api.logger, ctx);
      const content = extractContent(event);
      const sender = senderOverride ?? extractSender(event);
      if (!content) return;

-      // Regex-based processing (always runs — zero cost)
      if (config.threadTracker.enabled && state.threadTracker) state.threadTracker.processMessage(content, sender);
      if (config.decisionTracker.enabled && state.decisionTracker) state.decisionTracker.processMessage(content, sender);
-
-      // LLM enhancement (optional — batched, async, fire-and-forget)
-      if (state.llmEnhancer) {
-        const role = senderOverride ? "assistant" as const : "user" as const;
-        const analysis = await state.llmEnhancer.addMessage(content, sender, role);
-        if (analysis) {
-          // Apply LLM findings on top of regex results
-          if (state.threadTracker) state.threadTracker.applyLlmAnalysis(analysis);
-          if (state.decisionTracker) {
-            for (const dec of analysis.decisions) {
-              state.decisionTracker.addDecision(dec.what, dec.who, dec.impact);
-            }
-          }
-        }
-      }
    } catch (err) {
      api.logger.warn(`[cortex] message hook error: ${err}`);
    }
@@ -131,13 +109,13 @@ function registerCompactionHooks(api: OpenClawPluginApi, config: CortexConfig, state: HookState): void {
 * Each handler is wrapped in try/catch — never throws.
 */
export function registerCortexHooks(api: OpenClawPluginApi, config: CortexConfig): void {
-  const state: HookState = { workspace: null, threadTracker: null, decisionTracker: null, llmEnhancer: null };
+  const state: HookState = { workspace: null, threadTracker: null, decisionTracker: null };

  registerMessageHooks(api, config, state);
  registerSessionHooks(api, config, state);
  registerCompactionHooks(api, config, state);

  api.logger.info(
-    `[cortex] Hooks registered — threads:${config.threadTracker.enabled} decisions:${config.decisionTracker.enabled} boot:${config.bootContext.enabled} compaction:${config.preCompaction.enabled} llm:${config.llm.enabled}${config.llm.enabled ? ` (${config.llm.model}@${config.llm.endpoint})` : ""}`,
+    `[cortex] Hooks registered — threads:${config.threadTracker.enabled} decisions:${config.decisionTracker.enabled} boot:${config.bootContext.enabled} compaction:${config.preCompaction.enabled}`,
  );
}
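The try/catch contract above (hook handlers log and swallow, never throw into the host) can be sketched in isolation; the names below are hypothetical stand-ins for the plugin API:

```typescript
// Sketch of the "hooks never throw" rule: any error in the body is logged
// and swallowed, so a Cortex bug cannot take down the host's message loop.
type Logger = { warn: (msg: string) => void };

function makeSafeHandler(body: () => void, logger: Logger): () => void {
  return () => {
    try {
      body();
    } catch (err) {
      logger.warn(`[cortex] message hook error: ${err}`);
    }
  };
}
```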
@@ -1,258 +0,0 @@
import { request as httpRequest } from "node:http";
import { request as httpsRequest } from "node:https";
import { URL } from "node:url";
import type { PluginLogger } from "./types.js";

/**
 * LLM Enhancement — optional AI-powered analysis layered on top of regex patterns.
 *
 * When enabled, sends conversation snippets to a local or remote LLM for deeper
 * thread/decision/closure detection. Falls back gracefully to regex-only on failure.
 *
 * Supports any OpenAI-compatible API (Ollama, vLLM, OpenRouter, OpenAI, etc.)
 */

export type LlmConfig = {
  enabled: boolean;
  /** OpenAI-compatible endpoint, e.g. "http://localhost:11434/v1" */
  endpoint: string;
  /** Model identifier, e.g. "mistral:7b" or "gpt-4o-mini" */
  model: string;
  /** API key (optional, for cloud providers) */
  apiKey: string;
  /** Timeout in ms for LLM calls */
  timeoutMs: number;
  /** Minimum message count before triggering LLM (batches for efficiency) */
  batchSize: number;
};

export const LLM_DEFAULTS: LlmConfig = {
  enabled: false,
  endpoint: "http://localhost:11434/v1",
  model: "mistral:7b",
  apiKey: "",
  timeoutMs: 15000,
  batchSize: 3,
};

export type LlmAnalysis = {
  threads: Array<{
    title: string;
    status: "open" | "closed";
    summary?: string;
  }>;
  decisions: Array<{
    what: string;
    who: string;
    impact: "high" | "medium" | "low";
  }>;
  closures: string[];
  mood: string;
};

const SYSTEM_PROMPT = `You are a conversation analyst. Given a snippet of conversation between a user and an AI assistant, extract:

1. **threads**: Active topics being discussed. Each has a title (short, specific) and status (open/closed).
2. **decisions**: Any decisions made. Include what was decided, who decided, and impact (high/medium/low).
3. **closures**: Thread titles that were completed/resolved in this snippet.
4. **mood**: Overall conversation mood (neutral/frustrated/excited/tense/productive/exploratory).

Rules:
- Only extract REAL topics, not meta-conversation ("how are you", greetings, etc.)
- Thread titles should be specific and actionable ("auth migration to OAuth2", not "the thing")
- Decisions must be actual commitments, not questions or suggestions
- Be conservative — when in doubt, don't extract

Respond ONLY with valid JSON matching this schema:
{"threads":[{"title":"...","status":"open|closed","summary":"..."}],"decisions":[{"what":"...","who":"...","impact":"high|medium|low"}],"closures":["thread title"],"mood":"neutral"}`;

/**
 * Call an OpenAI-compatible chat completion API.
 */
function callLlm(
  config: LlmConfig,
  messages: Array<{ role: string; content: string }>,
  logger: PluginLogger,
): Promise<string | null> {
  return new Promise((resolve) => {
    try {
      const url = new URL(`${config.endpoint}/chat/completions`);
      const body = JSON.stringify({
        model: config.model,
        messages,
        temperature: 0.1,
        max_tokens: 1000,
        response_format: { type: "json_object" },
      });

      const headers: Record<string, string> = {
        "Content-Type": "application/json",
        "Content-Length": String(Buffer.byteLength(body)),
      };
      if (config.apiKey) {
        headers["Authorization"] = `Bearer ${config.apiKey}`;
      }

      // Pick http/https via the static imports above — `require` is not
      // available in an ESM package ("type": "module").
      const doRequest = url.protocol === "https:" ? httpsRequest : httpRequest;
      const req = doRequest(
        {
          hostname: url.hostname,
          port: url.port || (url.protocol === "https:" ? 443 : 80),
          path: url.pathname,
          method: "POST",
          headers,
          timeout: config.timeoutMs,
        },
        (res) => {
          let data = "";
          res.on("data", (chunk) => (data += chunk));
          res.on("end", () => {
            try {
              const parsed = JSON.parse(data);
              const content = parsed?.choices?.[0]?.message?.content;
              resolve(content ?? null);
            } catch {
              logger.warn(`[cortex-llm] Failed to parse LLM response`);
              resolve(null);
            }
          });
        },
      );

      req.on("error", (err: Error) => {
        logger.warn(`[cortex-llm] Request error: ${err.message}`);
        resolve(null);
      });

      req.on("timeout", () => {
        req.destroy();
        logger.warn(`[cortex-llm] Request timed out (${config.timeoutMs}ms)`);
        resolve(null);
      });

      req.write(body);
      req.end();
    } catch (err) {
      logger.warn(`[cortex-llm] Exception: ${err}`);
      resolve(null);
    }
  });
}
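The request `callLlm` sends separates cleanly into a pure payload step, which makes the OpenAI-compatible wire shape easier to see. A sketch with the same fields (the trimmed `MiniLlmConfig` type is an assumption for the example):

```typescript
// Pure sketch of the request callLlm builds: POST {endpoint}/chat/completions
// with a chat payload and an optional Bearer token.
type MiniLlmConfig = { endpoint: string; model: string; apiKey: string };

function buildChatRequest(
  config: MiniLlmConfig,
  messages: Array<{ role: string; content: string }>,
): { url: string; headers: Record<string, string>; body: string } {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (config.apiKey) headers["Authorization"] = `Bearer ${config.apiKey}`;
  return {
    url: `${config.endpoint}/chat/completions`,
    headers,
    body: JSON.stringify({ model: config.model, messages, temperature: 0.1 }),
  };
}
```

Ollama ignores the empty `apiKey`, so no `Authorization` header is sent for a local endpoint.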
/**
 * Parse LLM JSON response into structured analysis.
 * Returns null on any parse failure (graceful degradation).
 */
function parseAnalysis(raw: string, logger: PluginLogger): LlmAnalysis | null {
  try {
    const parsed = JSON.parse(raw);
    return {
      threads: Array.isArray(parsed.threads)
        ? parsed.threads.filter(
            (t: any) => typeof t.title === "string" && t.title.length > 2,
          )
        : [],
      decisions: Array.isArray(parsed.decisions)
        ? parsed.decisions.filter(
            (d: any) => typeof d.what === "string" && d.what.length > 5,
          )
        : [],
      closures: Array.isArray(parsed.closures)
        ? parsed.closures.filter((c: any) => typeof c === "string")
        : [],
      mood: typeof parsed.mood === "string" ? parsed.mood : "neutral",
    };
  } catch {
    logger.warn(`[cortex-llm] Failed to parse analysis JSON`);
    return null;
  }
}
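The degrade-to-null contract can be exercised without a logger. A trimmed sketch of `parseAnalysis` that keeps only the thread filter:

```typescript
// Trimmed sketch of parseAnalysis: malformed JSON yields null, and junk
// entries (non-string or too-short titles) are filtered, not propagated.
type MiniAnalysis = { threads: Array<{ title: string }> };

function parseThreads(raw: string): MiniAnalysis | null {
  try {
    const parsed = JSON.parse(raw);
    return {
      threads: Array.isArray(parsed.threads)
        ? parsed.threads.filter(
            (t: any) => typeof t.title === "string" && t.title.length > 2,
          )
        : [],
    };
  } catch {
    return null;
  }
}
```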
/**
 * Message buffer for batching LLM calls.
 */
export class LlmEnhancer {
  private buffer: Array<{ role: string; content: string; sender: string }> = [];
  private readonly config: LlmConfig;
  private readonly logger: PluginLogger;
  private lastCallMs = 0;
  private readonly cooldownMs = 5000;

  constructor(config: LlmConfig, logger: PluginLogger) {
    this.config = config;
    this.logger = logger;
  }

  /**
   * Buffer a message. Returns analysis when batch is full, null otherwise.
   */
  async addMessage(
    content: string,
    sender: string,
    role: "user" | "assistant",
  ): Promise<LlmAnalysis | null> {
    if (!this.config.enabled) return null;

    this.buffer.push({ role, content, sender });

    if (this.buffer.length < this.config.batchSize) return null;

    // Cooldown check
    const now = Date.now();
    if (now - this.lastCallMs < this.cooldownMs) return null;
    this.lastCallMs = now;

    // Flush buffer
    const batch = this.buffer.splice(0);
    return this.analyze(batch);
  }

  /**
   * Force-analyze remaining buffer (e.g. before compaction).
   */
  async flush(): Promise<LlmAnalysis | null> {
    if (!this.config.enabled || this.buffer.length === 0) return null;
    const batch = this.buffer.splice(0);
    return this.analyze(batch);
  }

  private async analyze(
    messages: Array<{ role: string; content: string; sender: string }>,
  ): Promise<LlmAnalysis | null> {
    const snippet = messages
      .map((m) => `[${m.sender}]: ${m.content}`)
      .join("\n\n");

    const raw = await callLlm(
      this.config,
      [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: snippet },
      ],
      this.logger,
    );

    if (!raw) return null;

    const analysis = parseAnalysis(raw, this.logger);
    if (analysis) {
      const stats = `threads=${analysis.threads.length} decisions=${analysis.decisions.length} closures=${analysis.closures.length}`;
      this.logger.info(`[cortex-llm] Analysis: ${stats}`);
    }
    return analysis;
  }
}
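The buffering discipline (collect until `batchSize`, then honor a cooldown) is easiest to see without the network in the way. A synchronous sketch; the real class awaits `analyze` on the flushed batch instead of returning it, and the injectable clock is an addition for testability:

```typescript
// Synchronous sketch of LlmEnhancer's batching: null until the buffer holds
// batchSize messages; a cooldown suppresses back-to-back flushes, during
// which the buffer simply keeps growing.
class BatchBuffer {
  private buffer: string[] = [];
  private lastFlushMs = 0;

  constructor(
    private readonly batchSize: number,
    private readonly cooldownMs: number,
    private readonly now: () => number = Date.now,
  ) {}

  add(message: string): string[] | null {
    this.buffer.push(message);
    if (this.buffer.length < this.batchSize) return null;
    const t = this.now();
    if (t - this.lastFlushMs < this.cooldownMs) return null;
    this.lastFlushMs = t;
    return this.buffer.splice(0);
  }
}
```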
/**
 * Resolve LLM config from plugin config.
 */
export function resolveLlmConfig(raw?: Record<string, unknown>): LlmConfig {
  if (!raw) return { ...LLM_DEFAULTS };
  return {
    enabled: typeof raw.enabled === "boolean" ? raw.enabled : LLM_DEFAULTS.enabled,
    endpoint: typeof raw.endpoint === "string" ? raw.endpoint : LLM_DEFAULTS.endpoint,
    model: typeof raw.model === "string" ? raw.model : LLM_DEFAULTS.model,
    apiKey: typeof raw.apiKey === "string" ? raw.apiKey : LLM_DEFAULTS.apiKey,
    timeoutMs: typeof raw.timeoutMs === "number" ? raw.timeoutMs : LLM_DEFAULTS.timeoutMs,
    batchSize: typeof raw.batchSize === "number" ? raw.batchSize : LLM_DEFAULTS.batchSize,
  };
}
@@ -32,23 +32,13 @@ const WAIT_PATTERNS_DE = [
];

const TOPIC_PATTERNS_EN = [
-  /(?:back to|now about|regarding|let's (?:talk|discuss|look at))\s+(?:the\s+)?(\w[\w\s-]{3,40})/i,
+  /(?:back to|now about|regarding)\s+(\w[\w\s-]{2,30})/i,
];

const TOPIC_PATTERNS_DE = [
-  /(?:zurück zu|jetzt zu|bzgl\.?|wegen|lass uns (?:über|mal))\s+(?:dem?|die|das)?\s*(\w[\w\s-]{3,40})/i,
+  /(?:zurück zu|jetzt zu|bzgl\.?|wegen)\s+(\w[\w\s-]{2,30})/i,
];

-/** Words that should never be thread titles (noise filter) */
-const TOPIC_BLACKLIST = new Set([
-  "it", "that", "this", "the", "them", "what", "which", "there",
-  "das", "die", "der", "es", "was", "hier", "dort",
-  "nothing", "something", "everything", "nichts", "etwas", "alles",
-  "me", "you", "him", "her", "us", "mir", "dir", "ihm", "uns",
-  "today", "tomorrow", "yesterday", "heute", "morgen", "gestern",
-  "noch", "schon", "jetzt", "dann", "also", "aber", "oder",
-]);
-
const MOOD_PATTERNS: Record<Exclude<Mood, "neutral">, RegExp> = {
  frustrated: /(?:fuck|shit|mist|nervig|genervt|damn|wtf|argh|schon wieder|zum kotzen|sucks)/i,
  excited: /(?:geil|nice|awesome|krass|boom|läuft|yes!|🎯|🚀|perfekt|brilliant|mega|sick)/i,
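The widened main-branch patterns can be exercised directly against the demo's bilingual messages:

```typescript
// The main-branch topic patterns, applied to two sentences from the demo
// conversation shown in the README above.
const TOPIC_EN = /(?:back to|now about|regarding|let's (?:talk|discuss|look at))\s+(?:the\s+)?(\w[\w\s-]{3,40})/i;
const TOPIC_DE = /(?:zurück zu|jetzt zu|bzgl\.?|wegen|lass uns (?:über|mal))\s+(?:dem?|die|das)?\s*(\w[\w\s-]{3,40})/i;

const en = "Let's get back to the auth migration".match(TOPIC_EN);
const de = "Also, jetzt zu dem Performance-Bug".match(TOPIC_DE);
```

The optional article groups (`the`, `dem`/`die`/`das`) keep determiners out of the captured thread title.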
@@ -125,24 +115,6 @@ export function detectMood(text: string): Mood {
  return lastMood;
}

-/**
- * Check if a topic candidate is noise (too short, blacklisted, or garbage).
- */
-export function isNoiseTopic(topic: string): boolean {
-  const trimmed = topic.trim();
-  if (trimmed.length < 4) return true;
-  // Single word that's in blacklist
-  const words = trimmed.toLowerCase().split(/\s+/);
-  if (words.length === 1 && TOPIC_BLACKLIST.has(words[0])) return true;
-  // All words are blacklisted
-  if (words.every(w => TOPIC_BLACKLIST.has(w) || w.length < 3)) return true;
-  // Looks like a sentence fragment (starts with pronoun or blacklisted word)
-  if (/^(ich|i|we|wir|du|er|sie|he|she|it|es|nichts|nothing|etwas|something)\s/i.test(trimmed)) return true;
-  // Contains line breaks or is too long for a title
-  if (trimmed.includes("\n") || trimmed.length > 60) return true;
-  return false;
-}
-
/** High-impact keywords for decision impact inference */
export const HIGH_IMPACT_KEYWORDS = [
  "architecture", "architektur", "security", "sicherheit",
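With the blacklist and `isNoiseTopic` copied verbatim from the removed code above, the filter behaves like this on demo-style candidates:

```typescript
// isNoiseTopic and its blacklist, copied from the removed patterns.ts above.
const TOPIC_BLACKLIST = new Set([
  "it", "that", "this", "the", "them", "what", "which", "there",
  "das", "die", "der", "es", "was", "hier", "dort",
  "nothing", "something", "everything", "nichts", "etwas", "alles",
  "me", "you", "him", "her", "us", "mir", "dir", "ihm", "uns",
  "today", "tomorrow", "yesterday", "heute", "morgen", "gestern",
  "noch", "schon", "jetzt", "dann", "also", "aber", "oder",
]);

function isNoiseTopic(topic: string): boolean {
  const trimmed = topic.trim();
  if (trimmed.length < 4) return true;
  // Single word that's in blacklist
  const words = trimmed.toLowerCase().split(/\s+/);
  if (words.length === 1 && TOPIC_BLACKLIST.has(words[0])) return true;
  // All words are blacklisted
  if (words.every(w => TOPIC_BLACKLIST.has(w) || w.length < 3)) return true;
  // Looks like a sentence fragment (starts with pronoun or blacklisted word)
  if (/^(ich|i|we|wir|du|er|sie|he|she|it|es|nichts|nothing|etwas|something)\s/i.test(trimmed)) return true;
  // Contains line breaks or is too long for a title
  if (trimmed.includes("\n") || trimmed.length > 60) return true;
  return false;
}
```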
@@ -7,7 +7,7 @@ import type {
  ThreadPriority,
  PluginLogger,
} from "./types.js";
-import { getPatterns, detectMood, HIGH_IMPACT_KEYWORDS, isNoiseTopic } from "./patterns.js";
+import { getPatterns, detectMood, HIGH_IMPACT_KEYWORDS } from "./patterns.js";
import type { PatternLanguage } from "./patterns.js";
import { loadJson, saveJson, rebootDir, ensureRebootDir } from "./storage.js";
@@ -127,10 +127,9 @@ export class ThreadTracker {
    this.sessionMood = data.session_mood ?? "neutral";
  }

-  /** Create new threads from topic signals (with noise filtering). */
+  /** Create new threads from topic signals. */
  private createFromTopics(topics: string[], sender: string, mood: string, now: string): void {
    for (const topic of topics) {
-      if (isNoiseTopic(topic)) continue;
      const exists = this.threads.some(
        t => t.title.toLowerCase() === topic.toLowerCase() || matchesThread(t, topic),
      );
@@ -144,52 +143,6 @@ export class ThreadTracker {
    }
  }

-  /**
-   * Apply LLM analysis results — creates threads, closes threads, adds decisions.
-   * Called from hooks when LLM enhance is enabled.
-   */
-  applyLlmAnalysis(analysis: {
-    threads: Array<{ title: string; status: "open" | "closed"; summary?: string }>;
-    closures: string[];
-    mood: string;
-  }): void {
-    const now = new Date().toISOString();
-
-    // Create threads from LLM
-    for (const lt of analysis.threads) {
-      if (isNoiseTopic(lt.title)) continue;
-      const exists = this.threads.some(
-        t => t.title.toLowerCase() === lt.title.toLowerCase() || matchesThread(t, lt.title),
-      );
-      if (!exists) {
-        this.threads.push({
-          id: randomUUID(), title: lt.title, status: lt.status,
-          priority: inferPriority(lt.title), summary: lt.summary ?? "LLM-detected",
-          decisions: [], waiting_for: null, mood: analysis.mood ?? "neutral",
-          last_activity: now, created: now,
-        });
-      }
-    }
-
-    // Close threads from LLM closures
-    for (const closure of analysis.closures) {
-      for (const thread of this.threads) {
-        if (thread.status === "open" && matchesThread(thread, closure)) {
-          thread.status = "closed";
-          thread.last_activity = now;
-        }
-      }
-    }
-
-    // Update session mood
-    if (analysis.mood && analysis.mood !== "neutral") {
-      this.sessionMood = analysis.mood;
-    }
-
-    this.dirty = true;
-    this.persist();
-  }
-
  /** Close threads matching closure signals. */
  private closeMatching(content: string, closures: boolean[], now: string): void {
    if (closures.length === 0) return;
@@ -245,14 +245,6 @@ export type CortexConfig = {
  patterns: {
    language: "en" | "de" | "both";
  };
-  llm: {
-    enabled: boolean;
-    endpoint: string;
-    model: string;
-    apiKey: string;
-    timeoutMs: number;
-    batchSize: number;
-  };
};

// ============================================================
@@ -1,97 +0,0 @@
-import { describe, it, expect } from "vitest";
-import { resolveLlmConfig, LlmEnhancer, LLM_DEFAULTS } from "../src/llm-enhance.js";
-
-const mockLogger = {
-  info: () => {},
-  warn: () => {},
-  error: () => {},
-  debug: () => {},
-};
-
-describe("resolveLlmConfig", () => {
-  it("returns defaults when no config provided", () => {
-    const config = resolveLlmConfig(undefined);
-    expect(config).toEqual(LLM_DEFAULTS);
-    expect(config.enabled).toBe(false);
-  });
-
-  it("returns defaults for empty object", () => {
-    const config = resolveLlmConfig({});
-    expect(config).toEqual(LLM_DEFAULTS);
-  });
-
-  it("merges partial config with defaults", () => {
-    const config = resolveLlmConfig({
-      enabled: true,
-      model: "qwen2.5:7b",
-    });
-    expect(config.enabled).toBe(true);
-    expect(config.model).toBe("qwen2.5:7b");
-    expect(config.endpoint).toBe(LLM_DEFAULTS.endpoint);
-    expect(config.timeoutMs).toBe(LLM_DEFAULTS.timeoutMs);
-    expect(config.batchSize).toBe(LLM_DEFAULTS.batchSize);
-  });
-
-  it("respects custom endpoint and apiKey", () => {
-    const config = resolveLlmConfig({
-      enabled: true,
-      endpoint: "https://api.openai.com/v1",
-      model: "gpt-4o-mini",
-      apiKey: "sk-test",
-      timeoutMs: 30000,
-      batchSize: 5,
-    });
-    expect(config.endpoint).toBe("https://api.openai.com/v1");
-    expect(config.apiKey).toBe("sk-test");
-    expect(config.timeoutMs).toBe(30000);
-    expect(config.batchSize).toBe(5);
-  });
-
-  it("ignores invalid types", () => {
-    const config = resolveLlmConfig({
-      enabled: "yes" as any,
-      model: 42 as any,
-      timeoutMs: "fast" as any,
-    });
-    expect(config.enabled).toBe(LLM_DEFAULTS.enabled);
-    expect(config.model).toBe(LLM_DEFAULTS.model);
-    expect(config.timeoutMs).toBe(LLM_DEFAULTS.timeoutMs);
-  });
-});
-
-describe("LlmEnhancer", () => {
-  it("returns null when disabled", async () => {
-    const enhancer = new LlmEnhancer({ ...LLM_DEFAULTS, enabled: false }, mockLogger);
-    const result = await enhancer.addMessage("test message", "user1", "user");
-    expect(result).toBeNull();
-  });
-
-  it("buffers messages until batchSize", async () => {
-    const enhancer = new LlmEnhancer(
-      { ...LLM_DEFAULTS, enabled: true, batchSize: 3 },
-      mockLogger,
-    );
-    // First two messages should buffer (no LLM call)
-    const r1 = await enhancer.addMessage("hello", "user1", "user");
-    expect(r1).toBeNull();
-    const r2 = await enhancer.addMessage("world", "assistant", "assistant");
-    expect(r2).toBeNull();
-    // Third would trigger LLM but will fail gracefully (no server)
-    const r3 = await enhancer.addMessage("test", "user1", "user");
-    // Returns null because localhost:11434 is not guaranteed
-    // The important thing is it doesn't throw
-    expect(r3 === null || typeof r3 === "object").toBe(true);
-  });
-
-  it("flush returns null when no messages buffered", async () => {
-    const enhancer = new LlmEnhancer({ ...LLM_DEFAULTS, enabled: true }, mockLogger);
-    const result = await enhancer.flush();
-    expect(result).toBeNull();
-  });
-
-  it("flush returns null when disabled", async () => {
-    const enhancer = new LlmEnhancer({ ...LLM_DEFAULTS, enabled: false }, mockLogger);
-    const result = await enhancer.flush();
-    expect(result).toBeNull();
-  });
-});

@@ -1,57 +0,0 @@
-import { describe, it, expect } from "vitest";
-import { isNoiseTopic } from "../src/patterns.js";
-
-describe("isNoiseTopic", () => {
-  it("rejects short strings", () => {
-    expect(isNoiseTopic("foo")).toBe(true);
-    expect(isNoiseTopic("ab")).toBe(true);
-    expect(isNoiseTopic("")).toBe(true);
-  });
-
-  it("rejects single blacklisted words", () => {
-    expect(isNoiseTopic("that")).toBe(true);
-    expect(isNoiseTopic("this")).toBe(true);
-    expect(isNoiseTopic("nichts")).toBe(true);
-    expect(isNoiseTopic("alles")).toBe(true);
-  });
-
-  it("rejects all-blacklisted multi-word", () => {
-    expect(isNoiseTopic("das was es")).toBe(true);
-    expect(isNoiseTopic("the that it")).toBe(true);
-  });
-
-  it("rejects sentence fragments starting with pronouns", () => {
-    expect(isNoiseTopic("ich habe nichts gepostet")).toBe(true);
-    expect(isNoiseTopic("we should do something")).toBe(true);
-    expect(isNoiseTopic("er hat gesagt")).toBe(true);
-    expect(isNoiseTopic("I think maybe")).toBe(true);
-  });
-
-  it("rejects topics with newlines", () => {
-    expect(isNoiseTopic("line one\nline two")).toBe(true);
-  });
-
-  it("rejects topics longer than 60 chars", () => {
-    const long = "a".repeat(61);
-    expect(isNoiseTopic(long)).toBe(true);
-  });
-
-  it("accepts valid topic names", () => {
-    expect(isNoiseTopic("Auth Migration")).toBe(false);
-    expect(isNoiseTopic("Plugin-Repo Setup")).toBe(false);
-    expect(isNoiseTopic("NATS Event Store")).toBe(false);
-    expect(isNoiseTopic("Cortex Demo")).toBe(false);
-    expect(isNoiseTopic("Security Audit")).toBe(false);
-    expect(isNoiseTopic("Deployment Pipeline")).toBe(false);
-  });
-
-  it("accepts german topic names", () => {
-    expect(isNoiseTopic("Darkplex Analyse")).toBe(false);
-    expect(isNoiseTopic("Credential Rotation")).toBe(false);
-    expect(isNoiseTopic("Thread Tracking Qualität")).toBe(false);
-  });
-
-  it("rejects 'nichts gepostet habe' (real-world noise)", () => {
-    expect(isNoiseTopic("nichts gepostet habe")).toBe(true);
-  });
-});

@@ -260,10 +260,10 @@ describe("topic patterns", () => {
     expect(anyMatch(topic, "just a random sentence")).toBe(false);
   });

-  it("limits captured topic to 40 chars", () => {
-    const topics = captureTopics(topic, "back to the very long topic name that exceeds forty characters limit here and keeps going");
+  it("limits captured topic to 30 chars", () => {
+    const topics = captureTopics(topic, "back to the very long topic name that exceeds thirty characters limit here");
     if (topics.length > 0) {
-      expect(topics[0].length).toBeLessThanOrEqual(41);
+      expect(topics[0].length).toBeLessThanOrEqual(31);
     }
   });
 });

@@ -274,7 +274,7 @@ describe("topic patterns", () => {
   it("captures topic after 'zurück zu'", () => {
     const topics = captureTopics(topic, "Zurück zu der Auth-Migration");
     expect(topics.length).toBeGreaterThan(0);
-    expect(topics[0]).toContain("Auth-Migration");
+    expect(topics[0]).toContain("der Auth-Migration");
   });

   it("captures topic after 'jetzt zu'", () => {