
RLM (Recurrent Language Model) Integration for LCM Compaction#174

Open
viaramb wants to merge 1 commit into Martian-Engineering:main from viaramb:main

Conversation


viaramb commented Mar 24, 2026

Summary

This PR introduces the Recurrent Language Model (RLM) integration into the lossless-claw LCM (Lossless Context Management) compaction system. RLM enables pattern-based summarization that identifies recurring themes, topics, and entities across conversation summaries to produce more coherent and information-dense condensed summaries at higher compaction depths.

What is RLM?

RLM (Recurrent Language Model) is a specialized summarization engine designed for multi-level compaction scenarios. Unlike standard summarization that treats each compaction pass in isolation, RLM:

• Detects recurring patterns across multiple summaries (themes, topics, entities, sentiment shifts)
• Identifies cross-cutting concerns that span multiple conversation segments
• Generates pattern-aware summaries that reference detected patterns instead of repeating them
• Falls back gracefully to standard escalation summarization when patterns are insufficient

This is particularly valuable at depth >= 2 where summaries-of-summaries can become repetitive or lose important thematic connections.
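
The recurring-pattern detection described above can be sketched as a simple cross-summary frequency count. This is an illustrative sketch only, assuming summaries arrive as plain strings; the function and parameter names (`detectRecurringThemes`, `minOccurrences`) are hypothetical, not the PR's actual API.

```typescript
// Hypothetical heuristic for recurring-theme detection: a token counts
// as "recurring" if it appears in at least minOccurrences distinct
// summaries. Names here are illustrative, not the PR's real interface.
function detectRecurringThemes(
  summaries: string[],
  minOccurrences = 2,
): string[] {
  const counts = new Map<string, number>();
  for (const summary of summaries) {
    // Count each token at most once per summary, so repetition inside a
    // single summary does not register as a cross-summary recurrence.
    const tokens = new Set(
      summary.toLowerCase().match(/[a-z][a-z0-9-]{3,}/g) ?? [],
    );
    for (const t of tokens) counts.set(t, (counts.get(t) ?? 0) + 1);
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= minOccurrences)
    .map(([token]) => token);
}
```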

Key Features

  1. Pattern-Based Summarization

The RLM engine analyzes RlmSummaryEntry objects (structured metadata extracted from summaries) to detect:

• Recurring themes - Topics that appear across multiple summaries
• Key entities - People, places, concepts mentioned repeatedly
• Sentiment progression - How sentiment evolves across conversation segments
• Unresolved questions - Open threads that carry forward
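
The PR names an RlmSummaryEntry type (in src/rlm/types.ts) but does not spell out its fields; the shape below is a guess assembled from the four categories listed above, and every field name is an assumption.

```typescript
// Hypothetical shape for an RlmSummaryEntry; the actual fields in
// src/rlm/types.ts may differ. Derived from the four pattern
// categories the PR description lists.
interface RlmSummaryEntry {
  summaryId: string;
  depth: number;                 // compaction depth this summary was produced at
  themes: string[];              // topics surfaced by pattern detection
  entities: string[];            // people, places, concepts mentioned repeatedly
  sentiment: "negative" | "neutral" | "positive";
  unresolvedQuestions: string[]; // open threads that carry forward
}

// Example entry for a hypothetical depth-2 summary.
const example: RlmSummaryEntry = {
  summaryId: "s-42",
  depth: 2,
  themes: ["auth", "login flow"],
  entities: ["viaramb"],
  sentiment: "neutral",
  unresolvedQuestions: ["migrate legacy tokens?"],
};
```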

  2. Depth-Aware Activation

RLM only activates when depth >= rlmMinDepth (default: 2), ensuring:

• Low-depth compaction uses fast standard summarization
• Higher-depth compaction benefits from pattern analysis
• Configurable threshold based on your use case
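
The activation rule above reduces to a single gate check. A minimal sketch, assuming the config field names match the documented options (the real check in src/compaction.ts may be structured differently):

```typescript
// Config fields mirror the documented options rlmEnabled / rlmMinDepth;
// the gate itself is an assumption about how src/compaction.ts decides.
interface RlmGateConfig {
  rlmEnabled: boolean;
  rlmMinDepth: number; // default: 2
}

// RLM runs only when enabled AND the current compaction depth has
// reached the configured minimum; otherwise standard summarization runs.
function shouldUseRlm(depth: number, config: RlmGateConfig): boolean {
  return config.rlmEnabled && depth >= config.rlmMinDepth;
}
```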

  3. Seamless Fallback

If RLM pattern detection fails or produces low-confidence results, the system automatically falls back to the standard three-level escalation (normal -> aggressive -> deterministic), ensuring robustness.
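
The fallback path can be sketched as a confidence gate, assuming the RLM result carries a confidence score that is compared against rlmPatternThreshold; the names `RlmAttempt` and `summarizeWithFallback` are illustrative, not the PR's actual symbols.

```typescript
// Hypothetical confidence-gated fallback. A null attempt models RLM
// failure; a score below the threshold models low-confidence patterns.
interface RlmAttempt {
  summary: string;
  confidence: number; // 0.0 - 1.0, compared against rlmPatternThreshold
}

function summarizeWithFallback(
  rlmAttempt: RlmAttempt | null,
  threshold: number,
  standardEscalation: () => string,
): string {
  if (rlmAttempt && rlmAttempt.confidence >= threshold) {
    return rlmAttempt.summary;
  }
  // RLM failed or produced low-confidence patterns: fall back to the
  // standard normal -> aggressive -> deterministic escalation path.
  return standardEscalation();
}
```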

  4. Heuristic + LLM Hybrid

RLM operates in two modes:

• Heuristic mode (default): Pattern detection via token analysis and frequency counting
• LLM-assisted mode: Optional LLM-based pattern extraction when a completion function is provided
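
One plausible reading of the two modes is that supplying a completion function opts the engine into LLM-assisted extraction; otherwise it stays heuristic. The signature below is an assumption, not the PR's actual interface.

```typescript
// Hypothetical mode switch: no completion function means heuristic
// token/frequency analysis; providing one enables LLM-assisted
// pattern extraction. CompletionFn's shape is an assumption.
type CompletionFn = (prompt: string) => Promise<string>;

function rlmMode(complete?: CompletionFn): "heuristic" | "llm-assisted" {
  return complete ? "llm-assisted" : "heuristic";
}
```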

Configuration
Options

Add to your plugins.entries.lossless-claw.config or use environment variables:

| Option | Environment Variable | Type | Default | Description |
| --- | --- | --- | --- | --- |
| rlmEnabled | LCM_RLM_ENABLED | boolean | false | Enable RLM pattern-based summarization |
| rlmProvider | LCM_RLM_PROVIDER | string | "" | Provider for RLM LLM calls (e.g., "openai") |
| rlmModel | LCM_RLM_MODEL | string | "" | Model for RLM analysis (e.g., "gpt-4o-mini") |
| rlmMinDepth | LCM_RLM_MIN_DEPTH | number | 2 | Minimum depth before RLM activates |
| rlmPatternThreshold | LCM_RLM_PATTERN_THRESHOLD | number | 0.7 | Confidence threshold for pattern acceptance (0.0-1.0) |
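
The env-var column above suggests a parsing step like the following. The variable names match the table; the parsing details and the function name `rlmConfigFromEnv` are assumptions about how src/db/config.ts might read them.

```typescript
// Hypothetical env-var parsing for the RLM options. Defaults mirror the
// table: disabled, empty provider/model, minDepth 2, threshold 0.7.
function rlmConfigFromEnv(env: Record<string, string | undefined>) {
  return {
    rlmEnabled: env.LCM_RLM_ENABLED === "true",
    rlmProvider: env.LCM_RLM_PROVIDER ?? "",
    rlmModel: env.LCM_RLM_MODEL ?? "",
    rlmMinDepth: Number(env.LCM_RLM_MIN_DEPTH ?? 2),
    rlmPatternThreshold: Number(env.LCM_RLM_PATTERN_THRESHOLD ?? 0.7),
  };
}
```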

Example Configuration

{
  "plugins": {
    "entries": {
      "lossless-claw": {
        "config": {
          "rlmEnabled": true,
          "rlmProvider": "openai",
          "rlmModel": "gpt-4o-mini",
          "rlmMinDepth": 2,
          "rlmPatternThreshold": 0.7
        }
      }
    }
  }
}

Files Changed

  • src/db/config.ts - Added RLM configuration fields with env var support
  • src/rlm/types.ts - Type definitions for RLM entries, patterns, and results
  • src/rlm/rlm.ts - Core RlmEngine class with pattern detection and summarization
  • src/rlm/index.ts - Module exports
  • src/compaction.ts - Integration: RLM instantiation in constructor, summarizeWithEscalation uses RLM for depth >= minDepth
  • test/rlm-integration.test.ts - Integration tests for CompactionEngine + RLM
  • test/rlm-patterns.test.ts - Unit tests for pattern detection logic

Testing Status

All tests pass:

✓ test/rlm-integration.test.ts (11 tests)
✓ test/rlm-patterns.test.ts (7 tests)

Test Files 2 passed (2)
Tests 18 passed

…compaction

This commit introduces the RLM integration into the lossless-claw LCM compaction
system. RLM enables pattern-based summarization that identifies recurring themes,
topics, and entities across conversation summaries.

Key features:
- Pattern-based summarization for depth >= 2 compaction
- Detects recurring themes, entities, sentiment progression, unresolved questions
- Seamless fallback to standard escalation when patterns insufficient
- Heuristic + optional LLM-assisted pattern extraction
- Fully backward compatible (disabled by default)

Configuration options (env vars or plugin config):
- LCM_RLM_ENABLED / rlmEnabled
- LCM_RLM_PROVIDER / rlmProvider
- LCM_RLM_MODEL / rlmModel
- LCM_RLM_MIN_DEPTH / rlmMinDepth (default: 2)
- LCM_RLM_PATTERN_THRESHOLD / rlmPatternThreshold (default: 0.7)

New files:
- src/rlm/types.ts - RLM type definitions
- src/rlm/rlm.ts - Core RlmEngine class
- src/rlm/index.ts - Module exports
- test/rlm-integration.test.ts - Integration tests
- test/rlm-patterns.test.ts - Pattern detection unit tests

Modified files:
- src/db/config.ts - Added RLM config fields
- src/compaction.ts - RLM integration in summarizeWithEscalation
- src/summarize.ts - RLM-aware summarization
- README.md - Documentation updates
- openclaw.plugin.json - Plugin metadata
