RLM (Recurrent Language Model) Integration for LCM Compaction#174
Open
viaramb wants to merge 1 commit into Martian-Engineering:main from
…compaction

This commit introduces the RLM integration into the lossless-claw LCM compaction system. RLM enables pattern-based summarization that identifies recurring themes, topics, and entities across conversation summaries.

Key features:
- Pattern-based summarization for depth >= 2 compaction
- Detects recurring themes, entities, sentiment progression, unresolved questions
- Seamless fallback to standard escalation when patterns insufficient
- Heuristic + optional LLM-assisted pattern extraction
- Fully backward compatible (disabled by default)

Configuration options (env vars or plugin config):
- LCM_RLM_ENABLED / rlmEnabled
- LCM_RLM_PROVIDER / rlmProvider
- LCM_RLM_MODEL / rlmModel
- LCM_RLM_MIN_DEPTH / rlmMinDepth (default: 2)
- LCM_RLM_PATTERN_THRESHOLD / rlmPatternThreshold (default: 0.7)

New files:
- src/rlm/types.ts - RLM type definitions
- src/rlm/rlm.ts - Core RlmEngine class
- src/rlm/index.ts - Module exports
- test/rlm-integration.test.ts - Integration tests
- test/rlm-patterns.test.ts - Pattern detection unit tests

Modified files:
- src/db/config.ts - Added RLM config fields
- src/compaction.ts - RLM integration in summarizeWithEscalation
- src/summarize.ts - RLM-aware summarization
- README.md - Documentation updates
- openclaw.plugin.json - Plugin metadata
Summary
This PR introduces the Recurrent Language Model (RLM) integration into the lossless-claw LCM (Lossless Context Management) compaction system. RLM enables pattern-based summarization that identifies recurring themes, topics, and entities across conversation summaries to produce more coherent and information-dense condensed summaries at higher compaction depths.
What is RLM?
RLM (Recurrent Language Model) is a specialized summarization engine designed for multi-level compaction scenarios. Unlike standard summarization that treats each compaction pass in isolation, RLM:
• Detects recurring patterns across multiple summaries (themes, topics, entities, sentiment shifts)
• Identifies cross-cutting concerns that span multiple conversation segments
• Generates pattern-aware summaries that reference detected patterns instead of repeating them
• Falls back gracefully to standard escalation summarization when patterns are insufficient
This is particularly valuable at depth >= 2 where summaries-of-summaries can become repetitive or lose important thematic connections.
Key Features
The RLM engine analyzes RlmSummaryEntry objects (structured metadata extracted from summaries) to detect:
• Recurring themes - Topics that appear across multiple summaries
• Key entities - People, places, concepts mentioned repeatedly
• Sentiment progression - How sentiment evolves across conversation segments
• Unresolved questions - Open threads that carry forward
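As a rough illustration of how recurring-theme detection can work over this metadata, the heuristic reduces to frequency counting across entries. The `RlmSummaryEntry` shape below (a single `topics` field) and the threshold rule are illustrative assumptions, not the actual definitions in src/rlm/types.ts:

```typescript
// Illustrative sketch only: the real RlmSummaryEntry in src/rlm/types.ts
// carries more fields (entities, sentiment, open questions).
interface RlmSummaryEntry {
  topics: string[];
}

// A theme counts as "recurring" when it appears in at least
// patternThreshold (a 0..1 fraction) of the summaries being compacted.
function detectRecurringThemes(
  entries: RlmSummaryEntry[],
  patternThreshold = 0.7,
): string[] {
  const counts = new Map<string, number>();
  for (const entry of entries) {
    // De-duplicate within one entry so a topic counts once per summary.
    for (const topic of new Set(entry.topics)) {
      counts.set(topic, (counts.get(topic) ?? 0) + 1);
    }
  }
  const minCount = Math.ceil(entries.length * patternThreshold);
  return [...counts.entries()]
    .filter(([, n]) => n >= minCount)
    .map(([topic]) => topic);
}
```

With the default threshold of 0.7, a topic present in 3 of 3 summaries qualifies, while one present in 1 of 3 does not.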
RLM only activates when depth >= rlmMinDepth (default: 2), ensuring:
• Low-depth compaction uses fast standard summarization
• Higher-depth compaction benefits from pattern analysis
• Configurable threshold based on your use case
If RLM pattern detection fails or produces low-confidence results, the system automatically falls back to the standard three-level escalation (normal -> aggressive -> deterministic), ensuring robustness.
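The depth gate plus confidence-based fallback described above can be sketched as follows. All names here (`summarizeWithFallback`, `RlmResult`, the `run` method) are hypothetical; the real flow lives in `summarizeWithEscalation` in src/compaction.ts:

```typescript
type Summarizer = (text: string) => Promise<string>;

interface RlmResult {
  summary: string;
  confidence: number; // 0..1
}

// Hypothetical sketch: try RLM at sufficient depth, otherwise (or on
// low confidence) fall back to the three-level escalation chain.
async function summarizeWithFallback(
  text: string,
  depth: number,
  rlm: {
    minDepth: number;
    threshold: number;
    run: (t: string) => Promise<RlmResult | null>;
  },
  escalation: Summarizer[], // [normal, aggressive, deterministic]
): Promise<string> {
  if (depth >= rlm.minDepth) {
    const result = await rlm.run(text);
    if (result && result.confidence >= rlm.threshold) {
      return result.summary;
    }
    // Low confidence or no patterns: fall through to escalation.
  }
  for (const summarize of escalation) {
    try {
      return await summarize(text);
    } catch {
      // Escalate to the next, more aggressive level.
    }
  }
  throw new Error("all summarization levels failed");
}
```

The key property is that RLM is purely additive: any failure path lands in the same escalation chain that runs when RLM is disabled.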
RLM operates in two modes:
• Heuristic mode (default): Pattern detection via token analysis and frequency counting
• LLM-assisted mode: Optional LLM-based pattern extraction when a completion function is provided
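One plausible shape for the two-mode dispatch, assuming LLM-assisted mode is selected simply by supplying a completion function (the function name, prompt, and word-frequency heuristic are all illustrative, not the RlmEngine API):

```typescript
type CompletionFn = (prompt: string) => Promise<string>;

// Hypothetical dispatch: LLM-assisted when a completion function is
// provided, otherwise a simple token-frequency heuristic as a stand-in.
async function extractPatterns(
  summaries: string[],
  complete?: CompletionFn,
): Promise<string[]> {
  if (complete) {
    // LLM-assisted mode: ask the model to list recurring patterns.
    const reply = await complete(
      "List recurring themes, one per line:\n" + summaries.join("\n---\n"),
    );
    return reply.split("\n").map((s) => s.trim()).filter(Boolean);
  }
  // Heuristic mode: count words (4+ letters) that recur across summaries.
  const counts = new Map<string, number>();
  for (const s of summaries) {
    for (const word of new Set(s.toLowerCase().match(/[a-z]{4,}/g) ?? [])) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
  return [...counts].filter(([, n]) => n > 1).map(([w]) => w);
}
```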
Configuration
Options
Add to your plugins.entries.lossless-claw.config or use environment variables:
Example Configuration
{
  "plugins": {
    "entries": {
      "lossless-claw": {
        "config": {
          "rlmEnabled": true,
          "rlmProvider": "openai",
          "rlmModel": "gpt-4o-mini",
          "rlmMinDepth": 2,
          "rlmPatternThreshold": 0.7
        }
      }
    }
  }
}
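The same settings can be supplied through the environment variables paired with each option in the commit message (values here mirror the JSON example above):

```shell
# Environment-variable equivalents of the plugin config above.
export LCM_RLM_ENABLED=true
export LCM_RLM_PROVIDER=openai
export LCM_RLM_MODEL=gpt-4o-mini
export LCM_RLM_MIN_DEPTH=2            # default: 2
export LCM_RLM_PATTERN_THRESHOLD=0.7  # default: 0.7
```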
Files Changed
• src/db/config.ts - Added RLM configuration fields with env var support
• src/rlm/types.ts - Type definitions for RLM entries, patterns, and results
• src/rlm/rlm.ts - Core RlmEngine class with pattern detection and summarization
• src/rlm/index.ts - Module exports
• src/compaction.ts - Integration: RLM instantiation in constructor; summarizeWithEscalation uses RLM for depth >= minDepth
• test/rlm-integration.test.ts - Integration tests for CompactionEngine + RLM
• test/rlm-patterns.test.ts - Unit tests for pattern detection logic
Testing Status
All tests pass:
✓ test/rlm-integration.test.ts (11 tests)
✓ test/rlm-patterns.test.ts (7 tests)
Test Files 2 passed (2)
Tests 18 passed