
feat(telegram): streaming output support (Edit + Draft modes)#276

Open
Youhai020616 wants to merge 1 commit into RightNow-AI:main from Youhai020616:feat/telegram-streaming

Conversation


@Youhai020616 Youhai020616 commented Mar 3, 2026

Summary

  • Add progressive streaming output for the Telegram adapter so AI responses display incrementally ("typing effect") instead of waiting for the full response
  • Implement two streaming strategies: Edit mode (sendMessage + editMessageText, works in groups and DMs) and Draft mode (sendMessageDraft Bot API 9.5, DM only, experimental)
  • Auto mode (default) selects the best strategy; stream_mode: off preserves existing behavior with zero impact

Changes

| File | What |
| --- | --- |
| openfang-types/config.rs | TelegramStreamMode enum, TelegramStreamConfig struct, new fields on TelegramConfig |
| openfang-channels/types.rs | ChannelStreamEvent enum, StreamSink trait, supports_streaming() / begin_stream() on ChannelAdapter |
| openfang-channels/formatter.rs | escape_html_entities(), find_split_point() helpers |
| openfang-channels/telegram.rs | TelegramStreamSink implementing StreamSink, Edit/Draft modes, rate-limit backoff, message splitting, cursor indicator |
| openfang-channels/bridge.rs | send_message_streaming() on ChannelBridgeHandle, dispatch_streaming(), streaming fallback in dispatch |
| openfang-api/channel_bridge.rs | send_message_streaming() impl with StreamEvent → ChannelStreamEvent conversion |

+819 lines, -19 lines across 7 files (including Cargo.lock)

Key Features

  • Dual-threshold buffering: flushes when both time (500ms) AND character count (40 chars) thresholds are met
  • Cursor indicator shown during streaming, removed on finalize
  • 429 rate-limit adaptive backoff: doubles flush interval after 3 consecutive rate limits
  • Auto message splitting at 3900 chars with smart break points (paragraph > line > sentence > word)
  • Draft → Edit fallback: if Draft mode fails, automatically degrades to Edit
  • Full backward compatibility: all trait changes use default implementations
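
The dual-threshold buffering described above can be sketched as a small predicate: a flush fires only when both the elapsed time and the buffered character count clear their thresholds. This is an illustrative model, not the PR's actual code; the struct and method names are assumptions.

```rust
// Minimal sketch of dual-threshold flush buffering (names hypothetical):
// flush only when BOTH the time threshold AND the character threshold
// are met, so short bursts coalesce and long pauses still flush promptly
// once enough text has accumulated.
struct FlushPolicy {
    flush_interval_ms: u64,
    min_chars_per_flush: usize,
}

impl FlushPolicy {
    fn should_flush(&self, elapsed_ms: u64, buffered_chars: usize) -> bool {
        elapsed_ms >= self.flush_interval_ms && buffered_chars >= self.min_chars_per_flush
    }
}
```

With the defaults from the config example (500 ms, 40 chars), 10 buffered characters after 600 ms would not flush, and neither would 100 characters after 100 ms.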

Config Example

```toml
[channels.telegram]
stream_mode = "auto"  # off | edit | draft | auto

[channels.telegram.stream_config]
flush_interval_ms = 500
min_chars_per_flush = 40
max_message_chars = 3900
cursor_indicator = ""
```
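
On the Rust side, the stream_config table above might deserialize into a struct like the following. The field names mirror the TOML keys; the struct shape and Default impl are assumptions for illustration, not taken from the PR.

```rust
// Hypothetical shape of TelegramStreamConfig; defaults mirror the TOML
// example above. The real PR presumably derives serde traits on top.
struct TelegramStreamConfig {
    flush_interval_ms: u64,
    min_chars_per_flush: usize,
    max_message_chars: usize,
    cursor_indicator: String,
}

impl Default for TelegramStreamConfig {
    fn default() -> Self {
        Self {
            flush_interval_ms: 500,
            min_chars_per_flush: 40,
            max_message_chars: 3_900,
            cursor_indicator: String::new(),
        }
    }
}
```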

🤖 Generated with Claude Code

Add progressive message display for Telegram so AI responses appear
like typing instead of waiting for the full response. Implements two
strategies: Edit mode (sendMessage + editMessageText, works everywhere)
and Draft mode (sendMessageDraft, DM only, experimental). Auto mode
selects the best strategy. Includes rate-limit adaptive backoff,
cursor indicator, message splitting for long responses, and full
backward compatibility via default trait implementations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
egargale pushed a commit to egargale/openfang that referenced this pull request Mar 5, 2026
@Youhai020616 (Author)

🔍 Adversarial Review — PR #276

Size: large (1106 lines, 7 files) | Reviewers: skeptic, architect, minimalist

Verdict: REJECT

Six independent HIGH findings, three with cross-reviewer consensus — correctness, data integrity, and API contract failures make this unsafe to merge.


Findings

1. [HIGH | CONSENSUS ×3] UTF-8 byte-slice panic — formatter.rs:214
find_split_point slices &text[..max_chars] by raw byte offset. CJK/emoji input will land mid-codepoint and panic, crashing streaming for all non-ASCII users. Telegram's limit is code points, not bytes.
Fix: clamp via char_indices() / is_char_boundary; measure limits in chars.
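
A minimal sketch of that fix, assuming a byte-budget interpretation: walk the candidate split index back to the nearest char boundary before slicing, so multi-byte codepoints are never bisected. Function names are illustrative, not the PR's.

```rust
// Hypothetical fix for finding 1: never slice &str at a raw byte offset.
// Walk the index back until it lands on a char boundary, then split there.
fn floor_char_boundary(text: &str, max_bytes: usize) -> usize {
    if max_bytes >= text.len() {
        return text.len();
    }
    let mut idx = max_bytes;
    // is_char_boundary(0) is always true, so this terminates.
    while !text.is_char_boundary(idx) {
        idx -= 1;
    }
    idx
}

fn split_at_safe(text: &str, max_bytes: usize) -> (&str, &str) {
    text.split_at(floor_char_boundary(text, max_bytes))
}
```

Measuring the limit in chars instead of bytes, as the fix recommends, would use `char_indices()` to find the offset of the nth character rather than a byte budget.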

2. [HIGH | CONSENSUS ×2] Partial-stream fallback creates ghost messages — bridge.rs:689
On dispatch_streaming failure, the caller falls back to dispatch_message, but the frozen cursor-indicator message is never deleted. The user sees a stale ghost plus a full duplicate reply. No abort-and-cleanup contract exists.
Fix: track "bytes emitted" flag; skip fallback and send a recovery note instead; delete or overwrite the partial message.

3. [HIGH] Abnormal stream close marked as successful delivery — bridge.rs:731
If the producer task panics or drops the channel without sending Complete, the loop exits normally, finalize() runs, and record_delivery(..., true) is called — truncated output is recorded as success.
Fix: require explicit Complete; treat channel close without it as Err; record failure metadata.
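
The suggested contract can be modeled with a std mpsc channel (standing in for the tokio one): channel closure without an explicit Complete event is an error, never a successful delivery. Event and function names are illustrative.

```rust
// Hypothetical sketch of the fix for finding 3: the drain loop only
// reports success on an explicit Complete; a sender that drops the
// channel mid-stream yields Err, so truncated output is never recorded
// as a successful delivery.
use std::sync::mpsc::{channel, Receiver};

enum ChannelStreamEvent {
    Delta(String),
    Complete,
}

fn drain_stream(rx: Receiver<ChannelStreamEvent>) -> Result<String, String> {
    let mut out = String::new();
    for ev in rx {
        match ev {
            ChannelStreamEvent::Delta(chunk) => out.push_str(&chunk),
            // Only an explicit Complete marks the stream as delivered.
            ChannelStreamEvent::Complete => return Ok(out),
        }
    }
    // Sender dropped without Complete: treat as a truncated stream.
    Err(format!(
        "stream closed after {} chars without Complete",
        out.chars().count()
    ))
}
```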

4. [HIGH | CONSENSUS ×2] retry_after parser is structurally broken — telegram.rs (handle_rate_limit)
The parser splits on the digits it is trying to extract, guaranteeing it never produces a valid backoff value. Under sustained 429s the adapter hammers Telegram at minimum interval, worsening rate-limit violations.
Fix: parse retry_after from the structured JSON response at the HTTP layer; discard the string-splitting path.
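
Telegram's 429 responses carry the wait time as a structured field (`parameters.retry_after`) alongside the human-readable description. A std-only sketch of pulling it from the body is below; a real implementation would deserialize the JSON properly (e.g. with serde) rather than scan for the key.

```rust
// Hypothetical sketch for finding 4: read retry_after from the structured
// JSON field instead of string-splitting the description text. This is a
// contract illustration, not production JSON parsing.
fn parse_retry_after(body: &str) -> Option<u64> {
    let key = "\"retry_after\":";
    let start = body.find(key)? + key.len();
    let digits: String = body[start..]
        .chars()
        .skip_while(|c| c.is_whitespace())
        .take_while(|c| c.is_ascii_digit())
        .collect();
    digits.parse().ok()
}
```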

5. [HIGH] Auto mode permanently lies — telegram.rs:492
TelegramStreamMode::Auto unconditionally maps to StreamMode::Edit with no runtime selection logic. The variant is a hardcoded alias wearing a smart-routing name; user configs written against Auto will silently stay on Edit forever.
Fix: either remove Auto until selection logic exists, or document it as Edit alias explicitly in the schema.

6. [HIGH] Trait coupled to tokio::sync::mpsc::Receiver — bridge.rs:130
send_message_streaming returns a concrete channel receiver, locking the interface to a single tokio primitive. Broadcast, futures::channel::mpsc, and Stream-based producers all require unnecessary adapters.
Fix: return Pin<Box<dyn Stream<Item=ChannelStreamEvent> + Send>> or a newtype at the trait boundary.
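
The decoupling idea can be modeled without async machinery: a newtype over a boxed iterator hides the concrete producer, so callers depend only on the event sequence. In the async codebase the same shape would wrap Pin<Box<dyn Stream<Item = ChannelStreamEvent> + Send>>; all names here are illustrative.

```rust
// Hypothetical boundary type for finding 6 (sync model): callers see an
// opaque event stream, not tokio's mpsc::Receiver, so broadcast channels
// or other producers can back it without interface changes.
enum ChannelStreamEvent {
    Delta(String),
    Complete,
}

struct EventStream(Box<dyn Iterator<Item = ChannelStreamEvent> + Send>);

impl EventStream {
    fn from_iter<I>(iter: I) -> Self
    where
        I: IntoIterator<Item = ChannelStreamEvent>,
        I::IntoIter: Send + 'static,
    {
        EventStream(Box::new(iter.into_iter()))
    }
}

impl Iterator for EventStream {
    type Item = ChannelStreamEvent;
    fn next(&mut self) -> Option<ChannelStreamEvent> {
        self.0.next()
    }
}
```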

7. [MEDIUM | CONSENSUS ×2] message_id = 0 sentinel swallowed at all layers — telegram.rs:104, bridge.rs
api_send_message_with_id returns Ok(0) on chunk failure. TelegramStreamSink::flush stores 0 as self.message_id; all subsequent editMessageText calls silently fail with Telegram's "message not found." No layer surfaces or recovers the error.
Fix: return Err for any failed chunk; validate message_id > 0 before proceeding; propagate upward.
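
Rejecting the sentinel at the boundary is a one-liner; a sketch (function name assumed):

```rust
// Hypothetical guard for finding 7: refuse to store a 0 message_id so
// later editMessageText calls can't fail silently against it.
fn validate_message_id(raw: i64) -> Result<i64, String> {
    if raw > 0 {
        Ok(raw)
    } else {
        Err(format!(
            "sendMessage returned invalid message_id {raw}; aborting stream"
        ))
    }
}
```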

8. [MEDIUM | CONSENSUS ×2] Detached converter task with no error propagation — channel_bridge.rs:76
The spawned StreamEvent → ChannelStreamEvent relay task is detached. A crash or unexpected upstream close emits no ChannelStreamEvent::Error; downstream finalizes as success.
Fix: supervise with a join handle; explicitly send Error before drop, or inline the mapping to eliminate the extra channel entirely.

9. [MEDIUM] flush() swallows edit errors, returns Ok — telegram.rs:531
Non-429 editMessageText failures are warned and dropped; flush() still returns Ok(()) and advances flush state. The caller believes the stream progressed while the Telegram UI is stale.
Fix: classify retryable vs fatal; return Err on persistent edit failure so the caller can abort deterministically.
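
One way to structure that classification, assuming HTTP status codes are available at the call site (the error codes and backoff values here are illustrative, not from the PR):

```rust
// Hypothetical classifier for finding 9: 429 and 5xx are retryable with a
// backoff; everything else (e.g. 400 "message not found") is fatal, so
// flush() can return Err and let the caller abort deterministically.
#[derive(Debug, PartialEq)]
enum EditFailure {
    Retryable { retry_after_ms: u64 },
    Fatal(String),
}

fn classify_edit_error(status: u16, description: &str) -> EditFailure {
    match status {
        429 => EditFailure::Retryable { retry_after_ms: 1_000 },
        500..=599 => EditFailure::Retryable { retry_after_ms: 500 },
        _ => EditFailure::Fatal(format!(
            "editMessageText failed ({status}): {description}"
        )),
    }
}
```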

10. [MEDIUM] Delivery record split-brain for partial streams — bridge.rs:756
If streaming sends N messages then fails before finalize(), no delivery is recorded and no orphan cleanup runs. If the non-streaming fallback also fires, it records its own delivery. What the user received and what the system believes was delivered diverge.
Fix: record partial-stream attempts with failure metadata; coordinate or suppress the fallback delivery record.


What Went Well

  • Layered decomposition (sink / formatter / bridge) is architecturally sound and testable in isolation.
  • Rate-limit awareness and the 429 backoff intent are correct operational thinking — the implementation just needs a structural fix.
  • Edit-mode streaming is the right UX pattern for Telegram's API constraints.

Lead Judgment

| # | Finding | Judgment |
| --- | --- | --- |
| 1 | UTF-8 panic | Block — crashes on all CJK/emoji input; consensus ×3 |
| 2 | Ghost message on fallback | Block — data integrity; consensus ×2 |
| 3 | Truncated stream marked success | Block — delivery correctness |
| 4 | Broken retry_after parser | Block — actively worsens API abuse under load; consensus ×2 |
| 5 | Auto mode lies | Block — schema contract violation baked into user configs |
| 6 | Trait coupled to mpsc receiver | Block — extensibility wall; correct now before impls proliferate |
| 7 | message_id = 0 sentinel | Must fix — silent multi-layer failure |
| 8 | Detached converter task | Must fix — consensus ×2; eliminate with inline mapping |
| 9 | flush() swallows errors | Must fix — breaks abort semantics |
| 10 | Delivery split-brain | Must fix — operational integrity |
