refactor: streamline usage reporting by consolidating record publishing logic #2254

clcc2019 wants to merge 1 commit into router-for-me:main
Conversation
refactor: streamline usage reporting by consolidating record publishing logic

- Introduced a new method `buildRecord` in `usageReporter` to encapsulate record creation, improving code readability and maintainability.
- Added latency tracking to usage records, ensuring accurate reporting of request latencies.
- Updated tests to validate the inclusion of latency in usage records and ensure proper functionality of the new reporting structure.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the core usage reporting mechanism to centralize record creation and integrate request latency tracking. The changes aim to improve the modularity and clarity of the usage reporting code while adding a critical performance metric to the collected data, enabling more comprehensive analysis of system performance.
Code Review
This pull request effectively refactors the usage reporting logic by introducing a buildRecord method, which improves code clarity and maintainability. The addition of latency tracking is a valuable enhancement, and it has been implemented thoughtfully across the different layers, including the necessary updates to data structures and tests. The new tests adequately cover the added functionality. I have one suggestion to improve the structure of a new test case to enhance its readability.
```go
func TestRequestStatisticsMergeSnapshotDedupIgnoresLatency(t *testing.T) {
	stats := NewRequestStatistics()
	timestamp := time.Date(2026, 3, 20, 12, 0, 0, 0, time.UTC)
	first := StatisticsSnapshot{
		APIs: map[string]APISnapshot{
			"test-key": {
				Models: map[string]ModelSnapshot{
					"gpt-5.4": {
						Details: []RequestDetail{{
							Timestamp: timestamp,
							LatencyMs: 0,
							Source:    "[email protected]",
							AuthIndex: "0",
							Tokens: TokenStats{
								InputTokens:  10,
								OutputTokens: 20,
								TotalTokens:  30,
							},
						}},
					},
				},
			},
		},
	}
	second := StatisticsSnapshot{
		APIs: map[string]APISnapshot{
			"test-key": {
				Models: map[string]ModelSnapshot{
					"gpt-5.4": {
						Details: []RequestDetail{{
							Timestamp: timestamp,
							LatencyMs: 2500,
							Source:    "[email protected]",
							AuthIndex: "0",
							Tokens: TokenStats{
								InputTokens:  10,
								OutputTokens: 20,
								TotalTokens:  30,
							},
						}},
					},
				},
			},
		},
	}

	result := stats.MergeSnapshot(first)
	if result.Added != 1 || result.Skipped != 0 {
		t.Fatalf("first merge = %+v, want added=1 skipped=0", result)
	}

	result = stats.MergeSnapshot(second)
	if result.Added != 0 || result.Skipped != 1 {
		t.Fatalf("second merge = %+v, want added=0 skipped=1", result)
	}

	snapshot := stats.Snapshot()
	details := snapshot.APIs["test-key"].Models["gpt-5.4"].Details
	if len(details) != 1 {
		t.Fatalf("details len = %d, want 1", len(details))
	}
}
```
The setup for the `first` and `second` `StatisticsSnapshot` instances involves a lot of duplicated code, making the test harder to read and maintain. To improve this, you could create a helper function that builds the `StatisticsSnapshot` and takes the differing values (such as `LatencyMs`) as parameters. This would reduce boilerplate and make the core logic of the test, namely what actually differs between the two snapshots, more apparent.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: c1bf298216
```go
AuthID:      r.authID,
AuthIndex:   r.authIndex,
RequestedAt: r.requestedAt,
Latency:     r.latency(),
```
Capture latency after stream completion
Latency is fixed at the first call to `buildRecord` via `Latency: r.latency()`, but several streaming executors publish usage as soon as an intermediate usage event arrives (before the stream is finished). Since `publishWithOutcome` is protected by `once.Do`, long streams keep that early value and systematically under-report end-to-end request latency, making the new metric misleading for operational monitoring.