
refactor: streamline usage reporting by consolidating record publishing logic #2254

Open

clcc2019 wants to merge 1 commit into router-for-me:main from clcc2019:main
Conversation

@clcc2019

refactor: streamline usage reporting by consolidating record publishing logic

- Introduced a new method `buildRecord` in `usageReporter` to encapsulate record creation, improving code readability and maintainability.
- Added latency tracking to usage records, ensuring accurate reporting of request latencies.
- Updated tests to validate the inclusion of latency in usage records and ensure proper functionality of the new reporting structure.
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the core usage reporting mechanism to centralize record creation and integrate request latency tracking. The changes aim to improve the modularity and clarity of the usage reporting code while adding a critical performance metric to the collected data, enabling more comprehensive analysis of system performance.

Highlights

  • Refactored Usage Record Creation: Consolidated the logic for building usage records into a new buildRecord method within the usageReporter, enhancing code readability and maintainability.
  • Latency Tracking Implementation: Introduced the capability to track and report request latencies within usage records, providing valuable performance insights.
  • Enhanced Testing: Added new and updated existing tests to validate the correct inclusion of latency in usage records and ensure the robustness of the new reporting structure.


@gemini-code-assist (bot) left a comment


Code Review

This pull request effectively refactors the usage reporting logic by introducing a buildRecord method, which improves code clarity and maintainability. The addition of latency tracking is a valuable enhancement, and it has been implemented thoughtfully across the different layers, including the necessary updates to data structures and tests. The new tests adequately cover the added functionality. I have one suggestion to improve the structure of a new test case to enhance its readability.

Comment on lines +35 to +96
```go
func TestRequestStatisticsMergeSnapshotDedupIgnoresLatency(t *testing.T) {
	stats := NewRequestStatistics()
	timestamp := time.Date(2026, 3, 20, 12, 0, 0, 0, time.UTC)
	first := StatisticsSnapshot{
		APIs: map[string]APISnapshot{
			"test-key": {
				Models: map[string]ModelSnapshot{
					"gpt-5.4": {
						Details: []RequestDetail{{
							Timestamp: timestamp,
							LatencyMs: 0,
							Source:    "[email protected]",
							AuthIndex: "0",
							Tokens: TokenStats{
								InputTokens:  10,
								OutputTokens: 20,
								TotalTokens:  30,
							},
						}},
					},
				},
			},
		},
	}
	second := StatisticsSnapshot{
		APIs: map[string]APISnapshot{
			"test-key": {
				Models: map[string]ModelSnapshot{
					"gpt-5.4": {
						Details: []RequestDetail{{
							Timestamp: timestamp,
							LatencyMs: 2500,
							Source:    "[email protected]",
							AuthIndex: "0",
							Tokens: TokenStats{
								InputTokens:  10,
								OutputTokens: 20,
								TotalTokens:  30,
							},
						}},
					},
				},
			},
		},
	}

	result := stats.MergeSnapshot(first)
	if result.Added != 1 || result.Skipped != 0 {
		t.Fatalf("first merge = %+v, want added=1 skipped=0", result)
	}

	result = stats.MergeSnapshot(second)
	if result.Added != 0 || result.Skipped != 1 {
		t.Fatalf("second merge = %+v, want added=0 skipped=1", result)
	}

	snapshot := stats.Snapshot()
	details := snapshot.APIs["test-key"].Models["gpt-5.4"].Details
	if len(details) != 1 {
		t.Fatalf("details len = %d, want 1", len(details))
	}
}
```
Contributor


Severity: medium

The setup for first and second StatisticsSnapshot instances involves a lot of duplicated code, making the test harder to read and maintain. To improve this, you could create a helper function that builds the StatisticsSnapshot and takes the differing values (like LatencyMs) as parameters. This would reduce boilerplate and make the core logic of the test—what is actually different between the two snapshots—more apparent.

@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: c1bf298216


```go
AuthID:      r.authID,
AuthIndex:   r.authIndex,
RequestedAt: r.requestedAt,
Latency:     r.latency(),
```


P2: Capture latency after stream completion

Latency is fixed at the first call to buildRecord via Latency: r.latency(), but several streaming executors publish usage as soon as an intermediate usage event arrives (before the stream is finished). Since publishWithOutcome is protected by once.Do, long streams keep that early value and systematically under-report end-to-end request latency, making the new metric misleading for operational monitoring.

