Guacamole #164

Conversation
- Added export_sheet_to_tsv.py to export a specified worksheet from Google Sheets to a TSV file.
- Updated test-jdk-25.sh to call the export script at the beginning, ensuring the latest plugin data is used for builds.
- Aborts the build script
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new CLI to export a Google Sheets worksheet to TSV, integrates that exporter and a Python venv into an overhauled test-jdk-25.sh with three-phase TSV/CSV-driven plugin processing and architecture handling, and reorganizes JUnit5 candidate metadata with many re-links and added "Ban JUnit 4 imports" entries.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor User
participant CLI as export_sheet_to_tsv.py
participant GSheets as Google Sheets (gspread)
participant FS as File System
User->>CLI: run [SPREADSHEET, WORKSHEET, OUTPUT_TSV]
CLI->>GSheets: load service-account creds
CLI->>GSheets: open spreadsheet (URL / key / title)
alt open success
CLI->>GSheets: get_all_values() from worksheet
CLI->>FS: write TSV (tab-delimited)
CLI-->>User: "Exported N rows to OUTPUT_TSV"
else open error
CLI-->>User: print error and exit non-zero
end
sequenceDiagram
autonumber
actor Invoker
participant SH as test-jdk-25.sh
participant CLI as export_sheet_to_tsv.py
participant Build as compile_plugin
participant TSV as TSV_RESULTS_FILE
participant CSV as CSV_RESULTS_FILE
Invoker->>SH: start
SH->>SH: create/activate venv & install deps
SH->>CLI: run export -> TSV_FILE (may continue on failure)
SH->>SH: Phase 1: build JDK25_TRUE_SET from TSV (is_merged TRUE)
loop per plugin row
alt plugin in TRUE set
SH->>TSV: append seeded build_status (skip build)
else
SH->>Build: clone + build (maven/gradle with timeout)
alt build success
SH->>TSV: append success
SH->>CSV: append CSV result
else
SH->>TSV: append failure/timeout
end
end
end
SH-->>Invoker: finalize results, cleanup, and debug logs
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Summary of Changes
Hello @gounthar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new Python utility designed to export data from Google Sheets into a TSV format. This utility is then integrated into the test-jdk-25.sh script, which appears to be used for testing Jenkins plugins against JDK 25. The integration allows the shell script to dynamically retrieve plugin information, including their JDK25 compatibility status, directly from a Google Sheet. A key impact of this change is the optimization of the testing process, as the script can now intelligently skip building plugins that are already identified as JDK25 compatible, streamlining the overall workflow.
Highlights
- New Google Sheet Export Utility: A new Python script (export_sheet_to_tsv.py) has been added to programmatically export data from a specified Google Sheet and worksheet into a Tab Separated Value (TSV) file. This script requires Google service account credentials for authentication.
- test-jdk-25.sh Integration: The test-jdk-25.sh script now utilizes the new export_sheet_to_tsv.py utility to fetch plugin data from a Google Sheet named "Jenkins PR Tracker". This allows the script to dynamically retrieve plugin names, PR numbers, PR URLs, and JDK25 compatibility status.
- Conditional Plugin Building: The test-jdk-25.sh script has been updated to conditionally skip the build process for Jenkins plugins that are already marked as "TRUE" for JDK25 compatibility in the imported TSV data, optimizing the testing workflow by avoiding redundant builds (see the sketch after this list).
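A minimal sketch of that skip logic, assuming a tab-separated file with columns name, pr_number, pr_url, and jdk25_status (compile_plugin is the build helper from test-jdk-25.sh; the file names here are illustrative):

while IFS=$'\t' read -r name pr_number pr_url jdk25_status; do
    [ "$name" = "name" ] && continue            # skip the header row
    if [ "$jdk25_status" = "TRUE" ]; then
        build_status="success"                  # already JDK25-compatible: skip the build
    else
        build_status=$(compile_plugin "$name")  # clone and build as usual
    fi
    echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "jdk-25-build-results.csv"
done < "plugins-jdk25.tsv"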
Code Review
This pull request introduces a new Python script to export Google Sheets data and integrates it into an existing shell script. While the functionality is a good addition, there are several critical issues that need to be addressed. The Python script contains a hardcoded credentials file, which is a major security concern. The shell script has a critical bug where it tries to read from an undefined variable, and another one that leads to a corrupted output CSV file due to inconsistent data formats. I've provided detailed comments and suggestions to fix these issues and improve the code's robustness and security.
export_sheet_to_tsv.py
Outdated
if len(sys.argv) != 4:
    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>")
    sys.exit(1)

SPREADSHEET = sys.argv[1]
WORKSHEET = sys.argv[2]
OUTPUT_TSV = sys.argv[3]

# Credentials
scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
credentials_file = 'concise-complex-344219-062a255ca56f.json'  # Update if needed
creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
Hardcoding the credentials file path is a significant security risk, as it could lead to leaked credentials if this code is committed to a public repository. It also makes the script less portable. The path should be provided at runtime, for example as a command-line argument. This requires updating the usage comment, the argument check, and how the credentials_file variable is set.
-if len(sys.argv) != 4:
-    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>")
-    sys.exit(1)
-SPREADSHEET = sys.argv[1]
-WORKSHEET = sys.argv[2]
-OUTPUT_TSV = sys.argv[3]
-# Credentials
-scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
-credentials_file = 'concise-complex-344219-062a255ca56f.json' # Update if needed
-creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
+if len(sys.argv) != 5:
+    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv> <credentials_file>")
+    sys.exit(1)
+SPREADSHEET = sys.argv[1]
+WORKSHEET = sys.argv[2]
+OUTPUT_TSV = sys.argv[3]
+CREDENTIALS_FILE = sys.argv[4]
+# Credentials
+scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
+creds = Credentials.from_service_account_file(CREDENTIALS_FILE, scopes=scope)
test-jdk-25.sh
Outdated
        build_status=$(compile_plugin "$name")
    fi
    echo "Finished processing plugin '$name' from line $line_number with status: $build_status" >> "$DEBUG_LOG"
    echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "$RESULTS_FILE"
This line appends data with 5 columns to $RESULTS_FILE. However, $RESULTS_FILE is initialized with a header for only 3 columns (plugin_name,popularity,build_status on line 75) and is also written to by a previous loop (line 246) which writes 3 columns. This will result in a corrupted CSV file. You should either use a different results file for this new logic or ensure the data format is consistent throughout the script.
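A minimal sketch of the "different results file" option (the file name and header are illustrative, matching the 5-column row written above):

EXTENDED_RESULTS_FILE="jdk-25-build-results-extended.csv"
echo "plugin_name,pr_number,pr_url,jdk25_status,build_status" > "$EXTENDED_RESULTS_FILE"
# ...then, inside the TSV-driven loop, write the 5-column rows here instead of $RESULTS_FILE:
echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "$EXTENDED_RESULTS_FILE"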
test-jdk-25.sh
Outdated
    else
        echo "Skipping header line $line_number" >> "$DEBUG_LOG"
    fi
done 3< "$TSV_FILE" # Use file descriptor 3 for reading the TSV
The script attempts to read from the variable $TSV_FILE, which is not defined. The variable holding the TSV filename is OUTPUT_TSV, defined at the top of the script. This will cause the script to fail.
-done 3< "$TSV_FILE" # Use file descriptor 3 for reading the TSV
+done 3< "$OUTPUT_TSV" # Use file descriptor 3 for reading the TSV
export_sheet_to_tsv.py
Outdated
except Exception as e:
    print(f"Error opening sheet: {e}")
    sys.exit(1)
Catching a generic Exception can hide underlying issues and make debugging harder. It's better to catch specific exceptions that you expect from the gspread library, such as gspread.exceptions.SpreadsheetNotFound, gspread.exceptions.WorksheetNotFound, and gspread.exceptions.APIError.
-except Exception as e:
-    print(f"Error opening sheet: {e}")
-    sys.exit(1)
+except (gspread.exceptions.SpreadsheetNotFound, gspread.exceptions.WorksheetNotFound, gspread.exceptions.APIError) as e:
+    print(f"Error opening sheet: {e}")
+    sys.exit(1)
- Introduce TSV_RESULTS_FILE with header for detailed fields
- Write per-plugin extended results to TSV_RESULTS_FILE, keep RESULTS_FILE for simplified CSV
- Add TSV_FILE alias for OUTPUT_TSV and update completion log
- Fix header/content mismatch in RESULTS_FILE by splitting schemas
Actionable comments posted: 2
♻️ Duplicate comments (5)
export_sheet_to_tsv.py (3)
6-13: Make credentials configurable (arg or env), and update usage accordingly.

Hardcoded creds are a security and portability risk. Accept an optional 4th arg for the credentials file, falling back to the GOOGLE_APPLICATION_CREDENTIALS env var. Update the usage string to reflect this.

-# Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>
-if len(sys.argv) != 4:
-    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>")
+# Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id_or_url> <worksheet_name> <output_tsv> [credentials_file]
+if len(sys.argv) not in (4, 5):
+    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id_or_url> <worksheet_name> <output_tsv> [credentials_file]")
     sys.exit(1)
 SPREADSHEET = sys.argv[1]
 WORKSHEET = sys.argv[2]
 OUTPUT_TSV = sys.argv[3]
15-19: Use least-privilege scopes and configurable credentials.

Switch to read-only scopes and use the optional CLI arg or GOOGLE_APPLICATION_CREDENTIALS. Exit with a clear error if missing.

 # Credentials
-scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
-credentials_file = 'concise-complex-344219-062a255ca56f.json' # Update if needed
-creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
+scope = [
+    "https://www.googleapis.com/auth/spreadsheets.readonly",
+    "https://www.googleapis.com/auth/drive.readonly",
+]
+# Optional 4th arg, otherwise fall back to env var
+credentials_file = sys.argv[4] if len(sys.argv) == 5 else os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
+if not credentials_file:
+    print("Error: Credentials file not provided and GOOGLE_APPLICATION_CREDENTIALS is not set.", file=sys.stderr)
+    sys.exit(2)
+creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
 client = gspread.authorize(creds)
28-31: Catch specific gspread exceptions and write errors to stderr.

Catching Exception is too broad and hides root causes. Surface likely errors and print to stderr.

-except Exception as e:
-    print(f"Error opening sheet: {e}")
-    sys.exit(1)
+except (gspread.exceptions.SpreadsheetNotFound,
+        gspread.exceptions.WorksheetNotFound,
+        gspread.exceptions.APIError) as e:
+    print(f"Error opening sheet: {e}", file=sys.stderr)
+    sys.exit(1)

test-jdk-25.sh (2)
3-6: Make spreadsheet config overrideable via env vars.

Improves portability and CI usage. Keeps current values as defaults.

-# Export Google Sheet to TSV before running the rest of the script
-SPREADSHEET_ID_OR_NAME="Jenkins PR Tracker" # or use the Sheet ID
-WORKSHEET_NAME="Sheet1" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
+# Export Google Sheet to TSV before running the rest of the script
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-Jenkins PR Tracker}" # or use the Sheet ID
+WORKSHEET_NAME="${WORKSHEET_NAME:-Sheet1}" # Change to your worksheet name
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
253-275: Fix undefined $TSV_FILE; use $OUTPUT_TSV and align results schema.
- Bug: $TSV_FILE is not defined. Use $OUTPUT_TSV which you set in Lines 4-6.
- The TSV stage appends 5 columns to RESULTS_FILE, but the header (Line 75) declares 3 columns. This corrupts the CSV.
Apply these fixes in this block:
-while IFS=$'\t' read -r name pr_number pr_url jdk25_status <&3; do
+while IFS=$'\t' read -r name pr_number pr_url jdk25_status <&3; do
@@
-    echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "$RESULTS_FILE"
+    echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "$RESULTS_FILE"
@@
-done 3< "$TSV_FILE" # Use file descriptor 3 for reading the TSV
+done 3< "$OUTPUT_TSV" # Use file descriptor 3 for reading the TSV
@@
-echo "Finished reading $TSV_FILE after $line_number lines." >> "$DEBUG_LOG"
+echo "Finished reading $OUTPUT_TSV after $line_number lines." >> "$DEBUG_LOG"

To resolve the schema mismatch, either:
- Use a separate results file for the TSV stage (recommended for clarity), e.g., RESULTS_FILE_TSV="jdk-25-build-results-tsv.csv", or
- Update the header at Line 75 to a 5-column schema and ensure the CSV stage writes empty placeholders for pr_number, pr_url, jdk25_status.
Example header if unifying:
plugin_name,popularity,pr_number,pr_url,jdk25_status,build_status

I can draft the unified header change and adjust the CSV loop to write the extra placeholders if you want.
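A minimal sketch of that adjustment, assuming the 6-column header above ($popularity stands in for whatever variable holds the second column in the CSV stage):

# Illustrative: pad pr_number, pr_url and jdk25_status with empty placeholders in the CSV-driven stage
echo "$name,$popularity,,,,$build_status" >> "$RESULTS_FILE"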
🧹 Nitpick comments (2)
export_sheet_to_tsv.py (1)
35-39: Ensure output directory exists and write UTF-8.

Create parent directory if provided and specify encoding to avoid locale surprises.

-# Write to TSV
-with open(OUTPUT_TSV, 'w', newline='') as f:
+# Write to TSV
+out_dir = os.path.dirname(OUTPUT_TSV)
+if out_dir:
+    os.makedirs(out_dir, exist_ok=True)
+with open(OUTPUT_TSV, 'w', newline='', encoding='utf-8') as f:
     writer = csv.writer(f, delimiter='\t')
     writer.writerows(rows)

test-jdk-25.sh (1)
8-12: Use direct command negation instead of checking $?; optionally log failures.

This is clearer and avoids race conditions if more commands are added later. If you want to log to DEBUG_LOG, define DEBUG_LOG before this block.

-python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"
-if [ $? -ne 0 ]; then
-    echo "Failed to export Google Sheet to TSV. Aborting."
-    exit 1
-fi
+if ! python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"; then
+    echo "Failed to export Google Sheet to TSV. Aborting."
+    exit 1
+fi

Note: once export_sheet_to_tsv.py supports a credentials arg, you can pass it here (e.g., "$CREDENTIALS_FILE") or rely on GOOGLE_APPLICATION_CREDENTIALS.
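For reference, that call might then look like the following sketch (CREDENTIALS_FILE is an assumed variable name, not something the script defines today):

python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" "${CREDENTIALS_FILE:-$GOOGLE_APPLICATION_CREDENTIALS}"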
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- export_sheet_to_tsv.py (1 hunks)
- test-jdk-25.sh (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
export_sheet_to_tsv.py (1)
upload_to_sheets.py (1)
update(124-143)
test-jdk-25.sh (3)
test-pr-builds.sh (1)
test_pr (53-142)

install-jdk-versions.sh (2)
update_path_for_jdk25 (292-308)
install_temurin_jdk25 (229-265)

upload_to_sheets.py (1)
process_top_250_plugins_data (417-435)
- Add plugins-jdk25.tsv for JDK 25 compatibility runs
- Update test-jdk-25.sh and export_sheet_to_tsv.py for JDK 25
- Refresh junit5_candidate_prs.txt
- Correct JUnit5 candidate metadata: fix swapped PRs, createdAt, and labels
- Add jdk-25-build-results-extended.csv for richer JDK 25 analysis
- Update jdk-25-build-results.csv with latest build outcomes
- Refresh plugin build logs (bouncycastle-api, credentials, display-url-api, mailer, scm-api, script-security, structs, workflow-step-api)
- Prune verbose history from build-debug.log to reduce noise and size
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
- Remove leftover Git conflict markers from export_sheet_to_tsv.py and restore the usage header
- Truncate build-debug.log to avoid committing verbose build output
- Refresh jdk-25-build-results.csv with latest build statuses
Actionable comments posted: 2
♻️ Duplicate comments (5)
export_sheet_to_tsv.py (2)
30-52: Unify spreadsheet opening logic; handle URL/key/title and catch specific gspread exceptions.

This block still contains conflict remnants and uses a broad except Exception. Adopt a single, robust path: URL → key → title, and catch specific exceptions.

-# Open spreadsheet and worksheet
-try:
-<<<<<<< HEAD
-    # Prefer opening by key; fall back to opening by title if that fails
-    try:
-        spreadsheet = client.open_by_key(SPREADSHEET)
-    except Exception:
-        spreadsheet = client.open(SPREADSHEET)
-=======
-    if "docs.google.com" in SPREADSHEET:
-        spreadsheet = client.open_by_url(SPREADSHEET)
-    else:
-        try:
-            # Most IDs are 44 chars and may include '-' and '_'
-            spreadsheet = client.open_by_key(SPREADSHEET)
-        except gspread.exceptions.SpreadsheetNotFound:
-            # Fall back to opening by title
-            spreadsheet = client.open(SPREADSHEET)
->>>>>>> ca69a67105c41f07d643590bfdc1aa556d4f4631
-    sheet = spreadsheet.worksheet(WORKSHEET)
-except Exception as e:
-    print(f"Error opening sheet: {e}")
-    sys.exit(1)
+# Open spreadsheet and worksheet
+try:
+    if SPREADSHEET.startswith(("http://", "https://")) or "docs.google.com" in SPREADSHEET:
+        spreadsheet = client.open_by_url(SPREADSHEET)
+    else:
+        try:
+            spreadsheet = client.open_by_key(SPREADSHEET)
+        except gspread.exceptions.SpreadsheetNotFound:
+            # Fall back to opening by title
+            spreadsheet = client.open(SPREADSHEET)
+    sheet = spreadsheet.worksheet(WORKSHEET)
+except (gspread.exceptions.SpreadsheetNotFound,
+        gspread.exceptions.WorksheetNotFound,
+        gspread.exceptions.APIError) as e:
+    print(f"Error opening sheet: {e}", file=sys.stderr)
+    sys.exit(1)
10-18: Remove hardcoded credentials; accept optional 4th CLI arg; use read-only scopes and fail clearly if missing.

Defaulting to repo-local JSON is a credential leakage risk and brittle in CI. Prefer CLI arg or env var only, with explicit errors. Also scope auth to read-only.

-# Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>
-if len(sys.argv) != 4:
-    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id> <worksheet_name> <output_tsv>")
-    sys.exit(1)
+# Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id_or_url> <worksheet_name> <output_tsv> [credentials_json]
+if len(sys.argv) not in (4, 5):
+    print("Usage: python3 export_sheet_to_tsv.py <spreadsheet_name_or_id_or_url> <worksheet_name> <output_tsv> [credentials_json]", file=sys.stderr)
+    sys.exit(2)
 SPREADSHEET = sys.argv[1]
 WORKSHEET = sys.argv[2]
 OUTPUT_TSV = sys.argv[3]
+CREDENTIALS_FILE = sys.argv[4] if len(sys.argv) == 5 else os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
 # Credentials
-scope = ["https://spreadsheets.google.com/feeds", "https://www.googleapis.com/auth/drive"]
-credentials_file = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS', 'concise-complex-344219-062a255ca56f.json')
-try:
-    creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
-except FileNotFoundError:
-    # Fallback used in CI or when a different path is set up
-    fallback_file = 'google-credentials.json'
-    creds = Credentials.from_service_account_file(fallback_file, scopes=scope)
+scope = ["https://www.googleapis.com/auth/spreadsheets.readonly",
+         "https://www.googleapis.com/auth/drive.readonly"]
+credentials_file = CREDENTIALS_FILE
+if not credentials_file:
+    print("Error: credentials JSON not provided and GOOGLE_APPLICATION_CREDENTIALS is unset.", file=sys.stderr)
+    sys.exit(2)
+try:
+    creds = Credentials.from_service_account_file(credentials_file, scopes=scope)
+except FileNotFoundError:
+    print(f"Error: credentials file not found: {credentials_file}", file=sys.stderr)
+    sys.exit(2)
 client = gspread.authorize(creds)

Also applies to: 19-28
test-jdk-25.sh (3)
4-8: Allow overriding Sheet config via environment variables.

Make the script configurable without edits, keeping current values as defaults.

-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID
-WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
-TSV_FILE="$OUTPUT_TSV"
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}" # or use the Sheet ID/URL
+WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}"
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
+TSV_FILE="$OUTPUT_TSV"
7-7: TSV_FILE is now defined before use.

This resolves the previously reported undefined variable issue. Good fix.

82-83: Separate TSV results file fixes the mixed-column corruption.

Writing 5-column TSV-derived results to a dedicated $TSV_RESULTS_FILE prevents corrupting $RESULTS_FILE with a 3-column header. Nice cleanup.

Also applies to: 276-276
🧹 Nitpick comments (13)
data/junit5/junit5_candidates.json (8)
2913-2931: Normalize labels for migration entries (optional).

In this block, parameterized-trigger-plugin has labels: null while ansible-plugin has ["tests"]. Consider normalizing migration entries to a consistent labels convention (e.g., ["tests"]) for simpler downstream filtering.
If you prefer an automated transform rather than hand-editing:
#!/bin/bash
# Set labels to ["tests"] when title is "Migrate tests to JUnit5" and labels is null
tmp=$(mktemp)
jq 'map(if .title=="Migrate tests to JUnit5" and (.labels==null) then . + {labels: ["tests"]} else . end)' \
  data/junit5/junit5_candidates.json > "$tmp" && mv "$tmp" data/junit5/junit5_candidates.json
5611-5631: Good addition; consider tagging category to distinguish from pure migrations.

“Ban JUnit 4 imports” entries are valuable, but they’re distinct from JUnit 5 migrations. To make analysis easier, consider adding a non-breaking category field like "category": "ban-junit4" to these entries. Consumers ignoring unknown fields will continue to work.
Example minimal change within this object:
"repository": "jenkinsci/database-sqlserver-plugin", "state": "MERGED", "author": "strangelookingnerd", + "category": "ban-junit4", "labels": [ "chore" ],
5746-5754: Add a label for consistency with other “ban” entries.

Several “Ban JUnit 4 imports” entries use labels ["chore"] or ["tests"]. This one is null. Suggest adding ["chore"] for consistency.
Apply within this object:
"author": "strangelookingnerd", - "labels": null, + "labels": [ + "chore" + ],
5780-5786: Optional: align labels with adjacent “ban” entries.

labels is null here while nearby entries use ["chore"] or ["tests"]. Consider adding ["chore"] for uniformity.

5801-5809: Optional: align labels with adjacent “ban” entries.

Same consistency suggestion as above; labels is null. Consider ["chore"].

6131-6139: Optional: align labels with adjacent “ban” entries.

labels is null for build-name-setter-plugin. Consider adding ["chore"] for uniformity.

6518-6526: Optional: align labels with adjacent “ban” entries.

generic-webhook-trigger-plugin has labels: null. Consider adding ["chore"] for consistency across “ban” entries.

5609-6529: One-shot normalization (optional): standardize labels on “ban” entries.

If you decide to harmonize labels across all “Ban JUnit 4 imports” entries, here’s a safe, reversible jq transform that only fills missing labels with ["chore"] (does not overwrite non-null labels):
#!/bin/bash
set -euo pipefail
f="data/junit5/junit5_candidates.json"
tmp=$(mktemp)
jq 'map(if .title=="Ban JUnit 4 imports" and (.labels==null) then . + {labels:["chore"]} else . end)' "$f" > "$tmp" && mv "$tmp" "$f"

export_sheet_to_tsv.py (2)
57-61: Ensure output directory exists and write as UTF-8.

Pre-create the parent directory and specify encoding for non-ASCII content.

-# Write to TSV
-with open(OUTPUT_TSV, 'w', newline='') as f:
+# Write to TSV
+os.makedirs(os.path.dirname(OUTPUT_TSV) or ".", exist_ok=True)
+with open(OUTPUT_TSV, 'w', newline='', encoding='utf-8') as f:
     writer = csv.writer(f, delimiter='\t')
     writer.writerows(rows)
10-13: Consider argparse for clearer CLI UX.

Optional: swap manual argv parsing with argparse for better help text and validation; a rough sketch follows.
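An illustrative argparse sketch (the --credentials option name is an assumption, mirroring the optional credentials argument discussed above):

import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Export a Google Sheets worksheet to a TSV file.")
    parser.add_argument("spreadsheet", help="Spreadsheet name, key, or URL")
    parser.add_argument("worksheet", help="Worksheet title")
    parser.add_argument("output_tsv", help="Path of the TSV file to write")
    parser.add_argument("--credentials",
                        help="Service-account JSON (defaults to GOOGLE_APPLICATION_CREDENTIALS)")
    return parser.parse_args()

args = parse_args()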
test-jdk-25.sh (3)
9-13: Use python3 -m pip and guard missing requirements.

This avoids pip/pip3 mismatch and fails gracefully when requirements.txt is absent.

-# Install required Python packages
-pip install -r requirements.txt
+# Install required Python packages
+if command -v python3 >/dev/null 2>&1; then
+    if [ -f requirements.txt ]; then
+        python3 -m pip install -r requirements.txt
+    else
+        echo "requirements.txt not found; skipping Python deps install." >> "$DEBUG_LOG"
+    fi
+else
+    echo "python3 not found; cannot install dependencies." >> "$DEBUG_LOG"
+fi
262-279: Header handling and boolean parsing in TSV loop can be more robust.
- Relying on name != "plugin"/"name" is brittle; skip the first line explicitly.
- Normalize truthy values for jdk25_status (TRUE/true/Yes/1).
-    while IFS=$'\t' read -r name pr_number pr_url jdk25_status <&3; do
+    while IFS=$'\t' read -r name pr_number pr_url jdk25_status <&3; do
         line_number=$((line_number + 1))
         echo "Read line $line_number: name='$name', pr_number='$pr_number', pr_url='$pr_url', jdk25_status='$jdk25_status'" >> "$DEBUG_LOG"
-        # Skip the header row in the TSV file.
-        if [ "$name" != "plugin" ] && [ "$name" != "name" ]; then
+        # Skip header row explicitly
+        if [ "$line_number" -eq 1 ]; then
+            echo "Skipping header line $line_number" >> "$DEBUG_LOG"
+            continue
+        fi
+
+        if [ -n "$name" ]; then
             echo "Processing plugin '$name' from line $line_number" >> "$DEBUG_LOG"
-            if [ "$jdk25_status" == "TRUE" ]; then
-                build_status="success"
+            norm="${jdk25_status^^}"
+            if [[ "$norm" == "TRUE" || "$norm" == "YES" || "$norm" == "Y" || "$norm" == "1" ]]; then
+                build_status="success"
                 echo "Skipping build for '$name' (already JDK25)." >> "$DEBUG_LOG"
             else
                 build_status=$(compile_plugin "$name")
             fi
             echo "Finished processing plugin '$name' from line $line_number with status: $build_status" >> "$DEBUG_LOG"
             echo "$name,$pr_number,$pr_url,$jdk25_status,$build_status" >> "$TSV_RESULTS_FILE"
-        else
-            echo "Skipping header line $line_number" >> "$DEBUG_LOG"
         fi
87-97: Preflight checks for required tools (jq, curl, timeout) recommended.

You already check for Maven. Similar quick checks for jq, curl, and timeout would make failures clearer.

# Add near the Maven check (illustrative)
for tool in jq curl timeout; do
    if ! command -v "$tool" >/dev/null 2>&1; then
        echo "Error: Required tool '$tool' is not installed." >>"$DEBUG_LOG"
        exit 1
    fi
done
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (15)
- build-debug.log is excluded by !**/*.log
- data/plugin-build-logs/bouncycastle-api.log is excluded by !**/*.log
- data/plugin-build-logs/credentials.log is excluded by !**/*.log
- data/plugin-build-logs/display-url-api.log is excluded by !**/*.log
- data/plugin-build-logs/mailer.log is excluded by !**/*.log
- data/plugin-build-logs/scm-api.log is excluded by !**/*.log
- data/plugin-build-logs/script-security.log is excluded by !**/*.log
- data/plugin-build-logs/ssh-credentials.log is excluded by !**/*.log
- data/plugin-build-logs/structs.log is excluded by !**/*.log
- data/plugin-build-logs/workflow-api.log is excluded by !**/*.log
- data/plugin-build-logs/workflow-scm-step.log is excluded by !**/*.log
- data/plugin-build-logs/workflow-step-api.log is excluded by !**/*.log
- jdk-25-build-results-extended.csv is excluded by !**/*.csv
- jdk-25-build-results.csv is excluded by !**/*.csv
- plugins-jdk25.tsv is excluded by !**/*.tsv
📒 Files selected for processing (4)
- data/junit5/junit5_candidates.json (14 hunks)
- export_sheet_to_tsv.py (1 hunks)
- junit5_candidate_prs.txt (9 hunks)
- test-jdk-25.sh (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
data/junit5/junit5_candidates.json (1)
cmd/find-junit5-prs/main.go (1)
isJUnit5MigrationPR(372-501)
test-jdk-25.sh (2)
test-pr-builds.sh (1)
test_pr (53-142)

upload_to_sheets.py (1)
process_test_results_data(339-398)
junit5_candidate_prs.txt (2)
cmd/find-junit5-prs/main.go (2)
isJUnit5MigrationPR (372-501)
generateCandidateURLsFile (519-549)

analyze-junit5-prs.sh (1)
process_pr(29-59)
🪛 Ruff (0.12.2)
export_sheet_to_tsv.py
6-6: SyntaxError: Expected a statement
6-6: SyntaxError: Expected a statement
6-6: SyntaxError: Expected a statement
6-6: SyntaxError: Expected a statement
8-8: SyntaxError: Expected a statement
8-8: SyntaxError: Expected a statement
8-8: SyntaxError: Expected a statement
8-8: SyntaxError: Expected a statement
8-9: SyntaxError: Expected a statement
9-9: SyntaxError: Expected a statement
9-9: SyntaxError: Expected a statement
9-9: SyntaxError: Expected a statement
9-9: SyntaxError: Expected a statement
32-32: SyntaxError: Expected an indented block after try statement
32-32: SyntaxError: Expected a statement
32-32: SyntaxError: Expected a statement
32-32: SyntaxError: Expected a statement
34-34: SyntaxError: Unexpected indentation
38-38: SyntaxError: Expected a statement
38-38: SyntaxError: Expected a statement
38-38: SyntaxError: Expected a statement
38-38: SyntaxError: Expected a statement
38-39: SyntaxError: Expected a statement
39-39: SyntaxError: Unexpected indentation
48-48: SyntaxError: Expected a statement
48-48: SyntaxError: Expected a statement
48-48: SyntaxError: Expected a statement
48-48: SyntaxError: Expected a statement
49-49: SyntaxError: Unexpected indentation
50-50: SyntaxError: Expected a statement
50-50: SyntaxError: Expected a statement
50-51: SyntaxError: Expected an expression
51-51: SyntaxError: Unexpected indentation
55-55: SyntaxError: Expected a statement
🔇 Additional comments (25)
junit5_candidate_prs.txt (14)
796-798: Re-link to PR #301 looks correct and consistent.Good catch aligning the ansible-plugin “Migrate tests to JUnit5” entry with PR #301. No formatting issues spotted.
1483-1485: Confirm plugin naming to avoid confusion with sqlserver-api-plugin.We now have both “database-sqlserver-plugin” (this line) and “sqlserver-api-plugin” elsewhere. That’s expected (they are distinct repos), but consider double-checking the intended repository here to avoid cross-wiring between similarly named plugins.
1495-1497: Entry move/cleanup looks good.Formatting, casing, and state all match surrounding entries.
1519-1521: Consistent with other Ban JUnit 4 imports entries.No issues; repo/PR/state formatting is uniform.
1528-1530: LGTM.Correct two-line block, proper state and capitalization.
1534-1536: LGTM.Reads consistently with neighboring entries; no data/formatting concerns.
1621-1623: LGTM.Good consistency on title casing (“Ban JUnit 4 imports”) and state.
1630-1632: LGTM.No concerns on this repositioned sshd-plugin entry.
1648-1650: LGTM.Entry is well-formed and matches the file’s two-line pattern.
1654-1656: LGTM.The pipeline-npm-plugin ban entry aligns with earlier migration entry; no duplication of the same PR detected.
1681-1683: LGTM; cross-check the two Ansible entries point to different intents.This “Ban JUnit 4 imports” PR (#342) and the earlier migration PR (#301) are distinct and both valid to keep. No format issues.
1687-1689: LGTM.database-postgresql-plugin ban entry looks correct and consistent.
1723-1725: LGTM.generic-webhook-trigger-plugin ban entry matches conventions; no issues spotted.
664-666: Verification complete — no state drift or duplicates detected
All updated PRs in junit5_candidate_prs.txt are confirmed as MERGED and no duplicate URLs were found. Entry ordering and format remain consistent. LGTM.data/junit5/junit5_candidates.json (9)
6154-6162: labels change to ["tests"] looks good.This “ban” entry intentionally uses ["tests"], which is already present elsewhere in the dataset; fine to keep as-is.
6166-6174: Use of ["chore"] matches most “ban” entries.No changes requested; consistent labeling.
6232-6240: LGTM: consistent “tests” labeling and recent timestamp.This entry aligns with nearby additions; no issues spotted.
6255-6265: LGTM: consistent “tests” labeling and metadata.No concerns.
6360-6368: LGTM: “ban” categorized with ["chore"]; timestamp plausible.No changes requested.
6381-6391: LGTM: database-postgresql-plugin entry is consistent with others.No changes requested.
5609-6394: Bulk verification needed: please re-run and confirm PR states and timestampsThe previous script failed due to the missing
teeutility and produced no output. To ensure every new entry truly reflects a merged pull request with the correctcreated_attimestamp, please rerun the verification—either manually or with the adjusted script below—and share the results.• Verify all entries added between lines 5609-6394 and 6514-6526 in
data/junit5/junit5_candidates.json
• Confirm for each item:
- The pull request is merged (
merged_at≠ null)- The
created_atin GitHub matches the JSON’s timestampAdjusted verification script (no
tee):#!/usr/bin/env bash set -euo pipefail declare -a items=( "jenkinsci/database-sqlserver-plugin 77 2025-08-18T07:22:04Z" "jenkinsci/azure-cosmosdb-plugin 191 2025-08-18T07:22:31Z" "jenkinsci/ignore-committer-strategy-plugin 38 2025-08-18T09:17:04Z" "jenkinsci/promoted-builds-plugin 317 2025-08-18T09:05:03Z" "jenkinsci/gitlab-kubernetes-credentials-plugin 256 2025-08-18T07:22:09Z" "jenkinsci/conditional-buildstep-plugin 142 2025-08-18T09:16:55Z" "jenkinsci/build-name-setter-plugin 173 2025-08-15T13:20:41Z" "jenkinsci/build-history-metrics-plugin 74 2025-08-15T13:42:57Z" "jenkinsci/sshd-plugin 241 2025-08-15T13:21:53Z" "jenkinsci/next-executions-plugin 180 2025-08-15T13:21:56Z" "jenkinsci/pipeline-npm-plugin 219 2025-08-15T13:21:44Z" "jenkinsci/ansible-plugin 342 2025-08-15T12:39:38Z" "jenkinsci/database-postgresql-plugin 185 2025-08-15T13:43:08Z" "jenkinsci/generic-webhook-trigger-plugin 369 2025-08-15T10:56:49Z" "jenkinsci/parameterized-trigger-plugin 411 2025-03-05T14:07:33Z" "jenkinsci/ansible-plugin 301 2025-03-05T09:00:15Z" "jenkinsci/copy-data-to-workspace-plugin 10 2025-03-21T12:09:14Z" "jenkinsci/validating-yaml-parameter-plugin 145 2025-03-20T13:49:43Z" ) for item in "${items[@]}"; do read -r repo pr expected_created <<<"$item" echo "==> Checking $repo #$pr" pr_info=$(gh api "repos/$repo/pulls/$pr" --jq '{state: .state, merged_at: .merged_at, created_at: .created_at}') echo "$pr_info" gh_created=$(jq -r '.created_at' <<<"$pr_info") if [[ "$gh_created" != "$expected_created" ]]; then echo "✗ created_at mismatch for $repo#$pr: expected $expected_created, got $gh_created" >&2 exit 1 fi merged=$(jq -r '.merged_at != null' <<<"$pr_info") if [[ "$merged" != "true" ]]; then echo "✗ PR not merged: $repo#$pr" >&2 exit 1 fi done echo "✔ All entries validated."Please run this (or verify manually) and confirm that every PR is merged and its
created_atmatches the JSON.
5653-5664: It looks like the “Detecting duplicate URLs” step didn’t produce an output. Can you please rerun just that portion and share the result? For example:

jq -r '.. | .url? // empty' data/junit5/junit5_candidates.json | sort | uniq -d

This will confirm whether any URLs are duplicated.
2417-2433: Verify swapped PR links and timestamps for correctnessI’ve confirmed that each PR URL correctly corresponds to its declared
repositoryfield. However, automated comparison of thecreatedAtvalues against GitHub’s recordedcreated_atrequires a live API check. Please manually verify that the timestamps below match the GitHub data:
jenkinsci/copy-data-to-workspace-plugin
URL: jenkinsci/copy-data-to-workspace-plugin#10
declaredcreatedAt: 2025-03-21T12:09:14Zjenkinsci/validating-yaml-parameter-plugin
URL: jenkinsci/validating-yaml-parameter-plugin#145
declaredcreatedAt: 2025-03-20T13:49:43ZPlease use the GitHub API (e.g.
gh api repos/…/pulls/… --jq .created_at) or the web UI to confirm that eachcreatedAtmatches GitHub’screated_atfield.export_sheet_to_tsv.py (1)
5-5: Import of os looks good.This addresses earlier feedback to read env vars and handle filesystem operations.
test-jdk-25.sh (1)
12-17: Continue-on-failure path: avoid logging to undefined file.This is resolved by initializing DEBUG_LOG early (see earlier comment). After that change, this block is fine.
Please re-run the script after applying the DEBUG_LOG fix to confirm no “ambiguous redirect” errors appear early in execution.
- Define and export DEBUG_LOG at the top of the script for consistent availability to subshells and early steps
- Remove duplicate/redundant DEBUG_LOG definitions later in the script and update related comment
- Initialize the log with >> instead of > to preserve existing content and avoid overwriting earlier logs
Actionable comments posted: 0
♻️ Duplicate comments (2)
test-jdk-25.sh (2)
9-12: DEBUG_LOG early init and non-clobbering writes look good—issue resolved.

Initialization is now at the top and subsequent writes use append. This addresses earlier feedback about using DEBUG_LOG before initialization and truncation.
Also applies to: 85-85
82-83: Corrupted CSV risk is resolved by splitting outputs: RESULTS_FILE (3 cols) vs TSV_RESULTS_FILE (5 cols).

Header and downstream writes are now consistent with their respective schemas. Thanks for addressing this.
Also applies to: 296-321
🧹 Nitpick comments (9)
test-jdk-25.sh (9)
4-7: Parameterize Google Sheets config via env vars; avoid hardcoding IDs.Make these overridable to improve portability and local testing flexibility.
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID -WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name -OUTPUT_TSV="plugins-jdk25.tsv" -TSV_FILE="$OUTPUT_TSV" +SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-"1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s"}" +WORKSHEET_NAME="${WORKSHEET_NAME:-"Java 25 compatibility progress"}" +OUTPUT_TSV="${OUTPUT_TSV:-"plugins-jdk25.tsv"}" +TSV_FILE="$OUTPUT_TSV"
13-21: Use python3 -m pip, resolve requirements path, and log installs.This avoids ambiguous pip resolution and ensures the correct requirements file is used when the script is invoked from elsewhere. Also keeps stdout clean by logging to DEBUG_LOG.
-# Install required Python packages -pip install -r requirements.txt +# Install required Python packages +echo "Installing Python dependencies from $script_dir/requirements.txt" >> "$DEBUG_LOG" +python3 -m pip install -r "$script_dir/requirements.txt" >> "$DEBUG_LOG" 2>&1Optionally guard the export with a python3 existence check if needed.
53-56: Avoid calling mvn -v before verifying Maven is installed.You log mvn -v at Line 56 before the availability check at Lines 88-97. Reorder to prevent noisy errors on hosts without Maven.
-echo "DEBUG: Output of 'mvn -v' after sourcing install-jdk-versions.sh (in test-jdk-25.sh):" >> "$DEBUG_LOG" -mvn -v >> "$DEBUG_LOG" 2>&1 +:if command -v mvn &>/dev/null; then # Log Maven installation details to the debug log. echo "Maven is installed and accessible." >>"$DEBUG_LOG" - echo "DEBUG: Output of 'mvn -v' before potential JDK 25 switch (in test-jdk-25.sh):" >> "$DEBUG_LOG" - mvn -v >>"$DEBUG_LOG" 2>&1 + echo "DEBUG: Output of 'mvn -v' (post JDK 25 setup) in test-jdk-25.sh:" >> "$DEBUG_LOG" + mvn -v >>"$DEBUG_LOG" 2>&1Also applies to: 88-97
200-206: Guard success messages after cd; current logs can be misleading when cd fails.When cd fails, you still log “Reached after cd command” and “Successfully changed directory…”. Wrap these under the success path.
- echo "Reached after cd command" >>"$DEBUG_LOG" - echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG" - if [ "$build_status" == "success" ]; then + if [ "$build_status" == "success" ]; then + echo "Reached after cd command" >>"$DEBUG_LOG" + echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG" if [ -f "pom.xml" ]; then
211-217: Scope maven_exit_code as local to avoid leaking into the outer shell.Minor hygiene in bash functions.
- timeout 20m mvn clean install -DskipTests >"$plugin_log_file" 2>&1 - maven_exit_code=$? + timeout 20m mvn clean install -DskipTests >"$plugin_log_file" 2>&1 + local maven_exit_code=$?
270-279: Skip CSV header using line_number like TSV to avoid a false positive on a plugin literally named “name”.Low risk, but aligning both loops avoids edge cases.
- # Skip header row in the CSV - if [ "$name" != "name" ]; then - echo "Processing plugin '$name' from CSV line $line_number" >> "$DEBUG_LOG" + # Skip header row in the CSV + if [ $line_number -eq 1 ] || [ "$name" = "name" ]; then + echo "Skipping CSV header line $line_number" >> "$DEBUG_LOG" + continue + fi + echo "Processing plugin '$name' from CSV line $line_number" >> "$DEBUG_LOG"(Remove the else-branch log on Line 290 accordingly.)
314-316: Exit AWK after the first match to avoid multi-line capture if duplicates occur.Safer even if the CSV accidentally contains duplicates.
- build_status=$(awk -F',' -v n="$name" 'NR>1 && $1==n {print $3; found=1} END{ if(!found) print "" }' "$RESULTS_FILE") + build_status=$(awk -F',' -v n="$name" 'NR>1 && $1==n {print $3; found=1; exit} END{ if(!found) print "" }' "$RESULTS_FILE")
335-336: Print extended TSV results message only when TSV was processed.Avoids confusing output when TSV export was skipped or failed.
-echo "Extended TSV build results have been saved to $TSV_RESULTS_FILE." +if [ -n "$TSV_FILE" ] && [ -f "$TSV_FILE" ]; then + echo "Extended TSV build results have been saved to $TSV_RESULTS_FILE." +fi
16-21: Minor robustness: check python3 availability before export; log failure reason.If python3 is missing, current flow sets TSV_FILE="" only when export fails, but the log won’t state why.
-echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" -python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" +echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG" +if command -v python3 >/dev/null 2>&1; then + python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" >> "$DEBUG_LOG" 2>&1 +else + echo "python3 not found; skipping TSV export." >> "$DEBUG_LOG" + false +fi
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- build-debug.log is excluded by !**/*.log
- jdk-25-build-results.csv is excluded by !**/*.csv
📒 Files selected for processing (2)
- export_sheet_to_tsv.py (1 hunks)
- test-jdk-25.sh (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- export_sheet_to_tsv.py
🔇 Additional comments (1)
test-jdk-25.sh (1)
242-263: Nice: efficient skip set for already JDK25=TRUE plugins.

Using mktemp + grep -Fxq keeps lookups simple and fast. Good guardrails and cleanup.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test-jdk-25.sh (1)
197-209: “Successfully changed directory” is logged even when cd fails.

The success message is emitted unconditionally, which makes troubleshooting harder. Emit it only on success; keep failure in the error branch.
- cd "$plugin_dir" >>"$DEBUG_LOG" 2>&1 || { - echo "Failed to change directory to $plugin_dir" >>"$DEBUG_LOG" - build_status="cd_failed" - } - echo "Reached after cd command" >>"$DEBUG_LOG" - echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG" - if [ "$build_status" == "success" ]; then + if cd "$plugin_dir" >>"$DEBUG_LOG" 2>&1; then + echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG" if [ -f "pom.xml" ]; then ... fi + else + echo "Failed to change directory to $plugin_dir" >>"$DEBUG_LOG" + build_status="cd_failed" fi
♻️ Duplicate comments (4)
test-jdk-25.sh (4)
83-86: Good fix: separate extended results file prevents CSV shape corruption.

Using a dedicated TSV_RESULTS_FILE with a 5-column header eliminates the prior issue of mixing 3- and 5-column rows in RESULTS_FILE. Nice.

88-89: Good fix: DEBUG_LOG is initialized early and only appended to.

This addresses the earlier “used before set and later clobbered” problem. Consistent use of >> avoids log loss.

246-266: Resolved: TSV_FILE now defined before use and guarded.

Defining TSV_FILE and checking for its existence prevents the earlier failure from reading an undefined path. The mktemp-backed set construction looks good.

6-11: Parameterize sheet config via environment with sensible defaults.

Hardcoding the spreadsheet ID, worksheet name, and output path reduces portability. Allow overrides via env vars (keeping current values as defaults).
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID -WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name -OUTPUT_TSV="plugins-jdk25.tsv" +SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}" +WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}" +OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}" TSV_FILE="$OUTPUT_TSV"
🧹 Nitpick comments (9)
test-jdk-25.sh (9)
3-4: Guard virtualenv activation; fall back gracefully if venv is absent.Sourcing a non-existent venv will cause the script to abort early. Make activation conditional and default to system Python if the venv isn’t present.
-# Source the virtual environment from the directory the script is in -source "$(dirname "$0")/venv/bin/activate" +# Resolve script directory once and (optionally) activate venv +script_dir="$(cd "$(dirname "$0")" && pwd)" +if [ -f "$script_dir/venv/bin/activate" ]; then + # Source the virtual environment from the script directory + # shellcheck disable=SC1091 + source "$script_dir/venv/bin/activate" +else + echo "No venv found at $script_dir/venv; using system Python." >&2 +fi
16-24: Make exporter call path-robust and simplify error handling.Use script_dir to resolve export_sheet_to_tsv.py reliably and use a negated command test instead of $? to avoid races if this block is edited later.
-# Install required Python packages -pip install -r requirements.txt - -echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" -python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" -if [ $? -ne 0 ]; then +# Install required Python packages +pip install -r "$script_dir/requirements.txt" + +echo "Running: python3 \"$script_dir/export_sheet_to_tsv.py\" \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" +if ! python3 "$script_dir/export_sheet_to_tsv.py" "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"; then echo "Failed to export Google Sheet to TSV. Continuing without TSV data." >> "$DEBUG_LOG" TSV_FILE="" fi
27-29: Fix misleading comment about “disabling” strict mode.The comment says “Disable strict error checking,” but -u and pipefail enable stricter behavior. Align the comment to reduce confusion.
-# Disable strict error checking and debug output for more reliable output handling +# Keep errexit off but enable undefined-var (-u) and pipefail for safer error handling
49-59: Ensure JDK 25 is installed and PATHed via the sourced helpers.You source install-jdk-versions.sh but never call its helpers. If JAVA_HOME is not already provisioned at $HOME/.jdk-25, later Java/Maven steps will fail. Prefer invoking the install/update helpers exposed by the sourced script.
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing +source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing + +# Ensure JDK 25 is installed and PATH is updated via helper functions +if type install_temurin_jdk25 >/dev/null 2>&1; then + install_temurin_jdk25 +fi +if type update_path_for_jdk25 >/dev/null 2>&1; then + update_path_for_jdk25 +fiIf auto-install is not desired in this script, at least call update_path_for_jdk25 and verify JAVA_HOME/bin/java exists before proceeding.
269-292: Avoid O(N) grep per CSV row for membership checks.For ~250 rows it’s fine, but scaling will suffer. Consider building an in-memory set once and doing O(1) lookups.
Example (outside this hunk for clarity):
# Build set declare -A jdk25_true=() while IFS=$'\t' read -r name _ _ is_merged; do # header/TRUE checks... jdk25_true["$name"]=1 done < "$TSV_FILE" in_jdk25_true_set() { [[ -n "${jdk25_true[$1]:-}" ]]; }
300-327: Joining TSV with CSV: reduce repeated awk scans.Each TSV row performs an awk scan over RESULTS_FILE. For large files this becomes O(N^2). Preload RESULTS_FILE into an associative map once, then use it during the TSV pass.
Illustrative refactor:
# Preload CSV results into an awk-generated map once declare -A build_map=() while IFS=, read -r name popularity status; do [ "$name" = "name" ] && continue build_map["$name"]="$status" done < "$RESULTS_FILE" # In the TSV loop: build_status="${build_map[$name]:-not_in_csv}"
61-69: Preflight checks for external tools (jq, git, curl, timeout).You validate Maven but use jq, git, curl, and timeout without checks. Add a small guard to fail fast with a clear message.
+# Preflight required tools +require_cmd() { command -v "$1" >/dev/null 2>&1 || { echo "Error: '$1' is required." >> "$DEBUG_LOG"; exit 1; }; } +require_cmd jq +require_cmd git +require_cmd curl +require_cmd timeout
110-118: Harden plugins.json freshness check.Using find with -mtime +0 is okay but can emit the filename to stdout and confuse tests. Consider -prune or checking modification time via stat for clarity.
Alternative:
if [ ! -f "$PLUGINS_JSON" ] || [ "$(date -r "$PLUGINS_JSON" +%s)" -lt "$(( $(date +%s) - 86400 ))" ]; then ... fi
276-296: CSV parsing is naive; acceptable here but be aware of quoting.IFS=, splitting will break if plugin names ever contain commas. If that’s a possibility, switch to a CSV-aware parser (awk with FPAT or python -c 'csv' reader). Otherwise fine as-is for the current dataset.
Minimal awk FPAT example:
awk -v FPAT='([^,]+)|(\"[^\"]+\")' -F, 'NR>1{print $1,$2}' "$CSV_FILE"
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
test-jdk-25.sh(4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test-jdk-25.sh (3)
test-pr-builds.sh (1)
test_pr(53-142)upload_to_sheets.py (1)
process_top_250_plugins_data(417-435)install-jdk-versions.sh (2)
update_path_for_jdk25(292-308)install_temurin_jdk25(229-265)
🔇 Additional comments (1)
test-jdk-25.sh (1)
340-343: Nice UX: clear pointers to artifacts.

Final messages clearly indicate where to find both results and logs. Helpful for CI logs and local runs.
- Define and export DEBUG_LOG at script start using script dir path
- Remove later DEBUG_LOG redefinition
- Add UTC timestamp to debug log header
- Mirror key status messages to DEBUG_LOG (export run, downloads, generation, results) and log cleanup
- Ensure early/consistent logging and easier troubleshooting
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test-jdk-25.sh (1)
47-60: Refactor JDK setup in test-jdk-25.sh to use install-jdk-versions.sh helpers. We've inspected install-jdk-versions.sh and confirmed it explicitly defines all variables (JDK_INSTALL_DIR, ARCHITECTURE, DEBUG_LOG, script_dir) before use and temporarily disables `set -u` around SDKMAN initialization. No unguarded expansions were found, so it's safe under `set -u`. Please replace the manual `JAVA_HOME`/`PATH` export in test-jdk-25.sh (lines 47–60) with calls to the helper functions:

```diff
 source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
-# Ensure JDK 25 is used for all Java and Maven commands
-export JAVA_HOME="$HOME/.jdk-25"
-export PATH="$JAVA_HOME/bin:$PATH"
-hash -r
+# Ensure JDK 25 is installed and on PATH
+if [[ ! -x "$HOME/.jdk-25/bin/java" ]]; then
+  install_temurin_jdk25
+fi
+update_path_for_jdk25
```

This guarantees Temurin JDK 25 is fetched if needed, installs the correct EA build, and updates both `PATH` and `JAVA_HOME` via the tested helpers.
🧹 Nitpick comments (6)
test-jdk-25.sh (6)
10-13: Make spreadsheet configuration overridable via environment variables. Hard-coding limits reuse in CI and local runs. Allow env overrides with sane defaults.
Apply:

```diff
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID
-WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
-TSV_FILE="$OUTPUT_TSV"
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}" # Sheet ID or name
+WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}"
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
+TSV_FILE="$OUTPUT_TSV"
```
91-101: Pre-flight checks: verify required tools exist (jq, git, curl, timeout). You already check Maven; add a few essentials to fail fast with a clear message.
Apply:

```diff
 if command -v mvn &>/dev/null; then
 @@
 else
 @@
 fi
+
+# Additional required tools
+for bin in jq git curl timeout; do
+  if ! command -v "$bin" >/dev/null 2>&1; then
+    echo "Error: Required tool '$bin' is not installed or not in PATH." >>"$DEBUG_LOG"
+    exit 1
+  fi
+done
```
112-121: Harden update-center download: fail on HTTP errors and log transfer details. Use curl's `-fS` and capture output in DEBUG_LOG to aid diagnostics. Apply:

```diff
-  echo "Downloading $PLUGINS_JSON..."
-  echo "Downloading $PLUGINS_JSON..." >> "$DEBUG_LOG"
-  curl -L https://updates.jenkins.io/current/update-center.actual.json -o "$PLUGINS_JSON"
+  echo "Downloading $PLUGINS_JSON..."
+  echo "Downloading $PLUGINS_JSON..." >> "$DEBUG_LOG"
+  curl -fSL https://updates.jenkins.io/current/update-center.actual.json -o "$PLUGINS_JSON" >> "$DEBUG_LOG" 2>&1 || {
+    echo "Error: Failed to download $PLUGINS_JSON" >> "$DEBUG_LOG"
+    exit 1
+  }
```
207-214: Avoid contradictory logs when cd fails; log "successfully changed dir" only on success. As written, lines 212–214 log success even if the `cd` fallback branch set `build_status="cd_failed"`. Apply:

```diff
-  cd "$plugin_dir" >>"$DEBUG_LOG" 2>&1 || {
+  cd "$plugin_dir" >>"$DEBUG_LOG" 2>&1 || {
     echo "Failed to change directory to $plugin_dir" >>"$DEBUG_LOG"
     build_status="cd_failed"
   }
-  echo "Reached after cd command" >>"$DEBUG_LOG"
-  echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG"
   if [ "$build_status" == "success" ]; then
+    echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG"
     if [ -f "pom.xml" ]; then
```
31-45: Minor: verify sourced script behavior under `set -u`.
`set -u` (line 29) is in effect before sourcing install-jdk-versions.sh (line 50). If that script references unset variables without defaults, the current shell will exit. If that's unintended, temporarily relax `-u` around the `source` call. Apply:

```diff
-script_dir=$(cd "$(dirname "$0")" && pwd)
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
+script_dir=$(cd "$(dirname "$0")" && pwd)
+set +u
+source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
+set -u
```

Alternatively, ensure the sourced script consistently uses parameter expansions with defaults (e.g., "${VAR:-}").
Also applies to: 50-51
228-232: Optional: ensure Gradle wrapper is executable before invoking Gradle path. Some repos commit gradlew without the exec bit. Set it when present; harmless otherwise.
Apply:

```diff
-  elif [ -f "./gradlew" ]; then
+  elif [ -f "./gradlew" ]; then
+    chmod +x ./gradlew 2>/dev/null || true
     # Run a Gradle build if a Gradle wrapper is found.
     echo "Running Gradle wrapper build for $plugin_name..." >>"$DEBUG_LOG"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (250)
All plugin build logs under data/plugin-build-logs/ (from Office-365-Connector.log and active-directory.log through workflow-support.log and ws-cleanup.log) are excluded by `!**/*.log`; jdk-25-build-results-extended.csv and jdk-25-build-results.csv are excluded by `!**/*.csv`.
📒 Files selected for processing (1)
test-jdk-25.sh (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test-jdk-25.sh (2)
test-pr-builds.sh (1)
  - switch_jdk (46-50)
install-jdk-versions.sh (4)
  - update_path_for_jdk25 (292-308)
  - install_temurin_jdk25 (229-265)
  - verify_jdk_installation (339-352)
  - is_jdk25_up_to_date (156-214)
🔇 Additional comments (4)
test-jdk-25.sh (4)
2-3: Good fix: DEBUG_LOG is initialized early and all writes are append-only. This resolves the earlier "used before init and later truncated" issue and prevents clobbering prior logs.
Also applies to: 89-89
84-87: Nice separation of outputs: CSV vs extended TSV. Using a dedicated TSV_RESULTS_FILE for 5-column output avoids corrupting the 3-column CSV; this addresses the prior corruption concern from earlier reviews.
250-271: TSV handling looks solid and fixes prior variable usage issues.
- `TSV_FILE` is now defined from `OUTPUT_TSV` and cleared on export failure.
- Header skipping covers multiple variants ("plugin|name|Name") and a first-line check.
Also applies to: 305-339
133-141: Fix get_github_url to index plugins by name key
The update-center JSON defines `plugins` as an object keyed by plugin name, not an array. Replace the implementation in test-jdk-25.sh:133–141 with the following diff to correctly retrieve the SCM URL:

```diff
-get_github_url() {
-  local plugin_name="$1"
-  jq -r --arg name "$plugin_name" '.plugins[] | select(.name == $name) | .scm | select(. != null)' "$PLUGINS_JSON"
-}
+get_github_url() {
+  local plugin_name="$1"
+  jq -r --arg name "$plugin_name" '.plugins[$name].scm // empty' "$PLUGINS_JSON"
+}
```

This has been verified (e.g., `git` returns https://github.com/jenkinsci/git-plugin).
test-jdk-25.sh
Outdated
```bash
# Install required Python packages
pip install -r requirements.txt

echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\""
echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG"
python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"
if [ $? -ne 0 ]; then
  echo "Failed to export Google Sheet to TSV. Continuing without TSV data." >> "$DEBUG_LOG"
  TSV_FILE=""
fi
```
🛠️ Refactor suggestion
Pin Python toolchain usage to the venv and capture logs; fail fast on errors.
- Use `python -m pip` to guarantee pip matches the interpreter in the active venv.
- Capture install/export logs into DEBUG_LOG.
- Use the same interpreter (`python`) to run the exporter.
Apply:
```diff
-# Install required Python packages
-pip install -r requirements.txt
-
-echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\""
-echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG"
-python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"
-if [ $? -ne 0 ]; then
+# Install required Python packages
+echo "Installing Python dependencies from requirements.txt" >> "$DEBUG_LOG"
+python -m pip install --upgrade pip >> "$DEBUG_LOG" 2>&1
+python -m pip install -r requirements.txt >> "$DEBUG_LOG" 2>&1 || {
+  echo "pip install failed" >> "$DEBUG_LOG"; TSV_FILE="";
+}
+
+echo "Running: python export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG"
+if ! python export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" >> "$DEBUG_LOG" 2>&1; then
   echo "Failed to export Google Sheet to TSV. Continuing without TSV data." >> "$DEBUG_LOG"
   TSV_FILE=""
 fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-# Install required Python packages
-pip install -r requirements.txt
-echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\""
-echo "Running: python3 export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG"
-python3 export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV"
-if [ $? -ne 0 ]; then
-  echo "Failed to export Google Sheet to TSV. Continuing without TSV data." >> "$DEBUG_LOG"
-  TSV_FILE=""
-fi
+# Install required Python packages
+echo "Installing Python dependencies from requirements.txt" >> "$DEBUG_LOG"
+python -m pip install --upgrade pip >> "$DEBUG_LOG" 2>&1
+python -m pip install -r requirements.txt >> "$DEBUG_LOG" 2>&1 || {
+  echo "pip install failed" >> "$DEBUG_LOG"
+  TSV_FILE=""
+}
+echo "Running: python export_sheet_to_tsv.py \"$SPREADSHEET_ID_OR_NAME\" \"$WORKSHEET_NAME\" \"$OUTPUT_TSV\"" >> "$DEBUG_LOG"
+if ! python export_sheet_to_tsv.py "$SPREADSHEET_ID_OR_NAME" "$WORKSHEET_NAME" "$OUTPUT_TSV" >> "$DEBUG_LOG" 2>&1; then
+  echo "Failed to export Google Sheet to TSV. Continuing without TSV data." >> "$DEBUG_LOG"
+  TSV_FILE=""
+fi
```
- Enable bash strict mode (set -euo pipefail, safe IFS) and keep it enforced
- Resolve script_dir once; use it for DEBUG_LOG and venv paths
- Initialize DEBUG_LOG early and log errors consistently (tee to file)
- Create/activate venv if missing; validate python3 and activation script; exit on failure
- Wrap export_sheet_to_tsv.py with direct failure check and continue without TSV on error
- In test-jdk-25.sh, replace the header check (name != "name") with a line_number == 1 guard and continue, ensuring the header is skipped regardless of column label and avoiding a plugin literally named "name".
- Move logging and result writing outside the header check for clearer flow and consistent debug output.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
test-jdk-25.sh (2)
80-86: Ensure JDK 25 is actually installed and selected; avoid pointing JAVA_HOME to a non-existent directory. Right now we source the installer but never call it. Exporting JAVA_HOME to ~/.jdk-25 may silently point to an empty dir. Invoke the install/update helpers, then verify.

```diff
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
-
-# Ensure JDK 25 is used for all Java and Maven commands
-export JAVA_HOME="$HOME/.jdk-25"
-export PATH="$JAVA_HOME/bin:$PATH"
-hash -r
+source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
+# Ensure Temurin JDK 25 is installed and on PATH
+install_temurin_jdk25
+update_path_for_jdk25
```
1-383: Add missing JDK-25 installer function calls
The script correctly finds and sources install-jdk-versions.sh, but never invokes the helper functions to actually install or configure JDK 25. As a result, relying solely on setting `JAVA_HOME="$HOME/.jdk-25"` may fail if JDK 25 isn't already present or the PATH isn't updated correctly.
• In test-jdk-25.sh, immediately after `source "$script_dir/install-jdk-versions.sh"`, add calls to:

```diff
 source "$script_dir/install-jdk-versions.sh"
+install_temurin_jdk25
+update_path_for_jdk25
```

• This ensures Temurin JDK 25 is installed (if missing) and `JAVA_HOME`/`PATH` are updated via the shared helper functions.
♻️ Duplicate comments (3)
test-jdk-25.sh (3)
41-45: TSV_FILE now correctly set from OUTPUT_TSV; prior failure fixed. This addresses the earlier bug where `$TSV_FILE` was referenced but never defined. Good catch.
115-117: 3-column CSV vs 5-column TSV separation is now consistent. Previous corruption concern (mixing different column counts into one file) is resolved: `$RESULTS_FILE` is 3 columns; `$TSV_RESULTS_FILE` is 5 columns populated in Phase 3. LGTM.
Also applies to: 330-334, 336-369
311-319: Header skipping now uses the line number, making it robust against header text changes. This implements the earlier suggestion to avoid relying on a literal header value. Good.
🧹 Nitpick comments (6)
test-jdk-25.sh (6)
34-36: Silence ShellCheck SC1090 for dynamic source paths. Add shellcheck source directives to acknowledge dynamic sourcing targets.

```diff
-if ! source "$activate_file"; then
+# shellcheck source=/dev/null
+if ! source "$activate_file"; then
```

```diff
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
+# shellcheck source=./install-jdk-versions.sh
+source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
```

Also applies to: 80-81
58-60: Remove duplicate strict-mode reset.
`set -euo pipefail` is already set at the top; repeating is noise.

```diff
-# Ensure strict error checking flags are set
-set -euo pipefail
```
77-81: Avoid reassigning script_dir.
`script_dir` is computed at Line 7. Recomputing adds confusion and risk.

```diff
-# The script directory is determined and stored in the variable `script_dir`.
-script_dir=$(cd "$(dirname "$0")" && pwd)
 source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
```
41-44: Make sheet parameters overridable via environment variables. Hardcoding reduces flexibility for CI runs and forks. Use defaults with env overrides.

```diff
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID
-WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}" # or use the Sheet ID
+WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}"
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
```
121-131: Preflight checks for required tools (jq, git, curl, timeout) before builds. Maven is checked, but `jq`, `git`, `curl`, and `timeout` are also hard dependencies used later.

```diff
 if command -v mvn &>/dev/null; then
+  for tool in jq git curl timeout; do
+    if ! command -v "$tool" &>/dev/null; then
+      echo "Error: Required tool '$tool' is not installed or not in PATH." >>"$DEBUG_LOG"
+      exit 1
+    fi
+  done
   # Log Maven installation details to the debug log.
```
245-256: Run Maven in batch mode and surface version/errors; preserve timeout behavior.
`-B -V -e` improves reproducibility and diagnostics in logs.

```diff
-  echo "Executing: timeout 20m mvn clean install -DskipTests" >>"$DEBUG_LOG"
-  timeout 20m mvn clean install -DskipTests >"$plugin_log_file" 2>&1
+  echo "Executing: timeout 20m mvn -B -V -e clean install -DskipTests" >>"$DEBUG_LOG"
+  timeout 20m mvn -B -V -e clean install -DskipTests >"$plugin_log_file" 2>&1
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
test-jdk-25.sh (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test-jdk-25.sh (2)
test-pr-builds.sh (2)
  - test_pr (53-142)
  - switch_jdk (46-50)
install-jdk-versions.sh (4)
  - update_path_for_jdk25 (292-308)
  - install_temurin_jdk25 (229-265)
  - verify_jdk_installation (339-352)
  - is_jdk25_up_to_date (156-214)
🪛 Shellcheck (0.10.0)
test-jdk-25.sh
[warning] 35-35: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🔇 Additional comments (1)
test-jdk-25.sh (1)
87-91: Verify that the selected Java is truly JDK 25. Since we now call the installer, consider asserting the version to fail fast if PATH/JAVA_HOME is misaligned.
You can add a quick check after `update_path_for_jdk25`:

```diff
 echo "DEBUG: Output of 'java -version' after sourcing install-jdk-versions.sh (in test-jdk-25.sh):" >> "$DEBUG_LOG"
 java -version >> "$DEBUG_LOG" 2>&1
+if ! java -version 2>&1 | grep -qE 'version "25'; then
+  echo "Error: Active Java is not JDK 25." >> "$DEBUG_LOG"
+  exit 1
+fi
```
- Export VENV_DIR from $venv_dir and use it for all Python calls
- Install requirements via "$VENV_DIR/bin/pip" using "$script_dir/requirements.txt"
- Run export_sheet_to_tsv.py with "$VENV_DIR/bin/python" and redirect stdout/stderr to DEBUG_LOG
- Remove noisy echo lines and reference files via $script_dir for robustness

This ensures the script uses the correct virtualenv, avoids system Python/pip conflicts, and centralizes output for easier debugging.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
test-jdk-25.sh (2)
80-93: Ensure JDK 25 is actually installed/selected before calling java -version; replace manual JAVA_HOME export with installer functions. Right now you source install-jdk-versions.sh but never call its installer/updater, then you force JAVA_HOME and immediately run java -version under set -e. If ~/.jdk-25 isn't present, the script exits. Prefer invoking the provided functions to install and update PATH/JAVA_HOME consistently.
Apply:

```diff
-# The script directory is determined and stored in the variable `script_dir`.
-script_dir=$(cd "$(dirname "$0")" && pwd)
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
-
-# Ensure JDK 25 is used for all Java and Maven commands
-export JAVA_HOME="$HOME/.jdk-25"
-export PATH="$JAVA_HOME/bin:$PATH"
-hash -r
+# The script directory is determined and stored in the variable `script_dir`.
+# (script_dir is already computed above; no need to recompute)
+source "$script_dir/install-jdk-versions.sh" # bring in installer helpers
+
+# Harmonize with installer defaults and actually install/switch to JDK 25
+: "${JDK_INSTALL_DIR:=$HOME/.jdk-25}"
+export JDK_INSTALL_DIR
+install_temurin_jdk25
+update_path_for_jdk25
```
241-247: Don't log "Successfully changed directory" when cd failed. These two log lines are unconditional and can be emitted even after a failed cd, which is misleading.

```diff
-  echo "Reached after cd command" >>"$DEBUG_LOG"
-  echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG"
+  if [ "$build_status" = "success" ]; then
+    echo "Reached after cd command" >>"$DEBUG_LOG"
+    echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG"
+  fi
```
♻️ Duplicate comments (3)
test-jdk-25.sh (3)
318-335: Header skip logic for CSV is now robust; thanks for fixing. Switching to line_number == 1 avoids brittle header-name assumptions. This addresses the earlier review note.
44-58: TSV_FILE is now defined before use; prior failure mode resolved. TSV_FILE is assigned from OUTPUT_TSV and guarded when the export fails. This fixes the earlier undefined variable issue flagged in prior reviews.
106-120: Consistent column counts between headers and rows in both results files. RESULTS_FILE (3 cols) and TSV_RESULTS_FILE (5 cols) now match their write paths. This resolves the previous CSV corruption concern.
🧹 Nitpick comments (10)
test-jdk-25.sh (10)
83-83: Silence ShellCheck SC1090 for the dynamic source path. ShellCheck can't follow a variable in source. Add an explicit directive so CI stays green.
Apply:

```diff
+# shellcheck source=./install-jdk-versions.sh
 source "$script_dir/install-jdk-versions.sh" # bring in installer helpers
```
61-63: Remove duplicate strict-mode invocation. set -euo pipefail is already set on Line 3. The second call is redundant.

```diff
-set -euo pipefail
```
81-83: Avoid recomputing script_dir. You compute script_dir on Line 7 already. The second assignment risks drift if the script is sourced; keep a single source of truth.

```diff
-script_dir=$(cd "$(dirname "$0")" && pwd)
```
41-44: Let callers override Google Sheets params via env; keep current values as defaults. Improves flexibility in CI and local runs without editing the script.

```diff
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID
-WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
-TSV_FILE="$OUTPUT_TSV"
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}" # or Sheet ID
+WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}"
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
+TSV_FILE="${TSV_FILE:-$OUTPUT_TSV}"
```
124-134: Verify required CLI tools (git, curl, jq, timeout) in addition to Maven. You gate on mvn, but compile and metadata paths also require git, curl, jq, and timeout. Under set -e, an unexpected "command not found" will hard-fail later.

```diff
 # Check if Maven is installed and accessible
-if command -v mvn &>/dev/null; then
+require_cmd() { command -v "$1" >/dev/null 2>&1 || { echo "Error: '$1' not found in PATH." >>"$DEBUG_LOG"; exit 1; }; }
+require_cmd git
+require_cmd curl
+require_cmd jq
+require_cmd timeout
+if command -v mvn &>/dev/null; then
```
235-236: Use shallow clone for speed and bandwidth. Cloning full history of large plugins is slow; a shallow clone suffices for building head.

```diff
-  git clone "$github_url" "$plugin_dir" >>"$DEBUG_LOG" 2>&1 || build_status="clone_failed"
+  git clone --depth 1 "$github_url" "$plugin_dir" >>"$DEBUG_LOG" 2>&1 || build_status="clone_failed"
```
250-259: Harden Maven invocation for CI reliability and speed
- Use -B for batch mode and show-transfer-progress=false to reduce noise.
- Skip tests consistently with -Dmaven.test.skip=true (covers more phases).
- Consider parallel builds with -T 1C.
- echo "Executing: timeout 20m mvn clean install -DskipTests" >>"$DEBUG_LOG" - timeout 20m mvn clean install -DskipTests >"$plugin_log_file" 2>&1 + echo "Executing: timeout 20m mvn -B -T 1C -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dorg.slf4j.simpleLogger.showThreadName=false -Dorg.slf4j.simpleLogger.showShortLogName=true -Dorg.slf4j.simpleLogger.level=info -Dorg.slf4j.simpleLogger.showLogName=false --no-transfer-progress clean install -Dmaven.test.skip=true" >>"$DEBUG_LOG" + timeout 20m mvn -B --no-transfer-progress -T 1C clean install -Dmaven.test.skip=true >"$plugin_log_file" 2>&1
334-334: Mitigate CSV/Excel formula injection in results. If a plugin name begins with =, +, -, or @, opening the CSV in spreadsheet tools may evaluate it as a formula. Sanitize fields on write.
[security]

```diff
-  echo "$name,$popularity,$build_status" >> "$RESULTS_FILE"
+  safe_name=${name//\"/\"\"}
+  case "$safe_name" in
+    [=\+\-@]*) safe_name="'$safe_name" ;;
+  esac
+  echo "\"$safe_name\",$popularity,$build_status" >> "$RESULTS_FILE"
```

```diff
-  echo "$name,$install_count,$pr_url,$is_merged,$build_status" >> "$TSV_RESULTS_FILE"
+  safe_name=${name//\"/\"\"}
+  case "$safe_name" in
+    [=\+\-@]*) safe_name="'$safe_name" ;;
+  esac
+  echo "\"$safe_name\",$install_count,$pr_url,$is_merged,$build_status" >> "$TSV_RESULTS_FILE"
```
90-93: Minor: align debug messages with actual flow. On Lines 128-129 you print "before potential JDK 25 switch," but by then you've already exported JAVA_HOME/updated PATH (or after the proposed installer call). Tweak wording to avoid confusion.

```diff
-echo "DEBUG: Output of 'mvn -v' before potential JDK 25 switch (in test-jdk-25.sh):" >> "$DEBUG_LOG"
+echo "DEBUG: Output of 'mvn -v' after JDK 25 setup (in test-jdk-25.sh):" >> "$DEBUG_LOG"
```

Also applies to: 128-129
171-175: get_github_url: consider fallback to .issueTracker or .wiki as a last resort. Some plugins might not have scm but have meaningful URLs elsewhere. Optional: fall back with priority order.
If you want, I can propose a jq filter that tries scm, then urls.url, then issueTracker.
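One possible shape for such a filter; a minimal sketch in which only `.scm` is confirmed by the update-center format discussed above, while `.url` and `.issueTracker` stand in for whatever fallback fields are actually available:

```bash
# Hypothetical fallback chain: prefer .scm, then .url, then .issueTracker, else empty.
get_plugin_url() {
  local plugin_name="$1"
  jq -r --arg name "$plugin_name" \
    '.plugins[$name] | (.scm // .url // .issueTracker // empty)' "$PLUGINS_JSON"
}
```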
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
test-jdk-25.sh (6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
test-jdk-25.sh (2)
install-jdk-versions.sh (5)
  - update_path_for_jdk25 (292-308)
  - install_temurin_jdk25 (229-265)
  - verify_jdk_installation (339-352)
  - is_jdk25_up_to_date (156-214)
  - get_latest_jdk25_version (124-138)
test-pr-builds.sh (2)
  - test_pr (53-142)
  - switch_jdk (46-50)
🪛 Shellcheck (0.10.0)
test-jdk-25.sh
[warning] 35-35: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🔇 Additional comments (1)
test-jdk-25.sh (1)
9-14: Good: DEBUG_LOG initialized early and used consistently. Early init/export avoids clobbering and "append to empty path" pitfalls seen earlier. Nice cleanup.
- Drop set -euo pipefail and its comment
- Script handles failures explicitly (e.g., export_sheet_to_tsv fallback) and may rely on optional/unset vars; strict mode caused unwanted aborts
- Ensure architecture detection and later steps still run even if optional inputs are missing
Define require_cmd() to verify git, curl, jq, and timeout are in PATH. Log a clear error to DEBUG_LOG and exit if a tool is missing. Fail fast to prevent opaque set -e aborts; keep Maven check as-is.
- Add require_cmd helper to fail fast with clear messages when tools are missing
- Check for git, curl, jq, and timeout before executing the rest of the script
Add --depth 1 to git clone in compile_plugin to fetch only the latest commit. This reduces network usage and clone time and keeps DEBUG_LOG smaller, without changing failure handling.
- run-maven-build.sh: apply default mvn flags (-B --no-transfer-progress -T 1C -Dmaven.test.skip=true), log the effective command, and document defaults in usage for quieter, more reliable CI builds.
- test-jdk-25.sh: invoke plugin builds with the same flags (batch mode, no transfer progress, 1C, skip tests) to reduce log noise and standardize behavior.
Update the debug echo in test-jdk-25.sh to state "after JDK 25 setup" instead of "before potential JDK 25 switch" so the logs accurately reflect when mvn -v runs, improving clarity during debugging.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test-jdk-25.sh (1)
76-85: Sourcing the install script without invoking install leaves JDK 25 unset; manual JAVA_HOME may be invalid. You source install-jdk-versions.sh but never call `install_temurin_jdk25`. Then you force `JAVA_HOME="$HOME/.jdk-25"` regardless of whether JDK 25 was actually installed there. This can result in running the system `java` instead of 25 (PATH falls through), or failing tasks expecting JDK 25. Minimal fix: call the installer and let it set JAVA_HOME/PATH. Optionally set a default JDK installation dir.

```diff
-# Call the script to install JDK versions
-# The script directory is determined and stored in the variable `script_dir`.
-script_dir=$(cd "$(dirname "$0")" && pwd)
-source "$script_dir/install-jdk-versions.sh" # Changed from direct execution to sourcing
-
-# Ensure JDK 25 is used for all Java and Maven commands
-export JAVA_HOME="$HOME/.jdk-25"
-export PATH="$JAVA_HOME/bin:$PATH"
-hash -r
+# Call the script to install JDK versions and ensure JDK 25 is active
+# shellcheck source=/dev/null
+source "$script_dir/install-jdk-versions.sh"
+export JDK_INSTALL_DIR="${JDK_INSTALL_DIR:-$HOME/.jdk-25}"
+# ARCHITECTURE already computed above; ensure it is exported for the sourced functions
+export ARCHITECTURE
+install_temurin_jdk25
+hash -r
```

This aligns with install-jdk-versions.sh's update_path_for_jdk25/verify routines and guarantees the intended Java toolchain.
♻️ Duplicate comments (1)
test-jdk-25.sh (1)
41-45: Make Google Sheets parameters overridable via environment (prev. review requested). Hard-coding reduces flexibility. Use env defaults so CI or local runs can swap targets without editing the script.

```diff
-SPREADSHEET_ID_OR_NAME="1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s" # or use the Sheet ID
-WORKSHEET_NAME="Java 25 compatibility progress" # Change to your worksheet name
-OUTPUT_TSV="plugins-jdk25.tsv"
-TSV_FILE="$OUTPUT_TSV"
+SPREADSHEET_ID_OR_NAME="${SPREADSHEET_ID_OR_NAME:-1_XHzakLNwA44cUnRsY01kQ1X1SymMcJGFxXzhr5s3_s}"
+WORKSHEET_NAME="${WORKSHEET_NAME:-Java 25 compatibility progress}"
+OUTPUT_TSV="${OUTPUT_TSV:-plugins-jdk25.tsv}"
+TSV_FILE="$OUTPUT_TSV"
```
🧹 Nitpick comments (13)
check-env.sh (1)
19-25: Early tool checks are good; consider consolidating and adding macOS timeout fallback.
- The `require_cmd` guard is solid and avoids cryptic `set -e` failures.
- There's a redundant jq check later (Lines 26-33) that will never be reached if jq is missing because `require_cmd jq` exits first. Consider removing or folding the install-hints into this early block.
- On macOS, `timeout` typically isn't present; `gtimeout` (coreutils) is. Offer a fallback or a clearer error.
-# Verify required CLI tools early to avoid set -e failures -require_cmd() { command -v "$1" >/dev/null 2>&1 || { error "'$1' not found in PATH."; exit 1; }; } -require_cmd git -require_cmd curl -require_cmd jq -require_cmd timeout +# Verify required CLI tools early to avoid set -e failures +require_cmd() { command -v "$1" >/dev/null 2>&1 || { error "'$1' not found in PATH."; exit 1; }; } +for c in git curl jq; do require_cmd "$c"; done +# timeout: support macOS coreutils +if command -v timeout >/dev/null 2>&1; then + : +elif command -v gtimeout >/dev/null 2>&1; then + info "Using 'gtimeout' as timeout implementation (macOS coreutils)." + alias timeout='gtimeout' +else + error "'timeout' (or 'gtimeout') not found in PATH." + exit 1 +fiOutside this hunk: remove or convert the later jq-only check (Lines 26-33) to avoid duplication, or change
require_cmd jqto a soft check that prints the same install hint. I can propose a precise patch if you want to keep the install hints here.run-maven-build.sh (2)
24-26: Nice hardened defaults; add minor quality-of-life improvements.
- Consider allowing users to override defaults via an env var (e.g., DEFAULT_MVN_FLAGS from the environment) without editing the script.
- For the log line, use shell-escaped echo to reflect the exact command that will run (handles spaces/quotes in args).
-DEFAULT_MVN_FLAGS=(-B --no-transfer-progress -T 1C -Dmaven.test.skip=true) -echo "Executing: mvn ${DEFAULT_MVN_FLAGS[*]} $*" >> "$LOG_FILE" +DEFAULT_MVN_FLAGS=(${DEFAULT_MVN_FLAGS[@]:- -B --no-transfer-progress -T 1C -Dmaven.test.skip=true}) +printf 'Executing: mvn ' >> "$LOG_FILE" +printf '%q ' "${DEFAULT_MVN_FLAGS[@]}" >> "$LOG_FILE" +printf '%q ' "$@" >> "$LOG_FILE" +printf '\n' >> "$LOG_FILE"Also consider creating the log directory if needed (before writing Line 21):
mkdir -p "$(dirname "$LOG_FILE")"
33-33: Quote the exit code (minor).Harmless in practice, but consistent quoting avoids accidental globbing/word-splitting if the variable were ever unset or altered.
-exit $EXIT_CODE +exit "$EXIT_CODE"test-jdk-25.sh (10)
7-8: Duplicate script_dir computation; keep a single source of truth. You compute `script_dir` at Lines 7-8 and again at Lines 78-80. The second assignment is redundant and risks drift.

```diff
-# The script directory is determined and stored in the variable `script_dir`.
-script_dir=$(cd "$(dirname "$0")" && pwd)
```

Also applies to: 78-80
35-38: ShellCheck SC1090: dynamic source; annotate to silence the false positive. Add a directive before sourcing the venv to appease linting.

```diff
-# Source the virtual environment; exit if activation fails
-if ! source "$activate_file"; then
+# Source the virtual environment; exit if activation fails
+# shellcheck source=/dev/null
+if ! source "$activate_file"; then
```

Similar for Line 79 when sourcing install-jdk-versions.sh.
51-58: Pinning to the venv's pip/python is good; add small robustness.
- Consider upgrading pip and failing fast on dependency install errors (they will already cause `set -e` to exit, but logging why helps).
- Optionally guard for a missing requirements file.

```diff
-"$VENV_DIR/bin/pip" install -r "$script_dir/requirements.txt" >> "$DEBUG_LOG" 2>&1
+if [ -f "$script_dir/requirements.txt" ]; then
+  "$VENV_DIR/bin/pip" install --upgrade pip >> "$DEBUG_LOG" 2>&1
+  "$VENV_DIR/bin/pip" install -r "$script_dir/requirements.txt" >> "$DEBUG_LOG" 2>&1
+else
+  echo "No requirements.txt at $script_dir; skipping pip install." >> "$DEBUG_LOG"
+fi
```
120-126: Duplicate require_cmd logic across scripts; centralize to avoid drift.
`check-env.sh` implements a similar `require_cmd`. DRY it up to keep behavior/messages consistent (including the macOS `gtimeout` fallback if you add it there).
- Option A: source `check-env.sh` early and reuse its `require_cmd` function (be mindful it performs additional checks).
- Option B: move `require_cmd` into `log-utils.sh` and source it everywhere, as sketched below.
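A minimal sketch of Option B, assuming `log-utils.sh` keeps its existing `error` helper (per the code graph above); the placement and caller snippet are illustrative only:

```bash
# Hypothetical addition to log-utils.sh, which already provides error().
# Both check-env.sh and test-jdk-25.sh could then source this single definition.
require_cmd() {
  command -v "$1" >/dev/null 2>&1 || { error "'$1' not found in PATH."; exit 1; }
}

# Caller side (sketch):
#   source "$script_dir/log-utils.sh"
#   for tool in git curl jq timeout; do require_cmd "$tool"; done
```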
247-250: Extraneous post-cd debug lines. These two lines add noise without additional value and can be removed.

```diff
-  if [ "$build_status" = "success" ]; then
-    echo "Reached after cd command" >>"$DEBUG_LOG"
-    echo "Successfully changed directory to $plugin_dir" >>"$DEBUG_LOG"
-  fi
+  # (cd success is implied; failures are already handled and logged)
```
255-257: Log message doesn't match the executed Maven command; align them. The echoed command includes extra `-Dorg.slf4j.simpleLogger.*` flags that are not in the actual invocation, which is misleading when debugging.

```diff
-echo "Executing: timeout 20m mvn -B -T 1C -Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -Dorg.slf4j.simpleLogger.showThreadName=false -Dorg.slf4j.simpleLogger.showShortLogName=true -Dorg.slf4j.simpleLogger.level=info -Dorg.slf4j.simpleLogger.showLogName=false --no-transfer-progress clean install -Dmaven.test.skip=true" >>"$DEBUG_LOG"
-timeout 20m mvn -B --no-transfer-progress -T 1C clean install -Dmaven.test.skip=true >"$plugin_log_file" 2>&1
+echo "Executing: timeout 20m mvn -B --no-transfer-progress -T 1C clean install -Dmaven.test.skip=true" >>"$DEBUG_LOG"
+timeout 20m mvn -B --no-transfer-progress -T 1C clean install -Dmaven.test.skip=true >"$plugin_log_file" 2>&1
```

Alternatively, include the `-Dorg.slf4j.simpleLogger.*` flags in the actual command if that's desired.
287-306: TSV parsing: normalize CRLF and be robust to value casing. If the TSV comes from Sheets with CRLF, `is_merged` may include `\r`. You already upper-case; also strip `\r` to avoid false negatives.

```diff
-  status_upper=$(echo "$is_merged" | tr '[:lower:]' '[:upper:]')
+  status_upper=$(echo "${is_merged%$'\r'}" | tr '[:lower:]' '[:upper:]')
```
311-314: Membership check reads the file for every plugin; consider an in-memory set. For O(n) queries, `grep` per item is fine for 250 rows, but an associative array speeds it up and avoids repeated I/O. Sketch:

```bash
declare -A JDK25_TRUE
while read -r name; do JDK25_TRUE["$name"]=1; done < "$JDK25_TRUE_SET_FILE"

in_jdk25_true_set() { [[ -n "${JDK25_TRUE[$1]-}" ]]; }
```
320-339: CSV loop: solid header skip; add quoting on CSV writes (optional). If any plugin name contains commas (unlikely), CSV may break. Consider quoting fields when writing RESULTS_FILE.

```diff
-  echo "$name,$popularity,$build_status" >> "$RESULTS_FILE"
+  printf '"%s",%s,"%s"\n' "$name" "$popularity" "$build_status" >> "$RESULTS_FILE"
```
343-372: Extended TSV join is correct; add CRLF normalization and quoting. Mirror the CRLF strip for `is_merged` here too, and consider quoting fields for CSV safety.

```diff
-  status_upper=$(echo "$is_merged" | tr '[:lower:]' '[:upper:]')
+  status_upper=$(echo "${is_merged%$'\r'}" | tr '[:lower:]' '[:upper:]')
-  echo "$name,$install_count,$pr_url,$is_merged,$build_status" >> "$TSV_RESULTS_FILE"
+  printf '"%s","%s","%s","%s","%s"\n' "$name" "$install_count" "$pr_url" "$is_merged" "$build_status" >> "$TSV_RESULTS_FILE"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
check-env.sh (1 hunks)
run-maven-build.sh (2 hunks)
test-jdk-25.sh (8 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
check-env.sh (1)
log-utils.sh (1)
  - error (19-21)
test-jdk-25.sh (2)
install-jdk-versions.sh (5)
  - install_temurin_jdk25 (229-265)
  - update_path_for_jdk25 (292-308)
  - verify_jdk_installation (339-352)
  - is_jdk25_up_to_date (156-214)
  - get_latest_jdk25_version (124-138)
test-pr-builds.sh (2)
  - test_pr (53-142)
  - switch_jdk (46-50)
🪛 Shellcheck (0.10.0)
test-jdk-25.sh
[warning] 35-35: ShellCheck can't follow non-constant source. Use a directive to specify location.
(SC1090)
🔇 Additional comments (1)
run-maven-build.sh (1)
11-11: Usage string update is accurate and helpful. The defaults are clearly communicated in the usage. LGTM.
This pull request introduces a new Python script to export Google Sheets data and integrates it into an existing shell script.
Summary by CodeRabbit
New Features
Chores
Tests