@myanvoos: Extend web.py and add local hot reload #922


Status: Open · wants to merge 5 commits into main

Conversation

@DonggeLiu (Collaborator)

Running experiment for #884, contributed by @myanvoos

@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv -ag

@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv -ag

@myanvoos (Contributor)

Looks like we're missing the watchdog import. I'll quickly fix this.

@DonggeLiu (Collaborator, Author)

> Looks like we're missing the watchdog import. I'll quickly fix this

Thanks! Let's do this in a new PR based on this branch (exp-884)

DonggeLiu pushed a commit that referenced this pull request Mar 24, 2025
Should fix the lint error in #922 

cc: @DonggeLiu
@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv1 -ag

DonggeLiu and others added 4 commits March 24, 2025 12:51
Related discussion #862 

This PR extends the command line parser in
[`web.py`](https://github.com/google/oss-fuzz-gen/blob/main/report/web.py)
to take in some additional inputs as outlined
[here](#862 (comment)).
It also adds an optional server-side hot-reloading functionality with
the `watchdog` library.
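
The hot-reload idea can be sketched with a plain-Python polling loop (the watchdog library replaces the polling below with OS-level file events); `snapshot_mtimes` and `changed_since` are hypothetical helper names for illustration, not functions from this PR:

```python
import os
import time


def snapshot_mtimes(root: str) -> dict:
    """Map each file under `root` to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.stat(path).st_mtime
            except OSError:
                pass  # File vanished between walk and stat; skip it.
    return mtimes


def changed_since(root: str, previous: dict) -> bool:
    """True if any file under `root` was added, removed, or modified."""
    return snapshot_mtimes(root) != previous
```

A caller would take a snapshot, sleep, and regenerate the report whenever `changed_since` returns true; watchdog's `Observer`/`FileSystemEventHandler` pair provides the same signal without the polling cost.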

Happy to iterate on this @DonggeLiu, whenever you're ready.

---------

Co-authored-by: Dongge Liu <[email protected]>
Should fix the lint error in #922 

cc: @DonggeLiu
@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv -ag

@DonggeLiu (Collaborator, Author)

Hmm... the report is still empty, suggesting something went wrong with the experiment.
The GKE job was lost along with its log; I will rerun it to reproduce the issue.

@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv -ag

@myanvoos (Contributor)

Hey @DonggeLiu, I'm just revisiting this watcher now and noticed that I should probably update the README to reference the -wf flag, not -w, and sync it with the main branch.

I will also add some extra logic to the watcher so that it makes sure results-report reflects exactly what's in results. Currently the watcher regenerates reports whenever a new benchmark is added, but it doesn't clear results-report when we remove a benchmark from results, so results-report stays populated with old 'ghost' benchmarks (they won't show up in reports, but they bloat the folder).

This is more of a folder organisation/tidying-up thing and isn't related to the contents of the reports, which should be updated regardless of whether we add or remove benchmarks.
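
The tidy-up described above could look roughly like this; `prune_ghost_benchmarks` is a hypothetical helper, and the one-subdirectory-per-benchmark layout is an assumption for illustration, not the actual oss-fuzz-gen layout:

```python
import os
import shutil


def prune_ghost_benchmarks(results_dir: str, report_dir: str) -> list:
    """Remove report subdirectories whose benchmark no longer exists in results.

    Assumes one subdirectory per benchmark in both trees.
    Returns the names of the pruned 'ghost' benchmarks.
    """
    live = set(os.listdir(results_dir)) if os.path.isdir(results_dir) else set()
    removed = []
    if not os.path.isdir(report_dir):
        return removed
    for entry in os.listdir(report_dir):
        path = os.path.join(report_dir, entry)
        if os.path.isdir(path) and entry not in live:
            # Benchmark was deleted from results; drop its stale report dir.
            shutil.rmtree(path)
            removed.append(entry)
    return removed
```

The watcher could call this right before each regeneration, so results-report never drifts from results.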

Also, were you able to reproduce the issue? The hot reloading is working locally for me but I'm keen to fix it!

@DonggeLiu (Collaborator, Author)

The cloud log did not show much useful information, unfortunately.
[screenshot of the cloud log]

Here are relevant lines:

echo "Uploading the report."
BUCKET_PATH="gs://oss-fuzz-gcb-experiment-run-logs/Result-reports/${GCS_DIR:?}"
# Upload HTMLs.
gsutil -q -m -h "Content-Type:text/html" \
-h "Cache-Control:public, max-age=3600" \
cp -r . "$BUCKET_PATH"
# Find all JSON files and upload them, removing the leading './'
find . -name '*json' | while read -r file; do
file_path="${file#./}" # Remove the leading "./".
gsutil -q -m -h "Content-Type:application/json" \
-h "Cache-Control:public, max-age=3600" cp "$file" "$BUCKET_PATH/$file_path"
done

QQ: Do we need to modify this?

$PYTHON -m report.web -r "${RESULTS_DIR:?}" -b "${BENCHMARK_SET:?}" -m "$MODEL" -o results-report

@myanvoos (Contributor) commented Apr 14, 2025

> The cloud log did not show much useful information, unfortunately. [screenshot of the cloud log]
>
> Here are relevant lines:
>
> echo "Uploading the report."
> BUCKET_PATH="gs://oss-fuzz-gcb-experiment-run-logs/Result-reports/${GCS_DIR:?}"
> # Upload HTMLs.
> gsutil -q -m -h "Content-Type:text/html" \
> -h "Cache-Control:public, max-age=3600" \
> cp -r . "$BUCKET_PATH"
> # Find all JSON files and upload them, removing the leading './'
> find . -name '*json' | while read -r file; do
> file_path="${file#./}" # Remove the leading "./".
> gsutil -q -m -h "Content-Type:application/json" \
> -h "Cache-Control:public, max-age=3600" cp "$file" "$BUCKET_PATH/$file_path"
> done
>
> QQ: Do we need to modify this?
>
> $PYTHON -m report.web -r "${RESULTS_DIR:?}" -b "${BENCHMARK_SET:?}" -m "$MODEL" -o results-report

Oh right, I think I might know what's happening.

def main():
  args = _parse_arguments()
  logging.getLogger().setLevel(os.environ.get('LOGLEVEL', 'INFO').upper())

  watcher = ReportWatcher(args)
  watcher.start()

  try:
    should_continue = args.serve or args.watch_filesystem or args.watch_template

    while should_continue:
      generate_report(args)
      time.sleep(args.interval_seconds)

  except KeyboardInterrupt:
    watcher.stop()
    logging.info('Exiting.')
    os._exit(0)

Without the --serve flag or either watcher flag in the args, should_continue will be false, so we never even reach generate_report(). The report-upload Bash script then goes looking for files in an output folder that doesn't exist.

I'll put in an initial call to generate the report before initialising the watcher; that should fix it.
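
A minimal sketch of that fix, with the generate-then-loop logic factored into a testable helper (`run` is a hypothetical name; the flag and argument names are taken from the snippet above):

```python
import time


def run(args, generate_report, sleep=time.sleep):
    """Generate the report once up front, then keep regenerating
    only while a serve/watch flag asks us to."""
    # Initial generation: one-shot runs (no --serve, no watcher flags)
    # now still produce a populated results-report folder.
    generate_report(args)
    while args.serve or args.watch_filesystem or args.watch_template:
        sleep(args.interval_seconds)
        generate_report(args)
```

With all flags off, `run` generates the report exactly once and returns, which is the behaviour the upload script expects.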

Edit for QQ: I don't think we need to modify

$PYTHON -m report.web -r "${RESULTS_DIR:?}" -b "${BENCHMARK_SET:?}" -m "$MODEL" -o results-report

because we already re-generate the report periodically here

(Unless you meant like for debugging -- oops)

@DonggeLiu (Collaborator, Author)

/gcbrun exp -n mv -ag
