
Conversation

pctablet505 (Collaborator)

There was a feature request (#21642) for a non-cumulative model.predict that does not store the outputs and return them at the end. For very large datasets on limited memory, it is more practical to retrieve the outputs through a callback and write them to storage than to keep everything in memory.

This PR fixes #21642.
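
For illustration, a minimal sketch of the intended usage. The accumulate keyword is the one this PR proposes; the callback hook and its logs["outputs"] entry are existing Keras API, and the model/data here are placeholders:

```python
import numpy as np
import keras

class SavePredictions(keras.callbacks.Callback):
    """Writes each batch of predictions straight to disk."""

    def on_predict_batch_end(self, batch, logs=None):
        # Keras exposes the current batch's predictions as logs["outputs"].
        np.save(f"predictions_batch_{batch:05d}.npy", logs["outputs"])

model = keras.Sequential(
    [keras.layers.Input(shape=(4,)), keras.layers.Dense(1)]
)

# With accumulate=False (the keyword this PR adds), predict() returns
# None; each batch is only visible to the callbacks.
model.predict(
    np.random.rand(100_000, 4),
    batch_size=1024,
    callbacks=[SavePredictions()],
    accumulate=False,
)
```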

pctablet505 and others added 3 commits September 16, 2025 13:18
Introduces an 'accumulate' argument to the predict method in all backend trainers and the base Trainer class. When set to False, predictions are not accumulated in memory and must be handled via callbacks, helping to avoid memory issues with large datasets. Updates method signatures, docstrings, and internal logic accordingly.
Standardizes the handling of outputs in the predict methods across all backend trainers by always initializing outputs as None and returning None when no outputs are accumulated. This simplifies the logic and ensures consistent behavior when accumulate is False or when no predictions are made.
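
Taken together, the two commit messages describe a loop roughly like the following. This is an illustrative sketch with assumed names (predict_loop, on_batch_end), not the exact code in the diff:

```python
def predict_loop(batches, predict_step, on_batch_end, accumulate=True):
    """Sketch of the shared trainer logic described above."""
    outputs = None
    for step, batch in enumerate(batches):
        batch_outputs = predict_step(batch)
        if accumulate:
            # Only grow the in-memory result when requested.
            outputs = (outputs or []) + [batch_outputs]
        # Callbacks always receive the current batch's predictions.
        on_batch_end(step, {"outputs": batch_outputs})
    # With accumulate=False nothing was stored, so the caller gets None.
    return outputs
```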
@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @pctablet505, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request implements a highly requested feature for non-cumulative prediction within the model.predict function. By adding an accumulate boolean parameter, it allows users to prevent the accumulation of all prediction outputs in memory. This is particularly beneficial for scenarios involving very large datasets or environments with limited memory, as it enables processing of outputs through callbacks without storing the entire result set, thereby enhancing the scalability and efficiency of the prediction process.

Highlights

  • Non-cumulative Prediction: Introduced an accumulate parameter to the predict method across all backend trainers (JAX, NumPy, OpenVINO, TensorFlow, Torch) and the base trainer.py.
  • Memory Optimization: When accumulate is set to False, the predict method will no longer store all batch outputs in memory, addressing memory limitations for very large datasets.
  • Callback Integration: Users can now handle prediction outputs via callbacks when accumulate=False, allowing for custom storage or processing without accumulating results in the main return value.
  • Docstring Updates: The docstring for the predict method in keras/src/trainers/trainer.py has been updated to clearly explain the new accumulate parameter and its effect on the method's return value (see the signature sketch after this list).
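
For concreteness, the assumed shape of the updated signature and docstring: the argument list mirrors the current Keras predict() with the PR's new keyword appended, and the exact wording in the diff may differ:

```python
def predict(
    self, x, batch_size=None, verbose="auto", steps=None,
    callbacks=None, accumulate=True,
):
    """Generates output predictions for the input samples.

    Args:
        accumulate: Boolean. If True (the default), batch outputs are
            accumulated and returned as the prediction result. If
            False, outputs are not stored in memory; predict() returns
            None and outputs must be consumed via callbacks (e.g. in
            on_predict_batch_end).
    """
```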

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new accumulate parameter to the model.predict() method across all backends. This is a valuable feature for users working with very large datasets, as it allows them to process predictions batch by batch using callbacks without storing all outputs in memory, thus preventing potential out-of-memory errors. The implementation is consistent across the JAX, NumPy, TensorFlow, and Torch backends. However, I've found a critical issue in the OpenVINO backend implementation that needs to be addressed.

Refactored the predict method signatures in all backend trainer classes and the base Trainer to use one argument per line. Also reformatted long return statements for better readability. No functional changes were made.
codecov-commenter commented Sep 17, 2025

Codecov Report

❌ Patch coverage is 56.41026% with 17 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.51%. Comparing base (d19fece) to head (6b32547).

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| keras/src/backend/openvino/trainer.py | 0.00% | 9 Missing ⚠️ |
| keras/src/backend/jax/trainer.py | 71.42% | 1 Missing and 1 partial ⚠️ |
| keras/src/backend/numpy/trainer.py | 71.42% | 1 Missing and 1 partial ⚠️ |
| keras/src/backend/tensorflow/trainer.py | 75.00% | 1 Missing and 1 partial ⚠️ |
| keras/src/backend/torch/trainer.py | 75.00% | 1 Missing and 1 partial ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21673      +/-   ##
==========================================
- Coverage   82.53%   82.51%   -0.02%     
==========================================
  Files         571      571              
  Lines       57900    57925      +25     
  Branches     9056     9071      +15     
==========================================
+ Hits        47785    47797      +12     
- Misses       7800     7809       +9     
- Partials     2315     2319       +4     
| Flag | Coverage Δ |
| --- | --- |
| keras | 82.32% <56.41%> (-0.02%) ⬇️ |
| keras-jax | 63.46% <12.82%> (-0.03%) ⬇️ |
| keras-numpy | 57.83% <12.82%> (-0.01%) ⬇️ |
| keras-openvino | 34.37% <0.00%> (-0.02%) ⬇️ |
| keras-tensorflow | 64.18% <15.38%> (-0.03%) ⬇️ |
| keras-torch | 63.66% <15.38%> (-0.03%) ⬇️ |

Flags with carried forward coverage won't be shown.


@fchollet (Collaborator) left a comment

Thanks for the PR!

Can you explain the advantages of calling predict() (with a callback) if it returns nothing? Why not simply call predict(batch) or predict_on_batch in a loop?
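
For reference, the alternative being suggested uses only existing Keras API (model and batches here are placeholders):

```python
import numpy as np

# Iterate over batches yourself and handle each result immediately;
# predict_on_batch() needs no new keyword and never accumulates.
for i, x_batch in enumerate(batches):
    preds = model.predict_on_batch(x_batch)
    np.save(f"predictions_batch_{i:05d}.npy", preds)
```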

Successfully merging this pull request may close: Request for non-cumulative prediction options (#21642).