
[Feature]: Avoid KV Cache and offload Model weights in RL workloads #11638

Closed
1 task done
PeterSH6 opened this issue Dec 30, 2024 · 5 comments
Labels
feature request New feature or request

Comments

@PeterSH6

PeterSH6 commented Dec 30, 2024

🚀 The feature, motivation and pitch

Thanks for the awesome inference library! I'm writing to request two features that would be beneficial to RL post-training workloads.

In online PPO (and GRPO or online DPO), the policy model alternates between auto-regressive generation (using vLLM or another inference engine) and forward + backward computation on the training infrastructure. Therefore, during the training stage, we would like to free the KV cache and even offload the model parameters stored in vLLM (since the model-parallel strategies used for generation and training may differ).

Therefore, we propose two sets of APIs in the Worker, GPUExecutor, LLMEngine, and LLM classes, plus one model-initialization option (a usage sketch follows the list below):

  • free_cache_engine() and init_cache_engine(): Users can call free_cache_engine() on an LLM instance, with the call chain LLM.free_cache_engine() -> LLMEngine.free_cache_engine() -> GPUExecutor.free_cache_engine() -> Worker.free_cache_engine(). A similar call chain applies to init_cache_engine(), where Worker.init_cache_engine() simply calls the existing _init_cache_engine() in the Worker class.
    After generation, the RL framework calls llm.free_cache_engine() to release the KV cache, and after update_policy it calls llm.init_cache_engine(). We have implemented an example in the veRL framework, which uses an SPMD version of vLLM ([RFC]: Fully SPMD Execution for Offline Inference #11400).
  • offload_model_weights(): We maintain a self.cpu_model in the Worker, and the call chain is similar to the above. After generation, the RL framework calls llm.offload_model_weights() to offload the weights to CPU, then reloads them in the next iteration.
  • Model init choice: Currently, the vLLM engine initializes the model via AutoModel.from_pretrained(). In RL workloads, we would like vLLM to provide an option that only initializes the model architecture without downloading the pre-trained weights; we will later synchronize the weights from an HF model outside the vLLM engine.
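A minimal sketch of how an RL training loop might use these proposed APIs (free_cache_engine(), init_cache_engine(), and offload_model_weights() are the names proposed in this issue, not an existing vLLM interface; update_policy() and sync_weights_to_vllm() are hypothetical placeholders for the trainer side):

```python
from vllm import LLM, SamplingParams

# enforce_eager=True because CUDA graphs currently conflict with freeing the
# cache (see "Potential Issues" below).
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", enforce_eager=True)
sampling_params = SamplingParams(temperature=1.0, max_tokens=512)

prompts = ["..."]  # rollout prompts from the RL dataset
num_rl_steps = 3   # placeholder

for step in range(num_rl_steps):
    # Rollout: model weights and KV cache live on GPU inside vLLM.
    outputs = llm.generate(prompts, sampling_params)

    # Free vLLM's GPU memory so the training framework can use it.
    llm.free_cache_engine()        # proposed API: release the KV cache
    llm.offload_model_weights()    # proposed API: move weights to CPU

    update_policy(outputs)         # placeholder: trainer's fwd + bwd + optimizer step
    sync_weights_to_vllm(llm)      # placeholder: copy updated HF weights back into vLLM

    # Prepare for the next rollout.
    llm.init_cache_engine()        # proposed API: re-allocate the KV cache
```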

Potential Issues:
When using free_cache_engine and offload_model_weights, we have to disable CUDA graphs, which could reduce generation throughput.
An issue in SGLang reports a similar problem: sgl-project/sglang#2542
Currently, in veRL, we simply set enforce_eager=True in all settings (see the snippet below).
Ideally, we could keep using CUDA graphs during generation while still freeing the KV cache and model weights during training!
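For reference, this is how verl currently disables CUDA graphs; enforce_eager is an existing argument of the LLM constructor, and the model name is just an example:

```python
from vllm import LLM

# CUDA graphs capture fixed GPU memory addresses, so freeing and re-allocating
# the KV cache (or offloading weights) would invalidate the captured graphs.
# Eager mode sidesteps this at the cost of some generation throughput.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enforce_eager=True)
```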

Looking forward to your responses and thanks for any help!

CC

@comaniac @WoosukKwon @youkaichao @happierpig

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@samsja

samsja commented Feb 27, 2025

This feature seems very much needed for RL workloads. Would love to see it in vLLM.

Do you know what the status is, and whether any work is going on in that direction?

[Image: code visualization diagram]

@youkaichao
Member

@PeterSH6 I think this is solved by the sleep mode, right? Any remaining issue here?

@PeterSH6
Author

PeterSH6 commented Mar 4, 2025

@youkaichao Yes, this is solved by the sleep mode. It provides verl with significant rollout speedup. Feel free to close this issue :)

Recently, we found that vllm == 0.7.3 can cause some CPU memory leakage (probably related to sleep mode), while vllm 0.6.3 works fine. We will open a new issue to discuss this problem.
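For readers arriving here later, a minimal sketch of the sleep-mode usage that supersedes this proposal (based on the vLLM sleep-mode API as of roughly 0.7.x; names like enable_sleep_mode, sleep(), and wake_up() should be checked against the current docs):

```python
from vllm import LLM, SamplingParams

# Sleep mode must be enabled when the engine is constructed.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enable_sleep_mode=True)

# Rollout phase.
outputs = llm.generate(["Hello, world"], SamplingParams(max_tokens=32))

# Level 1 offloads the weights to CPU and discards the KV cache, freeing GPU
# memory for the training step; level 2 also discards the weights.
llm.sleep(level=1)

# ... run the RL training step with the freed GPU memory ...

# Restore the weights and re-allocate the KV cache before the next rollout.
llm.wake_up()
```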

@youkaichao
Member

we found that vllm == 0.7.3 can cause some CPU memory leakage (probably related to sleep mode), while vllm 0.6.3 works fine. We will open a new issue to discuss this problem.

feel free to open a new issue for it!

@PeterSH6
Author

PeterSH6 commented Mar 4, 2025

Hi @samsja, you can try this feature using verl with vllm == 0.7.3.

Nice profile! What a laugh tale!
