@@ -32,14 +32,10 @@ pip install ".[cuda-serve]" --verbose
 
 # Replace [cuda-train] in the training command above with [ascend-train]
 # on Huawei Ascend, or [musa-train] on Moore Threads MUSA.
-
-# Additional GR00T N1.5 runtime dependencies
-pip install "peft==0.18.1" torchcodec
 ```
 
-GR00T N1.5 also requires:
-- `flash-attn` built against the current `torch` version
-- FFmpeg runtime libraries for `torchcodec`
+GR00T N1.5 requires FFmpeg runtime libraries for `torchcodec`.
+Install FFmpeg before running training or dataset preprocessing.
 
 Install FFmpeg using one of the following options:
 
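The runtime requirements above (FFmpeg on the PATH for `torchcodec`, plus the `peft` and `torchcodec` packages moved into the requirements files in this PR) can be sanity-checked before launching a run. The helper below is an illustrative sketch, not part of the repository:

```python
# Hedged pre-flight check for the GR00T N1.5 runtime dependencies named in
# this diff (FFmpeg for torchcodec, plus the peft/torchcodec packages).
# Illustrative only; not a script from this repository.
import importlib.util
import shutil


def missing_packages(packages: list[str]) -> list[str]:
    """Return the subset of top-level package names that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]


def runtime_problems() -> list[str]:
    """Collect human-readable problems; an empty list means this check passed."""
    problems = []
    if shutil.which("ffmpeg") is None:
        problems.append("ffmpeg not on PATH (torchcodec needs the FFmpeg runtime libraries)")
    problems += [f"missing Python package: {p}" for p in missing_packages(["peft", "torchcodec"])]
    return problems


if __name__ == "__main__":
    for problem in runtime_problems():
        print("WARNING:", problem)
```

Running this before training or dataset preprocessing surfaces a missing FFmpeg install early, rather than as a `torchcodec` load failure mid-run.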
@@ -52,13 +48,13 @@ sudo apt-get update
 sudo apt-get install -y ffmpeg
 ```
 
-Install additional dependencies for downloading models/datasets:
+Install the optional dependencies below only if you want to download models or datasets from HuggingFace Hub or ModelScope:
 
 ```sh
 # For HuggingFace Hub
 pip install huggingface_hub
 
-# For ModelScope (optional)
+# For ModelScope
 pip install modelscope
 ```
 
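The two optional hubs above expose equivalent command-line download tools. The sketch below shows how one might select between them; the repo id and target directory are placeholders, not values from this PR, and the flags follow the `huggingface_hub` and `modelscope` CLIs:

```python
# Hedged sketch of the two download paths: `huggingface-cli download` ships
# with `pip install huggingface_hub`, `modelscope download` with
# `pip install modelscope`. Repo id / directory below are placeholders.
def hub_download_command(hub: str, repo_id: str, local_dir: str) -> list[str]:
    """Build the CLI invocation for fetching a model from the chosen hub."""
    if hub == "huggingface":
        return ["huggingface-cli", "download", repo_id, "--local-dir", local_dir]
    if hub == "modelscope":
        return ["modelscope", "download", "--model", repo_id, "--local_dir", local_dir]
    raise ValueError(f"unknown hub: {hub!r}")


# Example usage (placeholder repo id):
# subprocess.run(hub_download_command("huggingface", "org/model-name", "./checkpoints"), check=True)
```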
(second file in the diff; filename not captured in the extract)

@@ -2,6 +2,9 @@
 
 -r ./base.txt
 
+# for robotics
+peft==0.18.1
+
 # FlagTree for Ascend (extra index so PyPI remains available for build deps)
 # [--extra-index-url https://resource.flagos.net/repository/flagos-pypi-hosted/simple]
 flagtree==0.4.0+ascend3.2
(third file in the diff; filename not captured in the extract)

@@ -1,6 +1,10 @@
 # serve-specific dependencies
 
 -r ./base.txt
+
+# for robotics
+peft==0.18.1
+
 vllm @ https://resource.flagos.net/repository/flagos-pypi-hosted/packages/vllm/0.13.0%2Bfl.0.1.cu128.g72506c983/vllm-0.13.0%2Bfl.0.1.cu128.g72506c983-cp312-cp312-linux_x86_64.whl
 
 # support 0.5b_multiple_instance ci test
(fourth file in the diff; filename not captured in the extract)

@@ -4,4 +4,10 @@
 sentencepiece==0.2.1
 transformers==4.57.6
 tiktoken==0.12.0
+
+# for robotics
+peft==0.18.1
+torchcodec==0.8.1
+flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.1/flash_attn-2.8.1+cu12torch2.9cxx11abiTRUE-cp312-cp312-linux_x86_64.whl
+
 megatron_core @ https://resource.flagos.net/repository/flagos-pypi-hosted/packages/megatron-core/0.16.0rc0/megatron_core-0.16.0rc0-cp312-cp312-linux_x86_64.whl