Pull requests: flagos-ai/vllm-plugin-FL
#112 upgrade vllm to 0.18.1 - opened Mar 31, 2026 by ceci3 (labels: benchmarks, ci, core, docs, examples, tests)
#107 Support bge-rerank-v2-m3 for BgeM3EmbeddingModel - opened Mar 26, 2026 by Realmhang (labels: core)
#104 feat: add GLM-4.7-Flash (glm4_moe_lite) model support - opened Mar 24, 2026 by Realmhang (labels: core)
#78 [bugfix] The function is_cuda() should return False on the Hygon platform - opened Mar 7, 2026 by aistream69 (labels: core)
#73 Add multi-vendor C++ extension build system with CMake - opened Mar 3, 2026 by ceci3 (labels: build, core)
#52 Tsingmicro: add txda backend to the vllm-fl plugin - opened Feb 16, 2026 by tsingmicro-public-e (labels: core)
#46 Decouple op implementations from Backend classes and add descriptor-based method dispatch - opened Feb 11, 2026 by xin2an (labels: core, docs)
#40 Add attention-backend flash_attn and flashinfer in reference when run… - opened Feb 7, 2026 by luoyc123 (labels: benchmarks, core)