Neural Magic
Neural Magic (acquired by Red Hat) empowers developers to optimize and deploy LLMs at scale. Our model compression and acceleration tools enable top performance with vLLM.
Repositories
- compressed-tensors (Public): A safetensors extension to efficiently store sparse quantized tensors on disk (see the sketch after this list)
- speculators (Public)
- DeepGEMM (Public, forked from deepseek-ai/DeepGEMM): Clean and efficient FP8 GEMM kernels with fine-grained scaling
- arena-hard-auto (Public, forked from lmarena/arena-hard-auto): An automatic LLM benchmark
- model-validation-configs (Public)
- collective_op_benchmarks (Public)
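To illustrate the idea behind compressed-tensors, here is a minimal sketch of storing a quantized weight alongside its scale in the standard safetensors format. It uses the plain safetensors API rather than compressed-tensors' own helpers, and the tensor names and simple per-tensor int8 scheme are illustrative assumptions, not the library's actual layout.

```python
# Sketch: persist a quantized weight plus its scale in a safetensors file,
# then reload and dequantize. Uses only the base safetensors API.
import torch
from safetensors.torch import save_file, load_file

# Fake a per-tensor int8 quantization of a dense weight (illustrative scheme).
weight = torch.randn(128, 128)
scale = weight.abs().max() / 127.0
q_weight = torch.clamp((weight / scale).round(), -128, 127).to(torch.int8)

# Store the quantized weight and its scale side by side on disk.
save_file(
    {"layer.weight": q_weight, "layer.weight_scale": scale.reshape(1)},
    "layer.safetensors",
)

# Reload and dequantize for use.
loaded = load_file("layer.safetensors")
deq = loaded["layer.weight"].to(torch.float32) * loaded["layer.weight_scale"]
```

Keeping the scale as its own named tensor is what lets a loader reconstruct the dense weight without any side-channel metadata; compressed-tensors generalizes this pattern to sparse and multi-scheme quantized checkpoints.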